I (and hopefully many others soon) read aloud particularly interesting or impactful posts from the EA Forum.

Long-time lurker, first-time poster - be nice please! :) I was searching for summary data on EA funding trends, but couldn't find anything more recent than Tyler's post from 2022, so I decided to update it. If this analysis is done properly anywhere, please let me know. The spreadsheet is here (some things might look weird due to importing from Excel to Sheets). Observations: EA grantmaking appears to be on a steady downward trend since 2022 / FTX. The squeeze on global health funding to support AI / other longtermist priorities appears to be really taking effect this year (though 2025 is a rough estimate with significant uncertainty). I am particularly interested in the apparent drop in GiveWell grants this year. I suspect that it is wrong or at least misleading - the metrics report suggests they are raising ~$300m p.a. from non-OP donors. I am not sure if I have made an error (missing direct-to-charity donations?) or if they are just sitting on funding amid the ongoing USAID disruption. Methodology: I compiled the latest grants databases from EA Funds, GiveWell, Open Philanthropy, and SFF, and added summary-level data from ACE. To remove [...] --- Outline: (00:41) Observations; (01:26) Methodology; (02:12) Notes --- First published: November 14th, 2025 Source: https://forum.effectivealtruism.org/posts/NWHb4nsnXRxDDFGLy/historical-ea-funding-data-2025-update --- Narrated by TYPE III AUDIO.
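For readers who want to reproduce the headline numbers, here is a minimal sketch of the aggregation step the methodology describes, assuming each grants database has been exported to a CSV with hypothetical `date` and `amount_usd` columns (the real exports use different schemas and need per-source cleaning first):

```python
import pandas as pd

# Hypothetical filenames; the real exports from EA Funds, GiveWell,
# Open Philanthropy, and SFF use different column names and formats.
SOURCES = {
    "EA Funds": "ea_funds_grants.csv",
    "GiveWell": "givewell_grants.csv",
    "Open Philanthropy": "open_phil_grants.csv",
    "SFF": "sff_grants.csv",
}

frames = []
for funder, path in SOURCES.items():
    df = pd.read_csv(path, parse_dates=["date"])
    df["funder"] = funder
    frames.append(df[["funder", "date", "amount_usd"]])

grants = pd.concat(frames, ignore_index=True)
grants["year"] = grants["date"].dt.year

# Total grantmaking per funder per year, in $ millions.
summary = (
    grants.groupby(["year", "funder"])["amount_usd"]
    .sum()
    .div(1e6)
    .unstack(fill_value=0)
    .round(1)
)
print(summary)
```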

Author's note: This is an adapted version of my recent talk at EA Global NYC (I'll add a link when it's available). The content has been adjusted to reflect things I learned from talking to people after my talk. If you saw the talk, you might still be interested in the “some objections” section at the end. Summary: Wild animal welfare faces frequent tractability concerns, amounting to the idea that ecosystems are too complex to intervene in without causing harm. However, I suspect these concerns reflect inconsistent justification standards rather than unique intractability. To explore this idea: I provide some context about why people sometimes have tractability concerns about wild animal welfare, using bird-window collisions as a concrete example. I then describe four approaches to handling uncertainty about indirect effects: spotlighting (focusing on target beneficiaries while ignoring broader impacts), ignoring cluelessness (acting on knowable effects only), assigning precise probabilities to all outcomes, and seeking ecologically inert interventions. I argue that, when applied consistently across cause areas, none of these approaches suggests wild animal welfare is distinctively intractable compared to global health or AI safety. Rather, the apparent difference most commonly stems from arbitrarily wide "spotlights" applied to [...] --- Outline: (00:31) Summary; (02:15) Consequentialism + impartial altruism → hard to do good; (03:43) The challenge: Deep uncertainty and backfire risk; (04:41) Example: Bird-window collisions; (05:22) We don't actually understand the welfare consequences of bird-window collisions on birds; (06:08) We don't know how birds would die otherwise; (07:06) The effects on other animals are even more uncertain; (09:16) Four approaches to handling uncertainty; (10:08) Spotlighting; (15:31) Set aside that which you are clueless about; (18:31) Assign precise probabilities; (20:06) Seek ecologically inert interventions; (22:04) Some objections & questions; (22:17) The global health comparison: Spotlighting hasn't backfired (for humans); (23:22) Action-inaction distinctions; (25:01) Why should justification standards be the same?; (26:53) Conclusion --- First published: November 14th, 2025 Source: https://forum.effectivealtruism.org/posts/2YjqfYktNGcx6YNRy/if-wild-animal-welfare-is-intractable-everything-is --- Narrated by TYPE III AUDIO.

This is a crosspost from my Substack, where people have been liking and commenting a bunch. I'm too busy during my self-imposed version of Inkhaven to engage much – yes, pity me, I have to blog – but I don't want to leave Forum folks out of the loop! I've been following Effective Altruism discourse since 2014 and have been involved with the Effective Altruist community since 2015. My credentials: having run Harvard Law School and Harvard University (pan-grad schools) EA, donating $45,000 to EA causes (eep, not 10%), working at 80,000 Hours for three years, and working at a safety-oriented AI org for 10 months after that. I'm also proud of the public comms I've done for EA on this blog (here, here, and here), through my 80k podcast series, current podcast series, and through EA career advice talks I've given at EAGs and smaller events. With that background, you can at least be confident that I am familiar with my subject matter in the takes that follow. As before, let me know which of these seems interesting or wrong and there's a good chance I'll write them up with you, the commenter, very much in mind as [...] --- First published: November 6th, 2025 Source: https://forum.effectivealtruism.org/posts/s8aNPnrGH2fF3Hkpi/12-theses-on-ea --- Narrated by TYPE III AUDIO.

Cross-post from Good Structures. Over the last few years, I helped run several dozen hiring rounds for around 15 high-impact organizations. I've also spent the last few months talking with organizations about their recruitment. I've noticed three recurring themes: (1) Candidates generally have a terrible time: work tests are often unpleasant (and the best candidates have to complete many of them), there are hundreds or thousands of candidates for each role, and generally, people can't get the jobs they've been told are the best path to impact. (2) Organizations are often somewhat to moderately unhappy with their candidate pools: organizations really struggle to find the talent they want, despite the number of candidates who apply. (3) Organizations can't find or retain the recruiting talent they want: it's extremely hard to find people to do recruitment in this space, and talented recruiters rarely want to stay in their roles. I think the first two points need more discussion, but I haven't seen much discussion about the last. I think this is a major issue: recruitment is probably the most important function for a growing organization, and a skilled recruiter has a fairly large counterfactual impact for the organization they support. So why is it [...] --- Outline: (01:33) Recruitment is high leverage and high impact; (03:33) Organizations struggle to hire recruiters; (07:52) Many of the people applying to recruitment roles emphasize their experience in recruitment. This isn't the background organizations need; (08:44) Almost no one is appropriately obsessed with hiring; (10:29) The state of evidence on hiring practices is bad; (13:22) Retaining strong recruiters is really hard; (14:51) Why might this be less important than I think?; (16:40) I'm trying to find people interested in this kind of approach to hiring. If this is you, please reach out. --- First published: November 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/HLktkw5LXeqSLCchH/recruitment-is-extremely-important-and-impactful-some-people --- Narrated by TYPE III AUDIO.

16-minute read. We update our list of Recommended Charities annually. This year, we announced recommendations on November 4. Each year, hundreds of billions of animals are trapped in the food industry and killed for food — that is more than all the humans who have ever walked on the face of the Earth.[1] When faced with such a magnitude of suffering, it can feel overwhelming and hard to know how to help. One of the most impactful things you can do to help animals is to donate to effective animal charities—even a small donation can have a big impact. Our goal is to help you do the most good for animals by providing you with effective giving opportunities that greatly reduce their suffering. Following our comprehensive charity evaluations, we are pleased to announce our Recommended Charities! Charities awarded the status in 2025: Animal Welfare Observatory, Shrimp Welfare Project, Sociedade Vegetariana Brasileira, The Humane League, and Wild Animal Initiative. Charities retaining the status from 2024: Aquatic Life Institute, Çiftlik Hayvanlarını Koruma Derneği, Dansk Vegetarisk Forening, Good Food Fund, and Sinergia Animal. The Humane League (working globally), Shrimp Welfare Project (in Central and South America, Southeast Asia, and India), and Wild Animal Initiative (global) have continued to work on the most important issues for animals [...] --- Outline: (03:54) Charities Recommended in 2025; (03:59) Animal Welfare Observatory; (05:44) Shrimp Welfare Project; (07:38) Sociedade Vegetariana Brasileira; (09:41) The Humane League; (11:22) Wild Animal Initiative; (13:15) Charities Recommended in 2024; (13:20) Aquatic Life Institute; (15:25) Çiftlik Hayvanlarını Koruma Derneği; (17:34) Dansk Vegetarisk Forening; (19:18) The Good Food Fund; (21:19) Sinergia Animal; (23:20) Support our Recommended Charities. The original text contained 2 footnotes which were omitted from this narration. --- First published: November 4th, 2025 Source: https://forum.effectivealtruism.org/posts/waL3iwczrjNt8PreZ/announcing-ace-s-2025-charity-recommendations --- Narrated by TYPE III AUDIO.

(Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.) Last Friday was my last day at Open Philanthropy. I'll be starting a new role at Anthropic in mid-November, helping with the design of Claude's character/constitution/spec. This post reflects on my time at Open Philanthropy, and it goes into more detail about my perspective and intentions with respect to Anthropic – including some of my takes on AI-safety-focused people working at frontier AI companies. (I shared this post with Open Phil and Anthropic comms before publishing, but I'm speaking only for myself and not for Open Phil or Anthropic.) On my time at Open Philanthropy: I joined Open Philanthropy full-time at the beginning of 2019.[1] At the time, the organization was starting to spin up a new “Worldview Investigations” team, aimed at investigating and documenting key beliefs driving the organization's cause prioritization – and with a special focus on how the organization should think about the potential impact at stake in work on transformatively powerful AI systems.[2] I joined (and eventually: led) the team devoted to this effort, and it's been an amazing project to be a part of. I remember [...] --- Outline: (00:51) On my time at Open Philanthropy; (08:11) On going to Anthropic --- First published: November 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/EFF6wSRm9h7Xc6RMt/leaving-open-philanthropy-going-to-anthropic --- Narrated by TYPE III AUDIO.

This is the latest in a series of essays on AI Scaling. You can find the others on my site. Summary: RL-training for LLMs scales surprisingly poorly. Most of its gains come from allowing LLMs to productively use longer chains of thought, letting them think longer about a problem. There is some improvement for a fixed length of answer, but not enough to drive AI progress. Given that the scaling up of pre-training compute has also stalled, we'll see less AI progress via compute scaling than you might have thought, and more of it will come from inference scaling (which has different effects on the world). That lengthens timelines and affects strategies for AI governance and safety. The current era of improving AI capabilities using reinforcement learning (from verifiable rewards) involves two key types of scaling: (1) scaling the amount of compute used for RL during training, and (2) scaling [...] --- Outline: (09:12) How do these compare to pre-training scaling?; (13:42) Conclusion --- First published: October 22nd, 2025 Source: https://forum.effectivealtruism.org/posts/TysuCdgwDnQjH3LyY/how-well-does-rl-scale --- Narrated by TYPE III AUDIO.

TL;DR: I took the [...]

Many thanks to @Felix_Werdermann [...]

Summary: As part of our ongoing work to study how to best frame EA, we experimentally tested different phrases and sentences that CEA were considering using on effectivealtruism.org. Doing Good Better taglines: We observed a consistent pattern where taglines that included the phrase ‘do[ing] good better' received less support from respondents and inspired less interest in learning about EA. We replicated these results in a second experiment, where we confirmed that taglines referring to “do[ing] good better” performed less well than those referring to “do[ing] the most good”. Nouns and sentences. Nouns: The effect of using different nouns to refer to EA was small, but referring to EA as a ‘philosophy' or ‘movement' inspired the most curiosity compared to options including ‘project' and ‘research field'. Sentences: “Find the most effective ways to do good with your time, money, and career” and “Effective altruism asks the question of how we [...] --- Outline: (00:12) Summary; (01:23) Method; (02:18) Taglines (Study 1); (03:40) Doing Good Better replication (Study 2); (05:23) Sentences (Study 1); (06:45) Nouns (Study 1); (07:41) Effectiveness focus; (07:55) Conclusion; (08:56) Acknowledgments --- First published: October 27th, 2025 Source: https://forum.effectivealtruism.org/posts/Y6zMpdwkkAQ8rF56w/framing-ea-doing-good-better-did-worse --- Narrated by TYPE III AUDIO.

This is a link post. In Ugandan villages where non-governmental organisations (NGOs) hired away the existing government health worker, infant mortality went up. This happened in 39%[1] of villages that already had a government worker. The NGO arrived with funding and good intentions, but the likelihood that villagers received care from any health worker declined by ~23%. Brain Misallocation: “Brain drain” - the movement of people from poorer countries to wealthier ones - has been extensively discussed for decades[2]. But there's a different dynamic that gets far less attention: “brain misallocation”. In many low- and middle-income countries (LMICs), the brightest talents are being incentivised towards organisations that don't utilise their potential for national development. They're learning how to get grants from multilateral alphabet organisations rather than build businesses or make good policy. This isn't about talent leaving the country. It's about talent being misdirected and mistrained within it. Examples: Nick Laing [...] --- Outline: (00:36) Brain Misallocation; (01:16) Examples; (05:37) The Incentive Trap; (07:48) When Help Becomes Harm; (08:48) Conclusion --- First published: October 23rd, 2025 Source: https://forum.effectivealtruism.org/posts/6rmdyddEateJFWb4L/the-charity-trap-brain-misallocation Linkpost URL: https://gdea.substack.com/p/the-charity-trap-brain-misallocation --- Narrated by TYPE III AUDIO.

This is a link post. Biological risks are more severe than has been widely appreciated. Recent discussions of mirror bacteria highlight an extreme scenario: a single organism that could infect and kill humans, plants, and animals, exhibit environmental persistence in soil or dust, and perhaps spread worldwide within several months. In the worst-case scenario, this could pose an existential risk to humanity, especially if the responses/countermeasures were inadequate. Less severe pandemic pathogens could still cause hundreds of millions (or billions) of casualties if they were engineered to cause harm. Preventing such catastrophes should be a top priority for humanity. However, if prevention fails, it would also be prudent to have a backup plan. One way of doing this would be to enumerate the types of pathogens that might be threatening (e.g. viruses, bacteria, fungi, etc), enumerate the subtypes (e.g. adenoviruses, coronaviruses, paramyxoviruses, etc), analyze the [...] --- Outline: (04:20) PPE; (09:56) Biohardening; (14:36) Detection; (17:00) Expression of interest and acknowledgements. The original text contained 34 footnotes which were omitted from this narration. --- First published: October 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/33t5jPzxEcFXLCPjq/the-four-pillars-a-hypothesis-for-countering-catastrophic Linkpost URL: https://defensesindepth.bio/the-four-pillars-a-hypothesis-for-countering-catastrophic-biological-risk/ --- Narrated by TYPE III AUDIO.

I've used the phrase “entertainment for EAs” a bunch to describe a failure mode that I'm trying to avoid with my career. Maybe it'd be useful for other people working in meta-EA, so I'm sharing it here as a quick draft amnesty post. There's a motivational issue in meta-work where it's easy to start treating the existing EA community as stakeholders. The real stakeholders in my work (and meta-work in general) are the ultimate beneficiaries — the minds (animal, human, digital?) that could benefit from work I help to initiate. But those beneficiaries aren't present to me — they aren't my friends, they don't work in the same building as me. To keep your eyes on the real prize takes constant work. When that work slips, you could end up working on ‘entertainment for EAs', i.e. something which gets great feedback from EAs, but only hazily, if [...] --- First published: October 17th, 2025 Source: https://forum.effectivealtruism.org/posts/AkSDhiPuvnRNbjXAf/entertainment-for-eas --- Narrated by TYPE III AUDIO.

All quotes are from their blog post "Why we chose to invest another $100 million in cash transfers"; highlights are my own: Today, we're announcing a new $100 million USD commitment over the next four years to expand our partnership with GiveDirectly and help empower an additional 185,000 people living in extreme poverty. We're also funding new research, and pilot variants, to further understand how we can maximize the impact of each dollar. This is on top of another $50 million USD they gave to GiveDirectly before: We started partnering with GiveDirectly in 2021. Since then, we've donated $50 million USD to support their work across Malawi, through direct cash transfers to those living in extreme poverty. We've already reached more than 85,000 people, helping to provide life-changing resources and the dignity of choice. For context, the Cash for Poverty Relief program by GiveDirectly [...] --- Outline: (01:24) About their founding-to-give model; (02:15) Other Engagement --- First published: October 22nd, 2025 Source: https://forum.effectivealtruism.org/posts/ktFpWLkvRAAygbbtH/canva-to-donate-usd100m-over-4-years-to-givedirectly --- Narrated by TYPE III AUDIO.

I have some claim to be an “old hand” EA:[1] I was in the room when the creation of Giving What We Can was announced (although I vacillated about joining for quite a while); I first went to EA Global in 2015; and I worked on a not-very-successful EA project for a while. But I have not really been much involved in the community since about 2020. The interesting thing about this is that my withdrawal from the community has nothing to do with disagreements, personal conflicts, or FTX. I still pretty much agree with most “orthodox EA” positions, and I think that both the idea of EA and the movement remain straightforwardly good and relevant. Hence why I describe the process as “senescence”: intellectually and philosophically I am still on board and I still donate, I just… don't particularly want to participate beyond that. Boredom: I won't sugar-coat [...] --- Outline: (01:00) Boredom; (04:05) What do I have to offer? --- First published: October 19th, 2025 Source: https://forum.effectivealtruism.org/posts/rJqQGD2z2DaupCbZE/my-ea-senescence --- Narrated by TYPE III AUDIO.

TL;DR: EA is a community where time tracking is already very common, and yet most people I talk to don't track their time, because it's too much work (when using Toggl, Clockify, ...) or not accurate enough (when using RescueTime, Rize, ...). I built https://donethat.ai, which solves both of these with AI, as part of AIM's Founding to Give program. It's live on Product Hunt today - please support it. You should probably track your time: I'd argue that for most people, your time is your most valuable resource.[1] Even though your day has 24 hours, eight of those are already used up for sleep and another eight probably for social life, gym, food prep and eating, life admin, and commuting, leaving at most eight hours to have impact. Oliver Burkeman argues in his recent book Meditations for Mortals that eight is still too high - most high-impact work gets done in four hours [...] --- Outline: (00:11) TLDR; (00:40) You should probably track your time; (02:21) It just got easier --- First published: October 14th, 2025 Source: https://forum.effectivealtruism.org/posts/wt8gKaH9usKy3LQmK/you-should-probably-track-your-time-and-it-just-got-easier --- Narrated by TYPE III AUDIO.

The following is a quick collection of forecasting markets and opinions from experts which give some sense of how well-informed people are thinking about the state of US democracy. This isn't meant to be a rigorous proof that this is the case (DM me for that), just a collection which I hope will get people thinking about what's happening in the US now. Before looking at the forecasts, you might first ask yourself: what probability would I put on authoritarian capture, and at what probability of authoritarian capture would I think that more concern and effort is warranted? Forecasts:[1] “The US won't be a democracy by 2030”: 25% (Metaculus). “Will Trump 2.0 be the end of Democracy as we know it?”: 48% (Manifold). “If Trump is elected, will the US still be a liberal democracy at the end of his term? (V-DEM)”: 61% [...] --- Outline: (00:45) Forecasts; (01:50) Quotes from experts & commentators; (03:20) Some relevant research --- First published: October 8th, 2025 Source: https://forum.effectivealtruism.org/posts/eJNH2CikC4scTsqYs/experts-and-markets-think-authoritarian-capture-of-the-us --- Narrated by TYPE III AUDIO.

or: Maximizing Good Within Your Personal Constraints. Note: The specific numbers and examples below are approximations meant to illustrate the framework. Your actual calculations will vary based on your situation, values, and cause area. The goal isn't precision—it's to start thinking explicitly about impact per unit of sacrifice rather than assuming certain actions are inherently virtuous. You're at an EA meetup. Two people are discussing their impact: Alice: "I went vegan, buy only secondhand, bike everywhere, and donate 5% of my nonprofit salary to animal charities." Bob: "I work in finance, eat whatever, and donate 40% of my income to animal charities." Who gets more social approval? Alice. Who prevents more animal suffering? Bob—by orders of magnitude. Alice's choices improve welfare for hundreds of animal-years annually through diet change and her $2,500 donation. Bob's $80,000 donation improves welfare for tens of thousands of animal-years through corporate campaigns. Yet Alice is [...] --- Outline: (00:11) or Maximizing Good Within Your Personal Constraints; (01:31) The Personal Constraint Framework; (02:26) Return on Sacrifice (RoS): The Core Metric; (03:05) Case Studies: Where Good Intentions Go Wrong; (03:10) Career: The Counterfactual Question; (04:32) Environmental Action: Personal vs. Systemic; (05:13) Information and Influence; (05:45) Truth vs. Reach; (06:17) The Uncomfortable Truth About Offsets; (07:43) When Personal Practice Actually Matters; (08:22) Your Personal Impact Portfolio; (09:38) The Reallocation Exercise; (10:40) Addressing the Predictable Objections; (11:41) The Call to Action; (12:10) The Bottom Line --- First published: September 10th, 2025 Source: https://forum.effectivealtruism.org/posts/u9WzAcyZkBhgWAew5/your-sacrifice-portfolio-is-probably-terrible --- Narrated by TYPE III AUDIO.
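A minimal sketch of the Return-on-Sacrifice comparison described above, using the post's rough orders of magnitude for Alice and Bob and hypothetical "sacrifice scores" (the post does not quantify sacrifice, so these numbers are illustrative only):

```python
# Illustrative only: impact figures echo the post's rough orders of magnitude;
# sacrifice scores are made-up placeholders for "how costly this feels".
people = {
    "Alice": {"animal_years_improved": 500,    "sacrifice_score": 8},
    "Bob":   {"animal_years_improved": 30_000, "sacrifice_score": 3},
}

for name, p in people.items():
    # Return on Sacrifice: welfare gained per unit of personal cost.
    ros = p["animal_years_improved"] / p["sacrifice_score"]
    print(f"{name}: ~{ros:,.0f} animal-years improved per unit of sacrifice")
```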

This post is based on a memo I wrote for this year's Meta Coordination Forum. See also Arden Koehler's recent post, which hits a lot of similar notes. Summary: The EA movement stands at a crossroads. In light of AI's very rapid progress, and the rise of the AI safety movement, some people view EA as a legacy movement set to fade away; others think we should refocus much more on “classic” cause areas like global health and animal welfare. I argue for a third way: EA should embrace the mission of making the transition to a post-AGI society go well, significantly expanding our cause area focus beyond traditional AI safety. This means working on neglected areas like AI welfare, AI character, AI persuasion and epistemic disruption, human power concentration, space governance, and more (while continuing work on global health, animal welfare, AI safety, and biorisk). These additional [...] --- Outline: (00:20) Summary; (02:38) Three possible futures for the EA movement; (07:07) Reason #1: Neglected cause areas; (10:49) Reason #2: EA is currently intellectually adrift; (13:08) Reason #3: The benefits of EA mindset for AI safety and biorisk; (14:53) This isn't particularly Will-idiosyncratic; (15:57) Some related issues; (16:10) Principles-first EA; (17:30) Cultivating vs growing EA; (21:27) PR mentality; (24:48) What I'm not saying; (28:31) What to do?; (29:00) Local groups; (31:26) Online; (35:18) Conferences; (36:05) Conclusion --- First published: October 10th, 2025 Source: https://forum.effectivealtruism.org/posts/R8AAG4QBZi5puvogR/effective-altruism-in-the-age-of-agi --- Narrated by TYPE III AUDIO.

Here's a talk I gave at an EA university group organizers' retreat recently, which I've been strongly encouraged to share on the forum. I'd like to make it clear I don't recommend or endorse everything discussed in this talk (one example in particular which hopefully will be self-evident), but do think serious shifts in how we engage with ethics and EA would be quite beneficial for the world. Part 1: Taking ethics seriously. To set context for this talk, I want to go through an Our World in Data-style bird's-eye view of how things are trending across key issues often discussed in EA. This is to help get better intuitions for questions like “How well will the future go by default?” and “Is the world on track to eventually solve the most pressing problems?” - which can inform high-level strategy questions like “Should we generally be doing more [...] --- Outline: (00:32) Part 1: Taking ethics seriously; (04:26) Incentive shifts and moral progress; (05:07) What is incentivized by society?; (07:08) Heroic Responsibility; (11:30) Excerpts from Strangers drowning; (14:37) Opening our eyes to what is unbearable; (18:07) Increasing effectiveness vs. increasing altruism; (20:20) Cognitive dissonance; (21:27) Paragons of moral courage; (23:15) The monk who set himself on fire to protect Buddhism, and didn't flinch an inch; (27:46) What do I most deeply want to honour in this life?; (29:43) Moral Courage and defending EA; (31:55) Acknowledging opportunity cost and grappling with guilt; (33:33) Part 2: Enjoying the process; (33:38) Celebrating what's really beautiful - what our hearts care about; (42:08) Enjoying effective altruism; (44:43) Training our minds to cultivate the qualities we endorse; (46:54) Meditation isn't a silver bullet; (52:35) The timeless words of MLK --- First published: October 4th, 2025 Source: https://forum.effectivealtruism.org/posts/gWyvAQztk75xQvRxD/taking-ethics-seriously-and-enjoying-the-process --- Narrated by TYPE III AUDIO.

TL;DR - AIM's applicants skew towards global health & development. We've recommended four new animal welfare charities and have the capacity to launch all four, but expect to struggle to find the talent to do so. If you've considered moving into animal welfare work, applying to Charity Entrepreneurship to launch a new charity in the space could be of huge counterfactual value. Part 1: Why you should launch an animal welfare charity. Our existing animal charities have had a lot of impact—improving the lives of over 1 billion animals worldwide: from Shrimp Welfare Project securing corporate commitments globally and featuring on The Daily Show, to FarmKind's recent success coordinating a $2 million fundraiser for the animal movement on the Dwarkesh podcast, not to mention the progress of the 40-person army at the Fish Welfare Initiative, Scale Welfare's direct hands-on work at fish farms, and Animal Policy [...] --- Outline: (00:37) Part 1: Why you should launch an animal welfare charity; (02:07) A few notes on counterfactual founder value; (05:57) Part 2 - The Charity Entrepreneurship Program & Our Latest Animal Welfare Ideas; (06:04) What is the Charity Entrepreneurship Incubation Program?; (06:47) Our recommended animal welfare ideas for 2026; (07:10) 1. Driving supermarket commitments to shift diets away from meat; (07:58) 2. Securing scale-up funding for the alternative protein industry; (08:51) 3. Cage-free farming in the Middle East; (09:30) 4. Preventing painful injuries in laying hens; (10:02) Applications close on October 5th: Apply here. --- First published: September 29th, 2025 Source: https://forum.effectivealtruism.org/posts/aeky2EWd32bjjPJqf/charity-entrepreneurship-is-bottlenecked-by-a-lack-of-great --- Narrated by TYPE III AUDIO.

Summary: Consumers rejected genetically modified crops, and I expect they will do the same for cultivated meat. The meat lobby will fight to discredit the new technology, and as consumers are already primed to believe it's unnatural, it won't be difficult to persuade them. When I hear people talk about cultivated meat (i.e. lab-grown meat) and how it will replace traditional animal agriculture, I find it depressingly reminiscent of the techno-optimists of the 1980s and ‘90s speculating about how genetic modification would solve all our food problems. The optimism of the time was understandable: in 1994 the first GMO product was introduced to supermarkets, and the technology promised incredible rewards. GMOs were predicted to bring about the end of world hunger, all while requiring less water, pesticides, and land. Today, thirty years later, GM foods in the EU are so regulated that they are [...] --- Outline: (01:56) Why did GMOs fail to be widely adopted?; (02:44) A Bad First Impression; (05:54) Unpopular Corporate Concentration; (07:22) Cultivated Meat IS GMO; (08:45) What timeline are we in?; (10:24) What can be done to prevent cultivated meat from becoming irrelevant?; (10:30) Expect incredible opposition; (11:46) Be ready to tell a clear story about the benefits.; (13:17) A proactive PR Effort; (15:01) First impressions matter; (17:16) Labeling; (19:35) Be ready to discuss concerns about unnaturalness; (21:56) Limitations of the comparison; (23:07) Conclusion --- First published: September 22nd, 2025 Source: https://forum.effectivealtruism.org/posts/rMQA9w7ZM7ioZpaN6/cultivated-meat-a-wakeup-call-for-optimists --- Narrated by TYPE III AUDIO.

Note: I am the web programme director at 80,000 Hours and the view expressed here currently helps shape the web team's strategy. However, this shouldn't be taken to be expressing something on behalf of 80k as a whole, and writing and posting this memo was not undertaken as an 80k project. 80,000 Hours, where I work, has made helping people make AI go well[1] its focus. As part of this work, I think my team should continue to: (1) talk about / teach ideas and thinking styles that have historically been central to effective altruism (e.g. via our career guide, cause analysis content, and podcasts), and (2) encourage people to get involved in the EA community, explicitly and via linking to content. I wrote this memo for the MCF (Meta Coordination Forum), because I wasn't sure this was intuitive to others. I think talking about EA ideas and encouraging people to get [...] --- Outline: (01:21) 1. The effort to make AGI go well needs people who are flexible and equipped to make their own good decisions; (02:10) Counterargument: Agendas are starting to take shape, so this is less true than it used to be.; (02:43) 2. Making AGI go well calls for a movement that thinks in explicitly moral terms; (03:59) Counterargument: movements can be morally good without being explicitly moral, and being morally good is what's important.; (04:41) 3. EA is (A) at least somewhat able to equip people to flexibly make good decisions, (B) explicitly morally focused.; (04:52) (A) EA is at least somewhat able to equip people to flexibly make good decisions; (06:04) (B) EA is explicitly morally focused; (06:49) Counterargument: A different flexible & explicitly moral movement could be better for trying to make AGI go well.; (07:49) Appendix: What are the relevant alternatives?; (12:13) Appendix 2: anon notes from others --- First published: September 25th, 2025 Source: https://forum.effectivealtruism.org/posts/oPue7R3outxZaTXzp/why-i-think-capacity-building-to-make-agi-go-well-should --- Narrated by TYPE III AUDIO.

Intro and summary: “How many chickens spared from cages is worth not being with my parents as they get older?!” - Me, exasperated (September 18, 2021). This post is about something I haven't seen discussed on the EA Forum but that I often talk about with my friends in their mid-30s. It's about something I wish I'd understood better ten years ago: if you are ~25 and debating whether to move to an EA Hub, you are probably underestimating how much the calculus will change when you're ~35, largely related to having kids and aging parents. Since this is underappreciated, moving to an EA Hub, and building a life there, can lead to tougher decisions later that can sneak up on you. If you're living in an EA hub, or thinking about moving, this post explores reasons you might want to head home as you get older, different ways [...] --- Outline: (00:11) Intro and summary; (01:49) Why move to an EA Hub in the first place?; (02:57) How things change as you get older; (05:33) Why YOU might be more likely to feel the pull to head home; (06:49) How did I decide? How should you decide?; (08:38) Consolation prize - moving to a Hub isn't all or nothing; (09:38) Conclusion --- First published: September 23rd, 2025 Source: https://forum.effectivealtruism.org/posts/ZEWE6K74dmzv7kXHP/moving-to-a-hub-getting-older-and-heading-home --- Narrated by TYPE III AUDIO.

It's been several years since I was an EA student group organiser, so please forgive any part of this post which feels out of touch (& correct me in comments!) Wow, student group organising is hard. A few structural things that make it hard to be an organiser: You maybe haven't had a job before, or have only had kind of informal jobs. So, you might not have learned a lot of stuff about how to accomplish things at work. You're probably trying to do a degree at the same time, which is hard enough on its own! You don't have the structure and benefits provided by a regular 9-5 job at an organisation, like a manager, an office, operational support, people you can ask for help & advice, and a network. You have, at most, a year or so to skill up before you might be responsible [...] --- First published: September 12th, 2025 Source: https://forum.effectivealtruism.org/posts/zMBFSesYeyfDp6Fj4/student-group-organising-is-hard-and-important --- Narrated by TYPE III AUDIO.

Hi, have you been rejected from all the 80K-listed EA jobs you've applied for? It sucks, right? Welcome to the club. What might be comforting is that you (and I) are not alone. EA job listings are extremely competitive, and in the classic EA career path, you just get rejected over and over. Many others have written about their rejection experience, here, here, and here. Even if it is quite normal for very smart, hardworking, proactive, and highly motivated EAs to get rejected from high-impact positions, it still sucks. It sucks because we sincerely want to make the world a radically better place. We've read everything, planned accordingly, gone through fellowships, rejected other options, and worked very hard just to get the following message: "Thank you for your interest in [Insert EA Org Name]... we have decided to move forward with other candidates for this role... we're unfortunately [...] --- Outline: (06:13) A note on AI timelines; (08:51) Time to go forward --- First published: September 5th, 2025 Source: https://forum.effectivealtruism.org/posts/pzbtpZvL2bYfssdkr/rejected-from-all-the-ea-jobs-you-applied-for-what-to-do-now --- Narrated by TYPE III AUDIO.

Early work on “GiveWell for AI Safety”. Intro: EA was founded on the principle of cost-effectiveness. We should fund projects that do more with less, and more generally, spend resources as efficiently as possible. And yet, while much interest, funding, and resources in EA have shifted towards AI safety, it's rare to see any cost-effectiveness calculations. The focus on AI safety is based on vague philosophical arguments that the future could be very large and valuable, and thus whatever is done towards this end is worth orders of magnitude more than most short-term effects. Even if AI safety is the most important problem, you should still strive to optimize how resources are spent to achieve maximum impact, since there are limited resources. Global health organizations and animal welfare organizations work hard to measure cost-effectiveness, evaluate charities, make sure effects are counterfactual, run RCTs, estimate moral weights, scope out interventions [...] --- Outline: (00:11) Early work on GiveWell for AI Safety; (00:16) Intro; (02:43) Step 1: Gathering data; (03:00) Viewer minutes; (03:35) Costs and revenue; (04:49) Results; (05:08) Step 2: Quality-adjusting; (05:40) Quality of Audience (Qa); (06:58) Fidelity of Message (Qf); (08:05) Alignment of Message (Qm); (08:53) Results; (09:37) Observations; (12:37) How to help; (13:36) Appendix: Examples of Data Collection; (13:42) Rob Miles; (14:18) AI Species (Drew Spartz); (14:56) Rational Animations; (15:32) AI in Context; (15:52) Cognitive Revolution --- First published: September 12th, 2025 Source: https://forum.effectivealtruism.org/posts/SBsGCwkoAemPawfJz/how-cost-effective-are-ai-safety-youtubers --- Narrated by TYPE III AUDIO.
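A minimal sketch of the quality-adjustment arithmetic implied by the outline above, assuming the three quality factors (audience Qa, fidelity Qf, alignment Qm) simply multiply raw viewer-minutes; the example channel and every number here are hypothetical placeholders, not figures from the post:

```python
def quality_adjusted_minutes_per_dollar(viewer_minutes, cost_usd, qa, qf, qm):
    """Raw viewer-minutes discounted by audience quality (qa), message
    fidelity (qf), and message alignment (qm), per dollar spent."""
    return viewer_minutes * qa * qf * qm / cost_usd

# Hypothetical example channel, not data from the post.
score = quality_adjusted_minutes_per_dollar(
    viewer_minutes=5_000_000, cost_usd=100_000, qa=0.5, qf=0.7, qm=0.9
)
print(score)  # -> 15.75 quality-adjusted viewer-minutes per dollar
```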

There's a huge amount of energy spent on how to get the most QALYs/$. And a good amount of energy spent on how to increase total $. And you might think that across those efforts, we are succeeding in maximizing total QALYs. I think a third avenue is underinvestigated: marginally improving the effectiveness of ineffective capital. That is to say, improving outcomes, only somewhat, for the pool of money that is not at all EA-aligned. This cash is not being spent optimally, and likely never will be. But the sheer volume could make up for the lack of efficacy. Say you have the option to work for the foundation of one of two donors: Donor A only has an annual giving budget of $100,000, but will do with that money whatever you suggest. If you say “bed nets”, he says “how many?”. Donor B has a much larger [...] --- Outline: (01:34) Most money is not EA money; (04:32) How much money is there?; (05:49) Effective Everything? --- First published: September 8th, 2025 Source: https://forum.effectivealtruism.org/posts/o5LBbv9bfNjKxFeHm/marginally-more-effective-altruism --- Narrated by TYPE III AUDIO.
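A minimal worked comparison of the two-donor thought experiment above. Donor B's budget and the effectiveness multipliers are hypothetical assumptions, since the excerpt cuts off before giving them:

```python
# All numbers are illustrative assumptions, not figures from the post.
# "value units" = good done per dollar at the effectiveness of the best options.

donor_a_budget = 100_000                       # fully directed by you
donor_a_value = donor_a_budget * 1.0           # spent at full effectiveness

donor_b_budget = 500_000_000                   # hypothetical large foundation
baseline_effectiveness = 0.01                  # its default giving vs. the best options
portfolio_share_you_influence = 0.02           # you nudge 2% of the portfolio...
improved_effectiveness = 0.10                  # ...from 1% to 10% of top effectiveness

donor_b_value = (donor_b_budget * portfolio_share_you_influence
                 * (improved_effectiveness - baseline_effectiveness))

print(f"Donor A: {donor_a_value:,.0f} value units")  # 100,000
print(f"Donor B: {donor_b_value:,.0f} value units")  # 900,000
```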

Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. How I decided what to say — and what not to: I'm excited to share my TED talk. Here I want to share the story of how the talk came to be, and the three biggest decisions I struggled with in drafting it. The backstory: Last fall, I posted on X about Trump's new Secretary of Agriculture, Brooke Rollins, vowing to undo state bans on the sale of pork from crated pigs. I included an image of a pig in a crate. Liv Boeree, a poker champion and past TED speaker, saw that post and was haunted by it. She told me that she couldn't get the image of the crated pig out of her [...] --- First published: September 5th, 2025 Source: https://forum.effectivealtruism.org/posts/XjQr52eDkBPLrLHB3/my-ted-talk --- Narrated by TYPE III AUDIO.

TL;DR: If a (meta) org had a meaningful impact on you (in line with what they hope to achieve), you should probably tell them. It is essential for their impact reporting, which is essential for them to continue operating. You are likely underestimating just how valuable your story is to them. It could be thousands of dollars worth. Thanks to Toby Tremlett, Lauren Mee and Sofia Balderson for reviewing a draft version of this post. All mistakes are my own. 1. Many organisations shaped my career — yet I usually only shared my story when prompted. In reflecting on my career journey, I was reminded of all the organizations who led me to where I am. I believe I reported their counterfactual contribution back to them, but this was not usually by my own doing. In two cases, I was personally reached out to - in one case, I [...] --- First published: August 8th, 2025 Source: https://forum.effectivealtruism.org/posts/3v6kghxMttEhbK3dT/consider-thanking-whoever-helped-you --- Narrated by TYPE III AUDIO.

Context: I'm a senior fellow at Conservation X Labs (CXL), and I'm seeking support as I attempt to establish a program on humane rodent fertility control in partnership with the Wild Animal Initiative (WAI) and the Botstiber Institute for Wildlife Fertility Control (BIWFC). CXL is a biodiversity conservation organization working in sustainable technologies, not an animal welfare organization. However, CXL leadership is interested in simultaneously promoting biodiversity conservation and animal welfare, and they are excited about the possibility of advancing applied research that makes it possible to ethically limit rodent populations to protect biodiversity. I think this represents the wild animal welfare community's first realistic opportunity to bring conservation organizations into wild animal welfare work while securing substantial non-EA funding for welfare-improving interventions. Background: Rodenticides cause immense suffering to (likely) hundreds of millions of rats and mice annually through anticoagulation-induced death over several days, while causing significant non-target [...] --- Outline: (01:08) Background; (02:20) Why this approach?; (03:49) Why CXL?; (06:03) Why now, and why me?; (06:59) Budget; (07:52) Next steps --- First published: August 27th, 2025 Source: https://forum.effectivealtruism.org/posts/EcBjr4Q2AtoTLcKXp/high-impact-and-urgent-funding-opportunity-rodent-fertility --- Narrated by TYPE III AUDIO.

I told someone recently I would respect them if they only worked 40 hours a week, instead of their current 50-60. What I really meant was stronger than that. I respect people who do the most impactful work they can — whether they work 70 hours a week because they can, 30 hours so they can be home with their kid, or 15 hours because of illness or burnout. I admire those who go above and beyond. But I don't expect that of everyone. Working long hours isn't required to earn my respect, nor do I think it should be the standard that we hold as a community. I want it to be okay to say "that doesn't work for me". It feels like donations: I admire people who give away 50%, but I don't expect it. I still deeply respect someone who gives 10% to the [...] --- First published: August 26th, 2025 Source: https://forum.effectivealtruism.org/posts/qFsqawmgRjxXkA7eF/you-re-enough --- Narrated by TYPE III AUDIO.

How to prevent infighting, mitigate status races, and keep your people focused. Cross-posted from my Substack. Organizational culture changes rapidly at scale. When you add new people to an org, they'll bring in their own priors about how to operate, how to communicate, and what sort of behavior is looked up to. Despite rapid changes, in this post I explain how you can implement anti-fragile cultural principles—principles that help your team fix their own problems, often arising from growth and scale, and help the org continue to do what made it successful in the first place. This is based partially on my experience at Wave, which grew to 2000+ people, but also on tons of other reading (top recommendations: Peopleware by DeMarco and Lister, Swarmwise by Rick Falkvinge, High Growth Handbook by Elad Gil, The Secret of Our Success by Henrich, Antifragile by Nassim Nicholas Taleb, as well as Brian [...] --- Outline: (01:13) Common Problems; (05:00) Write down your culture; (06:25) That said, you don't have to write everything down; (08:37) Anti-fragile values I recommend; (09:02) Mission First; (10:51) Focus; (11:32) Fire Fast; (12:58) Feedback for everything; (13:50) Mutual Trust; (15:48) Work sustainably and avoid burnout; (17:42) Write only what's new & helpful --- First published: August 21st, 2025 Source: https://forum.effectivealtruism.org/posts/mLonxtAiuvvkjXiwq/the-anti-fragile-culture --- Narrated by TYPE III AUDIO.

This is a link post. There are some moments of your life when the reality of suffering really hits home. Visiting desperately poor parts of the world for the first time. Discovering what factory farming actually looks like after a childhood surrounded by relatively idyllic rural farming. Realising too late that you shouldn't have clicked on that video of someone experiencing a cluster headache. Or, more unexpectedly, having a baby. One of 10^20 Birth Stories This Year: With my relaxed and glowing pregnant wife in her 34th week, I expect things to go smoothly. There have been a few warning signs: some slightly anomalous results in the early tests, the baby in breech position, and some bleeding. But everything still seems to be going relatively well. Then, suddenly, while walking on an idyllic French seafront, she says: "I think my waters have broken". "Really? It's probably nothing, let's [...] --- Outline: (00:39) One of 10^20 Birth Stories This Year; (03:50) The Beginning of Experience; (05:43) Is This Almost Everything?; (08:22) Schrödinger's baby; (13:03) On Feeling the Right Things; (15:04) Into The Fifth Trimester; (16:50) The Most Beautiful Case For Net-Negativity --- First published: August 21st, 2025 Source: https://forum.effectivealtruism.org/posts/6PuBTer69ZJvTDNQk/most-of-the-world-is-an-adorably-suffering-debatably-1 Linkpost URL: https://torchestogether.substack.com/p/most-of-the-world-is-an-adorably --- Narrated by TYPE III AUDIO.

My new book, Altruismo racional, is now on presale. It is my attempt at presenting a compelling case for a particular strand of "classical EA"[1]: one that emphasizes caring deeply about global health and poverty, a rational approach to giving, the importance of cost-effectiveness, and the [...]

Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. Why ending the worst abuses of factory farming is an issue ripe for moral reform: I recently joined Dwarkesh Patel's podcast to discuss factory farming. I hope you'll give it a listen — and consider supporting his fundraiser for FarmKind's Impact Fund. (Dwarkesh is matching all donations up to $250K; use the code “dwarkesh”.) We discuss two contradictory views about factory farming that produce the same conclusion: that its end is either inevitable or impossible. Some techno-optimists assume factory farming will vanish in the wake of AGI. Some pessimists see reforming it as a hopeless cause. Both camps arrive at the same conclusion: fatalism. If factory farming is destined to end, or persist, then what's [...] --- First published: August 8th, 2025 Source: https://forum.effectivealtruism.org/posts/HiGmRwq4YiDzggRLH/not-inevitable-not-impossible --- Narrated by TYPE III AUDIO.

I'm a long-time GiveWell donor and an ethical vegan. In a recent GiveWell podcast on livelihoods programs, providing animals as “productive assets” was mentioned as a possible program type. After reaching out to GiveWell directly to voice my objection, I was informed that because GiveWell's moral weights currently don't include nonhuman animals, animal-based aid is not categorically off the table if it surpasses their cost-effectiveness bar. Older posts on the GiveWell website similarly do not rule out animal donations from an ethical lens. In response to some of the rationale GiveWell shared with me, I also want to proactively address a core ethical distinction: Animal-aid programs involve certain, programmatic harm to animals (breeding, confinement, separation of families, slaughter). Human-health programs like malaria prevention have, at most, indirect and uncertain effects on animal consumption (by saving human lives), which can change over time (e.g., cultural shifts, plant-based/cultivated options). Constructive [...] --- First published: August 14th, 2025 Source: https://forum.effectivealtruism.org/posts/YnL6prYQbaLz22mxe/psa-for-vegan-donors-givewell-not-ruling-out-animal-based --- Narrated by TYPE III AUDIO.

Giving What We Can has reached 10,000 [...]

This is a link post. This is a personal essay about my failed attempt to convince effective altruists to become socialists. I started as a convinced socialist who thought EA ignored the 'root causes' of poverty by focusing on charity instead of structural change. After I studied sociology and economics to build a rigorous case for socialism, the project completely backfired as I realized my political beliefs were largely psychological coping mechanisms. Here are the key points: (1) Understanding the "root cause" of a problem doesn't necessarily lead to better solutions - even if capitalism causes poverty, understanding the "dynamics of capitalism" won't necessarily help you solve it. (2) Abstract sociological theories are mostly obscurantist bullshit - academic sociology suffers from either unrealistic mathematical models or vague, unfalsifiable claims that don't help you understand or change the world. (3) The world is better understood as misaligned incentives rather than coordinated oppression - most social [...] --- First published: August 10th, 2025 Source: https://forum.effectivealtruism.org/posts/AcPw55oF3reBiW4FX/of-marx-and-moloch-how-my-attempt-to-convince-effective Linkpost URL: https://honestsignals.substack.com/p/of-marx-and-moloch-or-my-misguided --- Narrated by TYPE III AUDIO.

Today, Forethought and I are releasing an essay series called Better Futures, here.[1] It's been something like eight years in the making, so I'm pretty happy it's finally out! It asks: when looking to the future, should we focus on surviving, or on flourishing? In practice at least, future-oriented altruists tend to focus on ensuring we survive (or are not permanently disempowered by some valueless AIs). But maybe we should focus on future flourishing, instead. Why? Well, even if we survive, we probably just get a future that's a small fraction as good as it could have been. We could, instead, try to help guide society to be on track to a truly wonderful future. That is, I think there's more at stake when it comes to flourishing than when it comes to survival. So maybe that should be our main focus. The whole essay series [...] --- First published: August 4th, 2025 Source: https://forum.effectivealtruism.org/posts/mzT2ZQGNce8AywAx3/should-we-aim-for-flourishing-over-mere-survival-the-better --- Narrated by TYPE III AUDIO.

This is a crosspost written by Andy Masley, not me. I found it really interesting and wanted to see what EAs thought of his arguments. This post was inspired by similar posts by Tyler Cowen and Fergus McCullough. My argument is that while most drinkers are unlikely to be harmed by alcohol, alcohol is drastically harming so many people that we should denormalize alcohol and avoid funding the alcohol industry, and the best way to do that is to stop drinking. This post is not meant to be an objective cost-benefit analysis of alcohol. I may be missing hard-to-measure benefits of alcohol for individuals and societies. My goal here is to highlight specific blindspots a lot of people have to the negative impacts of alcohol, which personally convinced me to stop drinking, but I do not want to imply that this is a fully objective analysis. It [...] --- Outline: (02:31) Alcohol is a much bigger problem than you may think; (06:59) Why you should stop drinking even if alcohol will not harm you personally; (14:41) Conclusion --- First published: August 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/dnbpKkjnw3v6JkaDa/alcohol-is-so-bad-for-society-that-you-should-probably-stop --- Narrated by TYPE III AUDIO.

This morning I was looking into Switzerland's new animal welfare labelling law. I was going through the list of abuses that are now required to be documented on labels, and one of them made me do a double-take: "Frogs: Leg removal without anaesthesia." This confused me. Why are we talking about anaesthesia? Shouldn't the frogs be dead before having their legs removed? It turns out the answer is no; standard industry practice is to cut their legs off while they are fully conscious. They remain alive and responsive for up to 15 minutes afterward. As far as I can tell, there are zero welfare regulations in any major producing country. The scientific evidence for frog sentience is robust - they have nociceptors, opioid receptors, demonstrate pain avoidance learning, and show cognitive abilities including spatial mapping and rule-based learning. It's hard to find data on the scale of [...] --- First published: August 5th, 2025 Source: https://forum.effectivealtruism.org/posts/wCcWyqyvYgF3ozNnS/frog-welfare --- Narrated by TYPE III AUDIO.

Confidence Level: I've been an organizer at UChicago for over a year now with my co-organizer, Avik. I also started the UChicago Rationality Group, co-organized a 50-person Midwest EA Retreat, and have spoken to many EA organizers from other universities. A lot of this post is based on vibes and conversations with other organizers, so while it's grounded in experience, some parts are more speculative than others. I'll try to flag the more speculative points when I can (the * indicates points that I'm less certain about). I think it's really important to make sure that EA principles persist in the future. To give one framing for why I believe this: if you think EA is likely to significantly reduce the chances of existential risks, you should think that losing EA is itself a factor significantly contributing to existential risks. Therefore, I also think one of the [...] ---Outline:(01:12) Impact Through Force Multiplication(04:19) Individual Benefits(04:23) Personal Impact(06:27) Professional(07:34) Social(08:10) Counters--- First published: July 29th, 2025 Source: https://forum.effectivealtruism.org/posts/3aPCKsHdJqwKo2Dmt/why-you-should-become-a-university-group-organizer --- Narrated by TYPE III AUDIO.

And other ways to make event content more valuable. I organise and attend a lot of conferences, so the below is grounded in that experience, but I could be missing some angles here. When you imagine a session at an event going wrong, you're probably thinking of the hapless, unlucky speaker. Maybe their slides broke, they forgot their lines, or they tripped on a cable and took the whole stage backdrop down. This happens sometimes, but event organizers usually remember to invest the effort required to prevent this from happening (e.g., checking that the slides work, not leaving cables lying on the stage). But there's another big way that sessions go wrong that is sorely neglected: wasting everyone's time, often without people noticing. Let's give talks a break. They often suck, but event organizers are mostly doing the right things to make them [...] ---Outline:(01:11) Panels(03:40) The group brainstorm(04:27) Your session attendees do not have the answers.(05:26) Ideas are easy. Bandwidth is low.(06:28) The ideas are not worth the time cost.(07:50) Choosing more valuable content: fidelity per person-minute--- First published: July 28th, 2025 Source: https://forum.effectivealtruism.org/posts/LaMDxRqEo8sZnoBXf/please-no-more-group-brainstorming --- Narrated by TYPE III AUDIO.

This is Part 1 of a multi-part series, shared as part of Career Conversations Week. The views expressed here are my own and don't reflect those of my employer. TL;DR: Building an EA-aligned career starting from an LMIC comes with specific challenges that shaped how I think about career planning, especially around constraints: Everyone has their own "passport"—some structural limitation that affects their career more than their abilities. Reframing these from unfair barriers to data about my specific career path has helped me a lot. When pursuing an ideal career path, it's easy to fixate on what should be possible rather than what actually is. But those idealized paths often require circumstances you don't have—whether personal (e.g., visa status, financial safety net) or external (e.g., your dream org hiring, or a stable funding landscape). It might be helpful to view the paths that work within your actual constraints [...] ---Outline:(00:21) TL;DR:(01:27) Introduction(02:25) My EA journey so far(03:18) Sometimes my passport mattered more than my competencies, and that's okay(04:43) Everyone has their own passport(06:19) Realistic opportunities often outweigh idealistic ones(08:04) Importance of a fail-safe(08:37) Playing the long game(09:44) Adversity quotient seems underrated(10:13) Building resilience through adversity(11:22) Pivot into recruiting(12:11) Building AQ over time(14:02) Why AQ matters in EA-aligned work(15:01) Closing thoughts--- First published: July 28th, 2025 Source: https://forum.effectivealtruism.org/posts/3Hh839MaiWCPzyB3M/building-an-ea-aligned-career-from-an-lmic --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

I am writing this to reflect on my experience interning with the Fish Welfare Initiative, and to provide my thoughts on why more students looking to build EA experience should do something similar. Back in October, I cold-emailed the Fish Welfare Initiative (FWI) with my resume and a short cover letter expressing interest in an unpaid in-person internship in the summer of 2025. I figured I had a better chance of getting an internship by building my own door than competing with hundreds of others to squeeze through an existing door, and the opportunity to travel to India carried strong appeal. Haven, the Executive Director of FWI, set up a call with me that mostly consisted of him listing all the challenges of living in rural India — 110° F temperatures, electricity outages, lack of entertainment… When I didn't seem deterred, he offered me an internship. I stayed with FWI for one month. By rotating through the different teams, I completed a wide range of tasks: made ~20 visits to fish farms; wrote a recommendation on next steps for FWI's stunning project; conducted data analysis in Python on the efficacy of the Alliance for Responsible [...] --- First published: July 22nd, 2025 Source: https://forum.effectivealtruism.org/posts/SmiXeQcnMD7qmAfgS/why-you-should-build-your-own-ea-internship-abroad --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

This is a link post. Tl;dr: In this post, I introduce a concept I call surface area for serendipity — the informal, behind-the-scenes work that makes it easier for others to notice, trust, and collaborate with you. In a job market where some EA and animal advocacy roles attract over 1,300 applicants, relying on traditional applications alone is unlikely to land you a role. This post offers a tactical roadmap to the hidden layer of hiring: small, often unpaid but high-leverage actions that build visibility and trust before a job ever opens. The general principle is simple: show up consistently where your future collaborators or employers hang out — and let your strengths be visible. Done well, this increases your chances of being invited, remembered, or hired — long before you ever apply. Acknowledgements: Thanks to Kevin Xia for your valuable feedback and suggestions, and Toby Tremlett for offering general [...] ---Outline:(00:15) Tl;dr:(01:19) Why I Wrote This(02:30) When Applying Feels Like a Lottery(04:14) What Surface Area for Serendipity Means(07:21) What It Looks Like (with Examples)(09:02) Case Study: Kevin's Path to Becoming Hive's Managing Director(10:27) Common Pitfalls to Avoid(12:00) Share Your JourneyThe original text contained 4 footnotes which were omitted from this narration. --- First published: July 1st, 2025 Source: https://forum.effectivealtruism.org/posts/5iqTPsrGtz8EYi9r9/how-unofficial-work-gets-you-hired-building-your-surface Linkpost URL:https://notingthemargin.substack.com/p/how-unofficial-work-gets-you-hired --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Since January I've applied to ~25 EA-aligned roles. Every listing attracted hundreds of candidates (one passed 1,200). It seems we already have a very deep bench of motivated, values-aligned people, yet orgs still run long, resource-heavy hiring rounds. That raises three questions: Cost-effectiveness: Are months-long searches and bespoke work-tests still worth the staff time and applicant burnout when shortlist-first approaches might fill 80% of roles faster with decent candidates? Sure, there can be differences in talent, but the question ought to be... how tangible is this difference and does it justify the cost of hiring? Coordination: Why aren't orgs leaning harder on shared talent pools (e.g. HIP's database) to bypass public rounds? HIP is currently running an open search. Messaging: From the outside, repeated calls to 'consider an impactful EA career' could start to look pyramid-schemey if the movement can't absorb the talent [...] --- First published: July 14th, 2025 Source: https://forum.effectivealtruism.org/posts/ufjgCrtxhrEwxkdCH/is-ea-still-talent-constrained --- Narrated by TYPE III AUDIO.