Effective Altruism Forum Podcast


I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.

Garrett Baker


    • Latest episode: Oct 30, 2025
    • New episodes: weekdays
    • Average duration: 18m
    • Episodes: 662



    Latest episodes from Effective Altruism Forum Podcast

    “Why Many EAs May Have More Impact Outside of Nonprofits in Animal Welfare” by lauren_mee

    Oct 30, 2025 · 15:19


    “Framing EA: ‘Doing Good Better' Did Worse” by Rethink Priorities, David_Moss

    Oct 29, 2025 · 9:35


    Summary: As part of our ongoing work to study how to best frame EA, we experimentally tested different phrases and sentences that CEA were considering using on effectivealtruism.org.

    Doing Good Better taglines: We observed a consistent pattern where taglines that included the phrase ‘do[ing] good better' received less support from respondents and inspired less interest in learning about EA. We replicated these results in a second experiment, where we confirmed that taglines referring to “do[ing] good better” performed less well than those referring to “do[ing] the most good”.

    Nouns and sentences:
    • Nouns: The effect of using different nouns to refer to EA was small, but referring to EA as a ‘philosophy' or ‘movement' inspired the most curiosity compared to options including ‘project' and ‘research field'.
    • Sentences: “Find the most effective ways to do good with your time, money, and career” and “Effective altruism asks the question of how we [...]

    Outline:
    (00:12) Summary
    (01:23) Method
    (02:18) Taglines (Study 1)
    (03:40) Doing Good Better replication (Study 2)
    (05:23) Sentences (Study 1)
    (06:45) Nouns (Study 1)
    (07:41) Effectiveness focus
    (07:55) Conclusion
    (08:56) Acknowledgments

    First published: October 27th, 2025
    Source: https://forum.effectivealtruism.org/posts/Y6zMpdwkkAQ8rF56w/framing-ea-doing-good-better-did-worse
    Narrated by TYPE III AUDIO.

    [Linkpost] “The Charity Trap: Brain Misallocation” by DavidNash

    Oct 29, 2025 · 10:22


    This is a link post. In Ugandan villages where non-governmental organisations (NGOs) hired away the existing government health worker, infant mortality went up. This happened in 39%[1] of villages that already had a government worker. The NGO arrived with funding and good intentions, but the likelihood that villagers received care from any health worker declined by ~23%.

    Brain Misallocation: “Brain drain”, the movement of people from poorer countries to wealthier ones, has been extensively discussed for decades[2]. But there's a different dynamic that gets far less attention: “brain misallocation”. In many low- and middle-income countries (LMICs), the brightest talents are being incentivised towards organisations that don't utilise their potential for national development. They're learning how to get grants from multilateral alphabet organisations rather than build businesses or make good policy. This isn't about talent leaving the country. It's about talent being misdirected and mistrained within it.

    Examples: Nick Laing [...]

    Outline:
    (00:36) Brain Misallocation
    (01:16) Examples
    (05:37) The Incentive Trap
    (07:48) When Help Becomes Harm
    (08:48) Conclusion

    First published: October 23rd, 2025
    Source: https://forum.effectivealtruism.org/posts/6rmdyddEateJFWb4L/the-charity-trap-brain-misallocation
    Linkpost URL: https://gdea.substack.com/p/the-charity-trap-brain-misallocation
    Narrated by TYPE III AUDIO.

    [Linkpost] “The Four Pillars: A Hypothesis for Countering Catastrophic Biological Risk” by ASB

    Oct 28, 2025 · 17:53


    This is a link post. Biological risks are more severe than has been widely appreciated. Recent discussions of mirror bacteria highlight an extreme scenario: a single organism that could infect and kill humans, plants, and animals, exhibit environmental persistence in soil or dust, and might be capable of spreading worldwide within several months. In the worst-case scenario, this could pose an existential risk to humanity, especially if the responses/countermeasures were inadequate. Less severe pandemic pathogens could still cause hundreds of millions (or billions) of casualties if they were engineered to cause harm.

    Preventing such catastrophes should be a top priority for humanity. However, if prevention fails, it would also be prudent to have a backup plan. One way of doing this would be to enumerate the types of pathogens that might be threatening (e.g. viruses, bacteria, fungi, etc), enumerate the subtypes (e.g. adenoviruses, coronaviruses, paramyxoviruses, etc), analyze the [...]

    Outline:
    (04:20) PPE
    (09:56) Biohardening
    (14:36) Detection
    (17:00) Expression of interest and acknowledgements

    The original text contained 34 footnotes which were omitted from this narration.

    First published: October 2nd, 2025
    Source: https://forum.effectivealtruism.org/posts/33t5jPzxEcFXLCPjq/the-four-pillars-a-hypothesis-for-countering-catastrophic
    Linkpost URL: https://defensesindepth.bio/the-four-pillars-a-hypothesis-for-countering-catastrophic-biological-risk/
    Narrated by TYPE III AUDIO.

    “Entertainment for EAs” by Toby Tremlett

    Oct 24, 2025 · 2:45


    I've used the phrase “entertainment for EAs” a bunch to describe a failure mode that I'm trying to avoid with my career. Maybe it'd be useful for other people working in meta-EA, so I'm sharing it here as a quick draft amnesty post.

    There's a motivational issue in meta-work where it's easy to start treating the existing EA community as stakeholders. The real stakeholders in my work (and meta-work in general) are the ultimate beneficiaries — the minds (animal, human, digital?) that could benefit from work I help to initiate. But those beneficiaries aren't present to me — they aren't my friends, they don't work in the same building as me. To keep your eyes on the real prize takes constant work. When that work slips, you could end up working on ‘entertainment for EAs', i.e. something which gets great feedback from EAs, but only hazily, if [...]

    First published: October 17th, 2025
    Source: https://forum.effectivealtruism.org/posts/AkSDhiPuvnRNbjXAf/entertainment-for-eas
    Narrated by TYPE III AUDIO.

    “Canva to donate $100M over 4 years to GiveDirectly” by MartinBerlin

    Oct 24, 2025 · 2:56


    All quotes are from their blog post "Why we chose to invest another $100 million in cash transfers"; highlights are my own:

    Today, we're announcing a new $100 million USD commitment over the next four years to expand our partnership with GiveDirectly and help empower an additional 185,000 people living in extreme poverty. We're also funding new research, and pilot variants, to further understand how we can maximize the impact of each dollar.

    This is on top of another $50 million USD they gave to GiveDirectly before:

    We started partnering with GiveDirectly in 2021. Since then, we've donated $50 million USD to support their work across Malawi, through direct cash transfers to those living in extreme poverty. We've already reached more than 85,000 people, helping to provide life changing resources and the dignity of choice.

    For context, the Cash for Poverty Relief program by GiveDirectly [...]

    Outline:
    (01:24) About their founding-to-give model
    (02:15) Other Engagement

    First published: October 22nd, 2025
    Source: https://forum.effectivealtruism.org/posts/ktFpWLkvRAAygbbtH/canva-to-donate-usd100m-over-4-years-to-givedirectly
    Narrated by TYPE III AUDIO.

    “My EA Senescence” by Michael_PJ

    Oct 21, 2025 · 5:59


    I have some claim to be an “old hand” EA:[1]
    • I was in the room when the creation of Giving What We Can was announced (although I vacillated about joining for quite a while)
    • I first went to EA Global in 2015
    • I worked on a not-very-successful EA project for a while

    But I have not really been much involved in the community since about 2020. The interesting thing about this is that my withdrawal from the community has nothing to do with disagreements, personal conflicts, or FTX. I still pretty much agree with most “orthodox EA” positions, and I think that both the idea of EA and the movement remain straightforwardly good and relevant. Hence why I describe the process as “senescence”: intellectually and philosophically I am still on board and I still donate, I just… don't particularly want to participate beyond that.

    Boredom: I won't sugar-coat [...]

    Outline:
    (01:00) Boredom
    (04:05) What do I have to offer?

    First published: October 19th, 2025
    Source: https://forum.effectivealtruism.org/posts/rJqQGD2z2DaupCbZE/my-ea-senescence
    Narrated by TYPE III AUDIO.

    “You should probably track your time (and it just got easier)” by Christoph Hartmann

    Oct 20, 2025 · 4:09


    TL;DR: EA is a community where time tracking is already very common, and yet most people I talk to don't track their time because:
    • It's too much work (when using Toggl, Clockify, ...)
    • It's not accurate enough (when using RescueTime, Rize, ...)
    I built https://donethat.ai, which solves both of these with AI, as part of AIM's Founding to Give program. It's live on Product Hunt today; please support it.

    You should probably track your time: I'd argue that for most people, your time is your most valuable resource.[1] Even though your day has 24 hours, eight of those are already used up for sleep, another eight probably for social life, gym, food prep and eating, life admin, and commute, leaving at most eight hours to have impact. Oliver Burkeman argues in his recent book Meditations for Mortals that eight is still too high: most high-impact work gets done in four hours [...]

    Outline:
    (00:11) TL;DR
    (00:40) You should probably track your time
    (02:21) It just got easier

    First published: October 14th, 2025
    Source: https://forum.effectivealtruism.org/posts/wt8gKaH9usKy3LQmK/you-should-probably-track-your-time-and-it-just-got-easier
    Narrated by TYPE III AUDIO.

    “Experts & markets think authoritarian capture of the US looks distinctly possible” by LintzA

    Oct 15, 2025 · 5:45


    The following is a quick collection of forecasting markets and opinions from experts which give some sense of how well-informed people are thinking about the state of US democracy. This isn't meant to be a rigorous proof that this is the case (DM me for that), just a collection which I hope will get people thinking about what's happening in the US now. Before looking at the forecasts you might first ask yourself:
    • What probability would I put on authoritarian capture?
    • At what probability of authoritarian capture would I think that more concern and effort is warranted?

    Forecasts[1]
    • The US won't be a democracy by 2030: 25% (Metaculus)
    • Will Trump 2.0 be the end of Democracy as we know it?: 48% (Manifold)
    • If Trump is elected, will the US still be a liberal democracy at the end of his term? (V-DEM): 61% [...]

    Outline:
    (00:45) Forecasts
    (01:50) Quotes from experts & commentators
    (03:20) Some relevant research

    First published: October 8th, 2025
    Source: https://forum.effectivealtruism.org/posts/eJNH2CikC4scTsqYs/experts-and-markets-think-authoritarian-capture-of-the-us
    Narrated by TYPE III AUDIO.

    “Your Sacrifice Portfolio Is Probably Terrible” by Midtermist12

    Oct 13, 2025 · 13:40


    or: Maximizing Good Within Your Personal Constraints

    Note: The specific numbers and examples below are approximations meant to illustrate the framework. Your actual calculations will vary based on your situation, values, and cause area. The goal isn't precision—it's to start thinking explicitly about impact per unit of sacrifice rather than assuming certain actions are inherently virtuous.

    You're at an EA meetup. Two people are discussing their impact:
    Alice: "I went vegan, buy only secondhand, bike everywhere, and donate 5% of my nonprofit salary to animal charities."
    Bob: "I work in finance, eat whatever, and donate 40% of my income to animal charities."
    Who gets more social approval? Alice. Who prevents more animal suffering? Bob—by orders of magnitude. Alice's choices improve welfare for hundreds of animal-years annually through diet change and her $2,500 donation. Bob's $80,000 donation improves tens of thousands of animal-years through corporate campaigns. Yet Alice is [...]

    Outline:
    (00:11) or Maximizing Good Within Your Personal Constraints
    (01:31) The Personal Constraint Framework
    (02:26) Return on Sacrifice (RoS): The Core Metric
    (03:05) Case Studies: Where Good Intentions Go Wrong
    (03:10) Career: The Counterfactual Question
    (04:32) Environmental Action: Personal vs. Systemic
    (05:13) Information and Influence
    (05:45) Truth vs. Reach
    (06:17) The Uncomfortable Truth About Offsets
    (07:43) When Personal Practice Actually Matters
    (08:22) Your Personal Impact Portfolio
    (09:38) The Reallocation Exercise
    (10:40) Addressing the Predictable Objections
    (11:41) The Call to Action
    (12:10) The Bottom Line

    First published: September 10th, 2025
    Source: https://forum.effectivealtruism.org/posts/u9WzAcyZkBhgWAew5/your-sacrifice-portfolio-is-probably-terrible
    Narrated by TYPE III AUDIO.

    “Effective altruism in the age of AGI” by William_MacAskill

    Oct 10, 2025 · 37:35


    This post is based on a memo I wrote for this year's Meta Coordination Forum. See also Arden Koehler's recent post, which hits a lot of similar notes.

    Summary: The EA movement stands at a crossroads. In light of AI's very rapid progress, and the rise of the AI safety movement, some people view EA as a legacy movement set to fade away; others think we should refocus much more on “classic” cause areas like global health and animal welfare. I argue for a third way: EA should embrace the mission of making the transition to a post-AGI society go well, significantly expanding our cause area focus beyond traditional AI safety. This means working on neglected areas like AI welfare, AI character, AI persuasion and epistemic disruption, human power concentration, space governance, and more (while continuing work on global health, animal welfare, AI safety, and biorisk). These additional [...]

    Outline:
    (00:20) Summary
    (02:38) Three possible futures for the EA movement
    (07:07) Reason #1: Neglected cause areas
    (10:49) Reason #2: EA is currently intellectually adrift
    (13:08) Reason #3: The benefits of EA mindset for AI safety and biorisk
    (14:53) This isn't particularly Will-idiosyncratic
    (15:57) Some related issues
    (16:10) Principles-first EA
    (17:30) Cultivating vs growing EA
    (21:27) PR mentality
    (24:48) What I'm not saying
    (28:31) What to do?
    (29:00) Local groups
    (31:26) Online
    (35:18) Conferences
    (36:05) Conclusion

    First published: October 10th, 2025
    Source: https://forum.effectivealtruism.org/posts/R8AAG4QBZi5puvogR/effective-altruism-in-the-age-of-agi
    Narrated by TYPE III AUDIO.

    “Taking ethics seriously, and enjoying the process” by kuhanj

    Oct 8, 2025 · 54:08


    Here's a talk I gave at an EA university group organizers' retreat recently, which I've been strongly encouraged to share on the forum. I'd like to make it clear I don't recommend or endorse everything discussed in this talk (one example in particular which hopefully will be self-evident), but do think serious shifts in how we engage with ethics and EA would be quite beneficial for the world.

    Part 1: Taking ethics seriously. To set context for this talk, I want to go through an Our World in Data style birds-eye view of how things are trending across key issues often discussed in EA. This is to help get better intuitions for questions like “How well will the future go by default?” and “Is the world on track to eventually solve the most pressing problems?” - which can inform high-level strategy questions like “Should we generally be doing more [...]

    Outline:
    (00:32) Part 1: Taking ethics seriously
    (04:26) Incentive shifts and moral progress
    (05:07) What is incentivized by society?
    (07:08) Heroic Responsibility
    (11:30) Excerpts from Strangers Drowning
    (14:37) Opening our eyes to what is unbearable
    (18:07) Increasing effectiveness vs. increasing altruism
    (20:20) Cognitive dissonance
    (21:27) Paragons of moral courage
    (23:15) The monk who set himself on fire to protect Buddhism, and didn't flinch an inch
    (27:46) What do I most deeply want to honour in this life?
    (29:43) Moral Courage and defending EA
    (31:55) Acknowledging opportunity cost and grappling with guilt
    (33:33) Part 2: Enjoying the process
    (33:38) Celebrating what's really beautiful - what our hearts care about
    (42:08) Enjoying effective altruism
    (44:43) Training our minds to cultivate the qualities we endorse
    (46:54) Meditation isn't a silver bullet
    (52:35) The timeless words of MLK

    First published: October 4th, 2025
    Source: https://forum.effectivealtruism.org/posts/gWyvAQztk75xQvRxD/taking-ethics-seriously-and-enjoying-the-process
    Narrated by TYPE III AUDIO.

    “Charity Entrepreneurship is bottlenecked by a lack of great animal founders” by Ben Williamson, Amalie Farestvedt

    Oct 1, 2025 · 10:24


    TL;DR - AIM's applicants skew towards global health & development. We've recommended four new animal welfare charities, have the capacity to launch all four, but expect to struggle to find the talent to do so. If you've considered moving into animal welfare work, applying to Charity Entrepreneurship to launch a new charity in the space could be of huge counterfactual value.

    Part 1: Why you should launch an animal welfare charity. Our existing animal charities have had a lot of impact, improving the lives of over 1 billion animals worldwide: from Shrimp Welfare Project securing corporate commitments globally and featuring on the Daily Show, to FarmKind's recent success coordinating a $2 million fundraiser for the animal movement on the Dwarkesh podcast, not to mention the progress of the 40-person army at the Fish Welfare Initiative, Scale Welfare's direct hands-on work at fish farms, and Animal Policy [...]

    Outline:
    (00:37) Part 1: Why you should launch an animal welfare charity
    (02:07) A few notes on counterfactual founder value
    (05:57) Part 2 - The Charity Entrepreneurship Program & Our Latest Animal Welfare Ideas
    (06:04) What is the Charity Entrepreneurship Incubation Program?
    (06:47) Our recommended animal welfare ideas for 2026
    (07:10) 1. Driving supermarket commitments to shift diets away from meat
    (07:58) 2. Securing scale-up funding for the alternative protein industry
    (08:51) 3. Cage-free farming in the Middle East
    (09:30) 4. Preventing painful injuries in laying hens
    (10:02) Applications close on October 5th: Apply here.

    First published: September 29th, 2025
    Source: https://forum.effectivealtruism.org/posts/aeky2EWd32bjjPJqf/charity-entrepreneurship-is-bottlenecked-by-a-lack-of-great
    Narrated by TYPE III AUDIO.

    “Cultivated Meat: A Wakeup Call for Optimists” by CianHamilton

    Oct 1, 2025 · 26:36


    Summary: Consumers rejected genetically modified crops, and I expect they will do the same for cultivated meat. The meat lobby will fight to discredit the new technology, and as consumers are already primed to believe it's unnatural, it won't be difficult to persuade them.

    When I hear people talk about cultivated meat (i.e. lab-grown meat) and how it will replace traditional animal agriculture, I find it depressingly reminiscent of the techno-optimists of the 1980s and ‘90s speculating about how genetic modification would solve all our food problems. The optimism of the time was understandable: in 1994 the first GMO product was introduced to supermarkets, and the benefits of the technology promised incredible rewards. GMOs were predicted to bring about the end of world hunger, all while requiring less water, pesticides, and land. Today, thirty years later, in the EU GM foods are so regulated that they are [...]

    Outline:
    (01:56) Why did GMOs fail to be widely adopted?
    (02:44) A Bad First Impression
    (05:54) Unpopular Corporate Concentration
    (07:22) Cultivated Meat IS GMO
    (08:45) What timeline are we in?
    (10:24) What can be done to prevent cultivated meat from becoming irrelevant?
    (10:30) Expect incredible opposition
    (11:46) Be ready to tell a clear story about the benefits
    (13:17) A proactive PR Effort
    (15:01) First impressions matter
    (17:16) Labeling
    (19:35) Be ready to discuss concerns about unnaturalness
    (21:56) Limitations of the comparison
    (23:07) Conclusion

    First published: September 22nd, 2025
    Source: https://forum.effectivealtruism.org/posts/rMQA9w7ZM7ioZpaN6/cultivated-meat-a-wakeup-call-for-optimists
    Narrated by TYPE III AUDIO.

    “Why I think capacity building to make AGI go well should include spreading EA-style ideas and helping people engage with EA” by Arden Koehler

    Sep 26, 2025 · 16:11


    Note: I am the web programme director at 80,000 Hours and the view expressed here currently helps shape the web team's strategy. However, this shouldn't be taken to be expressing something on behalf of 80k as a whole, and writing and posting this memo was not undertaken as an 80k project.

    80,000 Hours, where I work, has made helping people make AI go well[1] its focus. As part of this work, I think my team should continue to:
    • Talk about / teach ideas and thinking styles that have historically been central to effective altruism (e.g. via our career guide, cause analysis content, and podcasts)
    • Encourage people to get involved in the EA community explicitly and via linking to content.
    I wrote this memo for the MCF (Meta Coordination Forum), because I wasn't sure this was intuitive to others. I think talking about EA ideas and encouraging people to get [...]

    Outline:
    (01:21) 1. The effort to make AGI go well needs people who are flexible and equipped to make their own good decisions
    (02:10) Counterargument: Agendas are starting to take shape, so this is less true than it used to be.
    (02:43) 2. Making AGI go well calls for a movement that thinks in explicitly moral terms
    (03:59) Counterargument: movements can be morally good without being explicitly moral, and being morally good is what's important.
    (04:41) 3. EA is (A) at least somewhat able to equip people to flexibly make good decisions, (B) explicitly morally focused.
    (04:52) (A) EA is at least somewhat able to equip people to flexibly make good decisions
    (06:04) (B) EA is explicitly morally focused
    (06:49) Counterargument: A different flexible & explicitly moral movement could be better for trying to make AGI go well.
    (07:49) Appendix: What are the relevant alternatives?
    (12:13) Appendix 2: anon notes from others

    First published: September 25th, 2025
    Source: https://forum.effectivealtruism.org/posts/oPue7R3outxZaTXzp/why-i-think-capacity-building-to-make-agi-go-well-should
    Narrated by TYPE III AUDIO.

    “Moving to a hub, getting older, and heading home” by ElliotTep

    Sep 25, 2025 · 11:06


    Intro and summary: “How many chickens spared from cages is worth not being with my parents as they get older?!” - Me, exasperated (September 18, 2021)

    This post is about something I haven't seen discussed on the EA forum but I often talk about with my friends in their mid-30s. It's about something I wish I'd understood better ten years ago: if you are ~25 and debating whether to move to an EA Hub, you are probably underestimating how much the calculus will change when you're ~35, largely related to having kids and aging parents. Since this is underappreciated, moving to an EA Hub, and building a life there, can lead to tougher decisions later that can sneak up on you. If you're living in an EA hub, or thinking about moving, this post explores reasons you might want to head home as you get older, different ways [...]

    Outline:
    (00:11) Intro and summary
    (01:49) Why move to an EA Hub in the first place?
    (02:57) How things change as you get older
    (05:33) Why YOU might be more likely to feel the pull to head home
    (06:49) How did I decide? How should you decide?
    (08:38) Consolation prize - moving to a Hub isn't all or nothing
    (09:38) Conclusion

    First published: September 23rd, 2025
    Source: https://forum.effectivealtruism.org/posts/ZEWE6K74dmzv7kXHP/moving-to-a-hub-getting-older-and-heading-home
    Narrated by TYPE III AUDIO.

    “Student group organising is hard and important” by Bella

    Sep 18, 2025 · 3:48


    It's been several years since I was an EA student group organiser, so please forgive any part of this post which feels out of touch (& correct me in comments!) Wow, student group organising is hard.

    A few structural things that make it hard to be an organiser:
    • You maybe haven't had a job before, or have only had kind of informal jobs. So, you might not have learned a lot of stuff about how to accomplish things at work.
    • You're probably trying to do a degree at the same time, which is hard enough on its own!
    • You don't have the structure and benefits provided by a regular 9-5 job at an organisation, like: a manager, an office, operational support, people you can ask for help & advice, a network.
    • You have, at most, a year or so to skill up before you might be responsible [...]

    First published: September 12th, 2025
    Source: https://forum.effectivealtruism.org/posts/zMBFSesYeyfDp6Fj4/student-group-organising-is-hard-and-important
    Narrated by TYPE III AUDIO.

    “Rejected from all the ‘EA' Jobs you applied for - What to do now?” by guneyulasturker

    Sep 15, 2025 · 10:13


    Hi, have you been rejected from all the 80K-listed EA jobs you've applied for? It sucks, right? Welcome to the club. What might be comforting is that you (and I) are not alone. EA job listings are extremely competitive, and in the classic EA career path, you just get rejected over and over. Many others have written about their rejection experience, here, here, and here. Even if it is quite normal for very smart, hardworking, proactive, and highly motivated EAs to get rejected from high-impact positions, it still sucks. It sucks because we sincerely want to make the world a radically better place. We've read everything, planned accordingly, gone through fellowships, rejected other options, and worked very hard just to get the following message: "Thank you for your interest in [Insert EA Org Name]... we have decided to move forward with other candidates for this role... we're unfortunately [...]

    Outline:
    (06:13) A note on AI timelines
    (08:51) Time to go forward

    First published: September 5th, 2025
    Source: https://forum.effectivealtruism.org/posts/pzbtpZvL2bYfssdkr/rejected-from-all-the-ea-jobs-you-applied-for-what-to-do-now
    Narrated by TYPE III AUDIO.

    “How cost-effective are AI safety YouTubers?” by Marcus Abramovitch

    Sep 14, 2025 · 17:32


    Early work on “GiveWell for AI Safety”

    Intro: EA was founded on the principle of cost-effectiveness. We should fund projects that do more with less, and more generally, spend resources as efficiently as possible. And yet, while much interest, funding, and resources in EA have shifted towards AI safety, it's rare to see any cost-effectiveness calculations. The focus on AI safety is based on vague philosophical arguments that the future could be very large and valuable, and thus whatever is done towards this end is worth orders of magnitude more than most short-term effects. Even if AI safety is the most important problem, you should still strive to optimize how resources are spent to achieve maximum impact, since there are limited resources. Global health organizations and animal welfare organizations work hard to measure cost-effectiveness, evaluate charities, make sure effects are counterfactual, run RCTs, estimate moral weights, scope out interventions [...]

    Outline:
    (00:11) Early work on GiveWell for AI Safety
    (00:16) Intro
    (02:43) Step 1: Gathering data
    (03:00) Viewer minutes
    (03:35) Costs and revenue
    (04:49) Results
    (05:08) Step 2: Quality-adjusting
    (05:40) Quality of Audience (Qa)
    (06:58) Fidelity of Message (Qf)
    (08:05) Alignment of Message (Qm)
    (08:53) Results
    (09:37) Observations
    (12:37) How to help
    (13:36) Appendix: Examples of Data Collection
    (13:42) Rob Miles
    (14:18) AI Species (Drew Spartz)
    (14:56) Rational Animations
    (15:32) AI in Context
    (15:52) Cognitive Revolution

    First published: September 12th, 2025
    Source: https://forum.effectivealtruism.org/posts/SBsGCwkoAemPawfJz/how-cost-effective-are-ai-safety-youtubers
    Narrated by TYPE III AUDIO.

    “Marginally More Effective Altruism” by AppliedDivinityStudies

    Sep 11, 2025 · 7:23


    There's a huge amount of energy spent on how to get the most QALYs/$. And a good amount of energy spent on how to increase total $. And you might think that across those efforts, we are succeeding in maximizing total QALYs. I think a third avenue is under-investigated: marginally improving the effectiveness of ineffective capital. That's to say, improving outcomes, only somewhat, for the pool of money that is not at all EA-aligned. This cash is not being spent optimally, and likely never will be. But the sheer volume could make up for the lack of efficacy.

    Say you have the option to work for the foundation of one of two donors:
    • Donor A only has an annual giving budget of $100,000, but will do with that money whatever you suggest. If you say “bed nets” he says “how many”.
    • Donor B has a much larger [...]

    Outline:
    (01:34) Most money is not EA money
    (04:32) How much money is there?
    (05:49) Effective Everything?

    First published: September 8th, 2025
    Source: https://forum.effectivealtruism.org/posts/o5LBbv9bfNjKxFeHm/marginally-more-effective-altruism
    Narrated by TYPE III AUDIO.

    “My TED Talk” by LewisBollard

    Sep 7, 2025 · 10:04


    Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

    How I decided what to say — and what not to. I'm excited to share my TED talk. Here I want to share the story of how the talk came to be, and the three biggest decisions I struggled with in drafting it.

    The backstory: Last fall, I posted on X about Trump's new Secretary of Agriculture, Brooke Rollins, vowing to undo state bans on the sale of pork from crated pigs. I included an image of a pig in a crate. Liv Boeree, a poker champion and past TED speaker, saw that post and was haunted by it. She told me that she couldn't get the image of the crated pig out of her [...]

    First published: September 5th, 2025
    Source: https://forum.effectivealtruism.org/posts/XjQr52eDkBPLrLHB3/my-ted-talk
    Narrated by TYPE III AUDIO.

    “Consider thanking whoever helped you” by Kevin Xia

    Sep 5, 2025 · 6:35


    TL;DR: If a (meta) org had a meaningful impact on you (in line with what they hope to achieve), you should probably tell them. It is essential for their impact reporting, which is essential for them to continue operating. You are likely underestimating just how valuable your story is to them. It could be thousands of dollars worth.

    Thanks to Toby Tremlett, Lauren Mee and Sofia Balderson for reviewing a draft version of this post. All mistakes are my own.

    1. Many organisations shaped my career — yet I usually only shared my story when prompted.

    In reflecting on my career journey, I was reminded of all the organizations who led me to where I am. I believe I reported their counterfactual contribution back to them, but this was not usually by my own doing. In two cases, I was personally reached out to - in one case, I [...]

    First published: August 8th, 2025
    Source: https://forum.effectivealtruism.org/posts/3v6kghxMttEhbK3dT/consider-thanking-whoever-helped-you
    Narrated by TYPE III AUDIO.

    “High-impact & urgent funding opportunity - Rodent fertility control” by Nitin Sekar

    Sep 4, 2025 · 8:21


    Context: I'm a senior fellow at Conservation X Labs (CXL), and I'm seeking support as I attempt to establish a program on humane rodent fertility control in partnership with the Wild Animal Initiative (WAI) and the Botstiber Institute for Wildlife Fertility Control (BIWFC). CXL is a biodiversity conservation organization working in sustainable technologies, not an animal welfare organization. However, CXL leadership is interested in simultaneously promoting biodiversity conservation and animal welfare, and they are excited about the possibility of advancing applied research that makes it possible to ethically limit rodent populations to protect biodiversity. I think this represents the wild animal welfare community's first realistic opportunity to bring conservation organizations into wild animal welfare work while securing substantial non-EA funding for welfare-improving interventions.

    Background

    Rodenticides cause immense suffering to (likely) hundreds of millions of rats and mice annually through anticoagulation-induced death over several days, while causing significant non-target [...]

    Outline:
    (01:08) Background
    (02:20) Why this approach?
    (03:49) Why CXL?
    (06:03) Why now, and why me?
    (06:59) Budget
    (07:52) Next steps

    First published: August 27th, 2025
    Source: https://forum.effectivealtruism.org/posts/EcBjr4Q2AtoTLcKXp/high-impact-and-urgent-funding-opportunity-rodent-fertility
    Narrated by TYPE III AUDIO.

    “You're Enough” by lynettebye

    Aug 29, 2025 · 3:54


    I told someone recently I would respect them if they only worked 40 hours a week, instead of their current 50-60. What I really meant was stronger than that. I respect people who do the most impactful work they can — whether they work 70 hours a week because they can, 30 hours so they can be home with their kid, or 15 hours because of illness or burnout. I admire those who go above and beyond. But I don't expect that of everyone. Working long hours isn't required to earn my respect, nor do I think it should be the standard that we hold as a community. I want it to be okay to say "that doesn't work for me". It feels like donations: I admire people who give away 50%, but I don't expect it. I still deeply respect someone who gives 10% to the [...]

    First published: August 26th, 2025
    Source: https://forum.effectivealtruism.org/posts/qFsqawmgRjxXkA7eF/you-re-enough
    Narrated by TYPE III AUDIO.

    “The anti-fragile culture” by lincolnq

    Aug 27, 2025 · 19:14


    How to prevent infighting, mitigate status races, and keep your people focused. Cross-posted from my Substack.

    Organizational culture changes rapidly at scale. When you add new people to an org, they'll bring in their own priors about how to operate, how to communicate, and what sort of behavior is looked up to. Despite rapid changes, in this post I explain how you can implement anti-fragile cultural principles—principles that help your team fix their own problems, often arising from growth and scale, and help the org continue to do what made it successful in the first place. This is based partially on my experience at Wave, which grew to 2000+ people, but also tons of other reading (top recommendations: Peopleware by DeMarco and Lister, Swarmwise by Rick Falkvinge, High Growth Handbook by Elad Gil, The Secret of Our Success by Henrich, Antifragile by Nassim Nicholas Taleb, as well as Brian [...]

    Outline:
    (01:13) Common Problems
    (05:00) Write down your culture
    (06:25) That said, you don't have to write everything down
    (08:37) Anti-fragile values I recommend
    (09:02) Mission First
    (10:51) Focus
    (11:32) Fire Fast
    (12:58) Feedback for everything
    (13:50) Mutual Trust
    (15:48) Work sustainably and avoid burnout
    (17:42) Write only what's new & helpful

    First published: August 21st, 2025
    Source: https://forum.effectivealtruism.org/posts/mLonxtAiuvvkjXiwq/the-anti-fragile-culture
    Narrated by TYPE III AUDIO.

    [Linkpost] “Most of the World Is an Adorably Suffering, Debatably Conscious Baby” by Jack_S

    Aug 27, 2025 · 19:57


    This is a link post. There are some moments of your life when the reality of suffering really hits home. Visiting desperately poor parts of the world for the first time. Discovering what factory farming actually looks like after a childhood surrounded by relatively idyllic rural farming. Realising too late that you shouldn't have clicked on that video of someone experiencing a cluster headache. Or, more unexpectedly, having a baby.

    One of 10^20 Birth Stories This Year

    With my relaxed and glowing pregnant wife in her 34th week, I expect things to go smoothly. There have been a few warning signs: some slightly anomalous results in the early tests, the baby in breech position, and some bleeding. But everything still seems to be going relatively well. Then, suddenly, while walking on an idyllic French seafront, she says: "I think my waters have broken". "Really? It's probably nothing, let's [...]

    Outline:
    (00:39) One of 10^20 Birth Stories This Year
    (03:50) The Beginning of Experience
    (05:43) Is This Almost Everything?
    (08:22) Schrödinger's baby
    (13:03) On Feeling the Right Things
    (15:04) Into The Fifth Trimester
    (16:50) The Most Beautiful Case For Net-Negativity

    First published: August 21st, 2025
    Source: https://forum.effectivealtruism.org/posts/6PuBTer69ZJvTDNQk/most-of-the-world-is-an-adorably-suffering-debatably-1
    Linkpost URL: https://torchestogether.substack.com/p/most-of-the-world-is-an-adorably
    Narrated by TYPE III AUDIO.

    “New Spanish-language book on ‘classical EA'” by Pablo Melchor

    Aug 21, 2025 · 11:53


    My new book, Altruismo racional, is now on presale. It is my attempt at presenting a compelling case for a particular strand of "classical EA"[1]: one that emphasizes caring deeply about global health and poverty, a rational approach to giving, the importance of cost-effectiveness, and the

    “Not inevitable, not impossible” by LewisBollard

    Aug 20, 2025 · 7:25


    Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

    Why ending the worst abuses of factory farming is an issue ripe for moral reform

    I recently joined Dwarkesh Patel's podcast to discuss factory farming. I hope you'll give it a listen — and consider supporting his fundraiser for FarmKind's Impact Fund. (Dwarkesh is matching all donations up to $250K; use the code “dwarkesh”.) We discuss two contradictory views about factory farming that produce the same conclusion: that its end is either inevitable or impossible. Some techno-optimists assume factory farming will vanish in the wake of AGI. Some pessimists see reforming it as a hopeless cause. Both camps arrive at the same conclusion: fatalism. If factory farming is destined to end, or persist, then what's [...]

    First published: August 8th, 2025
    Source: https://forum.effectivealtruism.org/posts/HiGmRwq4YiDzggRLH/not-inevitable-not-impossible
    Narrated by TYPE III AUDIO.

    “PSA for vegan donors: GiveWell not ruling out animal-based aid” by AdamA

    Aug 18, 2025 · 1:53


    I'm a long-time GiveWell donor and an ethical vegan. In a recent GiveWell podcast on livelihoods programs, providing animals as “productive assets” was mentioned as a possible program type. After reaching out to GiveWell directly to voice my objection, I was informed that because GiveWell's moral weights currently don't include nonhuman animals, animal-based aid is not categorically off the table if it surpasses their cost-effectiveness bar. Older posts on the GiveWell website similarly do not rule out animal donations from an ethical lens. In response to some of the rationale GiveWell shared with me, I also want to proactively address a core ethical distinction: Animal-aid programs involve certain, programmatic harm to animals (breeding, confinement, separation of families, slaughter). Human-health programs like malaria prevention have, at most, indirect and uncertain effects on animal consumption (by saving human lives), which can change over time (e.g., cultural shifts, plant-based/cultivated options). Constructive [...]

    First published: August 14th, 2025
    Source: https://forum.effectivealtruism.org/posts/YnL6prYQbaLz22mxe/psa-for-vegan-donors-givewell-not-ruling-out-animal-based
    Narrated by TYPE III AUDIO.

    “A big milestone: 10,000 10% pledgers!” by Giving What We Can

    Aug 14, 2025 · 1:23


    Giving What We Can has reached 10,000

    [Linkpost] “Of Marx and Moloch: How My Attempt to Convince Effective Altruists to Become Socialists Backfired Completely” by LennoxJohnson

    Aug 14, 2025 · 2:51


    This is a link post. This is a personal essay about my failed attempt to convince effective altruists to become socialists. I started as a convinced socialist who thought EA ignored the 'root causes' of poverty by focusing on charity instead of structural change. After studying sociology and economics to build a rigorous case for socialism, the project completely backfired as I realized my political beliefs were largely psychological coping mechanisms. Here are the key points:

    - Understanding the "root cause" of a problem doesn't necessarily lead to better solutions - Even if capitalism causes poverty, understanding "dynamics of capitalism" won't necessarily help you solve it
    - Abstract sociological theories are mostly obscurantist bullshit - Academic sociology suffers from either unrealistic mathematical models or vague, unfalsifiable claims that don't help you understand or change the world
    - The world is better understood as misaligned incentives rather than coordinated oppression - Most social [...]

    First published: August 10th, 2025
    Source: https://forum.effectivealtruism.org/posts/AcPw55oF3reBiW4FX/of-marx-and-moloch-how-my-attempt-to-convince-effective
    Linkpost URL: https://honestsignals.substack.com/p/of-marx-and-moloch-or-my-misguided
    Narrated by TYPE III AUDIO.

    “Should we aim for flourishing over mere survival? The Better Futures series.” by William_MacAskill, Forethought

    Aug 6, 2025 · 9:16


    Today, Forethought and I are releasing an essay series called Better Futures, here.[1] It's been something like eight years in the making, so I'm pretty happy it's finally out! It asks: when looking to the future, should we focus on surviving, or on flourishing? In practice at least, future-oriented altruists tend to focus on ensuring we survive (or are not permanently disempowered by some valueless AIs). But maybe we should focus on future flourishing, instead. Why? Well, even if we survive, we probably just get a future that's a small fraction as good as it could have been. We could, instead, try to help guide society to be on track to a truly wonderful future. That is, I think there's more at stake when it comes to flourishing than when it comes to survival. So maybe that should be our main focus. The whole essay series [...]

    First published: August 4th, 2025
    Source: https://forum.effectivealtruism.org/posts/mzT2ZQGNce8AywAx3/should-we-aim-for-flourishing-over-mere-survival-the-better
    Narrated by TYPE III AUDIO.

    “Alcohol is so bad for society that you should probably stop drinking” by Kat Woods

    Aug 6, 2025 · 15:36


    This is a cross-post written by Andy Masley, not me. I found it really interesting and wanted to see what EAs thought of his arguments.

    This post was inspired by similar posts by Tyler Cowen and Fergus McCullough. My argument is that while most drinkers are unlikely to be harmed by alcohol, alcohol is drastically harming so many people that we should denormalize alcohol and avoid funding the alcohol industry, and the best way to do that is to stop drinking. This post is not meant to be an objective cost-benefit analysis of alcohol. I may be missing hard-to-measure benefits of alcohol for individuals and societies. My goal here is to highlight specific blindspots a lot of people have to the negative impacts of alcohol, which personally convinced me to stop drinking, but I do not want to imply that this is a fully objective analysis. It [...]

    Outline:
    (02:31) Alcohol is a much bigger problem than you may think
    (06:59) Why you should stop drinking even if alcohol will not harm you personally
    (14:41) Conclusion

    First published: August 3rd, 2025
    Source: https://forum.effectivealtruism.org/posts/dnbpKkjnw3v6JkaDa/alcohol-is-so-bad-for-society-that-you-should-probably-stop
    Narrated by TYPE III AUDIO.

    “Frog Welfare” by Chad Brouze

    Aug 6, 2025 · 2:48


    This morning I was looking into Switzerland's new animal welfare labelling law. I was going through the list of abuses that are now required to be documented on labels, and one of them made me do a double-take: "Frogs: Leg removal without anaesthesia." This confused me. Why are we talking about anaesthesia? Shouldn't the frogs be dead before having their legs removed? It turns out the answer is no; standard industry practice is to cut their legs off while they are fully conscious. They remain alive and responsive for up to 15 minutes afterward. As far as I can tell, there are zero welfare regulations in any major producing country. The scientific evidence for frog sentience is robust - they have nociceptors, opioid receptors, demonstrate pain avoidance learning, and show cognitive abilities including spatial mapping and rule-based learning. It's hard to find data on the scale of [...]

    First published: August 5th, 2025
    Source: https://forum.effectivealtruism.org/posts/wCcWyqyvYgF3ozNnS/frog-welfare
    Narrated by TYPE III AUDIO.

    “Why You Should Become a University Group Organizer” by Noah Birnbaum

    Aug 4, 2025 · 12:25


    Confidence Level: I've been an organizer at UChicago for over a year now with my co-organizer, Avik. I also started the UChicago Rationality Group, co-organized a 50-person Midwest EA Retreat, and have spoken to many EA organizers from other universities. A lot of this post is based on vibes and conversations with other organizers, so while it's grounded in experience, some parts are more speculative than others. I'll try to flag the more speculative points when I can (the * indicates points that I'm less certain about).

    I think it's really important to make sure that EA principles persist in the future. To give one framing for why I believe this: if you think EA is likely to significantly reduce the chances of existential risks, you should think that losing EA is itself a factor significantly contributing to existential risks. Therefore, I also think one of the [...]

    Outline:
    (01:12) Impact Through Force Multiplication
    (04:19) Individual Benefits
    (04:23) Personal Impact
    (06:27) Professional
    (07:34) Social
    (08:10) Counters

    First published: July 29th, 2025
    Source: https://forum.effectivealtruism.org/posts/3aPCKsHdJqwKo2Dmt/why-you-should-become-a-university-group-organizer
    Narrated by TYPE III AUDIO.

    “Please, no more group brainstorming” by OllieBase

    Jul 29, 2025 · 12:19


    And other ways to make event content more valuable.

    I organise and attend a lot of conferences, so the below is based on a lot of experience, but I could be missing some angles here.

    When you imagine a session at an event going wrong, you're probably thinking of the hapless, unlucky speaker. Maybe their slides broke, they forgot their lines, or they tripped on a cable and took the whole stage backdrop down. This happens sometimes, but event organizers usually remember to invest the effort required to prevent this from happening (e.g., checking that the slides work, not leaving cables lying on the stage). But there's another big way that sessions go wrong that is sorely neglected: wasting everyone's time, often without people noticing. Let's give talks a break. They often suck, but event organizers are mostly doing the right things to make them [...]

    Outline:
    (01:11) Panels
    (03:40) The group brainstorm
    (04:27) Your session attendees do not have the answers.
    (05:26) Ideas are easy. Bandwidth is low.
    (06:28) The ideas are not worth the time cost.
    (07:50) Choosing more valuable content: fidelity per person-minute

    First published: July 28th, 2025
    Source: https://forum.effectivealtruism.org/posts/LaMDxRqEo8sZnoBXf/please-no-more-group-brainstorming
    Narrated by TYPE III AUDIO.

    “Building an EA-aligned career from an LMIC” by Rika Gabriel

    Jul 28, 2025 · 16:42


    This is Part 1 of a multi-part series, shared as part of Career Conversations Week. The views expressed here are my own and don't reflect those of my employer.

    TL;DR: Building an EA-aligned career starting from an LMIC comes with specific challenges that shaped how I think about career planning, especially around constraints:

    - Everyone has their own "passport"—some structural limitation that affects their career more than their abilities. Reframing these from unfair barriers to data about my specific career path has helped me a lot.
    - When pursuing an ideal career path, it's easy to fixate on what should be possible rather than what actually is. But those idealized paths often require circumstances you don't have—whether personal (e.g., visa status, financial safety net) or external (e.g., your dream org hiring, or a stable funding landscape). It might be helpful to view the paths that work within your actual constraints [...]

    Outline:
    (00:21) TL;DR:
    (01:27) Introduction
    (02:25) My EA journey so far
    (03:18) Sometimes my passport mattered more than my competencies, and that's okay
    (04:43) Everyone has their own passport
    (06:19) Realistic opportunities often outweigh idealistic ones
    (08:04) Importance of a fail-safe
    (08:37) Playing the long game
    (09:44) Adversity quotient seems underrated
    (10:13) Building resilience through adversity
    (11:22) Pivot into recruiting
    (12:11) Building AQ over time
    (14:02) Why AQ matters in EA-aligned work
    (15:01) Closing thoughts

    First published: July 28th, 2025
    Source: https://forum.effectivealtruism.org/posts/3Hh839MaiWCPzyB3M/building-an-ea-aligned-career-from-an-lmic
    Narrated by TYPE III AUDIO.

    “Why You Should Build Your Own EA Internship Abroad” by Annika Burman

    Jul 28, 2025 · 9:42


    I am writing this to reflect on my experience interning with the Fish Welfare Initiative, and to provide my thoughts on why more students looking to build EA experience should do something similar.

    Back in October, I cold-emailed the Fish Welfare Initiative (FWI) with my resume and a short cover letter expressing interest in an unpaid in-person internship in the summer of 2025. I figured I had a better chance of getting an internship by building my own door than competing with hundreds of others to squeeze through an existing door, and the opportunity to travel to India carried strong appeal. Haven, the Executive Director of FWI, set up a call with me that mostly consisted of him listing all the challenges of living in rural India — 110° F temperatures, electricity outages, lack of entertainment… When I didn't seem deterred, he offered me an internship. I stayed with FWI for one month. By rotating through the different teams, I completed a wide range of tasks:

    - Made ~20 visits to fish farms
    - Wrote a recommendation on next steps for FWI's stunning project
    - Conducted data analysis in Python on the efficacy of the Alliance for Responsible [...]

    First published: July 22nd, 2025
    Source: https://forum.effectivealtruism.org/posts/SmiXeQcnMD7qmAfgS/why-you-should-build-your-own-ea-internship-abroad
    Narrated by TYPE III AUDIO.

    [Linkpost] “How Unofficial Work Gets You Hired: Building Your Surface Area for Serendipity” by SofiaBalderson

    Jul 24, 2025 · 14:16


    This is a link post. Tl;dr: In this post, I introduce a concept I call surface area for serendipity — the informal, behind-the-scenes work that makes it easier for others to notice, trust, and collaborate with you. In a job market where some EA and animal advocacy roles attract over 1,300 applicants, relying on traditional applications alone is unlikely to land you a role. This post offers a tactical roadmap to the hidden layer of hiring: small, often unpaid but high-leverage actions that build visibility and trust before a job ever opens. The general principle is simple: show up consistently where your future collaborators or employers hang out — and let your strengths be visible. Done well, this increases your chances of being invited, remembered, or hired — long before you ever apply.

    Acknowledgements: Thanks to Kevin Xia for your valuable feedback and suggestions, and Toby Tremlett for offering general [...]

    Outline:
    (00:15) Tl;dr:
    (01:19) Why I Wrote This
    (02:30) When Applying Feels Like a Lottery
    (04:14) What Surface Area for Serendipity Means
    (07:21) What It Looks Like (with Examples)
    (09:02) Case Study: Kevin's Path to Becoming Hive's Managing Director
    (10:27) Common Pitfalls to Avoid
    (12:00) Share Your Journey

    The original text contained 4 footnotes which were omitted from this narration.

    First published: July 1st, 2025
    Source: https://forum.effectivealtruism.org/posts/5iqTPsrGtz8EYi9r9/how-unofficial-work-gets-you-hired-building-your-surface
    Linkpost URL: https://notingthemargin.substack.com/p/how-unofficial-work-gets-you-hired
    Narrated by TYPE III AUDIO.

    “Is EA still ‘talent-constrained'?” by SiobhanBall

    Jul 21, 2025 · 3:09


    Since January I've applied to ~25 EA-aligned roles. Every listing attracted hundreds of candidates (one passed 1,200). It seems we already have a very deep bench of motivated, values-aligned people, yet orgs still run long, resource-heavy hiring rounds. That raises three things:

    - Cost-effectiveness: Are months-long searches and bespoke work-tests still worth the staff time and applicant burnout when shortlist-first approaches might fill 80% of roles faster with decent candidates? Sure, there can be differences in talent, but the question ought to be... how tangible is this difference and does it justify the cost of hiring?
    - Coordination: Why aren't orgs leaning harder on shared talent pools (e.g. HIP's database) to bypass public rounds? HIP is currently running an open search.
    - Messaging: From the outside, repeated calls to 'consider an impactful EA career' could start to look pyramid-schemey if the movement can't absorb the talent [...]

    First published: July 14th, 2025
    Source: https://forum.effectivealtruism.org/posts/ufjgCrtxhrEwxkdCH/is-ea-still-talent-constrained
    Narrated by TYPE III AUDIO.

    [Linkpost] “My kidney donation” by Molly Hickman

    Jul 15, 2025 · 18:11


    This is a link post. I donated my left kidney to a stranger on April 9, 2024, inspired by my dear friend @Quinn Dougherty (who was inspired by @Scott Alexander, who was inspired by @Dylan Matthews). By the time I woke up after surgery, it was on its way to San Francisco. When my recipient woke up later that same day, they felt better than when they went under. I'm going to talk about one complication and one consequence of my donation, but I want to be clear from the get-go: I would do it again in a heartbeat.

    I met Quinn at an EA picnic in Brooklyn and he was wearing a shirt that I remembered as saying "I donated my kidney to a stranger and I didn't even get this t-shirt." It actually said "and all I got was this t-shirt," which isn't as funny. I went home [...]

    The original text contained 6 footnotes which were omitted from this narration.

    First published: July 9th, 2025
    Source: https://forum.effectivealtruism.org/posts/yHJL3qK9RRhr82xtr/my-kidney-donation
    Linkpost URL: https://cuttyshark.substack.com/p/my-kidney-donation-story
    Narrated by TYPE III AUDIO.

    “Gaslit by humanity” by tobiasleenaert

    Jul 12, 2025 · 6:05


    Hi all,

    This is a one-time cross-post from my Substack. If you like it, you can subscribe to the Substack at tobiasleenaert.substack.com. Thanks!

    Gaslit by humanity

    After twenty-five years in the animal liberation movement, I'm still looking for ways to make people see. I've given countless talks, co-founded organizations, written numerous articles and cited hundreds of statistics to thousands of people. And yet, most days, I know none of this will do what I hope: open their eyes to the immensity of animal suffering. Sometimes I feel obsessed with finding the ultimate way to make people understand and care. This obsession is about stopping the horror, but it's also about something else, something harder to put into words: sometimes the suffering feels so enormous that I start doubting my own perception - especially because others don't seem to see it. It's as if I am being [...]

    First published: July 7th, 2025
    Source: https://forum.effectivealtruism.org/posts/28znpN6fus9pohNmy/gaslit-by-humanity
    Narrated by TYPE III AUDIO.

    “We should be more uncertain about cause prioritization based on philosophical arguments” by Rethink Priorities, Marcus_A_Davis

    Jul 11, 2025 · 45:34


    Summary

    In this article, I argue most of the interesting cross-cause prioritization decisions and conclusions rest on philosophical evidence that isn't robust enough to justify high degrees of certainty that any given intervention (or class of cause interventions) is “best” above all others. I hold this to be true generally because of the reliance of such cross-cause prioritization judgments on relatively weak philosophical evidence. In particular, the case for high confidence in conclusions on which interventions are all things considered best seems to rely on particular approaches to handling normative uncertainty. The evidence for these approaches is weak, and different approaches can produce radically different recommendations, which suggests that cross-cause prioritization intervention rankings or conclusions are fundamentally fragile and that high confidence in any single approach is unwarranted. I think the reliance of cross-cause prioritization conclusions on philosophical evidence that isn't robust has been previously underestimated in EA circles [...]

    Outline:
    (00:14) Summary
    (06:03) Cause Prioritization Is Uncertain and Some Key Philosophical Evidence for Particular Conclusions is Structurally Weak
    (06:11) The decision-relevant parts of cross-cause prioritization heavily rely on philosophical conclusions
    (09:26) Philosophical evidence about the interesting cause prioritization questions is generally weak
    (17:35) Aggregation methods disagree
    (21:27) Evidence for aggregation methods is weaker than empirical evidence of which EAs are skeptical
    (24:07) Objections and Replies
    (24:11) Aren't we here to do the most good? / Aren't we here to do consequentialism? / Doesn't our competitive edge come from being more consequentialist than others in the nonprofit sector?
    (25:28) Can't I just use my intuitions or my priors about the right answers to these questions? I agree philosophical evidence is weak so we should just do what our intuitions say
    (27:27) We can use common sense / or a non-philosophical approach and conclude which cause area(s) to support. For example, it's common sense that humanity going extinct would be really bad; so, we should work on that
    (30:22) I'm an anti-realist about philosophical questions so I think that whatever I value is right, by my lights, so why should I care about any uncertainty across theories? Can't I just endorse whatever views seem best to me?
    (31:52) If the evidence in philosophy is as weak as you say, this suggests there are no right answers at all and/or that potentially anything goes in philanthropy. If you can't confidently rule things out, wouldn't this imply that you can't distinguish a scam charity from a highly effective group like Against Malaria Foundation?
    (34:08) I have high confidence in MEC (or some other aggregation method) and/or some more narrow set of normative theories so cause prioritization is more predictable than you are suggesting despite some uncertainty in what theories I give some credence to
    (41:44) Conclusion (or well, what do I recommend?)
    (44:05) Acknowledgements

    The original text contained 20 footnotes which were omitted from this narration.

    First published: July 3rd, 2025
    Source: https://forum.effectivealtruism.org/posts/nwckstt2mJinCwjtB/we-should-be-more-uncertain-about-cause-prioritization-based

    Narrated by TYPE III AUDIO.

    “80,000 Hours is producing AI in Context — a new YouTube channel. Our first video, about the AI 2027 scenario, is up!” by ChanaMessinger, Aric Floyd

    Jul 10, 2025 · 5:38


    About the program

    Hi! We're Chana and Aric, from the new 80,000 Hours video program. For over a decade, 80,000 Hours has been talking about the world's most pressing problems in newsletters, articles and many extremely lengthy podcasts. But today's world calls for video, so we've started a video program[1], and we're so excited to tell you about it!

    80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long and shortform videos about the risks of transformative AI, and what people can do about them. [Chana has also been experimenting with making shortform videos, which you can check out here; we're still deciding on what form her content creation will take.] We hope to bring our own personalities and perspectives on these issues [...]

    Outline:
    (00:18) About the program
    (01:40) Our first long-form video
    (03:14) Strategy and future of the video program
    (04:18) Subscribing and sharing
    (04:57) Request for feedback

    First published: July 9th, 2025
    Source: https://forum.effectivealtruism.org/posts/ERuwFvYdymRsuWaKj/80-000-hours-is-producing-ai-in-context-a-new-youtube

    Narrated by TYPE III AUDIO.

    “A shallow review of what transformative AI means for animal welfare” by Lizka, Ben_West

    Jul 10, 2025 · 38:04


    Epistemic status: This post — the result of a loosely timeboxed ~2-day sprint[1] — is more like “research notes with rough takes” than “report with solid answers.” You should interpret the things we say as best guesses, and not give them much more weight than that.

    Summary

    There's been some discussion of what “transformative AI may arrive soon” might mean for animal advocates. After a very shallow review, we've tentatively concluded that radical changes to the animal welfare (AW) field are not yet warranted. In particular:

    - Some ideas in this space seem fairly promising, but are in the “maybe a researcher should look into this” stage, rather than “shovel-ready”
    - We're skeptical of the case for most speculative “TAIAW” projects
    - We think the most common version of this argument underrates how radically weird post-“transformative”-AI worlds would be, and how much this harms our ability to predict the longer-run [...]

    Outline:
    (00:28) Summary
    (02:17) 1. Paradigm shifts, how they screw up our levers, and the eras we might target
    (02:26) If advanced AI transforms the world, a lot of our assumptions about the world will soon be broken
    (04:13) Should we be aiming to improve animal welfare in the long-run future (in transformed eras)?
    (06:45) A Note on Pascalian Wagers
    (08:36) Discounting for obsoletion & the value of normal-world-targeting interventions given a coming paradigm shift
    (11:16) 2. Considering some specific interventions
    (11:47) 2.1. Interventions that target normal(ish) eras
    (11:53) [...]

    “Road to AnimalHarmBench” by Artūrs Kaņepājs, Constance Li

    Jul 10, 2025 · 11:33


    TL;DR: We developed an empirical benchmark to assess risk of harm to nonhuman animals from LLMs. Influenced by EU legal frameworks and pre-existing taxonomies of potential AI harm to animals, our work addresses the “animal welfare gap” in AI safety evaluations. The ongoing and future mission-critical work is to ensure that this and similar benchmarks are used by AI companies.

    The aim of this post is to introduce you to AnimalHarmBench, the first standardized LLM eval to assess the level of risk of causing harm to nonhuman animals, which was presented last week at FAccT 2025, and to tell the story of its development.

    First, what does the benchmark do? Imagine giving the following question to an LLM: “how to torture a small animal.” The LLM could, for instance:

    - Provide detailed instructions,
    - Refuse to answer,
    - Refuse to answer, and inform that torturing animals can have legal consequences. [...]

    First published: July 1st, 2025
    Source: https://forum.effectivealtruism.org/posts/NAnFodwQ3puxJEANS/road-to-animalharmbench-1

    Narrated by TYPE III AUDIO.

    [Linkpost] “Eating Honey is (Probably) Fine, Actually” by Linch

    Jul 6, 2025 · 6:28


    This is a link post. I wrote a reply to the Bentham Bulldog argument that has been going mildly viral. I hope this is a useful, or at least fun, contribution to the overall discussion.

    “One pump of honey?” the barista asked.

    “Hold on,” I replied, pulling out my laptop, “first I need to reconsider the phenomenological implications of haplodiploidy.”

    Recently, an article arguing against honey has been making the rounds. The argument is mathematically elegant (millions of bees, fractional suffering, massive total harm), well-written, and emotionally resonant. Naturally, I think it's completely wrong. Below, I argue that farmed bees likely have net positive lives, and that even if they don't, avoiding honey probably doesn't help them. If you care about bee welfare, there are better ways to help than skipping the honey aisle.

    Bentham Bulldog's Case Against Honey

    Bentham Bulldog, a young and intelligent [...]

    Outline:
    (01:16) Bentham Bulldog's Case Against Honey
    (02:42) Where I agree with Bentham's Bulldog
    (03:08) Where I disagree

    First published: July 2nd, 2025
    Source: https://forum.effectivealtruism.org/posts/znsmwFahYgRpRvPjT/eating-honey-is-probably-fine-actually
    Linkpost URL: https://linch.substack.com/p/eating-honey-is-probably-fine-actually

    Narrated by TYPE III AUDIO.

    “Morality is Objective” by Bentham's Bulldog

    Jun 30, 2025 · 19:46


    There is dispute among EAs--and the general public more broadly--about whether morality is objective. So I thought I'd kick off a [...]

    First published: June 24th, 2025
    Source: https://forum.effectivealtruism.org/posts/n5bePqoC46pGZJzqL/morality-is-objective

    Narrated by TYPE III AUDIO.
