Effective Altruism Forum Podcast


I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.

Garrett Baker


    • Latest episode: Feb 22, 2026
    • New episodes: weekdays
    • Average duration: 18m
    • Episodes: 721



    Latest episodes from Effective Altruism Forum Podcast

    “500k mid-career professionals want to do more good with their careers. Can we help them?” by Dom Jackman

    Feb 22, 2026 · 5:45


    I'm Dom Jackman. I founded Escape the City in 2010 to help people leave corporate jobs and find work that matters. 16 years later, 500k+ professionals have used the platform - mostly people 5-15 years into careers at places like McKinsey, Deloitte, Google, and the big banks - who feel a growing gap between what they do all day and what they actually care about. I'm not from the EA community. I'm writing this because I think there's a real overlap between the people I work with and what the EA talent ecosystem actually needs. I want to test that before investing serious time in it.

    What I've noticed: Reading through talent discussions on this forum, there's a consistent theme: the pipeline is strongest for early-career people. 80,000 Hours does great work for students and recent grads. Probably Good provides broad guidance. BlueDot, MATS, and Talos build skills for specific cause areas. But mid-career professionals with real commercial experience keep coming up as underserved. The "Gaps and opportunities in the EA talent & recruiting landscape" post nails it: these people "don't have 'EA capital,' may be poorly networked and might feel alienated by current messaging." The post calls for "custom entry [...]

    Outline:
    (00:51) What I've noticed
    (01:40) What I see every day
    (02:28) What I'm thinking about building
    (03:24) Honest questions
    (04:39) Not looking for funding
    (04:58) Artifacts

    First published: February 11th, 2026
    Source: https://forum.effectivealtruism.org/posts/H9pb6DEasgzjCff9a/500k-mid-career-professionals-want-to-do-more-good-with
    Narrated by TYPE III AUDIO.

    “Our Levels of Ambition Should Match The Problems We're Solving” by Matt Beard

    Feb 19, 2026 · 26:29


    [I am a career advisor at 80,000 Hours. I've been thinking about something Will MacAskill said recently in an interview with my shrimp-friend Matt: "should people be more ambitious? I genuinely think yes. I think people systematically aren't ambitious enough, so the answer is almost always yes. Again, the ambition you have should match the scale of the problems that we're facing—and the scale of those problems is very large indeed." This post is my reflection on these ideas.]

    My last post argued that if you want to have a great career, your goal should not be to get a job. Instead, you should choose an important problem to work on, then “get good and be known.” Building skills will allow you to solve problems and reap the benefits. In the ~500 career advising calls I've hosted in the past year, the most common response I've heard has been: “Okay, how good? How well known? How many hours of practice will get me there?” Most people want to calibrate their ambitions so that the time and energy they invest feels worth it to them. I empathize with this, but when I'm honest – with myself for my own [...]

    Outline:
    (06:28) Jensen Huang is more ambitious than you
    (12:58) Most extreme ambition is misplaced
    (17:45) Okay, how can altruistic people aim higher and work harder?
    (21:17) Ambition at the End of the Human Era
    (24:03) Closing Caveats - Efficiency, Burnout, and Choosing What Matters

    First published: February 12th, 2026
    Source: https://forum.effectivealtruism.org/posts/7qsisgX3cwETJuPNz/our-levels-of-ambition-should-match-the-problems-we-re
    Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    [Linkpost] “The best cause will disappoint you: An intro to the optimiser's curse” by titotal

    Feb 19, 2026 · 27:38


    This is a link post. I would like to thank David Thorstadt for looking over this. If you spot a factual error in this article, please message me. The code used to generate the graphs in the article is available to view here.

    Introduction: Say you are an organiser, tasked with achieving the best result on some metric, such as “trash picked up”, “GDP per capita”, or “lives saved by an effective charity”. There are several possible interventions you can take to try to achieve this. How do you choose between them? The obvious thing to do is look at each intervention in turn, make your best, unbiased estimate of how each intervention will perform on your metric, and pick the one that performs the best. Having done this ranking, you declare the top-ranking program to be the best intervention and invest in it, expecting that your top estimate will be the result that you get. This whole procedure is totally normal, and people all around the world, including people in the effective altruist community, do it all the time. In actuality, this procedure is not correct. The optimiser's curse is [...]

    Outline:
    (00:26) Introduction
    (02:17) The optimiser's curse explained simply
    (04:42) Introducing a toy model
    (08:45) Introducing speculative interventions
    (12:15) A simple Bayesian correction
    (18:47) Obstacles to simple optimiser's curse solutions
    (22:08) How GiveWell has reacted to the optimiser's curse
    (25:18) Conclusion

    First published: February 11th, 2026
    Source: https://forum.effectivealtruism.org/posts/q2TfTirvspCTH2vbZ/the-best-cause-will-disappoint-you-an-intro-to-the
    Linkpost URL: https://open.substack.com/pub/titotal/p/the-best-cause-will-disappoint-you?r=1e0is3&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
    Narrated by TYPE III AUDIO.
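    The curse described in this episode is easy to see in simulation. Below is a minimal sketch (not the code from the article; the function name and parameters are illustrative assumptions): give every intervention the same true value, add unbiased noise to each estimate, and pick the apparent winner.

```python
import random

def expected_disappointment(n_options=20, true_value=10.0, noise_sd=3.0,
                            trials=20000, seed=0):
    """Average gap between the chosen option's estimate and its true value.

    Every option is equally good and every estimate is unbiased, yet
    selecting the maximum estimate still overstates the winner's value."""
    rng = random.Random(seed)
    total_gap = 0.0
    for _ in range(trials):
        estimates = [true_value + rng.gauss(0, noise_sd) for _ in range(n_options)]
        total_gap += max(estimates) - true_value  # disappointment on the pick
    return total_gap / trials

avg_gap = expected_disappointment()  # positive: the top estimate is optimistic
```

    With 20 options and noise of 3 units, the chosen option's estimate overshoots its true value by several units on average, even though no individual estimate is biased; this systematic disappointment is what the post's Bayesian correction is meant to remove.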

    “What is Love ft. Claude & VascoBot” by AgentMa

    Feb 18, 2026 · 6:28


    What is the highest form of love? According to the VascoBot Claude programmed for me: “Thanks for the great question, AgentMa

    “The reality of long-term EA community building: Lessons from 3 years of EA Barcelona” by Melanie Brennan

    Feb 17, 2026 · 23:12


    We are Melanie and Anthony, the two community builders at EA Barcelona. In this post, we share where the group stands today and reflect on key learnings from nearly three years of grant-funded community building. We hope these reflections are useful to other community builders, funders, and CEA, particularly around what it realistically takes to build and sustain EA communities over multiple years, from funding stability and feedback loops to the personal sustainability of professional community builders.

    TL;DR: EA Barcelona was funded by the EA Infrastructure Fund between May 2023 and December 2025 (

    “Preparing for a flush future: work, giving, and conduct” by Sam Anschell

    Feb 17, 2026 · 7:25


    Note: opinions are all my own. Following Jeff Kaufman's Front-Load Giving Because of Anthropic Donors and Jenn's Funding Conversation We Left Unfinished, I think there is a real likelihood that impactful causes will receive significantly more funding in the near future. As background on where this new funding could come from:
    • Coefficient Giving announced:
    • A recent NYT piece covered rumors of an Anthropic valuation at $350 billion. Many of Anthropic's cofounders and early employees have pledged to donate significant amounts of their equity, and it seems likely that an outsized share of these donations would go to effective causes.
    • A handful of other sources have the potential to grow their giving: Founders Pledge has secured $12.8 billion in pledged funding, and significantly scaled the amount it directs.[1] The Gates Foundation has increased its giving following Bill Gates' announcement to spend down $200 billion by 2045. Other aligned funders such as Longview, Macroscopic, the Flourishing Fund, the Navigation Fund, GiveWell, Project Resource Optimization, Schmidt Futures/Renaissance Philanthropy, and the Livelihood Impacts Fund have increased their staffing and dollars directed in recent years.
    • The OpenAI Foundation controls a 26% equity stake in the for-profit OpenAI Group PB. This stake is currently valued at $130 billion [...]

    Outline:
    (02:39) Work
    (03:50) Giving
    (04:53) Conduct

    First published: February 2nd, 2026
    Source: https://forum.effectivealtruism.org/posts/H8SqwbLxKkiJur3c4/preparing-for-a-flush-future-work-giving-and-conduct
    Narrated by TYPE III AUDIO.

    “EA Grants Database - a new website” by Brian Foerster

    Feb 17, 2026 · 1:43


    The EA Grants Database is a new site that neatly aggregates grant data from major EA funders who publish individual or total grant information. It is intended to be easy to maintain long term, entirely piggybacking off of existing data that is likely to be maintained. The website data is updated by a script that can be run in seconds, and I anticipate doing this for the foreseeable future.

    In creating the website, I tried to make things as clear and straightforward as possible. If your user experience is in any way impaired, I would appreciate hearing from you. I would also appreciate feedback on what features would actually be useful to people, although I am committed to avoiding bloat. In a funding landscape that seems poised to grow, I hope this site can serve as a resource to help grantmakers, grantees, and other interested parties make decisions while also providing perspective on what has come before. My post on matching credits and this website are both outgrowths of my thinking on how we might best financially coordinate as EA grows and becomes more difficult to understand.[1] Relatedly, I am also interested in the sort of mechanisms that [...]

    First published: February 8th, 2026
    Source: https://forum.effectivealtruism.org/posts/rohYFGfiFjepLDnWC/ea-grants-database-a-new-website
    Narrated by TYPE III AUDIO.

    “Long-term risks from ideological fanaticism” by David_Althaus, Jamie_Harris, vanessa16, Clare_Diane, Will Aldred

    Feb 16, 2026 · 162:42


    Cross-posted to LessWrong.

    Summary: History's most destructive ideologies—like Nazism, totalitarian communism, and religious fundamentalism—exhibited remarkably similar characteristics:
    • epistemic and moral certainty
    • extreme tribalism dividing humanity into a sacred “us” and an evil “them”
    • a willingness to use whatever means necessary, including brutal violence.

    Such ideological fanaticism was a major driver of eight of the ten greatest atrocities since 1800, including the Taiping Rebellion, World War II, and the regimes of Stalin, Mao, and Hitler. We focus on ideological fanaticism over related concepts like totalitarianism partly because it better captures terminal preferences, which plausibly matter most as we approach superintelligent AI and technological maturity. Ideological fanaticism is considerably less influential than in the past, controlling only a small fraction of world GDP. Yet at least hundreds of millions still hold fanatical views, many regimes exhibit concerning ideological tendencies, and the past two decades have seen widespread democratic backsliding. The long-term influence of ideological fanaticism is uncertain. Fanaticism faces many disadvantages including a weak starting position, poor epistemics, and difficulty assembling broad coalitions. But it benefits from greater willingness to use extreme measures, fervent mass followings, and a historical tendency to survive and even thrive amid technological and societal upheaval. Beyond complete victory or defeat, multipolarity may [...]

    Outline:
    (00:16) Summary
    (05:19) What do we mean by ideological fanaticism?
    (08:40) I. Dogmatic certainty: epistemic and moral lock-in
    (10:02) II. Manichean tribalism: total devotion to us, total hatred for them
    (12:42) III. Unconstrained violence: any means necessary
    (14:33) Fanaticism as a multidimensional continuum
    (16:09) Ideological fanaticism drove most of recent history's worst atrocities
    (19:24) Death tolls don't capture all harm
    (20:55) Intentional versus natural or accidental harm
    (22:44) Why emphasize ideological fanaticism over political systems like totalitarianism?
    (25:07) Fanatical and totalitarian regimes have caused far more harm than all other regime types
    (26:29) Authoritarianism as a risk factor
    (27:19) Values change political systems: Ideological fanatics seek totalitarianism, not democracy
    (29:50) Terminal values may matter independently of political systems, especially with AGI
    (31:02) Fanaticism's connection to malevolence (dark personality traits)
    (34:22) The current influence of ideological fanaticism
    (34:42) Historical perspective: it was much worse, but we are sliding back
    (37:19) Estimating the global scale of ideological fanaticism
    (43:57) State actors
    (48:12) How much influence will ideological fanaticism have in the long-term future?
    (48:57) Reasons for optimism: Why ideological fanaticism will likely lose
    (49:45) A worse starting point and historical track record
    (50:33) Fanatics' intolerance results in coalitional disadvantages
    (51:53) The epistemic penalty of irrational dogmatism
    (54:21) The marketplace of ideas and human preferences
    (55:57) Reasons for pessimism: Why ideological fanatics may gain power
    (56:04) The fragility of democratic leadership in AI
    (56:37) Fanatical actors may grab power via coups or revolutions
    (59:36) Fanatics have fewer moral constraints
    (01:01:13) Fanatics prioritize destructive capabilities
    (01:02:13) Some ideologies with fanatical elements have been remarkably resilient and successful
    (01:03:01) Novel fanatical ideologies could emerge--or existing ones could mutate
    (01:05:08) Fanatics may have longer time horizons, greater scope-sensitivity, and prioritize growth more
    (01:07:15) A possible middle ground: Persistent multipolar worlds
    (01:08:33) Why multipolar futures seem plausible
    (01:10:00) Why multipolar worlds might persist indefinitely
    (01:15:42) Ideological fanaticism increases existential and suffering risks
    (01:17:09) Ideological fanaticism increases the risk of war and conflict
    (01:17:44) Reasons for war and ideological fanaticism
    (01:26:27) Fanatical ideologies are non-democratic, which increases the risk of war
    (01:27:00) These risks are both time-sensitive and timeless
    (01:27:44) Fanatical retributivism may lead to astronomical suffering
    (01:29:50) Empirical evidence: how many people endorse eternal extreme punishment?
    (01:33:53) Religious fanatical retributivism
    (01:40:45) Secular fanatical retributivism
    (01:41:43) Ideological fanaticism could undermine long-reflection-style frameworks and AI alignment
    (01:42:33) Ideological fanaticism threatens collective moral deliberation
    (01:47:35) AI alignment may not solve the fanaticism problem either
    (01:53:33) Prevalence of reality-denying, anti-pluralistic, and punitive worldviews
    (01:55:44) Ideological fanaticism could worsen many other risks
    (01:55:49) Differential intellectual regress
    (01:56:51) Ideological fanaticism may give rise to extreme optimization and insatiable moral desires
    (01:59:21) Apocalyptic terrorism
    (02:00:05) S-risk-conducive propensities and reverse cooperative intelligence
    (02:01:28) More speculative dynamics: purity spirals and self-inflicted suffering
    (02:03:00) Unknown unknowns and navigating exotic scenarios
    (02:03:43) Interventions
    (02:05:31) Societal or political interventions
    (02:05:51) Safeguarding democracy
    (02:06:40) Reducing political polarization
    (02:10:26) Promoting anti-fanatical values: classical liberalism and Enlightenment principles
    (02:13:55) Growing the influence of liberal democracies
    (02:15:54) Encouraging reform in illiberal countries
    (02:16:51) Promoting international cooperation
    (02:22:36) Artificial intelligence-related interventions
    (02:22:41) Reducing the chance that transformative AI falls into the hands of fanatics
    (02:27:58) Making transformative AIs themselves less likely to be fanatical
    (02:36:14) Using AI to improve epistemics and deliberation
    (02:38:13) Fanaticism-resistant post-AGI governance
    (02:39:51) Addressing deeper causes of ideological fanaticism
    (02:41:26) Supplementary materials
    (02:41:39) Acknowledgments
    (02:42:22) References

    First published: February 12th, 2026
    Source: https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism
    Narrated by TYPE III AUDIO.

    “More EAs should consider working for the EU” by EU Policy Careers

    Feb 2, 2026 · 11:56


    Context: The authors are a few EAs who currently work or have previously worked at the European Commission. In this post, we make the case that more people[1] aiming for a high-impact career should consider working for the EU institutions[2], using the Importance, Tractability, Neglectedness framework; and briefly outline how one might get started on this, highlighting a currently open recruitment drive (deadline 10 March) that only comes along once every ~5 years.

    Why working at the EU can be extremely impactful

    Importance: The EU adopts binding legislation for a continent of 450 million people and has a significant budget, making it an important player across different EA cause areas.

    Animal welfare[3]: The EU sets welfare standards for the over 10 billion farmed animals slaughtered across the continent each year. The issue suffered a major setback in 2023, when the Commission, in the final steps of the process, dropped the ‘world's most comprehensive farm animal welfare reforms to date', following massive farmers' protests in Brussels. The reform would have included ‘banning cages and crates for Europe's roughly 300 million caged animals, ending the routine mutilation of perhaps 500 million animals per year, stopping the [...]

    Outline:
    (00:43) Why working at the EU can be extremely impactful
    (00:49) Importance
    (05:30) Tractability
    (07:22) Neglectedness
    (09:00) Paths into the EU

    First published: February 1st, 2026
    Source: https://forum.effectivealtruism.org/posts/t23ko3x2MoHekCKWC/more-eas-should-consider-working-for-the-eu
    Narrated by TYPE III AUDIO.

    “The Scaling Series Discussion Thread: with Toby Ord” by Toby Tremlett

    Feb 2, 2026 · 2:40


    We're trying something a bit new this week. Over the last year, Toby Ord has been writing about the implications of the fact that improvements in AI require exponentially more compute. Only one of these posts so far has been put on the EA Forum. This week we've put the entire series on the Forum and made this thread for you to discuss your reactions to the posts. Toby Ord will check in once a day to respond to your comments[1]. Feel free to also comment directly on the individual posts that make up this sequence, but you can treat this as a central discussion space for both general takes and more specific questions. If you haven't read the series yet, we've created a page where you can, and you can see the summaries of each post below:
    • Are the Costs of AI Agents Also Rising Exponentially? Agents can do longer and longer tasks, but their dollar cost to do these tasks may be growing even faster.
    • How Well Does RL Scale? I show that RL-training for LLMs scales much worse than inference or pre-training.
    • Evidence that Recent AI Gains are Mostly from Inference-Scaling I show how [...]

    First published: February 2nd, 2026
    Source: https://forum.effectivealtruism.org/posts/JAcueP8Dh6db6knBK/the-scaling-series-discussion-thread-with-toby-ord
    Narrated by TYPE III AUDIO.

    [Linkpost] “Are the Costs of AI Agents Also Rising Exponentially?” by Toby_Ord

    Feb 2, 2026 · 15:16


    This is a link post. There is an extremely important question about the near-future of AI that almost no-one is asking. We've all seen the graphs from METR showing that the length of tasks AI agents can perform has been growing exponentially over the last 7 years. While GPT-2 could only do software engineering tasks that would take someone a few seconds, the latest models can (50% of the time) do tasks that would take a human a few hours. As this trend shows no signs of stopping, people have naturally taken to extrapolating it out, to forecast when we might expect AI to be able to do tasks that take an engineer a full work-day; or week; or year. But we are missing a key piece of information — the cost of performing this work. Over those 7 years, AI systems have grown exponentially. The size of the models (parameter count) has grown by 4,000x and the number of times they are run in each task (tokens generated) has grown by about 100,000x. AI researchers have also found massive efficiencies, but it is eminently plausible that the cost for the peak performance measured by METR has been [...]

    Outline:
    (13:02) Conclusions
    (14:05) Appendix
    (14:08) METR has a similar graph on their page for GPT-5.1 codex. It includes more models and compares them by token counts rather than dollar costs.

    First published: February 2nd, 2026
    Source: https://forum.effectivealtruism.org/posts/AbHPpGTtAMyenWGX8/are-the-costs-of-ai-agents-also-rising-exponentially
    Linkpost URL: https://www.tobyord.com/writing/hourly-costs-for-ai-agents
    Narrated by TYPE III AUDIO.
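    The two growth factors quoted in this description multiply if per-task cost scales with both model size and tokens generated. A back-of-the-envelope sketch of that multiplication (deliberately ignoring the "massive efficiencies" the post also mentions, so this is only a naive upper bound, not a claim from the post):

```python
# Growth factors over the ~7 years of the METR trend, as quoted in the post.
param_growth = 4_000     # model size (parameter count) grew ~4,000x
token_growth = 100_000   # tokens generated per task grew ~100,000x

# If cost per task scaled with (parameters x tokens generated),
# the naive bound on per-task cost growth would be their product.
raw_cost_growth = param_growth * token_growth  # 400,000,000x before efficiencies
```

    Efficiency gains eat into this enormously in practice, which is exactly why the post treats the true cost trend as an open question.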

    [Linkpost] “Evidence that Recent AI Gains are Mostly from Inference-Scaling” by Toby_Ord

    Feb 2, 2026 · 10:01


    This is a link post. In the last year or two, the most important trend in modern AI came to an end. The scaling-up of computational resources used to train ever-larger AI models through next-token prediction (pre-training) stalled out. Since late 2024, we've seen a new trend of using reinforcement learning (RL) in the second stage of training (post-training). Through RL, the AI models learn to do superior chain-of-thought reasoning about the problem they are being asked to solve. This new era involves scaling up two kinds of compute:
    • the amount of compute used in RL post-training
    • the amount of compute used every time the model answers a question

    Industry insiders are excited about the first new kind of scaling, because the amount of compute needed for RL post-training started off being small compared to the tremendous amounts already used in next-token prediction pre-training. Thus, one could scale the RL post-training up by a factor of 10 or 100 before even doubling the total compute used to train the model. But the second new kind of scaling is a problem. Major AI companies were already starting to spend more compute serving their models to customers than in the training [...]

    First published: February 2nd, 2026
    Source: https://forum.effectivealtruism.org/posts/5zfubGrJnBuR5toiK/evidence-that-recent-ai-gains-are-mostly-from-inference
    Linkpost URL: https://www.tobyord.com/writing/mostly-inference-scaling
    Narrated by TYPE III AUDIO.

    [Linkpost] “The Extreme Inefficiency of RL for Frontier Models” by Toby_Ord

    Feb 2, 2026 · 14:34


    This is a link post. The new scaling paradigm for AI reduces the amount of information a model can learn from per hour of training by a factor of 1,000 to 1,000,000. I explore what this means and its implications for scaling.

    The last year has seen a massive shift in how leading AI models are trained. 2018–2023 was the era of pre-training scaling. LLMs were primarily trained by next-token prediction (also known as pre-training). Much of OpenAI's progress from GPT-1 to GPT-4 came from scaling up the amount of pre-training by a factor of 1,000,000. New capabilities were unlocked not through scientific breakthroughs, but through doing more-or-less the same thing at ever-larger scales. Everyone was talking about the success of scaling, from AI labs to venture capitalists to policy makers. However, there's been markedly little progress in scaling up this kind of training since (GPT-4.5 added one more factor of 10, but was then quietly retired). Instead, there has been a shift to taking one of these pre-trained models and further training it with large amounts of Reinforcement Learning (RL). This has produced models like OpenAI's o1, o3, and GPT-5, with dramatic improvements in reasoning (such as solving [...]

    First published: February 2nd, 2026
    Source: https://forum.effectivealtruism.org/posts/64iwgmMvGSTBHPdHg/the-extreme-inefficiency-of-rl-for-frontier-models
    Linkpost URL: https://www.tobyord.com/writing/inefficiency-of-reinforcement-learning
    Narrated by TYPE III AUDIO.
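    One way to see where a factor of 1,000 to 1,000,000 could come from is to compare supervision signals: pre-training gets feedback on every token, while outcome-based RL gets roughly one success/failure signal per episode. The sketch below illustrates the style of argument only; the episode length and vocabulary size are my own illustrative assumptions, not figures from the post.

```python
import math

# Illustrative assumptions (not from the post).
tokens_per_episode = 10_000   # tokens in one RL episode / pre-training chunk
vocab_size = 50_000           # typical LLM vocabulary size

# Pre-training: next-token prediction supervises every token, each worth
# up to log2(vocab_size) bits of information.
pretraining_bits = tokens_per_episode * math.log2(vocab_size)

# Outcome-based RL: roughly one binary success/failure signal per episode.
rl_bits = 1.0

ratio = pretraining_bits / rl_bits  # lands inside the post's 1,000-1,000,000 range
```

    With these assumptions the ratio is on the order of 100,000, comfortably inside the range the post states.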

    [Linkpost] “Inference Scaling Reshapes AI Governance” by Toby_Ord

    Feb 2, 2026 · 34:49


    This is a link post. The shift from scaling up the pre-training compute of AI systems to scaling up their inference compute may have profound effects on AI governance. The nature of these effects depends crucially on whether this new inference compute will primarily be used during external deployment or as part of a more complex training programme within the lab. Rapid scaling of inference-at-deployment would:
    • lower the importance of open-weight models (and of securing the weights of closed models),
    • reduce the impact of the first human-level models,
    • change the business model for frontier AI,
    • reduce the need for power-intense data centres, and
    • derail the current paradigm of AI governance via training compute thresholds.

    Rapid scaling of inference-during-training would have more ambiguous effects that range from a revitalisation of pre-training scaling to a form of recursive self-improvement via iterated distillation and amplification.

    The end of an era — for both training and governance: The intense year-on-year scaling up of AI training runs has been one of the most dramatic and stable markers of the Large Language Model era. Indeed, it had been widely taken to be a permanent fixture of the AI landscape and the basis of many approaches to [...]

    Outline:
    (01:06) The end of an era -- for both training and governance
    (05:24) Scaling inference-at-deployment
    (06:42) Reducing the number of simultaneously served copies of each new model
    (08:45) Reducing the value of securing model weights
    (09:30) Reducing the benefits and risks of open-weight models
    (10:05) Unequal performance for different tasks and for different users
    (12:08) Changing the business model and industry structure
    (12:50) Reducing the need for monolithic data centres
    (17:16) Scaling inference-during-training
    (28:07) Conclusions
    (30:17) Appendix. Comparing the costs of scaling pre-training vs inference-at-deployment

    First published: February 2nd, 2026
    Source: https://forum.effectivealtruism.org/posts/RnsgMzsnXcceFfKip/inference-scaling-reshapes-ai-governance
    Linkpost URL: https://www.tobyord.com/writing/inference-scaling-reshapes-ai-governance
    Narrated by TYPE III AUDIO.

    [Linkpost] “Is there a Half-Life for the Success Rates of AI Agents?” by Toby_Ord

    Feb 2, 2026 · 19:45


    This is a link post. Building on the recent empirical work of Kwa et al. (2025), I show that within their suite of research-engineering tasks, the performance of AI agents on longer-duration tasks can be explained by an extremely simple mathematical model — a constant rate of failing during each minute a human would take to do the task. This implies an exponentially declining success rate with the length of the task, and that each agent could be characterised by its own half-life. This empirical regularity allows us to estimate the success rate for an agent at different task lengths. And the fact that this model is a good fit for the data is suggestive of the underlying causes of failure on longer tasks — that they involve increasingly large sets of subtasks where failing any one fails the task. Whether this model applies more generally on other suites of tasks is unknown and an important subject for further work.

    METR's results on the length of tasks agents can reliably complete: A recent paper by Kwa et al. (2025) from the research organisation METR has found an exponential trend in the duration of the tasks that frontier AI agents can [...]

    Outline:
    (05:33) Explaining these results via a constant hazard rate
    (14:54) Upshots of the constant hazard rate model
    (18:47) Further work
    (19:25) References

    First published: February 2nd, 2026
    Source: https://forum.effectivealtruism.org/posts/qz3xyqCeriFHeTAJs/is-there-a-half-life-for-the-success-rates-of-ai-agents-3
    Linkpost URL: https://www.tobyord.com/writing/half-life
    Narrated by TYPE III AUDIO.
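    The model in this description is simple enough to state in a few lines. A sketch (the function name and the example half-life are illustrative assumptions, not from the paper): a constant per-minute hazard rate implies an exponentially declining success rate, and the half-life is ln(2) divided by that rate.

```python
import math

def success_rate(task_minutes, half_life_minutes):
    """P(success) under a constant hazard rate: each (human) minute of the
    task carries the same chance of a fatal failure, so success decays
    exponentially and the agent is characterised by its half-life."""
    hazard = math.log(2) / half_life_minutes  # per-minute failure rate
    return math.exp(-hazard * task_minutes)

# An agent with a 60-minute half-life succeeds half the time on 60-minute
# tasks and a quarter of the time on 120-minute tasks.
p_60 = success_rate(60, 60)     # 0.5
p_120 = success_rate(120, 60)   # 0.25
```

    This also shows why failure on long tasks compounds: doubling the task length squares the success probability, exactly as if the task were two independent subtasks that must both succeed.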

    [Linkpost] “Inference Scaling and the Log-x Chart” by Toby_Ord

    Feb 2, 2026 · 16:32


    This is a link post. Improving model performance by scaling up inference compute is the next big thing in frontier AI. But the charts being used to trumpet this new paradigm can be misleading. While they initially appear to show steady scaling and impressive performance for models like o1 and o3, they really show poor scaling (characteristic of brute force) and little evidence of improvement between o1 and o3. I explore how to interpret these new charts and what evidence for strong scaling and progress would look like.

    From scaling training to scaling inference: The dominant trend in frontier AI over the last few years has been the rapid scale-up of training — using more and more compute to produce smarter and smarter models. Since GPT-4, this kind of scaling has run into challenges, so we haven't yet seen models much larger than GPT-4. But we have seen a recent shift towards scaling up the compute used during deployment (aka 'test-time compute' or 'inference compute'), with more inference compute producing smarter models. You could think of this as a change in strategy from improving the quality of your employees' work via giving them more years of training in which to acquire [...]

    First published: February 2nd, 2026
    Source: https://forum.effectivealtruism.org/posts/zNymXezwySidkeRun/inference-scaling-and-the-log-x-chart
    Linkpost URL: https://www.tobyord.com/writing/inference-scaling-and-the-log-x-chart
    Narrated by TYPE III AUDIO.

    [Linkpost] “The Scaling Paradox” by Toby_Ord

    Jan 30, 2026 · 16:16


    This is a link post. AI capabilities have improved remarkably quickly, fuelled by the explosive scale-up of resources being used to train the leading models. But if you examine the scaling laws that inspired this rush, they actually show extremely poor returns to scale. What's going on?

    AI Scaling is Shockingly Impressive: The era of LLMs has seen remarkable improvements in AI capabilities over a very short time. This is often attributed to the AI scaling laws — statistical relationships which govern how AI capabilities improve with more parameters, compute, or data. Indeed, AI thought-leaders such as Ilya Sutskever and Dario Amodei have said that the discovery of these laws led them to the current paradigm of rapid AI progress via a dizzying increase in the size of frontier systems. Before the 2020s, most AI researchers were looking for architectural changes to push the frontiers of AI forwards. The idea that scale alone was sufficient to provide the entire range of faculties involved in intelligent thought was unfashionable and seen as simplistic. A key reason it worked was the tremendous versatility of text. As Turing had noted more than 60 years earlier, almost any challenge that one could pose to [...]

    First published: January 30th, 2026
    Source: https://forum.effectivealtruism.org/posts/742xJNTqer2Dt9Cxx/the-scaling-paradox
    Linkpost URL: https://www.tobyord.com/writing/the-scaling-paradox
    Narrated by TYPE III AUDIO.

    “Why Isn't EA at the Table When $121 Billion Gets Allocated to Biodiversity Every Year?” by David Goodman

    Play Episode Listen Later Jan 29, 2026 9:09


    There is an insane amount of money being thrown around by international organizations and agreements. Nobody with any kind of power over these agreements is asking basic EA questions like: "What are the problems we're trying to solve?" "What are the most neglected aspects of those problems?" and "What is the most cost-effective way to address those neglected areas?" For someone coming from an EA background, reading through plans for $200-700 billion in annual funding commitments that focus on unimaginative and ineffective interventions makes you want to tear your hair out. So much good could be done with that money. EA focuses a lot on private philanthropy, earning-to-give (though less so post-SBF), and the usual pots of money. But why don't we have delegations who are knowledgeable in international diplomacy going to COPs and advocating for more investment in lab-grown meat, alternative proteins, or lithium recycling? It seems like there would be insane alpha in such a strategy. An example: The Global Biodiversity Framework The Kunming-Montreal Global Biodiversity Framework (GBF) was adopted in 2022 to halt biodiversity loss. It has 23 targets, commitments of $200 billion annually by 2030 and $700 billion by 2050, and near-universal adoption from [...] ---Outline:(01:12) An example: The Global Biodiversity Framework(02:13) What Is That Money Actually Being Spent On?(03:02) The Elephant in the Room Literally Nobody is Talking About: Beef(04:21) The Absolutely Insane Funding Gap(05:26) The Leverage Point We're Ignoring(06:47) What Would EA Engagement Look Like? --- First published: January 20th, 2026 Source: https://forum.effectivealtruism.org/posts/Peaq4HNhn8agsZY3z/why-isn-t-ea-at-the-table-when-usd121-billion-gets-allocated --- Narrated by TYPE III AUDIO.

    “If EA ruled the world, career advisors would tell some people to work for the postal service” by Toby Tremlett

    Play Episode Listen Later Jan 29, 2026 1:54


    EA thinking is thinking on the margin. When EAs prioritise causes, they are prioritising causes given the fact that they only control their one career, or, sometimes, given that they have some influence over a community of a few thousand people, and the distribution of some millions or billions of dollars. Some critiques of EA act as if statements about cause prioritisation are absolute rather than relative. I.e. that EAs are saying that literally everyone should be working on AI Safety, or, the flipside, that EAs are saying that no one should be working on [insert a problem which is pressing, but not among the most urgent to commit the next million dollars to]. In conversations that sound like this, I've often turned to the idea that, if EAs controlled all the resources in the world, career advisors at the hypothetical world government's version of 80,000 Hours would be advising some people to be postal workers. Given that the EA world government will have long ago filled the current areas of direct EA work, it could be the single most impactful thing a person could do with their skillset, given the comparative neglectedness of work in the [...] --- First published: January 16th, 2026 Source: https://forum.effectivealtruism.org/posts/MZ5g33fXuxd6bSgJW/if-ea-ruled-the-world-career-advisors-would-tell-some-people --- Narrated by TYPE III AUDIO.

    “5 ways to better charity work in 2026” by NickLaing

    Play Episode Listen Later Jan 27, 2026 15:20


    I've started a substack, so a few more people might encounter my spicy takes - I'll still mostly be here. USAID is gone. Direct country aid to low income countries is down 25%. So now's a great time to share five ways I think development charity can be done better in 2026. To state the obvious... none of these ideas will be the best approach all of the time; there's plenty of grey area and nuance. I start a little playful, then get a little more serious. 1. Ditch the Cars. Close your eyes and picture the first thing that comes into your head when I say “NGO”. It might be………… a shiny white Landcruiser. [Image: the view from the front window of my hut.] But owning cars doesn't usually make economic sense in low income countries. The ‘real' market makes this clear. Businesses rarely buy cars; instead they use public transport or motorbikes. When companies do own cars, it's more Corolla than Landcruiser as well. Cars are often more expensive dollar-for-dollar than in richer countries, fuel costs are high, and many NGOs hire drivers, all while public transport is dirt cheap. To move 100km in Uganda [...] ---Outline:(00:43) 1. Ditch the Cars(02:49) 2. Fund Solutions not Projects(07:07) 3. Fund cost effective solutions(08:06) 4. Fund Bimodal - Test and Scale(11:59) 5. Pay workers less --- First published: January 19th, 2026 Source: https://forum.effectivealtruism.org/posts/LvE3s6kCJk4Jck2ww/5-ways-to-better-charity-work-in-2026 --- Narrated by TYPE III AUDIO.

    “Reflections on FarmKind's January media campaign” by Aidan Alexander, ThomNorman

    Play Episode Listen Later Jan 24, 2026 31:47


    Summary In January 2025, FarmKind ran a provocative media campaign which used controversial media messaging and materials to promote ‘offsetting' as an option for individuals who are concerned about factory farming but are currently unwilling or unable to change their diet. The campaign raised an estimated $16,700--$59,300 (explained in our Results section below) and generated a number of media ‘hits' including TV and created some debate that many advocates have told us they found productive. However we made mistakes in its execution and generated unproductive controversy within the EA and animal advocacy movements. This post aims to explain our theory of change, what happened, what we got wrong, and what we learned. We still believe mobilizing the meat-eating majority to take action for farmed animals requires meeting them where they're at, which sometimes means provocative framing that distinguishes us from vegan advocacy -- though we understand many in the movement disagree. However, we regret specific execution failures, particularly our insufficient stakeholder consultation, which risks sparking infighting within the animal movement.Context FarmKind is a donation platform that aims to bring more money into the movement against factory farming. People donate through our platform directly to six highly effective farmed [...] 
---Outline:(00:12) Summary(01:23) Context(02:06) The goals of our campaign(02:51) Primary goals(03:16) Secondary goals(04:06) How we envisaged it working(05:24) Launching the campaign(07:18) Coordination with Veganuary(07:22) Did you tell Veganuary about the campaign in advance?(08:21) Did Veganuary object to the campaign?(09:07) Is there bad blood between you and Veganuary?(09:50) Does Veganuary endorse this campaign?(10:13) What we got wrong(10:16) 1) Underestimating the risk of movement infighting(12:29) 2) Insufficient stakeholder consultation(13:11) 3) Internal coordination failures(13:52) How we responded to concerns(17:20) Results(20:12) FAQs(20:15) Are you anti-vegan?(22:35) Aren't you concerned about dissuading people from being vegan?(26:07) Have you measured whether you're dissuading people from being vegan or supporting animal advocacy?(29:03) Why not just do something much more nuanced?(29:45) Why did you pitch to tabloids and right-wing outlets?(30:50) Conclusion --- First published: January 23rd, 2026 Source: https://forum.effectivealtruism.org/posts/c2buSr3oatKQJZi6F/reflections-on-farmkind-s-january-media-campaign --- Narrated by TYPE III AUDIO.

    “Announcing All the Lives You Can Change” by JDBauman, dominicroser, DavidZhang

    Play Episode Listen Later Jan 19, 2026 8:31


    Summary: Our new book, All the Lives You Can Change: Effective Altruism for Christians, will be published on April 28, 2026. The book introduces effective altruism–style thinking to a Christian audience, framing effectiveness, cause prioritization, and evidence-based action as expressions of loving God and loving one's neighbor (Matt. 22:37–39). Authored by @dominicroser, @DavidZhang and me (JD). You can best support this project by pre-ordering a copy or free intro here. Praise for All the Lives You Can Change: “Effective altruism asks us to extend our empathy beyond our immediate circle to include distant strangers and future generations. All the Lives You Can Change argues powerfully that this ‘radical empathy' is at the very core of the Christian faith. Inspiring, intellectually rigorous, and deeply practical, this is an essential guide for Christians who want to ensure their compassion translates into the greatest possible impact for the world's most vulnerable people. It's a beautiful, moving book.” — @William_MacAskill, author of What We Owe the Future and Doing Good Better “I couldn't put this book down. It manages to be both inspiring and practical. It blends cutting-edge research with careful theological discussion. . . . Essential reading for Christians who are [...] ---Outline:(00:12) Summary(00:49) Praise for All the Lives You Can Change(02:50) Longer Summary(04:30) About the book(04:55) Table of Contents (Overview)(06:31) Why This Might Be Relevant to the (Secular) EA Community --- First published: January 14th, 2026 Source: https://forum.effectivealtruism.org/posts/E7RqRc3fLNm2syzAh/announcing-all-the-lives-you-can-change --- Narrated by TYPE III AUDIO.

    “Is EA underfunding animal advocacy according to our own preferences?” by ElliotTep

    Play Episode Listen Later Jan 14, 2026 7:37


    TL;DR When surveyed, the EA community and leaders think ~18-24% of resources should go towards animal advocacy. The actual figure is about 7%. We as the EA ecosystem are putting fewer resources (money and time) into animal advocacy than the movement thinks we should when surveyed. This disparity could be because of loss of message fidelity, because it's a harder cause area to pitch to donors, or because of the role of large funders, but I'm honestly not too sure. My job at Senterra Funders involves making the case to EA/EA-adjacent prospective donors that they can do a tonne of good by donating to animal advocacy charities. As part of this work I've noticed a certain level of inconsistency in the EA ecosystem: I encounter a lot more people who want the animal advocacy movement to 'win' than people working in or donating to the space. The numbers: It turns out this intuition is backed up by survey data. Sources (see Appendix for extra details): Meta Coordination Forum (MCF; 2024) / Talent Need Survey on ideal allocation of financial resources; EA Community survey data from 2023 on jobs by cause area, which I obtained in private correspondence with David Moss; Historical EA [...] ---Outline:(01:07) The numbers(02:37) Accounting for the disparity(05:04) Appendix 1. Data Sources --- First published: January 13th, 2026 Source: https://forum.effectivealtruism.org/posts/FxZdQJXs45fTFnMEe/is-ea-underfunding-animal-advocacy-according-to-our-own --- Narrated by TYPE III AUDIO.

    “Why I Donate: A Personal Story” by Stien

    Play Episode Listen Later Jan 12, 2026 19:09


    Thank you At EAGx Amsterdam, I shared most of this as a talk. I was afraid I'd run out of time, so I decided to do things backwards and start with the thank you. I did not want to miss the most important thing. Since I might lose you halfway reading this long and personal piece, I decided to keep this order. The EA community creates a space that makes it easier to donate and to live my values—and to be okay with living in this world. It normalizes caring about effectiveness and spreadsheets, provides frameworks and research and feedback. This community makes me feel less alone in trying to navigate the absurdity and burden of existence. My Story, Not Yours I am assuming that anything I do is determined by luck and circumstance, nature and nurture. Therefore, one way to explain why I donate is to show you some of those things. This is personal; my story might not be applicable or relatable to you. I'm not sure there's anything practical you can learn from it. But maybe my experience raises questions that help you in your giving journey. First I'll tell you about my life [...] ---Outline:(00:10) Thank you(00:53) My Story, Not Yours(01:39) My life(08:01) I donate because it helps others(08:17) It's my responsibility to do something(09:23) I should do good responsibly(10:20) I donate because it helps me(10:30) Retail Therapy → Donation Therapy → Effective Giving(12:39) Convenience of effective giving(14:06) This is how I can live the lives I won't get to live(15:16) How I Donate --- First published: December 31st, 2025 Source: https://forum.effectivealtruism.org/posts/bRMQB85KXz6uzqXkf/why-i-donate-a-personal-story --- Narrated by TYPE III AUDIO.

    “Don't stop being an EA because you dislike EAs. You don't have to interact with most EAs. Just the ones you like.” by Kat Woods

    Play Episode Listen Later Jan 10, 2026 2:12


    An all too common reason I've seen to “quit EA” is disliking aspects of the community. Maybe “they're” too focused on the “wrong cause area” or are skeptical of yours. Maybe “they” annoy you. Maybe “they” publicly attacked you. I'm putting quotation marks around “they” to highlight an important thing: EA is composed of individuals. Some EAs may annoy you / focus on the "wrong cause area" / publicly attack you / [insert your reason here]. But you don't have to hang out with them! Imagine you decided you didn't like science because you didn't like some scientists. Or even most scientists! That might affect how often you go to science conferences. But that shouldn't affect your appreciation of science itself. Yes, science is a community, but it's also a practice, a goal, a method, an idea, results. EA is too. Not to mention - you don't have to interact with most scientists! Or EAs! You can just be picky. I only interact with “most EAs” when I post on the EA Forum or the EA subreddit. Otherwise I've found my favorite EAs and hang out with them regularly. I treat them as [...] --- First published: December 28th, 2025 Source: https://forum.effectivealtruism.org/posts/QPtimJrGBRyqiYzip/don-t-stop-being-an-ea-because-you-dislike-eas-you-don-t --- Narrated by TYPE III AUDIO.

    “Untitled Retrospective and Learnings from AI in Context's First Two Videos” by ChanaMessinger

    Play Episode Listen Later Jan 6, 2026 18:52


    Note: I used LLMs to draft different parts of this. I've checked almost everything, but there might be some mistakes remaining. Apologies for posting this on Christmas Eve. I wanted to get this out the door before the end of the year. Questions welcome, and if it's easy to pull metrics to answer them, I will. Summary: 80,000 Hours launched a video program in 2025 focused on longform, cinematic, personality-driven content about AI risks. Our first two longform releases were: We're Not Ready for Superintelligence (the "AI 2027" video): 8.9M views, ~1.4M watch hours; If you remember one AI disaster, make it this one (the "MechaHitler" video): 2.7M views, ~419K watch hours. Both videos significantly outperformed our expectations (we'd anticipated 15-50K views for the first). The cost per engagement hour ($0.11 and $0.39 respectively, including staff time) compares favorably to other 80,000 Hours programs. This post covers: what we spent, what we got, why we think it worked, and what we'd do differently. The numbers. Costs: Category: AI 2027 / MechaHitler. Direct costs: ~$50K / ~$64K. Staff hours: ~450 hrs / ~450 hrs (Note: I'm assuming it's about the same as for AI 2027; I didn't re-ask people how much time they spent.) Total cost (making some assumptions about how we should incorporate staff [...]
---Outline:(00:34) Summary(01:33) The numbers(01:36) Costs(02:16) Timing(02:40) Results(03:46) How valuable is a video watch hour?(04:24) Qualitative Feedback(04:28) AI 2027(05:51) MechaHitler(06:12) YouTube commenters like:(06:52) What the comments don't like:(07:17) Qualitative Analysis(07:21) Why we think AI 2027 did well(09:56) Why MechaHitler did less well (but still well)(10:50) Lessons Learned(10:54) Overall what we think matters(11:25) Our guess at what's less important (though we're certainly unsure, maybe if we nailed these, we'd get more success)(12:24) How our production works(12:43) The timeline(13:32) Ideation(14:06) Scripting(14:57) Shooting(15:31) Reshoots / Voiceover(15:45) Editing(16:06) Launch(17:00) What we're still figuring out(17:36) Closing thoughts --- First published: December 24th, 2025 Source: https://forum.effectivealtruism.org/posts/RCRaBYSqBaMzHzTjF/untitled-retrospective-and-learnings-from-ai-in-context-s --- Narrated by TYPE III AUDIO.
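    The post's cost-per-engagement-hour figures can be sanity-checked from the numbers it reports. The staff-time valuation below is backed out from the AI 2027 figure, not stated in the post, so treat it as an inferred assumption:

```python
# Direct costs, staff hours, and watch hours are from the post; the
# implied staff-time rate is backed out here, not stated there.

def cost_per_hour(direct_usd, staff_hours, staff_rate_usd, watch_hours):
    """Total cost (direct spend + valued staff time) per audience watch hour."""
    return (direct_usd + staff_hours * staff_rate_usd) / watch_hours

# Back out the staff rate implied by AI 2027's reported $0.11/watch-hour:
implied_rate = (0.11 * 1_400_000 - 50_000) / 450   # ~$231 per staff hour

# Applying that same rate to MechaHitler lands close to the reported $0.39:
mh = cost_per_hour(64_000, 450, implied_rate, 419_000)
print(round(implied_rate), round(mh, 2))
```

    The two reported figures are mutually consistent to within rounding, which is a useful check on the table above.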

    “I give because it's the most rational way to spend my money” by Lorenzo Buonanno

    Play Episode Listen Later Dec 31, 2025 8:46


    I really enjoyed reading the "why I donate" posts in the past week, so much so that I felt compelled to add my reflections, in case someone finds my reasons as interesting as I found theirs. 1. My money needs to be spent on something, might as well spend it on the most efficient things The core reason I give is something that I think is under-represented in the other posts: the money I have and earn will need to be spent on something, and it feels extremely inefficient and irrational to spend it on my future self when it can provide >100x as much to others. To me, it doesn't seem important whether I'm in the global top 10% or bottom 10%, or whether the money I have is due to my efforts or to the place I was born. If it can provide others 100x as much, it just seems inefficient/irrational to allocate it to myself. Honestly, the post could end here, but there are other secondary reasons/perspectives on why I personally donate that I haven't seen commonly discussed. 2. Spending money is voting on how the global economy allocates its resources In 2017, I read Wealth [...] ---Outline:(00:22) 1. My money needs to be spent on something, might as well spend it on the most efficient things(01:09) 2. Spending money is voting on how the global economy allocates its resources(04:11) 3. I don't think it's as bad as some make it out to be(07:35) 4. I donate because I'm an atheist (/s) --- First published: December 15th, 2025 Source: https://forum.effectivealtruism.org/posts/CSKob9hGmWM7f7yv8/i-give-because-it-s-the-most-rational-way-to-spend-my-money --- Narrated by TYPE III AUDIO.

    “Why I donate: some selfish reasons” by Kestrel

    Play Episode Listen Later Dec 28, 2025 5:52


    This year, I have given money to a range of EA cause areas. Most of it has either been towards global health and development, or EA infrastructure I believe does or could lead to effective fundraising for global health and development. The following is a list of very selfish personal reasons why I like to do this. I feel the selfless reasons have been adequately covered elsewhere, so I'm intentionally leaving them off. I get to ignore ineffective charity adverts. In order to genuinely convince myself that I am helping, I want to see things like well-regarded cost-effectiveness metrics. I do not like heartstring-tugging advertising or vague statements of "should", particularly to do with orphanages. They make me feel a bit ill. So I am glad that donating effectively gives me a very good justification to ignore them. It is a marker of my politics. I don't believe that poor people I don't know in rich countries are 100× more worthy of my help [i.e. worthy of help that's 100× less cost-efficient] than poor people in poor countries. This is because I don't believe anyone is 100× more worthy than anyone. Choosing to donate based on the cost-effectiveness of [...] ---Outline:(00:36) I get to ignore ineffective charity adverts.(01:02) It is a marker of my politics.(01:36) Giving expresses abundance.(02:32) I've stopped valuing things by how expensive they are.(03:17) People have stopped (openly) judging me about some of my life choices.(03:56) I get to hang out with cool people and be in the cool kids club.(04:16) It helps me genuinely care about helping people.(04:37) It motivates me at my job.(05:01) By giving effectively, I can do great things. --- First published: December 12th, 2025 Source: https://forum.effectivealtruism.org/posts/84PYRzFCeqZGfgv3N/why-i-donate-some-selfish-reasons --- Narrated by TYPE III AUDIO.

    “Ten big wins in 2025 for farmed animals” by LewisBollard

    Play Episode Listen Later Dec 26, 2025 10:23


    Note: This post was crossposted from the Coefficient Giving Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. It can feel hard to help factory-farmed animals. We're up against a trillion-dollar global industry and its army of lobbyists, marketeers, and apologists. This industry wields vast political influence in nearly every nation and sells its products to most people on earth. Against that, we are a movement of a few thousand full-time advocates operating on a shoestring. Our entire global movement — hundreds of groups combined — brings in less funds in a year than one meat company, JBS, makes in two days. And we have the bigger task. The meat industry just wants to preserve the status quo: virtually no regulation and ever-growing demand for factory farming. We want to upend it — and place humanity on a more humane path. Yet, somehow, we're winning. After decades of installing battery cages, gestation crates, and chick macerators, the industry is now removing them. Once-dominant industries, like fur farming, are collapsing. And advocates are building momentum toward bigger reforms for all farmed animals. Here are [...] --- First published: December 16th, 2025 Source: https://forum.effectivealtruism.org/posts/qTnsqYrmSTHawTNa6/ten-big-wins-in-2025-for-farmed-animals --- Narrated by TYPE III AUDIO.

    “The Further Pledge: Voluntary Simplicity” by GeorgeBridgwater

    Play Episode Listen Later Dec 16, 2025 12:00


    Conscious Meaning We share every moment with trillions of other conscious beings. Some are much like us, and others experience the world very differently. Creatures without a language to structure their thoughts, some who see broader spectrums of light or others who might experience the world in comparative slow motion. Each conscious moment immediately slips into the past largely unobserved and forgotten. They fall through time like snow to become frozen in the past. Always to have happened just as they did. Each conscious moment is transient and one small part of a vast whole, so one could see any individual as meaningless and insignificant. But every conscious moment is imbued with meaning. Happiness that need not justify itself and pains that consume any desire but to escape them. As individuals, we are not responsible for the state of the world. You did not choose to create disease, poverty and mental illness. You can't control nature, and you can't control the society around you. Many schools of philosophy disagree exactly on what our moral obligations are to others. Given this disagreement, we could default to radical scepticism that all attempts to decide what the right way to [...] ---Outline:(00:11) Conscious Meaning(02:06) Ovarian lottery(03:49) The Good we can do(05:18) Creating Balance(06:13) Voluntary Simplicity(08:12) Setting Salary based on the World's average income(10:12) Appendix: Let he who is without sin cast the first stone --- First published: December 11th, 2025 Source: https://forum.effectivealtruism.org/posts/wd7XsSwqWCzd2uzhq/the-further-pledge-voluntary-simplicity --- Narrated by TYPE III AUDIO.

    “GWWC's 2025 evaluations of evaluators” by Aidan Whitfield

    Play Episode Listen Later Dec 15, 2025 7:25


    The Giving What We Can research team is excited to share the results of our 2025 round of evaluations of charity evaluators and grantmakers! In this round, we completed two evaluations that will inform our donation recommendations for the 2025 giving season. As with our previous rounds, there are substantial limitations to these evaluations, but we nevertheless think that they are a significant improvement to a landscape in which there were previously no independent evaluations of evaluators' work. In this post, we share the key takeaways from our two 2025 evaluations and link to the full reports. In our conclusion, we explain our plans for future evaluations. Please also see our website for more context on why and how we evaluate evaluators. We look forward to your questions and comments! (Note: we will respond when we return from leave on the 8th of December) Key takeaways from each of our 2025 evaluations The two evaluators included in our 2025 round of evaluating evaluators were: GiveWell (full report) Happier Lives Institute (full report) GiveWell Based on our evaluation, we have decided to continue including GiveWell's Top Charities, Top Charities Fund and All Grants Fund in GWWC's [...] ---Outline:(01:08) Key takeaways from each of our 2025 evaluations(01:25) GiveWell(03:18) Happier Lives Institute (HLI)(06:29) Conclusion and future plans --- First published: December 1st, 2025 Source: https://forum.effectivealtruism.org/posts/sAiHYuuGGT7qvne5P/gwwc-s-2025-evaluations-of-evaluators --- Narrated by TYPE III AUDIO.

    “I Donate because I am Christian” by NickLaing

    Play Episode Listen Later Dec 15, 2025 7:02


    And Effective Altruism has put my faith community to shame. The Beginning: When I became a Christian at age 15, my life began to transform, but sadly my first external play was proclaiming no sex before marriage and saying F#$% a bit less (I've since resumed). Two years later at premed, Tuesday was my only night with no tutorial, so I joined a church group, which was weirdly labelled “Social Justice”. I had zero clue what this was about, maybe preventing bullying at school? Our leader Jo opened with a question I'll never forget. “I'm fundraising for World Vision and I told my chain-smoking friend I'll buy him a pack of cigs if he joins the fundraising effort. Do you guys think that's OK?” As we discussed the conundrum for the next hour my heart jumped a little. Perhaps my time, skills and money could be useful for something more than just a comfortable life in the ‘burbs… Why do I Give? “When you give….” — Jesus. Christian motivations for giving vary wildly. Some mostly give to keep their church club solvent, others to save face, but most have deeper motivations. Here are mine. [...] ---Outline:(01:06) Why do I Give?(01:24) Gratitude and Joy(02:24) Utilitarian(03:14) More to come?(03:49) Christians aren't great at Giving(04:04) Father of Earning to Give?(05:07) We're not much better(06:08) Effective Altruist Giving Impresses me --- First published: December 10th, 2025 Source: https://forum.effectivealtruism.org/posts/QrQ9jwFSNoEdd373f/i-donate-because-i-am-christian --- Narrated by TYPE III AUDIO.

    “3 doubts about veganism” by emre kaplan

    Play Episode Listen Later Dec 12, 2025 5:44


    I keep thinking about what kind of identity would be useful for building a powerful animal advocacy movement. Here are 3 features of veganism that I often think about which make me doubt its usefulness. Too maximalist: The official definition of veganism by the inventors of the term is the following: “Veganism is a philosophy and way of living which seeks to exclude—as far as is possible and practicable—all forms of exploitation of, and cruelty to, animals for food, clothing or any other purpose.” This basically amounts to "avoid doing bad things as far as possible." The threshold sits right below what is impossible. I think that is way too ambitious. Doing the best possible thing in every circumstance shouldn't be the criterion for inclusion in a social movement. We don't expect human rights activists to avoid all forms of exploitation and cruelty as far as possible to qualify as human rights activists. Some activists respond "No, veganism is the bare minimum. The 'as far as possible and practicable' part means it's not about being perfect." But when I ask for examples of gratuitously harmful actions that veganism doesn't forbid, at most I hear about instances of accidental uses [...] ---Outline:(00:22) Too maximalist(03:37) No space for believers to sin(04:29) Too behaviour-focused --- First published: November 26th, 2025 Source: https://forum.effectivealtruism.org/posts/BX8hPeye2QRcyftRk/3-doubts-about-veganism --- Narrated by TYPE III AUDIO.

    “The funding conversation we left unfinished” by jenn

    Play Episode Listen Later Dec 11, 2025 4:56


    People working in the AI industry are making stupid amounts of money, and word on the street is that Anthropic is going to have some sort of liquidity event soon (for example possibly IPOing sometime next year). A lot of people working in AI are familiar with EA, and are intending to direct donations our way (if they haven't started already). People are starting to discuss what this might mean for their own personal donations and for the ecosystem, and this is encouraging to see. It also has me thinking about 2022. Immediately before the FTX collapse, we were just starting to reckon, as a community, with the pretty significant vibe shift in EA that came from having a lot more money to throw around. CitizenTen, in "The Vultures Are Circling" (April 2022), puts it this way: The message is out. There's easy money to be had. And the vultures are coming. On many internet circles, there's been a worrying tone. “You should apply for [insert EA grant], all I had to do was pretend to care about x, and I got $$!” Or, “I'm not even an EA, but I can pretend, as getting a 10k grant is [...] --- First published: December 10th, 2025 Source: https://forum.effectivealtruism.org/posts/vpPee6NgMbPcdsam3/the-funding-conversation-we-left-unfinished --- Narrated by TYPE III AUDIO.

    “Front-Load Giving Because of Anthropic Donors?” by Jeff Kaufman

    Play Episode Listen Later Dec 9, 2025 2:03


    Summary: Anthropic has many employees with an EA-ish outlook, who may soon have a lot of money. If you also have that kind of outlook, money donated sooner will likely be much higher impact. It's December, and I'm trying to figure out how much to donate. This is usually a straightforward question: give 50%. But this year I'm considering dipping into savings. There are many EAs and EA-informed employees at Anthropic, which has been very successful and is reportedly considering an IPO. The Manifold market estimates a median IPO date of June 2027. At a floated $300B valuation, with many EAs among their early employees, the amount of additional funding could be in the billions. Efforts I'd most want to support may become less constrained by money than capacity: as I've experienced in running the NAO, scaling programs takes time. This means donations now seem more valuable; ones that help organizations get into a position to productively apply further funding especially so. In retrospect I wish I'd been able to support 80,000 Hours more substantially before Open Philanthropy (now Coefficient Giving) began funding them; this time, with more ability to see what's likely [...] --- First published: December 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/rRBaP7YbXfZibSn3C/front-load-giving-because-of-anthropic-donors --- Narrated by TYPE III AUDIO.

    “Peter Wildeford talks about risks from AI on the Daily Show” by MartinBerlin

    Play Episode Listen Later Dec 9, 2025 0:31


    Ronny Chieng strikes again, this time featuring Peter Wildeford and the risks from AI on the Daily Show: --- First published: December 5th, 2025 Source: https://forum.effectivealtruism.org/posts/epuSKFdGD82cxZAGd/peter-wildeford-talks-about-risks-from-ai-on-the-daily-show --- Narrated by TYPE III AUDIO.

    “Caring about Bugs Isn't Weird” by Bob Fischer

    Play Episode Listen Later Dec 6, 2025 4:48


    I've spoken with hundreds of entomologists at conferences the world over. While there's clearly some self-selection (not everyone wants to talk to a philosopher), my experience is consistent: most think it's reasonable to care about the welfare of insects. Entomologists don't regard it as the last stop on the crazy train; they don't worry they're getting mugged; they don't think the idea is just utilitarianism run amok. Instead, they see some concern for welfare as stemming from a common-sense commitment to being humane in our dealings with animals. Let's be clear: they embrace “some concern,” not “bugs have rights.” Entomologists generally believe it's important to do invasive studies on insects, to manage their populations, to kill them to document their diversity. But given the choice between an aversive and a non-aversive way of euthanizing insects, most prefer the latter. Given the choice between killing fewer insects and more, most prefer fewer. They don't want to end good lives unnecessarily; they don't want to cause gratuitous suffering. It wasn't always this way. But the science of sentience is evolving; attitudes are evolving too. These people work with insects every day; they constantly face choices about how to catch insects, how [...] --- First published: November 23rd, 2025 Source: https://forum.effectivealtruism.org/posts/4FncrGhQKcuFthxiR/caring-about-bugs-isn-t-weird --- Narrated by TYPE III AUDIO.

    “Announcing the new AIM CEO!” by Ambitious Impact

    Play Episode Listen Later Dec 2, 2025 3:21


    We, the AIM Board and outgoing CEO Joey Savoie, are delighted to announce that Samantha Kagel has been selected as AIM's new CEO, effective December 1, 2025. Over the last few months, we have been engaged in a highly important activity: finding AIM's next CEO. This was not an easy position to fill, as we sought someone who could lead the organization to high growth and impact while retaining the core elements that have made AIM unique. We were committed to conducting a thorough search and put out a public call for candidates, considering over 100 applicants from both public applications and referrals. We evaluated external candidates, internal team members, and past charity graduates, and ultimately identified Samantha as the candidate who we believe will best execute the next stages of AIM's development. About Samantha Samantha has served on AIM's executive team as Chief Programs Officer for the past 1.5 years, leading the strategy and delivery of our Charity Entrepreneurship function. In this role, she has demonstrated exceptional capability across multiple dimensions: building collaborative teams, driving strategic execution, and maintaining unwavering focus on impact. Before joining the executive team, Samantha successfully filled nearly every role across our organization [...] ---Outline:(01:00) About Samantha(02:13) A note from our incoming CEO, Samantha Kagel --- First published: December 1st, 2025 Source: https://forum.effectivealtruism.org/posts/r8GSGnay6scK7Jmb5/announcing-the-new-aim-ceo --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “The overall cost-effectiveness of an intervention often matters less than the counterfactual use of its funding” by abrahamrowe

    Play Episode Listen Later Nov 26, 2025 13:18


    Cross-posted from Good Structures. For impact-minded donors, it's natural to focus on doing the most cost-effective thing. Suppose you're genuinely neutral on what you do, as long as it maximizes the good. If you're donating money, you want to look for the most cost-effective opportunity (on the margin) and donate to it. But many organizations and individuals who care about cost-effectiveness try to influence the giving of others. This includes: Research organizations that try to influence the allocation or use of charitable funds. Donor advisors who work with donors to find promising opportunities. People arguing to community members on venues like the EA Forum. Charity recommenders like GiveWell and Animal Charity Evaluators. These are endeavors where you're specifically trying to influence the giving of others. And when you influence the giving of others, you don't get full credit for their decisions! You should only get credit for how much better the thing you convinced them to do is compared to what they would otherwise do. This is something that many people in EA and related communities take for granted and find obvious in the abstract. But I think the implications of this aren't always fully digested by the [...] ---Outline:(03:34) Impact is largely a function of what the donor would have done otherwise.(04:36) Is improving the use of effective or ineffective charitable dollars easier?(06:14) How do people respond to these lower impact interventions?(08:14) What are the implications of paying a lot more attention to funding counterfactuals?(10:21) Objections to this argument. --- First published: November 12th, 2025 Source: https://forum.effectivealtruism.org/posts/YrMFHJm7mbswJd7Me/the-overall-cost-effectiveness-of-an-intervention-often --- Narrated by TYPE III AUDIO.
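
    The credit-assignment rule the post describes is simple arithmetic; here is a minimal sketch, with all dollar figures and per-dollar values invented for illustration:

```python
def counterfactual_credit(value_of_new_use, value_of_counterfactual_use):
    # You only get credit for the improvement over what the donor
    # would have done anyway, not the full value of the new use.
    return value_of_new_use - value_of_counterfactual_use

# Invented example: redirecting $10k from a charity producing 2 units
# of good per dollar to one producing 5 units per dollar.
dollars = 10_000
credit = counterfactual_credit(5 * dollars, 2 * dollars)
print(credit)  # 30000 -- not the full 50000 the new charity produces
```

    Note the implication: moving dollars that were already doing substantial good yields much less credit than redirecting ineffective dollars, even when the destination charity is the same.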

    “Announcing ClusterFree: A cluster headache advocacy and research initiative (and how you can help)” by Alfredo Parra

    Play Episode Listen Later Nov 25, 2025 7:34


    Today we're announcing a new cluster headache advocacy and research initiative: ClusterFree. Learn more about how you (and anyone) can help. Our mission: ClusterFree's mission is to help cluster headache patients globally access safe, effective pain relief treatments as soon as possible through advocacy and research. Cluster headache (also known as ‘suicide headache’) is considered the most painful condition known to mankind. We believe it is one of the largest sources of preventable extreme suffering in humans today. Every year, about 3 million adults (and an unknown number of minors) suffer from this debilitating condition. And yet, even in the EU, only 47% of the cluster headache population had unrestricted access to standard treatments (primarily oxygen and triptans) in 2019. Despite affecting a similar number of people as multiple sclerosis, global investment into cluster headache is minuscule. At the same time, countless patients have reported previously unattainable relief using certain psychedelics, even at low doses. For example, psilocybin, LSD and 5-MeO-DALT can effectively prevent attacks, and N,N-DMT can abort attacks within seconds and also has some preventative effects. However, these life-saving treatments are inaccessible to the vast majority of patients. We want to tackle these problems by: Publishing [...] ---Outline:(00:37) Our mission(02:32) About us(03:22) How you (and anyone) can help(04:59) Room for funding(06:41) Work with us(06:54) Further information --- First published: November 21st, 2025 Source: https://forum.effectivealtruism.org/posts/vzG8wu9b6vuoRxD3z/announcing-clusterfree-a-cluster-headache-advocacy-and --- Narrated by TYPE III AUDIO.

    “Open Philanthropy Is Now Coefficient Giving” by Aaron Gertler

    Play Episode Listen Later Nov 25, 2025 3:46


    Big news from Open Philanthropy (now Coefficient Giving) today: Today, Open Philanthropy is becoming Coefficient Giving. Our mission remains the same, but our new name marks our next chapter as we double down on our longstanding goal of helping more funders increase their impact. We believe philanthropy can be a far more vital force for progress than it is today; too often, great opportunities to help others go unfunded. As Coefficient Giving, our aim is to make it as easy as possible for donors to find and fund them. (For more on how we chose our new name, what's changing, and what's staying the same in this next chapter, see here.) The linked essay, from Coefficient CEO Alexander Berger, shares more about the change, our approach to giving, and why we're focused on growing our work with funders outside of Good Ventures. I also wanted to highlight some details that might be of particular interest to a Forum audience. If you have other questions, leave a comment and I'll do my best to respond! Any changes to your relationship with EA? Nope. While we do lots of work outside traditional EA cause areas, we still see EA as a community [...] --- First published: November 18th, 2025 Source: https://forum.effectivealtruism.org/posts/vkvtu6xbvfkHPhJkC/open-philanthropy-is-now-coefficient-giving --- Narrated by TYPE III AUDIO.

    “The Protein Problem” by LewisBollard

    Play Episode Listen Later Nov 23, 2025 10:30


    Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. People can't get enough protein. Fully 61% of Americans say they ate more protein last year — and 85% intended to eat more this year. Last week, dairy giant Danone said it can't keep up with US demand for its high-protein yogurt. Other food makers are rushing to pack protein into everything from Doritos to Pop-Tarts. The craze is global. The net percentage of Europeans wanting more protein has more than doubled since 2023, driven by protein-hungry Brits, Poles, and Spaniards. (The epicurean French and Italians remain holdouts.) Chinese per capita protein supply recently overtook already-high American levels. Young people are leading the charge. Across Asia, Europe, and the US, most Gen Z'ers want more protein, suggesting this trend may persist. In one recent British university survey, “protein” was the top reason students gave for not giving up meat. Doctors are also telling the 6 - 10% of Americans now taking GLP-1 weight loss drugs to eat more protein to prevent muscle loss. This is [...] --- First published: November 5th, 2025 Source: https://forum.effectivealtruism.org/posts/P7NuYbwbMMNTM45Cz/the-protein-problem --- Narrated by TYPE III AUDIO.

    “New donation opportunity: the Center for Wild Animal Welfare” by Ben Stevenson, RichardP

    Play Episode Listen Later Nov 20, 2025 20:10


    The Center for Wild Animal Welfare (CWAW) is a new policy advocacy organization, working to improve the lives of wild animals today and build support for wild animal welfare policy. We're now fundraising for our first year, and the next $60,000 will be matched 1:1 by a generous supporter. We've already started engaging policymakers on wild animal-friendly urban infrastructure (e.g. bird-safe glass). In 2026, we plan to keep engaging on urban infrastructure; start working on additional policy areas like fertility control and pesticide policy; and pursue agenda setting (e.g. publishing a State of Wild Animal Welfare Policy report). Wild animal welfare is one of the world's most important and neglected issues. Governments routinely make decisions that affect trillions of wild animals without considering their individual wellbeing. We want to change this: CWAW is one of the first organizations in the world dedicated to ensuring policymakers consider the individual welfare of wild animals. Our focus on near term policy will help wild animals now, and also build future support by proving that wild animal welfare is a legitimate and tractable policy concern. CWAW is co-founded by Richard Parr MBE, a former policy adviser to the UK Prime Minister, and Ben [...] ---Outline:(02:27) Why support wild animal welfare policy?(07:37) What we've achieved already(09:38) What we'll do in 2026(14:51) How will CWAW use marginal funding?(15:45) Who we are(16:17) Endorsements(18:30) How to help --- First published: November 18th, 2025 Source: https://forum.effectivealtruism.org/posts/uko8rxrcmYB54ZnBH/new-donation-opportunity-the-center-for-wild-animal-welfare --- Narrated by TYPE III AUDIO.

    “To a first approximation, all farmed animals are bugs” by Bob Fischer

    Play Episode Listen Later Nov 19, 2025 3:47


    To a first approximation, all farmed animals are bugs. (Recalling, of course, that shrimps is bugs.) We don't know much about their needs in current production systems. The Arthropoda Foundation is trying to fix that. If we want to help the most numerous farmed animals, we have to answer some basic empirical questions. Arthropoda funds the scientists who provide those answers. Good science isn't cheap, fast, or flashy. But if we don't fund it, we're left guessing about the welfare of the most numerous animals on farms (and in the wild). The stakes are too high for guesswork. This year, Arthropoda granted out ~$160K to fund seven studies. That's seven studies for at least a trillion farmed animals. (And untold numbers of wild animals.) We could easily grant out much more. And with a staff person, we could actively develop projects to support. But as it is, we're at capacity. In its current form, Arthropoda costs about $175K per year, at least 80% of which covers grants. The rest covers costs associated with learning more about the state of the industry, running a small coordination event, and legal compliance with charitable regulations. We're about $55K short for 2026. Anything [...] --- First published: November 17th, 2025 Source: https://forum.effectivealtruism.org/posts/mdcSeMwkBEYhdTAWF/to-a-first-approximation-all-farmed-animals-are-bugs --- Narrated by TYPE III AUDIO.

    “Some hardworking dads in EA” by Julia_Wise

    Play Episode Listen Later Nov 18, 2025 3:26


    It's hard to divide anything 50/50. In many families, even if both parents have paid jobs, one parent will lean into parenting more, and the other will lean harder into paid work. In male/female couples it's usually the woman who owns more of the parenting work, and that can feel unfair if the arrangement comes from assumptions rather than a willing choice. I want to highlight some counter-examples from the effective altruism space, to show it's really possible to make an intentional choice about who does what. @Jeff Kaufman and I both travel for work, but he's more fearless than I am about having the kids solo. Once while I was at an EA conference during the annual vacation with his side of the family, he took our four-year-old and two-year-old to the beach, and also took his sister's two-year-old because she was working. Then, during this trip where he was responsible for three preschoolers, he potty-trained our toddler. My friend has pursued jobs focused on impact, while her husband has a normal job he's not pursuing for altruistic impact. He does more of the childcare while she commutes part of the week to another city [...] --- First published: November 13th, 2025 Source: https://forum.effectivealtruism.org/posts/m8B5kYHdiz5BiW9qH/some-hardworking-dads-in-ea --- Narrated by TYPE III AUDIO.

    “Historical EA funding data: 2025 update” by Jacco Rubens

    Play Episode Listen Later Nov 17, 2025 2:39


    Long time lurker, first time poster - be nice please! :) I was searching for summary data of EA funding trends, but couldn't find anything more recent than Tyler's post from 2022. So I decided to update it. If this analysis is done properly anywhere, please let me know. The spreadsheet is here (some things might look weird due to importing from Excel to sheets) Observations EA grantmaking appears on a steady downward trend since 2022 / FTX. The squeeze on GH funding to support AI / other longtermist priorities appears to be really taking effect this year (though 2025 is a rough estimate and has significant uncertainty). I am really interested in particular about the apparent drop in GW grants this year. I suspect that it is wrong or at least misleading - the metrics report suggests they are raising ~$300m p.a. from non OP donors. Not sure if I have made an error (missing direct to charity donations?) or if they are just sitting on funding with the ongoing USAID disruption. Methodology I compiled the latest grants databases from EA Funds, GiveWell, OpenPhilanthropy, and SFF. I added summary level data from ACE. To remove [...] ---Outline:(00:41) Observations(01:26) Methodology(02:12) Notes --- First published: November 14th, 2025 Source: https://forum.effectivealtruism.org/posts/NWHb4nsnXRxDDFGLy/historical-ea-funding-data-2025-update --- Narrated by TYPE III AUDIO.
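
    Mechanically, the compilation step the author describes is a group-by-and-sum over the merged grants databases. Below is a minimal sketch of that aggregation; the record fields and figures are invented for illustration, not the actual schemas of the EA Funds, GiveWell, Open Philanthropy, or SFF databases:

```python
from collections import defaultdict

def yearly_totals(grants):
    # Sum grant amounts by (year, funder) across the merged records.
    totals = defaultdict(float)
    for g in grants:
        totals[(g["year"], g["funder"])] += g["amount_usd"]
    return dict(totals)

# Invented rows standing in for the merged grant databases.
grants = [
    {"year": 2022, "funder": "Open Philanthropy", "amount_usd": 100.0},
    {"year": 2022, "funder": "GiveWell", "amount_usd": 50.0},
    {"year": 2023, "funder": "Open Philanthropy", "amount_usd": 80.0},
]
print(yearly_totals(grants))
```

    The main pitfall in a merge like this is double-counting (e.g. a grant that appears in two databases because it was regranted through an intermediary), which may account for part of the GiveWell discrepancy the author flags.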

    “If wild animal welfare is intractable, everything is intractable.” by mal_graham

    Play Episode Listen Later Nov 16, 2025 28:45


    Author's note: This is an adapted version of my recent talk at EA Global NYC (I'll add a link when it's available). The content has been adjusted to reflect things I learned from talking to people after my talk. If you saw the talk, you might still be interested in the “some objections” section at the end. Summary Wild animal welfare faces frequent tractability concerns, amounting to the idea that ecosystems are too complex to intervene in without causing harm. However, I suspect these concerns reflect inconsistent justification standards rather than unique intractability. To explore this idea: I provide some context about why people sometimes have tractability concerns about wild animal welfare, providing a concrete example using bird-window collisions. I then describe four approaches to handling uncertainty about indirect effects: spotlighting (focusing on target beneficiaries while ignoring broader impacts), ignoring cluelessness (acting on knowable effects only), assigning precise probabilities to all outcomes, and seeking ecologically inert interventions. I argue that, when applied consistently across cause areas, none of these approaches suggest wild animal welfare is distinctively intractable compared to global health or AI safety. Rather, the apparent difference most commonly stems from arbitrarily wide "spotlights" applied to [...] 
---Outline:(00:31) Summary(02:15) Consequentialism + impartial altruism → hard to do good(03:43) The challenge: Deep uncertainty and backfire risk(04:41) Example: Bird-window collisions(05:22) We don't actually understand the welfare consequences of bird-window collisions on birds(06:08) We don't know how birds would die otherwise(07:06) The effects on other animals are even more uncertain(09:16) Four approaches to handling uncertainty(10:08) Spotlighting(15:31) Set aside that which you are clueless about(18:31) Assign precise probabilities(20:06) Seek ecologically inert interventions(22:04) Some objections & questions(22:17) The global health comparison: Spotlighting hasn't backfired (for humans)(23:22) Action-inaction distinctions(25:01) Why should justification standards be the same?(26:53) Conclusion --- First published: November 14th, 2025 Source: https://forum.effectivealtruism.org/posts/2YjqfYktNGcx6YNRy/if-wild-animal-welfare-is-intractable-everything-is --- Narrated by TYPE III AUDIO.

    “12 Theses on EA” by Mjreard

    Play Episode Listen Later Nov 13, 2025 13:03


    This is a crosspost from my Substack, where people have been liking and commenting a bunch. I'm too busy during my self-imposed version of Inkhaven to engage much – yes, pity me, I have to blog – but I don't want to leave Forum folks out of the loop! I've been following Effective Altruism discourse since 2014 and involved with the Effective Altruist community since 2015. My credentials are having run Harvard Law School and Harvard University (pan-grad schools) EA, donating $45,000 to EA causes (eep, not 10%), working at 80,000 Hours for three years, and working at a safety-oriented AI org for 10 months after that. I'm also proud of the public comms I've done for EA on this blog (here, here, and here), through my 80k podcast series, current podcast series, and through EA career advice talks I've given at EAGs and smaller events. With that background, you can at least be confident that I am familiar with my subject matter in the takes that follow. As before, let me know which of these seems interesting or wrong and there's a good chance I'll write them up with you the commenter very much in mind as [...] --- First published: November 6th, 2025 Source: https://forum.effectivealtruism.org/posts/s8aNPnrGH2fF3Hkpi/12-theses-on-ea --- Narrated by TYPE III AUDIO.

    “Recruitment is extremely important and impactful. Some people should be completely obsessed with it.” by abrahamrowe

    Play Episode Listen Later Nov 11, 2025 17:25


    Cross-post from Good Structures. Over the last few years, I helped run several dozen hiring rounds for around 15 high-impact organizations. I've also spent the last few months talking with organizations about their recruitment. I've noticed three recurring themes: Candidates generally have a terrible time Work tests are often unpleasant (and the best candidates have to complete many of them), there are hundreds or thousands of candidates for each role, and generally, people can't get the jobs they've been told are the best path to impact. Organizations are often somewhat to moderately unhappy with their candidate pools Organizations really struggle to find the talent they want, despite the number of candidates who apply. Organizations can't find or retain the recruiting talent they want It's extremely hard to find people to do recruitment in this space. Talented recruiters rarely want to stay in their roles. I think the first two points need more discussion, but I haven't seen much discussion about the last. I think this is a major issue: recruitment is probably the most important function for a growing organization, and a skilled recruiter has a fairly large counterfactual impact for the organization they support. So why is it [...] ---Outline:(01:33) Recruitment is high leverage and high impact(03:33) Organizations struggle to hire recruiters(07:52) Many of the people applying to recruitment roles emphasize their experience in recruitment. This isn't the background organizations need(08:44) Almost no one is appropriately obsessed with hiring(10:29) The state of evidence on hiring practices is bad(13:22) Retaining strong recruiters is really hard(14:51) Why might this be less important than I think?(16:40) I'm trying to find people interested in this kind of approach to hiring. If this is you, please reach out. 
--- First published: November 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/HLktkw5LXeqSLCchH/recruitment-is-extremely-important-and-impactful-some-people --- Narrated by TYPE III AUDIO.

    “Announcing ACE's 2025 Charity Recommendations” by Animal Charity Evaluators, Vince Mak

    Play Episode Listen Later Nov 9, 2025 24:27


    16 minute read We update our list of Recommended Charities annually. This year, we announced recommendations on November 4. Each year, hundreds of billions of animals are trapped in the food industry and killed for food — that is more than all the humans who have ever walked on the face of the Earth.1 When faced with such a magnitude of suffering, it can feel overwhelming and hard to know how to help. One of the most impactful things you can do to help animals is to donate to effective animal charities—even a small donation can have a big impact. Our goal is to help you do the most good for animals by providing you with effective giving opportunities that greatly reduce their suffering. Following our comprehensive charity evaluations, we are pleased to announce our Recommended Charities! Charities awarded the status in 2025: Animal Welfare Observatory, Shrimp Welfare Project, Sociedade Vegetariana Brasileira, The Humane League, Wild Animal Initiative. Charities retaining the status from 2024: Aquatic Life Institute, Çiftlik Hayvanlarını Koruma Derneği, Dansk Vegetarisk Forening, Good Food Fund, Sinergia Animal. The Humane League (working globally), Shrimp Welfare Project (in Central and South America, Southeast Asia, and India), and Wild Animal Initiative (global) have continued to work on the most important issues for animals [...] ---Outline:(03:54) Charities Recommended in 2025(03:59) Animal Welfare Observatory(05:44) Shrimp Welfare Project(07:38) Sociedade Vegetariana Brasileira(09:41) The Humane League(11:22) Wild Animal Initiative(13:15) Charities Recommended in 2024(13:20) Aquatic Life Institute(15:25) Çiftlik Hayvanlarını Koruma Derneği(17:34) Dansk Vegetarisk Forening(19:18) The Good Food Fund(21:19) Sinergia Animal(23:20) Support our Recommended Charities The original text contained 2 footnotes which were omitted from this narration. 
    --- First published: November 4th, 2025 Source: https://forum.effectivealtruism.org/posts/waL3iwczrjNt8PreZ/announcing-ace-s-2025-charity-recommendations --- Narrated by TYPE III AUDIO.

    “Leaving Open Philanthropy, going to Anthropic” by Joe_Carlsmith

    Play Episode Listen Later Nov 6, 2025 32:02


    (Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.) Last Friday was my last day at Open Philanthropy. I'll be starting a new role at Anthropic in mid-November, helping with the design of Claude's character/constitution/spec. This post reflects on my time at Open Philanthropy, and it goes into more detail about my perspective and intentions with respect to Anthropic – including some of my takes on AI-safety-focused people working at frontier AI companies. (I shared this post with Open Phil and Anthropic comms before publishing, but I'm speaking only for myself and not for Open Phil or Anthropic.) On my time at Open Philanthropy: I joined Open Philanthropy full-time at the beginning of 2019.[1] At the time, the organization was starting to spin up a new “Worldview Investigations” team, aimed at investigating and documenting key beliefs driving the organization's cause prioritization – and with a special focus on how the organization should think about the potential impact at stake in work on transformatively powerful AI systems.[2] I joined (and eventually: led) the team devoted to this effort, and it's been an amazing project to be a part of. I remember [...] ---Outline:(00:51) On my time at Open Philanthropy(08:11) On going to Anthropic --- First published: November 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/EFF6wSRm9h7Xc6RMt/leaving-open-philanthropy-going-to-anthropic --- Narrated by TYPE III AUDIO.
