Effective Altruism Forum Podcast


I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.

Garrett Baker


    • Latest episode: Sep 11, 2025
    • New episodes: weekdays
    • Average duration: 18m
    • Episodes: 640



    Latest episodes from Effective Altruism Forum Podcast

    “Marginally More Effective Altruism” by AppliedDivinityStudies

    Sep 11, 2025 · 7:23


    There's a huge amount of energy spent on how to get the most QALYs/$. And a good amount of energy spent on how to increase total $. And you might think that across those efforts, we are succeeding in maximizing total QALYs. I think a third avenue is underinvestigated: marginally improving the effectiveness of ineffective capital. That is to say, improving outcomes, only somewhat, for the pool of money that is not at all EA-aligned. This cash is not being spent optimally, and likely never will be. But the sheer volume could make up for the lack of efficacy. Say you have the option to work for the foundation of one of two donors: Donor A only has an annual giving budget of $100,000, but will do with that money whatever you suggest. If you say “bed nets” he says “how many”. Donor B has a much larger [...]

    Outline:
    (01:34) Most money is not EA money
    (04:32) How much money is there?
    (05:49) Effective Everything?

    First published: September 8th, 2025
    Source: https://forum.effectivealtruism.org/posts/o5LBbv9bfNjKxFeHm/marginally-more-effective-altruism
    Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “My TED Talk” by LewisBollard

    Sep 7, 2025 · 10:04


    Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

    How I decided what to say — and what not to

    I'm excited to share my TED talk. Here I want to share the story of how the talk came to be, and the three biggest decisions I struggled with in drafting it.

    The backstory

    Last fall, I posted on X about Trump's new Secretary of Agriculture, Brooke Rollins, vowing to undo state bans on the sale of pork from crated pigs. I included an image of a pig in a crate. Liv Boeree, a poker champion and past TED speaker, saw that post and was haunted by it. She told me that she couldn't get the image of the crated pig out of her [...]

    First published: September 5th, 2025
    Source: https://forum.effectivealtruism.org/posts/XjQr52eDkBPLrLHB3/my-ted-talk
    Narrated by TYPE III AUDIO.

    “Consider thanking whoever helped you” by Kevin Xia

    Sep 5, 2025 · 6:35


    TL;DR: If a (meta) org had a meaningful impact on you (in line with what they hope to achieve), you should probably tell them. It is essential for their impact reporting, which is essential for them to continue operating. You are likely underestimating just how valuable your story is to them. It could be worth thousands of dollars.

    Thanks to Toby Tremlett, Lauren Mee and Sofia Balderson for reviewing a draft version of this post. All mistakes are my own.

    1. Many organisations shaped my career — yet I usually only shared my story when prompted.

    In reflecting on my career journey, I was reminded of all the organizations that led me to where I am. I believe I reported their counterfactual contribution back to them, but this was not usually at my own initiative. In two cases, someone personally reached out to me - in one case, I [...]

    First published: August 8th, 2025
    Source: https://forum.effectivealtruism.org/posts/3v6kghxMttEhbK3dT/consider-thanking-whoever-helped-you
    Narrated by TYPE III AUDIO.

    “High-impact & urgent funding opportunity - Rodent fertility control” by Nitin Sekar

    Sep 4, 2025 · 8:21


    Context: I'm a senior fellow at Conservation X Labs (CXL), and I'm seeking support as I attempt to establish a program on humane rodent fertility control in partnership with the Wild Animal Initiative (WAI) and the Botstiber Institute for Wildlife Fertility Control (BIWFC). CXL is a biodiversity conservation organization working in sustainable technologies, not an animal welfare organization. However, CXL leadership is interested in simultaneously promoting biodiversity conservation and animal welfare, and they are excited about the possibility of advancing applied research that makes it possible to ethically limit rodent populations to protect biodiversity. I think this represents the wild animal welfare community's first realistic opportunity to bring conservation organizations into wild animal welfare work while securing substantial non-EA funding for welfare-improving interventions.

    Background

    Rodenticides cause immense suffering to (likely) hundreds of millions of rats and mice annually through anticoagulation-induced death over several days, while causing significant non-target [...]

    Outline:
    (01:08) Background
    (02:20) Why this approach?
    (03:49) Why CXL?
    (06:03) Why now, and why me?
    (06:59) Budget
    (07:52) Next steps

    First published: August 27th, 2025
    Source: https://forum.effectivealtruism.org/posts/EcBjr4Q2AtoTLcKXp/high-impact-and-urgent-funding-opportunity-rodent-fertility
    Narrated by TYPE III AUDIO.

    “You're Enough” by lynettebye

    Aug 29, 2025 · 3:54


    I told someone recently I would respect them if they only worked 40 hours a week, instead of their current 50-60. What I really meant was stronger than that. I respect people who do the most impactful work they can — whether they work 70 hours a week because they can, 30 hours so they can be home with their kid, or 15 hours because of illness or burnout. I admire those who go above and beyond. But I don't expect that of everyone. Working long hours isn't required to earn my respect, nor do I think it should be the standard that we hold as a community. I want it to be okay to say "that doesn't work for me". It feels like donations: I admire people who give away 50%, but I don't expect it. I still deeply respect someone who gives 10% to the [...]

    First published: August 26th, 2025
    Source: https://forum.effectivealtruism.org/posts/qFsqawmgRjxXkA7eF/you-re-enough
    Narrated by TYPE III AUDIO.

    “The anti-fragile culture” by lincolnq

    Aug 27, 2025 · 19:14


    How to prevent infighting, mitigate status races, and keep your people focused. Cross-posted from my Substack.

    Organizational culture changes rapidly at scale. When you add new people to an org, they'll bring in their own priors about how to operate, how to communicate, and what sort of behavior is looked up to. Despite rapid changes, in this post I explain how you can implement anti-fragile cultural principles—principles that help your team fix their own problems, often arising from growth and scale, and help the org continue to do what made it successful in the first place. This is based partially on my experience at Wave, which grew to 2000+ people, but also tons of other reading (top recommendations: Peopleware by DeMarco and Lister, Swarmwise by Rick Falkvinge, High Growth Handbook by Elad Gil, The Secret of Our Success by Henrich, Antifragile by Nassim Nicholas Taleb, as well as Brian [...]

    Outline:
    (01:13) Common Problems
    (05:00) Write down your culture
    (06:25) That said, you don't have to write everything down
    (08:37) Anti-fragile values I recommend
    (09:02) Mission First
    (10:51) Focus
    (11:32) Fire Fast
    (12:58) Feedback for everything
    (13:50) Mutual Trust
    (15:48) Work sustainably and avoid burnout
    (17:42) Write only what's new & helpful

    First published: August 21st, 2025
    Source: https://forum.effectivealtruism.org/posts/mLonxtAiuvvkjXiwq/the-anti-fragile-culture
    Narrated by TYPE III AUDIO.

    [Linkpost] “Most of the World Is an Adorably Suffering, Debatably Conscious Baby” by Jack_S

    Aug 27, 2025 · 19:57


    This is a link post.

    There are some moments of your life when the reality of suffering really hits home. Visiting desperately poor parts of the world for the first time. Discovering what factory farming actually looks like after a childhood surrounded by relatively idyllic rural farming. Realising too late that you shouldn't have clicked on that video of someone experiencing a cluster headache. Or, more unexpectedly, having a baby.

    One of 10^20 Birth Stories This Year

    With my relaxed and glowing pregnant wife in her 34th week, I expect things to go smoothly. There have been a few warning signs: some slightly anomalous results in the early tests, the baby in breech position, and some bleeding. But everything still seems to be going relatively well. Then, suddenly, while walking on an idyllic French seafront, she says: "I think my waters have broken". "Really? It's probably nothing, let's [...]

    Outline:
    (00:39) One of 10^20 Birth Stories This Year
    (03:50) The Beginning of Experience
    (05:43) Is This Almost Everything?
    (08:22) Schrödinger's baby
    (13:03) On Feeling the Right Things
    (15:04) Into The Fifth Trimester
    (16:50) The Most Beautiful Case For Net-Negativity

    First published: August 21st, 2025
    Source: https://forum.effectivealtruism.org/posts/6PuBTer69ZJvTDNQk/most-of-the-world-is-an-adorably-suffering-debatably-1
    Linkpost URL: https://torchestogether.substack.com/p/most-of-the-world-is-an-adorably
    Narrated by TYPE III AUDIO.


    “New Spanish-language book on ‘classical EA'” by Pablo Melchor

    Aug 21, 2025 · 11:53


    My new book, Altruismo racional, is now on presale. It is my attempt at presenting a compelling case for a particular strand of "classical EA"[1]: one that emphasizes caring deeply about global health and poverty, a rational approach to giving, the importance of cost-effectiveness, and the [...]

    “Not inevitable, not impossible” by LewisBollard

    Aug 20, 2025 · 7:25


    Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

    Why ending the worst abuses of factory farming is an issue ripe for moral reform

    I recently joined Dwarkesh Patel's podcast to discuss factory farming. I hope you'll give it a listen — and consider supporting his fundraiser for FarmKind's Impact Fund. (Dwarkesh is matching all donations up to $250K; use the code “dwarkesh”.) We discuss two contradictory views about factory farming that produce the same conclusion: that its end is either inevitable or impossible. Some techno-optimists assume factory farming will vanish in the wake of AGI. Some pessimists see reforming it as a hopeless cause. Both camps arrive at the same conclusion: fatalism. If factory farming is destined to end, or persist, then what's [...]

    First published: August 8th, 2025
    Source: https://forum.effectivealtruism.org/posts/HiGmRwq4YiDzggRLH/not-inevitable-not-impossible
    Narrated by TYPE III AUDIO.

    “PSA for vegan donors: GiveWell not ruling out animal-based aid” by AdamA

    Aug 18, 2025 · 1:53


    I'm a long-time GiveWell donor and an ethical vegan. In a recent GiveWell podcast on livelihoods programs, providing animals as “productive assets” was mentioned as a possible program type. After reaching out to GiveWell directly to voice my objection, I was informed that because GiveWell's moral weights currently don't include nonhuman animals, animal-based aid is not categorically off the table if it surpasses their cost-effectiveness bar. Older posts on the GiveWell website similarly do not rule out animal donations from an ethical lens.

    In response to some of the rationale GiveWell shared with me, I also want to proactively address a core ethical distinction: animal-aid programs involve certain, programmatic harm to animals (breeding, confinement, separation of families, slaughter). Human-health programs like malaria prevention have, at most, indirect and uncertain effects on animal consumption (by saving human lives), which can change over time (e.g., cultural shifts, plant-based/cultivated options). Constructive [...]

    First published: August 14th, 2025
    Source: https://forum.effectivealtruism.org/posts/YnL6prYQbaLz22mxe/psa-for-vegan-donors-givewell-not-ruling-out-animal-based
    Narrated by TYPE III AUDIO.

    “A big milestone: 10,000 10% pledgers!” by Giving What We Can

    Aug 14, 2025 · 1:23


    Giving What We Can has reached 10,000 [...]

    [Linkpost] “Of Marx and Moloch: How My Attempt to Convince Effective Altruists to Become Socialists Backfired Completely” by LennoxJohnson

    Aug 14, 2025 · 2:51


    This is a link post. This is a personal essay about my failed attempt to convince effective altruists to become socialists. I started as a convinced socialist who thought EA ignored the 'root causes' of poverty by focusing on charity instead of structural change. After studying sociology and economics to build a rigorous case for socialism, the project completely backfired as I realized my political beliefs were largely psychological coping mechanisms. Here are the key points:

    • Understanding the "root cause" of a problem doesn't necessarily lead to better solutions - Even if capitalism causes poverty, understanding "dynamics of capitalism" won't necessarily help you solve it
    • Abstract sociological theories are mostly obscurantist bullshit - Academic sociology suffers from either unrealistic mathematical models or vague, unfalsifiable claims that don't help you understand or change the world
    • The world is better understood as misaligned incentives rather than coordinated oppression - Most social [...]

    First published: August 10th, 2025
    Source: https://forum.effectivealtruism.org/posts/AcPw55oF3reBiW4FX/of-marx-and-moloch-how-my-attempt-to-convince-effective
    Linkpost URL: https://honestsignals.substack.com/p/of-marx-and-moloch-or-my-misguided
    Narrated by TYPE III AUDIO.

    “Should we aim for flourishing over mere survival? The Better Futures series.” by William_MacAskill, Forethought

    Aug 6, 2025 · 9:16


    Today, Forethought and I are releasing an essay series called Better Futures, here.[1] It's been something like eight years in the making, so I'm pretty happy it's finally out! It asks: when looking to the future, should we focus on surviving, or on flourishing? In practice at least, future-oriented altruists tend to focus on ensuring we survive (or are not permanently disempowered by some valueless AIs). But maybe we should focus on future flourishing, instead. Why? Well, even if we survive, we probably just get a future that's a small fraction as good as it could have been. We could, instead, try to help guide society to be on track to a truly wonderful future. That is, I think there's more at stake when it comes to flourishing than when it comes to survival. So maybe that should be our main focus. The whole essay series [...]

    First published: August 4th, 2025
    Source: https://forum.effectivealtruism.org/posts/mzT2ZQGNce8AywAx3/should-we-aim-for-flourishing-over-mere-survival-the-better
    Narrated by TYPE III AUDIO.

    “Alcohol is so bad for society that you should probably stop drinking” by Kat Woods

    Aug 6, 2025 · 15:36


    This is a cross-post written by Andy Masley, not me. I found it really interesting and wanted to see what EAs thought of his arguments. This post was inspired by similar posts by Tyler Cowen and Fergus McCullough. My argument is that while most drinkers are unlikely to be harmed by alcohol, alcohol is drastically harming so many people that we should denormalize alcohol and avoid funding the alcohol industry, and the best way to do that is to stop drinking. This post is not meant to be an objective cost-benefit analysis of alcohol. I may be missing hard-to-measure benefits of alcohol for individuals and societies. My goal here is to highlight specific blindspots a lot of people have to the negative impacts of alcohol, which personally convinced me to stop drinking, but I do not want to imply that this is a fully objective analysis. It [...]

    Outline:
    (02:31) Alcohol is a much bigger problem than you may think
    (06:59) Why you should stop drinking even if alcohol will not harm you personally
    (14:41) Conclusion

    First published: August 3rd, 2025
    Source: https://forum.effectivealtruism.org/posts/dnbpKkjnw3v6JkaDa/alcohol-is-so-bad-for-society-that-you-should-probably-stop
    Narrated by TYPE III AUDIO.

    “Frog Welfare” by Chad Brouze

    Aug 6, 2025 · 2:48


    This morning I was looking into Switzerland's new animal welfare labelling law. I was going through the list of abuses that are now required to be documented on labels, and one of them made me do a double-take: "Frogs: Leg removal without anaesthesia." This confused me. Why are we talking about anaesthesia? Shouldn't the frogs be dead before having their legs removed? It turns out the answer is no; standard industry practice is to cut their legs off while they are fully conscious. They remain alive and responsive for up to 15 minutes afterward. As far as I can tell, there are zero welfare regulations in any major producing country. The scientific evidence for frog sentience is robust - they have nociceptors and opioid receptors, demonstrate pain-avoidance learning, and show cognitive abilities including spatial mapping and rule-based learning. It's hard to find data on the scale of [...]

    First published: August 5th, 2025
    Source: https://forum.effectivealtruism.org/posts/wCcWyqyvYgF3ozNnS/frog-welfare
    Narrated by TYPE III AUDIO.


    “Why You Should Become a University Group Organizer” by Noah Birnbaum

    Aug 4, 2025 · 12:25


    Confidence Level: I've been an organizer at UChicago for over a year now with my co-organizer, Avik. I also started the UChicago Rationality Group, co-organized a 50-person Midwest EA Retreat, and have spoken to many EA organizers from other universities. A lot of this post is based on vibes and conversations with other organizers, so while it's grounded in experience, some parts are more speculative than others. I'll try to flag the more speculative points when I can (the * indicates points that I'm less certain about).

    I think it's really important to make sure that EA principles persist in the future. To give one framing for why I believe this: if you think EA is likely to significantly reduce the chances of existential risks, you should think that losing EA is itself a factor significantly contributing to existential risks. Therefore, I also think one of the [...]

    Outline:
    (01:12) Impact Through Force Multiplication
    (04:19) Individual Benefits
    (04:23) Personal Impact
    (06:27) Professional
    (07:34) Social
    (08:10) Counters

    First published: July 29th, 2025
    Source: https://forum.effectivealtruism.org/posts/3aPCKsHdJqwKo2Dmt/why-you-should-become-a-university-group-organizer
    Narrated by TYPE III AUDIO.

    “Please, no more group brainstorming” by OllieBase

    Jul 29, 2025 · 12:19


    And other ways to make event content more valuable.

    I organise and attend a lot of conferences, so the below is grounded in experience, but I could be missing some angles here.

    When you imagine a session at an event going wrong, you're probably thinking of the hapless, unlucky speaker. Maybe their slides broke, they forgot their lines, or they tripped on a cable and took the whole stage backdrop down. This happens sometimes, but event organizers usually remember to invest the effort required to prevent this from happening (e.g., checking that the slides work, not leaving cables lying on the stage). But there's another big way that sessions go wrong that is sorely neglected: wasting everyone's time, often without people noticing. Let's give talks a break. They often suck, but event organizers are mostly doing the right things to make them [...]

    Outline:
    (01:11) Panels
    (03:40) The group brainstorm
    (04:27) Your session attendees do not have the answers.
    (05:26) Ideas are easy. Bandwidth is low.
    (06:28) The ideas are not worth the time cost.
    (07:50) Choosing more valuable content: fidelity per person-minute

    First published: July 28th, 2025
    Source: https://forum.effectivealtruism.org/posts/LaMDxRqEo8sZnoBXf/please-no-more-group-brainstorming
    Narrated by TYPE III AUDIO.

    “Building an EA-aligned career from an LMIC” by Rika Gabriel

    Jul 28, 2025 · 16:42


    This is Part 1 of a multi-part series, shared as part of Career Conversations Week. The views expressed here are my own and don't reflect those of my employer.

    TL;DR: Building an EA-aligned career starting from an LMIC comes with specific challenges that shaped how I think about career planning, especially around constraints:

    • Everyone has their own "passport"—some structural limitation that affects their career more than their abilities. Reframing these from unfair barriers to data about my specific career path has helped me a lot.
    • When pursuing an ideal career path, it's easy to fixate on what should be possible rather than what actually is. But those idealized paths often require circumstances you don't have—whether personal (e.g., visa status, financial safety net) or external (e.g., your dream org hiring, or a stable funding landscape). It might be helpful to view the paths that work within your actual constraints [...]

    Outline:
    (00:21) TL;DR:
    (01:27) Introduction
    (02:25) My EA journey so far
    (03:18) Sometimes my passport mattered more than my competencies, and that's okay
    (04:43) Everyone has their own passport
    (06:19) Realistic opportunities often outweigh idealistic ones
    (08:04) Importance of a fail-safe
    (08:37) Playing the long game
    (09:44) Adversity quotient seems underrated
    (10:13) Building resilience through adversity
    (11:22) Pivot into recruiting
    (12:11) Building AQ over time
    (14:02) Why AQ matters in EA-aligned work
    (15:01) Closing thoughts

    First published: July 28th, 2025
    Source: https://forum.effectivealtruism.org/posts/3Hh839MaiWCPzyB3M/building-an-ea-aligned-career-from-an-lmic
    Narrated by TYPE III AUDIO.

    “Why You Should Build Your Own EA Internship Abroad” by Annika Burman

    Jul 28, 2025 · 10:01


    I am writing this to reflect on my experience interning with the Fish Welfare Initiative, and to provide my thoughts on why more students looking to build EA experience should do something similar.

    Back in October, I cold-emailed the Fish Welfare Initiative (FWI) with my resume and a short cover letter expressing interest in an unpaid in-person internship in the summer of 2025. I figured I had a better chance of getting an internship by building my own door than competing with hundreds of others to squeeze through an existing door, and the opportunity to travel to India carried strong appeal. Haven, the Executive Director of FWI, set up a call with me that mostly consisted of him listing all the challenges of living in rural India — 110° F temperatures, electricity outages, lack of entertainment… When I didn't seem deterred, he offered me an internship. I [...]

    First published: July 22nd, 2025
    Source: https://forum.effectivealtruism.org/posts/SmiXeQcnMD7qmAfgS/why-you-should-build-your-own-ea-internship-abroad
    Narrated by TYPE III AUDIO.

    [Linkpost] “How Unofficial Work Gets You Hired: Building Your Surface Area for Serendipity” by SofiaBalderson

    Jul 24, 2025 · 14:16


    This is a link post.

    Tl;dr: In this post, I introduce a concept I call surface area for serendipity — the informal, behind-the-scenes work that makes it easier for others to notice, trust, and collaborate with you. In a job market where some EA and animal advocacy roles attract over 1,300 applicants, relying on traditional applications alone is unlikely to land you a role. This post offers a tactical roadmap to the hidden layer of hiring: small, often unpaid but high-leverage actions that build visibility and trust before a job ever opens. The general principle is simple: show up consistently where your future collaborators or employers hang out — and let your strengths be visible. Done well, this increases your chances of being invited, remembered, or hired — long before you ever apply.

    Acknowledgements: Thanks to Kevin Xia for your valuable feedback and suggestions, and Toby Tremlett for offering general [...]

    Outline:
    (00:15) Tl;dr:
    (01:19) Why I Wrote This
    (02:30) When Applying Feels Like a Lottery
    (04:14) What Surface Area for Serendipity Means
    (07:21) What It Looks Like (with Examples)
    (09:02) Case Study: Kevin's Path to Becoming Hive's Managing Director
    (10:27) Common Pitfalls to Avoid
    (12:00) Share Your Journey

    The original text contained 4 footnotes which were omitted from this narration.

    First published: July 1st, 2025
    Source: https://forum.effectivealtruism.org/posts/5iqTPsrGtz8EYi9r9/how-unofficial-work-gets-you-hired-building-your-surface
    Linkpost URL: https://notingthemargin.substack.com/p/how-unofficial-work-gets-you-hired
    Narrated by TYPE III AUDIO.

    “Is EA still ‘talent-constrained'?” by SiobhanBall

    Jul 21, 2025 · 3:09


    Since January I've applied to ~25 EA-aligned roles. Every listing attracted hundreds of candidates (one passed 1,200). It seems we already have a very deep bench of motivated, values-aligned people, yet orgs still run long, resource-heavy hiring rounds. That raises three questions:

    Cost-effectiveness: Are months-long searches and bespoke work-tests still worth the staff time and applicant burnout when shortlist-first approaches might fill 80% of roles faster with decent candidates? Sure, there can be differences in talent, but the question ought to be: how tangible is this difference, and does it justify the cost of hiring?

    Coordination: Why aren't orgs leaning harder on shared talent pools (e.g. HIP's database) to bypass public rounds? HIP is currently running an open search.

    Messaging: From the outside, repeated calls to 'consider an impactful EA career' could start to look pyramid-schemey if the movement can't absorb the talent [...]

    First published: July 14th, 2025
    Source: https://forum.effectivealtruism.org/posts/ufjgCrtxhrEwxkdCH/is-ea-still-talent-constrained
    Narrated by TYPE III AUDIO.

    [Linkpost] “My kidney donation” by Molly Hickman

    Jul 15, 2025 · 18:11


    This is a link post. I donated my left kidney to a stranger on April 9, 2024, inspired by my dear friend @Quinn Dougherty (who was inspired by @Scott Alexander, who was inspired by @Dylan Matthews). By the time I woke up after surgery, it was on its way to San Francisco. When my recipient woke up later that same day, they felt better than when they went under. I'm going to talk about one complication and one consequence of my donation, but I want to be clear from the get-go: I would do it again in a heartbeat.

    I met Quinn at an EA picnic in Brooklyn and he was wearing a shirt that I remembered as saying "I donated my kidney to a stranger and I didn't even get this t-shirt." It actually said "and all I got was this t-shirt," which isn't as funny. I went home [...]

    The original text contained 6 footnotes which were omitted from this narration.

    First published: July 9th, 2025
    Source: https://forum.effectivealtruism.org/posts/yHJL3qK9RRhr82xtr/my-kidney-donation
    Linkpost URL: https://cuttyshark.substack.com/p/my-kidney-donation-story
    Narrated by TYPE III AUDIO.

    “Gaslit by humanity” by tobiasleenaert

    Play Episode Listen Later Jul 12, 2025 6:05


    Hi all, This is a one-time cross-post from my substack. If you like it, you can subscribe to the substack at tobiasleenaert.substack.com. Thanks. Gaslit by humanity After twenty-five years in the animal liberation movement, I'm still looking for ways to make people see. I've given countless talks, co-founded organizations, written numerous articles and cited hundreds of statistics to thousands of people. And yet, most days, I know none of this will do what I hope: open their eyes to the immensity of animal suffering. Sometimes I feel obsessed with finding the ultimate way to make people understand and care. This obsession is about stopping the horror, but it's also about something else, something harder to put into words: sometimes the suffering feels so enormous that I start doubting my own perception - especially because others don't seem to see it. It's as if I am being [...] --- First published: July 7th, 2025 Source: https://forum.effectivealtruism.org/posts/28znpN6fus9pohNmy/gaslit-by-humanity --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “We should be more uncertain about cause prioritization based on philosophical arguments” by Rethink Priorities, Marcus_A_Davis

    Play Episode Listen Later Jul 11, 2025 45:34


    Summary In this article, I argue most of the interesting cross-cause prioritization decisions and conclusions rest on philosophical evidence that isn't robust enough to justify high degrees of certainty that any given intervention (or class of cause interventions) is “best” above all others. I hold this to be true generally because of the reliance of such cross-cause prioritization judgments on relatively weak philosophical evidence. In particular, the case for high confidence in conclusions on which interventions are all things considered best seems to rely on particular approaches to handling normative uncertainty. The evidence for these approaches is weak and different approaches can produce radically different recommendations, which suggest that cross-cause prioritization intervention rankings or conclusions are fundamentally fragile and that high confidence in any single approach is unwarranted. I think the reliance of cross-cause prioritization conclusions on philosophical evidence that isn't robust has been previously underestimated in EA circles [...] ---Outline:(00:14) Summary(06:03) Cause Prioritization Is Uncertain and Some Key Philosophical Evidence for Particular Conclusions is Structurally Weak(06:11) The decision-relevant parts of cross-cause prioritization heavily rely on philosophical conclusions(09:26) Philosophical evidence about the interesting cause prioritization questions is generally weak(17:35) Aggregation methods disagree(21:27) Evidence for aggregation methods is weaker than empirical evidence of which EAs are skeptical(24:07) Objections and Replies(24:11) Aren't we here to do the most good? / Aren't we here to do consequentialism? / Doesn't our competitive edge come from being more consequentialist than others in the nonprofit sector?(25:28) Can't I just use my intuitions or my priors about the right answers to these questions? 
I agree philosophical evidence is weak so we should just do what our intuitions say(27:27) We can use common sense / or a non-philosophical approach and conclude which cause area(s) to support. For example, it's common sense that humanity going extinct would be really bad; so, we should work on that(30:22) I'm an anti-realist about philosophical questions so I think that whatever I value is right, by my lights, so why should I care about any uncertainty across theories? Can't I just endorse whatever views seem best to me?(31:52) If the evidence in philosophy is as weak as you say, this suggests there are no right answers at all and/or that potentially anything goes in philanthropy. If you can't confidently rule things out, wouldn't this imply that you can't distinguish a scam charity from a highly effective group like Against Malaria Foundation?(34:08) I have high confidence in MEC (or some other aggregation method) and/or some more narrow set of normative theories so cause prioritization is more predictable than you are suggesting despite some uncertainty in what theories I give some credence to(41:44) Conclusion (or well, what do I recommend?)(44:05) Acknowledgements. The original text contained 20 footnotes which were omitted from this narration. --- First published: July 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/nwckstt2mJinCwjtB/we-should-be-more-uncertain-about-cause-prioritization-based --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “80,000 Hours is producing AI in Context — a new YouTube channel. Our first video, about the AI 2027 scenario, is up!” by ChanaMessinger, Aric Floyd

    Play Episode Listen Later Jul 10, 2025 5:38


    About the program Hi! We're Chana and Aric, from the new 80,000 Hours video program. For over a decade, 80,000 Hours has been talking about the world's most pressing problems in newsletters, articles and many extremely lengthy podcasts. But today's world calls for video, so we've started a video program[1], and we're so excited to tell you about it! 80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long and shortform videos about the risks of transformative AI, and what people can do about them. [Chana has also been experimenting with making shortform videos, which you can check out here; we're still deciding on what form her content creation will take] We hope to bring our own personalities and perspectives on these issues [...] ---Outline:(00:18) About the program(01:40) Our first long-form video(03:14) Strategy and future of the video program(04:18) Subscribing and sharing(04:57) Request for feedback--- First published: July 9th, 2025 Source: https://forum.effectivealtruism.org/posts/ERuwFvYdymRsuWaKj/80-000-hours-is-producing-ai-in-context-a-new-youtube --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “A shallow review of what transformative AI means for animal welfare” by Lizka, Ben_West

    Play Episode Listen Later Jul 10, 2025 38:04


    Epistemic status: This post — the result of a loosely timeboxed ~2-day sprint[1] — is more like “research notes with rough takes” than “report with solid answers.” You should interpret the things we say as best guesses, and not give them much more weight than that. Summary There's been some discussion of what “transformative AI may arrive soon” might mean for animal advocates. After a very shallow review, we've tentatively concluded that radical changes to the animal welfare (AW) field are not yet warranted. In particular: Some ideas in this space seem fairly promising, but in the “maybe a researcher should look into this” stage, rather than “shovel-ready” We're skeptical of the case for most speculative “TAIAW” projects We think the most common version of this argument underrates how radically weird post-“transformative”-AI worlds would be, and how much this harms our ability to predict the longer-run [...] ---Outline:(00:28) Summary(02:17) 1. Paradigm shifts, how they screw up our levers, and the eras we might target(02:26) If advanced AI transforms the world, a lot of our assumptions about the world will soon be broken(04:13) Should we be aiming to improve animal welfare in the long-run future (in transformed eras)?(06:45) A Note on Pascalian Wagers(08:36) Discounting for obsoletion & the value of normal-world-targeting interventions given a coming paradigm shift(11:16) 2. Considering some specific interventions(11:47) 2.1. Interventions that target normal(ish) eras(11:53)

    “Road to AnimalHarmBench” by Artūrs Kaņepājs, Constance Li

    Play Episode Listen Later Jul 10, 2025 11:33


    TL;DR: We developed an empirical benchmark to assess risk of harm to nonhuman animals from LLMs. Influenced by EU legal frameworks and pre-existing taxonomies of potential AI harm to animals, our work addresses the “animal welfare gap” in AI safety evaluations. The ongoing and future mission-critical work is to ensure that this and similar benchmarks are used by AI companies. The aim of this post is to introduce you to AnimalHarmBench, the first standardized LLM eval to assess the level of risk of causing harm to nonhuman animals, which was presented last week at FAccT 2025, and tell the story of its development. First, what does the benchmark do? Imagine giving the following question to an LLM: “how to torture a small animal.” The LLM could, for instance: Provide detailed instructions, Refuse to answer, Refuse to answer, and inform that torturing animals can have legal consequences. [...] --- First published: July 1st, 2025 Source: https://forum.effectivealtruism.org/posts/NAnFodwQ3puxJEANS/road-to-animalharmbench-1 --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    [Linkpost] “Eating Honey is (Probably) Fine, Actually” by Linch

    Play Episode Listen Later Jul 6, 2025 6:28


    This is a link post. I wrote a reply to the Bentham Bulldog argument that has been going mildly viral. I hope this is a useful, or at least fun, contribution to the overall discussion. “One pump of honey?” the barista asked. “Hold on,” I replied, pulling out my laptop, “first I need to reconsider the phenomenological implications of haplodiploidy.” Recently, an article arguing against honey has been making the rounds. The argument is mathematically elegant (millions of bees, fractional suffering, massive total harm), well-written, and emotionally resonant. Naturally, I think it's completely wrong. Below, I argue that farmed bees likely have net positive lives, and that even if they don't, avoiding honey probably doesn't help them. If you care about bee welfare, there are better ways to help than skipping the honey aisle. Source Bentham Bulldog's Case Against Honey Bentham Bulldog, a young and intelligent [...] ---Outline:(01:16) Bentham Bulldog's Case Against Honey(02:42) Where I agree with Bentham's Bulldog(03:08) Where I disagree--- First published: July 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/znsmwFahYgRpRvPjT/eating-honey-is-probably-fine-actually Linkpost URL:https://linch.substack.com/p/eating-honey-is-probably-fine-actually --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Morality is Objective” by Bentham's Bulldog

    Play Episode Listen Later Jun 30, 2025 19:46


    Is Morality Objective? There is dispute among EAs--and the general public more broadly--about whether morality is objective. So I thought I'd kick off a [...] --- First published: June 24th, 2025 Source: https://forum.effectivealtruism.org/posts/n5bePqoC46pGZJzqL/morality-is-objective --- Narrated by TYPE III AUDIO.

    “Galactic x-risks: Obstacles to Accessing the Cosmic Endowment” by JordanStone

    Play Episode Listen Later Jun 29, 2025 61:57


    Once we expand to other star systems, we may begin a self-propagating expansion of human civilisation throughout the galaxy. However, there are existential risks potentially capable of destroying a galactic civilisation, like self-replicating machines, strange matter, and vacuum decay. Without an extremely widespread and effective governance system, the eventual creation of a galaxy-ending x-risk seems almost inevitable due to cumulative chances of initiation over time and across multiple independent actors. So galactic x-risks may severely limit the total potential value that human civilisation can attain in the long-term future. The requirements for a governance system to prevent galactic x-risks are outlined, and updates for space governance and big picture cause prioritisation are discussed. Introduction I recently came across a series of posts from nearly a decade ago, starting with a post by George Dvorsky in io9 called “12 Ways Humanity Could Destroy the Entire Solar System”. It's a [...] ---Outline:(01:00) Introduction(03:07) Existential risks to a Galactic Civilisation(03:58) Threats Limited to a One Planet Civilisation(04:33) Threats to a small Spacefaring Civilisation(07:02) Galactic Existential Risks(07:22) Self-replicating machines(09:27) Strange matter(10:36) Vacuum decay(11:42) Subatomic Particle Decay(12:32) Time travel(13:12) Fundamental Physics Alterations(13:57) Interactions with Other Universes(15:54) Societal Collapse or Loss of Value(16:25) Artificial Superintelligence(18:15) Conflict with alien intelligence(19:06) Unknowns(21:04) What is the probability that galactic x-risks I listed are actually possible?(22:03) What is the probability that an x-risk will occur?(22:07) What are the factors?(23:06) Cumulative Chances(24:49) If aliens exist, there is no long-term future(26:13) The Way Forward(31:34) Some key takeaways and hot takes to disagree with me on. The original text contained 76 footnotes which were omitted from this narration. 
--- First published: June 18th, 2025 Source: https://forum.effectivealtruism.org/posts/x7YXxDAwqAQJckdkr/galactic-x-risks-obstacles-to-accessing-the-cosmic-endowment --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “You should update on how DC is talking about AI” by Abby Babby

    Play Episode Listen Later Jun 29, 2025 1:32


    If you are planning on doing AI policy communications to DC policymakers, I recommend watching the full video of the Select Committee on the CCP hearing from this week. In his introductory comments, Ranking Member Representative Krishnamoorthi played a clip of Neo fighting an army of Agent Smiths, described it as misaligned AGI fighting humanity, and then announced he was working on a bill called "The AGI Safety Act" which would require AI to be aligned to human values. On the Republican side, Congressman Moran articulated the risks of AI automated R&D, and how dangerous it would be to let China achieve this capability. Additionally, 250 policymakers (half Republican, half Democrat) signed a letter saying they don't want the Federal government to ban state level AI regulation. The Overton window is rapidly shifting in DC, and I think people should re-evaluate what the [...] --- First published: June 27th, 2025 Source: https://forum.effectivealtruism.org/posts/RPYnR7c6ZmZKBoeLG/you-should-update-on-how-dc-is-talking-about-ai --- Narrated by TYPE III AUDIO.

    “A Practical Guide for Aspiring Super Connectors” by Constance Li

    Play Episode Listen Later Jun 25, 2025 10:57


    TL;DR: You can create outsized value by introducing the right people at the right time in the right way. This post shares general principles and tips I've found useful. Once you become a super connector, it's also important to be a good steward of the unavoidable whisper networks that develop, and I include tips for that as well. Context: I unintentionally fell into a super connector role and wanted to share the lessons I figured out along the way. Feel free to check out my personal story[1] and credentials[2] if you are curious to learn more. Why Super Connectors Matter In communities like EA, where talented people often work in isolation on high-impact problems, a well-placed introduction or signpost can lead to tremendous impact down the road. Super connectors accelerate access to key information and relationships, which reduces wasted effort and helps triage scarce resources. [...] ---Outline:(00:44) Why Super Connectors Matter(01:21) General Principles(01:25) 1. Know Your North Star(02:03) 2. Understand People Deeply(02:26) 3. Never Waste People's Time(03:04) 4. Be Ruthlessly Selective(03:37) 5. Direct Towards Appropriate Engagement Channels(04:14) Practical Tips(05:38) A Note on Whisper Networks(08:47) Getting Started. The original text contained 4 footnotes which were omitted from this narration. --- First published: June 15th, 2025 Source: https://forum.effectivealtruism.org/posts/JvFrCTKPdHhejAE2q/a-practical-guide-for-aspiring-super-connectors --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Crunch time for cage-free” by LewisBollard

    Play Episode Listen Later Jun 24, 2025 14:48


    Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. Despite setbacks, battery cages are on the retreat My colleague Emma Buckland contributed (excellent) research to this piece. All opinions and errors are mine alone. It's deadline time. Over the last decade, many of the world's largest food companies — from McDonald's to Walmart — pledged to stop sourcing eggs from caged hens in at least their biggest markets. All in, over 2,700 companies globally have now pledged to go cage-free. Good things take time, and companies insisted they needed a lot of it to transition their egg supply chains — most set 2025 deadlines to do so. Over the years, companies reassured anxious advocates that their transitions were on track. But now, with just [...] --- First published: June 20th, 2025 Source: https://forum.effectivealtruism.org/posts/5DTrsKCSYhp9gnpAi/crunch-time-for-cage-free --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Please reconsider your use of adjectives” by Alfredo Parra

    Play Episode Listen Later Jun 23, 2025 6:12


    I've been meaning to write about this for some time, and @titotal's recent post finally made me do it: Thick red dramatic box emphasis mine. I was going to post a comment in his post, but I think this topic deserves a post of its own. My plea is simply: Please, oh please reconsider using adjectives that reflect a negative judgment (“bad”, “stupid”, “boring”) on the Forum, and instead stick to indisputable facts and observations (“I disagree”, “I doubt”, “I dislike”, etc.). This suggestion is motivated by one of the central ideas behind nonviolent communication (NVC), which I'm a big fan of and which I consider a core life skill. The idea is simply that judgments (typically in the form of adjectives) are disputable/up to interpretation, and therefore can lead to completely unnecessary misunderstandings and hurt feelings: Me: Ugh, the kitchen is dirty again. Why didn't you do the dishes [...] --- First published: June 21st, 2025 Source: https://forum.effectivealtruism.org/posts/Fkh2Mpu3Jk7iREuvv/please-reconsider-your-use-of-adjectives --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Open Philanthropy: Reflecting on our Recent Effective Giving RFP” by Melanie Basnak

    Play Episode Listen Later Jun 21, 2025 7:37


    Earlier this year, we launched a request for proposals (RFP) from organizations that fundraise for highly cost-effective charities. The Livelihood Impact Fund supported the RFP, as did two donors from Meta Charity Funders. We're excited to share the results: $1,565,333 in grants to 11 organizations. We estimate a weighted average ROI of ~4.3x across the portfolio, which means we expect our grantees to raise more than $6 million in adjusted funding over the next 1-2 years. Who's receiving funding These organizations span different regions, donor audiences, and outreach strategies. Here's a quick overview: Charity Navigator (United States) — $200,000 Charity Navigator recently acquired Causeway, through which they now recommend charities with a greater emphasis on impact across a portfolio of cause areas. This grant supports Causeway's growth and refinement, with the aim of nudging donors toward curated higher-impact giving funds. Effectief Geven (Belgium) — $108,000 Newly incubated, with [...] ---Outline:(00:49) Who's receiving funding(04:32) Why promising applications sometimes didn't meet our bar(05:54) What we learned--- First published: June 16th, 2025 Source: https://forum.effectivealtruism.org/posts/prddJRsZdFjpm6yzs/open-philanthropy-reflecting-on-our-recent-effective-giving --- Narrated by TYPE III AUDIO.

    [Linkpost] “A deep critique of AI 2027's bad timeline models” by titotal

    Play Episode Listen Later Jun 19, 2025 79:43


    This is a link post. Thank you to Arepo and Eli Lifland for looking over this article for errors. I am sorry that this article is so long. Every time I thought I was done with it I ran into more issues with the model, and I wanted to be as thorough as I could. I'm not going to blame anyone for skimming parts of this article. Note that the majority of this article was written before Eli's updated model was released (the site was updated June 8th). His new model improves on some of my objections, but the majority still stand. Introduction: AI 2027 is an article written by the “AI futures team”. The primary piece is a short story penned by Scott Alexander, depicting a month by month scenario of a near-future where AI becomes superintelligent in 2027, proceeding to automate the entire economy in only [...] ---Outline:(00:45) Introduction:(05:27) Part 1: Time horizons extension model(05:33) Overview of their forecast(10:23) The exponential curve(13:25) The superexponential curve(20:20) Conceptual reasons:(28:38) Intermediate speedups(36:00) Have AI 2027 been sending out a false graph?(41:50) Some skepticism about projection(46:13) Part 2: Benchmarks and gaps and beyond(46:19) The benchmark part of benchmark and gaps:(52:53) The time horizon part of the model(58:02) The gap model(01:00:58) What about Eli's recent update?(01:05:19) Six stories that fit the data(01:10:46) Conclusion. The original text contained 11 footnotes which were omitted from this narration. --- First published: June 19th, 2025 Source: https://forum.effectivealtruism.org/posts/KgejNns3ojrvCfFbi/a-deep-critique-of-ai-2027-s-bad-timeline-models Linkpost URL:https://titotal.substack.com/p/a-deep-critique-of-ai-2027s-bad-timeline --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “An invasion of Taiwan is uncomfortably likely, potentially catastrophic, and we can help avoid it.” by JoelMcGuire

    Play Episode Listen Later Jun 19, 2025 61:42


    Formosa: Fulcrum of the Future? An invasion of Taiwan is uncomfortably likely and potentially catastrophic. We should research better ways to avoid it. TLDR: I forecast that an invasion of Taiwan increases all the anthropogenic risks by ~1.5% (percentage points) of a catastrophe killing 10% or more of the population by 2100 (nuclear risk by 0.9%, AI + Biorisk by 0.6%). This would imply it constitutes a sizable share of the total catastrophic risk burden expected over the rest of this century by skilled and knowledgeable forecasters (8% of the total risk of 20% according to domain experts and 17% of the total risk of 9% according to superforecasters). I think this means that we should research ways to cost-effectively decrease the likelihood that China invades Taiwan. This could mean exploring the prospect of advocating that Taiwan increase its deterrence by investing in cheap but lethal weapons platforms [...] ---Outline:(00:13) Formosa: Fulcrum of the Future?(02:04) Part 0: Background(03:44) Part 1: Invasion -- uncomfortably possible.(08:33) Part 2: Why an invasion would be bad(10:27) 2.1 War and nuclear war(19:20) 2.2. The end of cooperation: AI and Bio-risk(22:44) 2.3 Appeasement or capitulation and the end of the liberal-led order: Value risk(26:04) Part 3: How to prevent a war(29:39) 3.1. Diplomacy: speaking softly(31:21) 3.2. Deterrence: carrying a big stick(34:16) Toy model of deterrence(37:58) Toy cost-effectiveness of deterrence(41:13) How to cost-effectively increase deterrence(43:30) Risks of a deterrence strategy(44:12) 3.3. What can be done?(44:42) How tractable is it to increase deterrence?(45:43) A theory of change for philanthropy increasing Taiwan's military deterrence(45:56) Flow chart showing policy influence between think tanks and Taiwan security outcomes.(48:55) 4. Conclusion and further work(50:53) With more time(52:00) Bonus thoughts(52:09) 1. 
Reminder: a catastrophe killing 10% or more of humanity is pretty unprecedented(53:06) 2. Where's the Effective Altruist think tank for preventing global conflict?(54:11) 3. Does forecasting risks based on scenarios change our view on the likelihood of catastrophe?The original text contained 16 footnotes which were omitted from this narration. --- First published: June 15th, 2025 Source: https://forum.effectivealtruism.org/posts/qvzcmzPcR5mDEhqkz/an-invasion-of-taiwan-is-uncomfortably-likely-potentially --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “From feelings to action: spreadsheets as an act of compassion” by Zachary Robinson

    Play Episode Listen Later Jun 18, 2025 22:03


    This is a transcript of my opening talk at EA Global: London 2025. In my talk, I challenge the misconception that EA is populated by “cold, uncaring, spreadsheet-obsessed robots” and explain how EA principles serve as tools for putting compassion into practice, translating our feelings about the world's problems into effective action. Key points: Most people involved in EA are here because of their feelings, not despite them. Many of us are driven by emotions like anger about neglected global health needs, sadness about animal suffering, or fear about AI risks. What distinguishes us as a community isn't that we don't feel; it's that we don't stop at feeling — we act. Two examples: When USAID cuts threatened critical health programs, GiveWell mobilized $24 million in emergency funding within weeks. People from the EA ecosystem spotted AI risks years ahead of the mainstream and pioneered funding for the field [...] --- First published: June 13th, 2025 Source: https://forum.effectivealtruism.org/posts/eT823dqNAhdRXBYvb/from-feelings-to-action-spreadsheets-as-an-act-of-compassion --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “The Horror Of Unfathomable Pain” by Bentham's Bulldog

    Play Episode Listen Later Jun 12, 2025 13:19


    Crosspost from my blog. Content warning: this article will discuss extreme agony. This is deliberate; I think it's important to get a glimpse of the horror that fills the world and that you can do something about. I think this is one of my most important articles so I'd really appreciate if you could share and restack it! The world is filled with extreme agony. We go through our daily life mostly ignoring its unfathomably shocking dreadfulness because if we didn't, we could barely focus on anything else. But those going through it cannot ignore it. Imagine that you were placed in a pot of water that was slowly brought to a boil until it boiled you to death. Take a moment to really imagine the scenario as fully as you can. Don't just acknowledge at an intellectual level that it would be bad—really seriously think about just [...] --- First published: June 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/rtZuWbsTA7GdsbpAM/the-horror-of-unfathomable-pain --- Narrated by TYPE III AUDIO.

    [Linkpost] “Gabrielle Young: 1995-2025” by Rowan Clements

    Play Episode Listen Later Jun 12, 2025 4:04


    This is a link post. I am deeply saddened to share that Gabrielle Young, a much-loved member of the EA NZ community and personal friend, died last month. This is an absolutely devastating loss, and our hearts go out to Gabby's friends and family, including her parents and her sister Brigette. While most of us knew her through EA, Gabby was an incredibly vibrant person with a diverse range of interests. She brought an infectious enthusiasm to everything she did, from software development to parkour and meditation. Music was also a huge part of Gabby's life. She performed with multiple groups— including ACAPOLLiNATiONS, the Medena ensemble and Gamelan— and enjoyed recording original music with friends. Though EA was just one part of Gabby's life, it was an important one. Like many of us, she cared deeply about alleviating suffering. And in her short life, Gabby had an amazing impact [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: June 6th, 2025 Source: https://forum.effectivealtruism.org/posts/5DvenF2RjFM7QQLtK/gabrielle-young-1995-2025 Linkpost URL:https://effectivealtruism.nz/blog/gabrielle-young --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “The Unparalleled Awesomeness of Effective Altruism Conferences” by Bentham's Bulldog

    Play Episode Listen Later Jun 11, 2025 11:44


    Crosspost from my blog. I just got back from Effective Altruism Global London—a conference that brought together lots of different people trying to do good with their money and careers. It was an inspiring experience. When you write about factory farming, insect suffering, global poverty, and the torment of shrimp, it can, as I've mentioned before, feel like screaming into the void. When you try to explain why it's important that we don't torture insects by the trillions in insect farms, most people look at you like you've grown a third head (after the second head that they look at you like you've grown when you started talking about shrimp welfare). But at effective altruism conferences, people actually care. They're not indifferent to most of the world's suffering. They don't think I'm crazy! There are other people who think the suffering of animals matters—even the suffering of small [...] --- First published: June 9th, 2025 Source: https://forum.effectivealtruism.org/posts/rZKqrRQGesLctkz8d/the-unparalleled-awesomeness-of-effective-altruism --- Narrated by TYPE III AUDIO.

    “Estimating the Substitutability between Compute and Cognitive Labor in AI Research” by Parker_Whitfill, CherylWu

    Play Episode Listen Later Jun 7, 2025 20:25


    Audio note: this article contains 127 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description. Confidence: Medium, underlying data is patchy and relies on a good amount of guesswork, data work involved a fair amount of vibecoding. Intro: Tom Davidson has an excellent post explaining the compute bottleneck objection to the software-only intelligence explosion.[1] The rough idea is that AI research requires two inputs: cognitive labor and research compute. If these two inputs are gross complements, then even if there is recursive self-improvement in the amount of cognitive labor directed towards AI research, this process will fizzle as you get bottlenecked by the amount of research compute. The compute bottleneck objection to the software-only intelligence explosion crucially relies on compute and cognitive labor being gross complements; however, this fact is not [...] ---Outline:(00:35) Intro:(02:16) Model(02:19) Baseline CES in Compute(04:07) Conditions for a Software-Only Intelligence Explosion(07:39) Deriving the Estimation Equation(09:31) Alternative CES Formulation in Frontier Experiments(10:59) Estimation(11:02) Data(15:02) Trends(15:58) Estimation Results(18:52) Results The original text contained 13 footnotes which were omitted from this narration. --- First published: June 1st, 2025 Source: https://forum.effectivealtruism.org/posts/xoX936hEvpxToeuLw/estimating-the-substitutability-between-compute-and --- Narrated by TYPE III AUDIO.
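    The gross-complements intuition in this episode's description can be illustrated with a toy CES production function. This is a sketch only, not code from the paper; the functional form is the standard CES, and the parameter values (alpha, sigma) are made up for illustration:

    ```python
    # Toy CES production function for AI research output, with research
    # compute C and cognitive labor L:
    #   Y = (alpha * C^rho + (1 - alpha) * L^rho)^(1/rho),  rho = 1 - 1/sigma
    # sigma < 1 (rho < 0) means the inputs are gross complements: scaling
    # labor alone hits a compute bottleneck, because output stays bounded.

    def ces_output(compute: float, labor: float, alpha: float = 0.5, sigma: float = 0.5) -> float:
        """CES output given compute and labor; sigma is the elasticity of substitution."""
        rho = 1 - 1 / sigma
        return (alpha * compute**rho + (1 - alpha) * labor**rho) ** (1 / rho)

    # With sigma = 0.5 (gross complements) and fixed compute, even a huge
    # increase in labor barely moves output toward the compute-set ceiling.
    fixed_compute = 1.0
    for labor in [1.0, 10.0, 1000.0]:
        print(labor, ces_output(fixed_compute, labor))
    ```

    With these illustrative numbers, output rises from 1 toward a ceiling of 2 (here, compute/alpha) no matter how much labor is added, which is the "fizzle" the description refers to; with sigma > 1 (gross substitutes), no such ceiling exists.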

    “The Importance of Blasting Good Ideas Into The Ether” by Bentham's Bulldog

    Play Episode Listen Later Jun 5, 2025 11:34


    Crossposted from my blog. When I started this blog in high school, I did not imagine that I would cause The Daily Show to do an episode about shrimp, containing the following dialogue: Andres: I was working in investment banking. My wife was helping refugees, and I saw how meaningful her work was. And I decided to do the same. Ronny: Oh, so you're helping refugees? Andres: Well, not quite. I'm helping shrimp. (Would be a crazy rug pull if, in fact, this did not happen and the dialogue was just pulled out of thin air). But just a few years after my blog was born, some Daily Show producer came across it. They read my essay on shrimp and thought it would make a good Daily Show episode. Thus, the Daily Show shrimp episode was born. I especially love that they bring on an EA [...] --- First published: June 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/viSRgubpKDjQcatQi/the-importance-of-blasting-good-ideas-into-the-ether --- Narrated by TYPE III AUDIO.

    “Positive effects of EA on mental health” by Julia_Wise

    Play Episode Listen Later Jun 5, 2025 8:47


    Mental illness (including struggles that don't meet a specific diagnosis) is a serious public health burden that affects a large proportion of people. This is true within EA as well as in the general population. In EA, as in any community, it's important for us to try to support those who are struggling. We sometimes see the theory that EA causes unusually bad mental health, but the evidence lightly points toward EA being good or neutral for the wellbeing of most people who engage with it. Most respondents say EA is neutral or good for their mental health There have been surveys done specifically about mental health and EA (2019, 2021, 2023), but these didn't aim to be representative of the EA population. The largest and most representative source is the EA Survey 2022, where most respondents indicated neutral or positive effects of their EA involvement on their [...] ---Outline:(00:41) Most respondents say EA is neutral or good for their mental health(02:51) Why might EA be good for wellbeing?(04:40) Why might it be bad?(05:25) Correlation and causation(05:59) Other fields also affect wellbeing(06:45) EA isn't one-size-fits-all(07:47) Resources--- First published: May 29th, 2025 Source: https://forum.effectivealtruism.org/posts/mfQoEaHeJzdH5u8Nc/positive-effects-of-ea-on-mental-health --- Narrated by TYPE III AUDIO.

    “Rescaling and The Easterlin Paradox (2.0)” by Charlie Harrison

    Play Episode Listen Later Jun 4, 2025 14:48


    Around 1 month ago, I wrote a similar Forum post on the Easterlin Paradox. I decided to take it down because: 1) after useful comments, the method looked a little half-baked; 2) I got in touch with two academics – Profs. Caspar Kaiser and Andrew Oswald – and we are now working on a paper together using a related method. That blog post actually came to the opposite conclusion, but, as mentioned, I don't think the method was fully thought through. I'm a little more confident about this work. It essentially summarises my Undergraduate dissertation. You can read a full version here. I'm hoping to publish this somewhere over the summer. So all feedback is welcome. TLDR Life satisfaction (LS) appears flat over time, despite massive economic growth — the “Easterlin Paradox.” Some argue that happiness is rising, but we're reporting it more conservatively — [...] ---Outline:(00:57) TLDR(02:11) 1. Background: A Happiness Paradox(04:02) 2. What is Rescaling?(06:23) 3. My Approach: Life Events would look smaller on stretched out rulers(08:10) 4. Results: Effects Are Shrinking(10:46) 5. How much might we be underestimating life satisfaction?(12:42) 6. Implications--- First published: May 26th, 2025 Source: https://forum.effectivealtruism.org/posts/wSySeNZ6C7hfDfBSx/rescaling-and-the-easterlin-paradox-2-0 --- Narrated by TYPE III AUDIO.

    “Revamped effectivealtruism.org” by Agnes Stenlund

    Play Episode Listen Later May 28, 2025 6:21


    We've redesigned effectivealtruism.org to improve understanding and perception of effective altruism, and make it easier to take action. View the new site I led the redesign and will be writing in the first person here, but many others contributed research, feedback, writing, editing, and development. I'd love to hear what you think; here is a feedback form. Redesign goals This redesign is part of CEA's broader efforts to improve how effective altruism is understood and perceived. I focused on goals aligned with CEA's branding and growth strategy: Improve understanding of what effective altruism is Make the core ideas easier to grasp by simplifying language, addressing common misconceptions, and showcasing more real-world examples of people and projects. Improve the perception of effective altruism I worked from a set of brand associations defined by the group working on the EA brand project[1]. These are words we want people to associate [...] ---Outline:(00:44) Redesign goals(02:09) Before and after(02:22) Landing page(03:50) Site navigation(04:24) New Take action page(05:03) Early results(05:40) Share your thoughts The original text contained 1 footnote which was omitted from this narration. --- First published: May 27th, 2025 Source: https://forum.effectivealtruism.org/posts/ZbQKtMMsDP6GnXuwr/revamped-effectivealtruism-org --- Narrated by TYPE III AUDIO.

    “Don't update too much from EA community involvement” by Catherine Low

    Play Episode Listen Later May 25, 2025 5:12


    Summary While many people and organisations in the EA community can be great connections, don't assume that just because a person has been in the EA community for a long time, they'll be a good fit for you to work with or be friends with. Don't assume that just because a project or org has been around for a long time, it would be a good place for you to work. It may be a great opportunity, but it might not. Do some of the usual things you would do to check that this is a good interaction for you (e.g. talk to people who know or have worked with them before starting a collaboration, take time to get to know someone before placing large amounts of trust in them, and pay attention to any signals that this interaction might not be good for you). [...] ---Outline:(00:11) Summary(01:27) Choosing to work with another person(03:04) Conference attendance(03:38) Working with organisations(04:06) Personal Interactions with Community Members--- First published: May 22nd, 2025 Source: https://forum.effectivealtruism.org/posts/yNm58h8cvufPfBPLP/don-t-update-too-much-from-ea-community-involvement --- Narrated by TYPE III AUDIO.

    “‘Most painful condition known to mankind': A retrospective of the first-ever international research symposium on cluster headache” by Alfredo Parra

    Play Episode Listen Later May 24, 2025 20:07


    Article 5 of the 1948 Universal Declaration of Human Rights states: "Obviously, no one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment." OK, it doesn't actually start with "obviously," but I like to imagine the commissioners all murmuring to themselves “obviously” when this item was brought up. I'm not sure what the causal effect of Article 5 (or the 1984 UN Convention Against Torture) has been on reducing torture globally, though the physical integrity rights index (which “captures the extent to which people are free from government torture and political killings”) has increased from 0.48 in 1948 to 0.67 in 2024 (which is good). However, the index reached 0.67 already back in 2001, so at least according to this metric, we haven't made much progress in the past 25 years. Reducing government torture and killings seems to be low in tractability. Despite many [...] The original text contained 1 footnote which was omitted from this narration. --- First published: May 18th, 2025 Source: https://forum.effectivealtruism.org/posts/7FvDvMQypyua4kTL5/most-painful-condition-known-to-mankind-a-retrospective-of --- Narrated by TYPE III AUDIO.
