Summary: There's a near consensus that EA needs funding diversification, but with Open Phil accounting for ~90% of EA funding, that's just not possible due to some pretty basic math. Organizations and the community would need to make large tradeoffs, and this simply isn't possible or worth it at this time.

Lots of people want funding diversification: It has been two years since the FTX collapse, and one thing everyone seems to agree on is that we need more funding diversification. These takes range from off-hand wishes (“it sure would be great if funding in EA were more diversified”), to organizations trying to get a certain percentage of their budgets from non-OP sources or saying they want to diversify their funding base,[1][2][3][4][5][6][7][8] to Open Philanthropy/Good Ventures themselves wanting to see more funding diversification[9]. Everyone seems to agree: other people should be giving more money to EA projects.

The Math: Of course, I [...]

Outline:
(00:34) Lots of people want funding diversification
(01:11) The Math
(03:47) Weighted Average
(05:03) Making a lot of money to donate is difficult
(09:18) Solutions
(09:21) 1. Get more funders
(10:35) 2. Spend Less
(12:49) 3. Splitting up Open Philanthropy into Several Organizations
(13:52) 4. More For-Profit EA Work/EA Organizations Charging for Their Work
(16:23) 5. Acceptance
(16:59) My Personal Solution
(17:26) Conclusion
(17:59) Further Readings

First published: December 27th, 2024
Source: https://forum.effectivealtruism.org/posts/x8JrwokZTNzgCgYts/funding-diversification-for-mid-large-ea-organizations-is
Narrated by TYPE III AUDIO.
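The "basic math" this summary gestures at can be made concrete with a small sketch. The numbers below are purely illustrative (a hypothetical 90/10 split between one dominant funder and everyone else, in arbitrary units), not figures from the post; the point is just that capping the dominant funder's share of each budget forces either a large increase in other funding or a large cut in total spending.

```python
# Illustrative only: hypothetical 90/10 split between a dominant funder ("op")
# and all other donors. If every organization capped the dominant funder at
# some maximum share of its budget, community-wide spending could not exceed
# other_funding / (1 - cap), because other donors must cover the rest.

def max_spend_under_share_cap(op_funding, other_funding, max_op_share):
    """Largest total budget consistent with the cap on the dominant funder's share."""
    return min(op_funding + other_funding, other_funding / (1 - max_op_share))

op, other = 900, 100  # hypothetical units; roughly a 90% / 10% split
for cap in (0.9, 0.7, 0.5, 0.3):
    total = max_spend_under_share_cap(op, other, cap)
    print(f"cap at {cap:.0%} from one funder -> max spend {total:6.1f} "
          f"({total / (op + other):.0%} of current)")
```

Under these made-up numbers, a community-wide 50% cap would shrink total spending to roughly a fifth of current levels unless non-OP funding grew several-fold, which is the tradeoff the post is pointing at.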
Hey everyone, I'm the producer of The 80,000 Hours Podcast, and a few years ago I interviewed AJ Jacobs about his writing, his experiments, and EA. And I said that my guess was that the best approach to making a high-impact TV show was something like: You make Mad Men — same level of writing, directing, and acting — but instead of Madison Avenue in the 1950-70s, it's an Open Phil-like org. So during COVID I wrote a pilot and series outline for a show called Bequest, and I ended up with something like that (in that the characters start an Open Phil-like org by the middle of the season, in a world where EA doesn't exist yet), combined with something like: Breaking Bad, but instead of raising money for his family, Walter White is earning to give. (That's not especially close to the story, and not claiming it's [...]

First published: November 21st, 2024
Source: https://forum.effectivealtruism.org/posts/HjKpghhowBRLat4Hq/bequest-an-ea-ish-tv-show-that-didn-t-make-it
Narrated by TYPE III AUDIO.
Key Takeaways: Optimizing your giving's effect on “EA's portfolio” implies you should fund the causes your value system thinks are most underfunded by EA's largest allocators (e.g. Open Phil and SFF). These causes aren't necessarily your value system's most preferred causes. ("Preferred" = the ones you'd allocate the plurality of EA's resources to.) For the typical EA, this would likely imply donating more to animal welfare, which is currently heavily underfunded under the typical EA's value system. Opportunities Open Phil is exiting from, including invertebrates, digital minds, and wild animals, may be especially impactful.

Alice's Investing Dilemma: A Thought Experiment. Alice is a conservative investor who prefers the risk-adjusted return of a portfolio of 70% stocks and 30% bonds. Along with 9 others, Alice has been allocated $1M to split between stocks and bonds however she sees fit. The combined $10M portfolio will be held for 10 years, and its [...]

Outline:
(00:07) Key Takeaways
(00:53) Alice's Investing Dilemma: A Thought Experiment
(01:55) In Charity, We Should Optimize The Portfolio of Everyone's Actions
(03:09) Theoretical Implications
(03:31) The Portfolio of Everyone's Actions vs EA's Portfolio
(04:10) Practical Recommendations
(05:26) EA's Current Resource Allocations

First published: November 9th, 2024
Source: https://forum.effectivealtruism.org/posts/2G8XfzKyd78JqZpjQ/fund-causes-open-phil-underfunds-instead-of-your-most
Narrated by TYPE III AUDIO.
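The thought experiment above is quantitative enough that a small sketch may help. This is a minimal illustration of the portfolio logic with made-up numbers (the behavior of the other nine allocators is assumed, and the function name is hypothetical); it is not code from the post.

```python
# Minimal sketch: Alice prefers a 70/30 stock/bond mix for the *combined* $10M
# portfolio, so her best move depends on what the other nine allocators do.

def alices_allocation(target_stock_share, others_stock, others_total, alice_budget):
    """Split Alice's budget so the combined portfolio lands as close as
    possible to her target stock share."""
    combined_total = others_total + alice_budget
    desired_stock = target_stock_share * combined_total
    # Clamp to what Alice actually controls: between $0 and her full budget.
    alice_stock = min(max(desired_stock - others_stock, 0), alice_budget)
    return alice_stock, alice_budget - alice_stock

# Hypothetical: the other nine allocators put all $9M into stocks.
stocks, bonds = alices_allocation(0.70, others_stock=9e6, others_total=9e6, alice_budget=1e6)
print(f"Alice buys ${stocks:,.0f} of stocks and ${bonds:,.0f} of bonds")
# -> $0 of stocks, $1,000,000 of bonds
```

Alice's best move is determined by what the aggregate portfolio is missing, not by her preferred 70/30 split in isolation, which is the post's analogy for funding the causes EA's largest allocators underfund.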
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy is hiring people to… help hire more people!, published by maura on August 17, 2024 on The Effective Altruism Forum. Open Philanthropy needs more recruiters. We'd love to see you apply, even if you've never worked in hiring before. "Recruiting" is sometimes used to narrowly refer to headhunting or outreach. At Open Phil, though, "recruiting" includes everything related to the hiring process. Our recruiting team manages the systems and people that take us from applications to offers. We design evaluations, interview candidates, manage stakeholders, etc.[1]

We're looking for:
An operations mindset. Recruiting is project management, first and foremost; we want people who can reliably juggle lots of balls without dropping them.
Interpersonal skills. We want clear communicators with good people judgment.
Interest in Open Phil's mission. This is an intentionally broad definition; see below!

What you don't need:
Prior recruiting experience. We'll teach you!
To be well-networked or highly immersed in EA. You should be familiar with the areas Open Phil works in (such as global health and wellbeing and global catastrophic risks), but if you're wondering "Am I EA enough for this?", you almost certainly are.

The job application will be posted to OP's website in coming weeks, but isn't there yet as of this post; we're starting with targeted outreach to high-context audiences (you!) before expanding our search to broader channels. If this role isn't for you but might be for someone in your network, please send them our way; we offer a reward if you counterfactually refer someone we end up hiring.

1. ^ The OP recruiting team also does headhunting and outreach, though, and we're open to hiring more folks to help with that work, too! If that sounds exciting to you, please apply to the current recruiter posting and mention an interest in outreach work in the "anything else" field.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding for work that builds capacity to address risks from transformative AI, published by GCR Capacity Building team (Open Phil) on August 14, 2024 on The Effective Altruism Forum. Post authors: Eli Rose, Asya Bergal Posting in our capacities as members of Open Philanthropy's Global Catastrophic Risks Capacity Building team. This program, together with our separate program for programs and events on global catastrophic risk, effective altruism, and other topics, has replaced our 2021 request for proposals for outreach projects. If you have a project which was in-scope for that program but isn't for either of these, you can apply to our team's general application instead. We think it's possible that the coming decades will see "transformative" progress in artificial intelligence, i.e., progress that leads to changes in human civilization at least as large as those brought on by the Agricultural and Industrial Revolutions. It is currently unclear to us whether these changes will go well or poorly, and we think that people today can do meaningful work to increase the likelihood of positive outcomes. To that end, we're interested in funding projects that: Help new talent get into work focused on addressing risks from transformative AI. Including people from academic or professional fields outside computer science or machine learning. Support existing talent in this field (e.g. via events that help build professional networks). Contribute to the discourse about transformative AI and its possible effects, positive and negative. We refer to this category of work as "capacity-building", in the sense of "building society's capacity" to navigate these risks. Types of work we've historically funded include training and mentorship programs, events, groups, and resources (e.g., blog posts, magazines, podcasts, videos), but we are interested in receiving applications for any capacity-building projects aimed at risks from advanced AI. This includes applications from both organizations and individuals, and includes both full-time and part-time projects. Apply for funding here. Applications are open until further notice and will be assessed on a rolling basis. We're interested in funding work to build capacity in a number of different fields which we think may be important for navigating transformative AI, including (but very much not limited to) technical alignment research, model evaluation and forecasting, AI governance and policy, information security, and research on the economics of AI. This program is not primarily intended to fund direct work in these fields, though we expect many grants to have both direct and capacity-building components - see below for more discussion. Categories of work we're interested in Training and mentorship programs These are programs that teach relevant skills or understanding, offer mentorship or professional connections for those new to a field, or provide career opportunities. These could take the form of fellowships, internships, residencies, visitor programs, online or in-person courses, bootcamps, etc. Some examples of training and mentorship programs we've funded in the past: BlueDot's online courses on technical AI safety and AI governance. MATS's in-person research and educational seminar programs in Berkeley, California. ML4Good's in-person AI safety bootcamps in Europe. 
We've previously funded a number of such programs in technical alignment research, and we're excited to fund new programs in this area. But we think other relevant areas may be relatively neglected - for instance, programs focusing on compute governance or on information security for frontier AI models. For illustration, here are some (hypothetical) examples of programs we could be interested in funding: A summer research fellowship for individuals with technical backgr...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding for programs and events on global catastrophic risk, effective altruism, and other topics, published by GCR Capacity Building team (Open Phil) on August 13, 2024 on The Effective Altruism Forum. Post authors: Eli Rose, Asya Bergal Posting in our capacities as members of Open Philanthropy's Global Catastrophic Risks Capacity Building team. Note: This program, together with our separate program for work that builds capacity to address risks from transformative AI, has replaced our 2021 request for proposals for outreach projects. If you have a project which was in-scope for that program but isn't for either of these, you can apply to our team's general application instead. Apply for funding here. Applications are open until further notice and will be assessed on a rolling basis. This is a wide-ranging call for applications, seeking to fund programs and events in a variety of areas of interest to Open Philanthropy - including effective altruism, global catastrophic risks, biosecurity, AI for epistemics, forecasting, and other areas. In general, if the topic of your program or event falls within one of our GCR focus areas, or if it's similar to work we've funded in the past in our GCR focus areas, it may be a good fit for this program. If you're unsure about whether to submit your application, we'd encourage you to err on the side of doing so. By "programs and events" we mean scholarship or fellowship programs, internships, residencies, visitor programs, courses[1], seminars, conferences, workshops, retreats, etc., including both in-person and online activities. We're open to funding programs or events aimed at individuals at any career stage, and with a wide range of potential purposes, including teaching new skills, providing new career opportunities, offering mentorship, or facilitating networking. Examples of programs and events of this type we've funded before include: Condor Camp, a summer program for Brazilian students interested in existential risk work. The Future of Humanity Institute's Research Scholars Program supporting early-career researchers in global catastrophic risk. Effective Altruism Global, a series of conferences for individuals interested in effective altruism. Future Forum, a conference aimed at bringing together members of several communities interested in emerging technology and the future. A workshop on using AI to improve epistemics, organized by academics from NYU, the Forecasting Research Institute, the AI Objectives Institute and Metaculus. AI-focused work We have a separate call up for work that builds societal capacity to address risks from transformative AI. If your program or event is focused on transformative AI and/or risks from transformative AI, we prefer you apply to that call instead. However, which call you apply to is unlikely to make a difference to the outcome of your application. Application information Apply for funding here. The application form asks for information about you, your project/organization (if relevant), and the activities you're requesting funding for. We're interested in funding both individual/one-off programs and events, and organizations that run or support programs and events. We expect to make most funding decisions within 8 weeks of receiving an application (assuming prompt responses to any follow-up questions we may have). 
You can indicate on our form if you'd like a more timely decision, though we may or may not be able to accommodate your request. Applications are open until further notice and will be assessed on a rolling basis. 1. ^ To apply for funding for the development of new university courses, please see our separate Course Development Grants RFP. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wild Animal Initiative has urgent need for more funding and more donors, published by Cameron Meyer Shorb on August 6, 2024 on The Effective Altruism Forum. Our room for more funding is bigger and more urgent than ever before. Our organizational strategy will be responsive both to the total amount raised and to how many people donate, so smaller donors will have an especially high impact this year. Good Ventures recently decided to phase out funding for several areas (GV blog, EA Forum post), including wild animal welfare. That's a pretty big shock to our movement. We don't know what exactly the impact will be, except that it's complicated. The purpose of this post is to share what we know and how we're thinking about things - primarily to encourage people to donate to Wild Animal Initiative this year, but also for anyone else who might be interested in the state of the wild animal welfare movement more broadly. Summary Track record Our primary goal is to support the growth of a self-sustaining interdisciplinary research community focused on reducing wild animal suffering. Wild animal welfare science is still a small field, but we're really happy with the momentum it's been building. Some highlights of the highlights: We generally get a positive response from researchers (particularly in animal behavior science and ecology), who tend to see wild animal welfare as a natural extension of their interest in conservation (unlike EAs, who tend to see those two as conflicting with each other). Wild animal welfare is increasingly becoming a topic of discussion at scientific conferences, and was recently the subject of the keynote presentation at one. Registration for our first online course filled to capacity (50 people) within a few hours, and just as many people joined the waitlist over the next few days. Room for more funding This is the first year in which our primary question is not how much more we can do, but whether we can avoid major budget cuts over the next few years. We raised less in 2023 than we did in 2022, so we need to make up for that gap. We're also going to lose our biggest donor because Good Ventures is requiring Open Philanthropy to phase out their funding for wild animal welfare. Open Phil was responsible for about half of our overall budget. The funding from their last grant to us will last halfway through 2026, but we need to decide soon how we're going to adapt. To avoid putting ourselves back in the position of relying on a single funder, our upcoming budgeting decisions will depend on not only how much money we raise, but also how diversified our funding is. That means gifts from smaller donors will have an unusually large impact. (The less you normally donate, the more disproportionate your impact will be, but the case still applies to basically everyone who isn't a multi-million-dollar foundation.) Specifically, our goal is to raise $240,000 by the end of the year from donors giving $10k or less. Impact of marginal donations We're evaluating whether we need to reduce our budget to a level we can sustain without Open Philanthropy. The more we raise this year - and the more donors who pitch in to make that happen - the less we'll need to cut. Research grants and staff-associated costs make up the vast majority of our budget, so we'd need to make cuts in one or both of those areas. 
Donations would help us avoid layoffs and keep funding external researchers. What we've accomplished so far Background If you're not familiar with Wild Animal Initiative, we're working to accelerate the growth of wild animal welfare science. We do that through three interconnected programs: We make grants to scientists who take on relevant projects, we conduct our own research on high-priority questions, and we do outreach through conferences and virtual events. Strategy...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] An update from Good Ventures, published by Alexander Berger on June 14, 2024 on The Effective Altruism Forum. I wanted to share this update from Good Ventures (Cari and Dustin's philanthropy), which seems relevant to the EA community. Tl;dr: "while we generally plan to continue increasing our grantmaking in our existing focus areas via our partner Open Philanthropy, we have decided to exit a handful of sub-causes (amounting to less than 5% of our annual grantmaking), and we are no longer planning to expand into new causes in the near term by default." A few follow-ups on this from an Open Phil perspective: I want to apologize to directly affected grantees (who've already been notified) for the negative surprise here, and for our part in not better anticipating it. While this represents a real update, we remain deeply aligned with Good Ventures (they're expecting to continue to increase giving via OP over time), and grateful for how many of the diverse funding opportunities we've recommended that they've been willing to tackle. An example of a new potential focus area that OP staff had been interested in exploring that Good Ventures is not planning to fund is research on the potential moral patienthood of digital minds. If any readers are interested in funding opportunities in that space, please reach out. Good Ventures has told us they don't plan to exit any overall focus areas in the near term. But this update is an important reminder that such a high degree of reliance on one funder (especially on the GCR side) represents a structural risk. I think it's important to diversify funding in many of the fields Good Ventures currently funds, and that doing so could make the funding base more stable both directly (by diversifying funding sources) and indirectly (by lowering the time and energy costs to Good Ventures from being such a disproportionately large funder). Another implication of these changes is that going forward, OP will have a higher bar for recommending grants that could draw on limited Good Ventures bandwidth, and so our program staff will face more constraints in terms of what they're able to fund. We always knew we weren't funding every worthy thing out there, but that will be even more true going forward. Accordingly, we expect marginal opportunities for other funders to look stronger going forward. Historically, OP has been focused on finding enough outstanding giving opportunities to hit Good Ventures' spending targets, with a long-term vision that once we had hit those targets, we'd expand our work to support other donors seeking to maximize their impact. We'd already gotten a lot closer to GV's spending targets over the last couple of years, but this update has accelerated our timeline for investing more in partnerships and advising other philanthropists. If you're interested, please consider applying or referring candidates to lead our new partnerships function. And if you happen to be a philanthropist looking for advice on how to invest >$1M/year in new cause areas, please get in touch. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Examiner Report, published by Molly on May 28, 2024 on The Effective Altruism Forum. Open Phil's in-house legal team continues to keep an eye on the developments in the FTX bankruptcy case. Some resources we've put together for charities and other altruistic enterprises affected by the case can be found here. Last week the FTX Examiner released the Examiner's Report. For context, on March 20 of this year, the FTX bankruptcy court appointed an independent examiner to review a wide swath of the various investigations into FTX and compile findings and make recommendations for additional investigations. One notable finding that may be of interest to this audience is on page 165 of the Report (p. 180 of the pdf linked above). It reads: S&C [1] provided to the Examiner a list of over 1,200 charitable donations made by the FTX Group. S&C initially prioritized recovery of the largest charitable donations before turning to the next-largest group of donations and, finally, working with Landis[2] to recover funds from recipients of smaller donations. S&C concluded that, for recipients of the smallest value donations, any potential recoveries would likely be outweighed by the costs of further action. Without engaging in litigation, the Debtors have collected about $70 million from over 50 non-profits that received FTX Group-funded donations. The Debtors continue to assess possible steps to recover charitable contributions. I am aware of many relatively small-dollar grantees (
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Joining the Carnegie Endowment for International Peace, published by Holden Karnofsky on April 29, 2024 on The Effective Altruism Forum. Effective today, I've left Open Philanthropy and joined the Carnegie Endowment for International Peace[1] as a Visiting Scholar. At Carnegie, I will analyze and write about topics relevant to AI risk reduction. In the short term, I will focus on (a) what AI capabilities might increase the risk of a global catastrophe; (b) how we can catch early warning signs of these capabilities; and (c) what protective measures (for example, strong information security) are important for safely handling such capabilities. This is a continuation of the work I've been doing over the last ~year. I want to be explicit about why I'm leaving Open Philanthropy. It's because my work no longer involves significant involvement in grantmaking, and given that I've overseen grantmaking historically, it's a significant problem for there to be confusion on this point. Philanthropy comes with particular power dynamics that I'd like to move away from, and I also think Open Philanthropy would benefit from less ambiguity about my role in its funding decisions (especially given the fact that I'm married to the President of a major AI company). I'm proud of my role in helping build Open Philanthropy, I love the team and organization, and I'm confident in the leadership it's now under; I think it does the best philanthropy in the world, and will continue to do so after I move on. I will continue to serve on its board of directors (at least for the time being). While I'll miss the Open Philanthropy team, I am excited about joining Carnegie. Tino Cuellar, Carnegie's President, has been an advocate for taking (what I see as) the biggest risks from AI seriously. Carnegie is looking to increase its attention to AI risk, and has a number of other scholars working on it, including Matt Sheehan, who specializes in China's AI ecosystem (an especially crucial topic in my view). Carnegie's leadership has shown enthusiasm for the work I've been doing and plan to continue. I expect that I'll have support and freedom, in addition to an expanded platform and network, in continuing my work there. I'm generally interested in engaging more on AI risk with people outside my existing networks. I think it will be important to build an increasingly big tent over time, and I've tried to work on approaches to risk reduction (such as responsible scaling) that have particularly strong potential to resonate outside of existing AI-risk-focused communities. The Carnegie network is appealing because it's well outside my usual network, while having many people with (a) genuine interest in risks from AI that could rise to the level of international security issues; (b) knowledge of international affairs. I resonate with Carnegie's mission of "helping countries and institutions take on the most difficult global problems and advance peace," and what I've read of its work has generally had a sober, nuanced, peace-oriented style that I like. I'm looking forward to working at Carnegie, despite the bittersweetness of leaving Open Phil. 
To a significant extent, though, the TL;DR of this post is that I am continuing the work I've been doing for over a year: helping to design and advocate for a framework that seeks to get early warning signs of key risks from AI, accompanied by precommitments to have sufficient protections in place by the time they come (or to pause AI development and deployment until these protections get to where they need to be). ^ I will be at the California office and won't be relocating. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Day in the Life: Abhi Kumar, published by Open Philanthropy on April 19, 2024 on The Effective Altruism Forum. Open Philanthropy's "Day in the Life" series showcases the wide-ranging work of our staff, spotlighting individual team members as they navigate a typical workday. We hope these posts provide an inside look into what working at Open Phil is really like. If you're interested in joining our team, we encourage you to check out our open roles. Abhi Kumar is a Program Associate on the Farm Animal Welfare team. He works on investigating the most promising opportunities to reduce the suffering of farm animals, with a focus on the development and commercialization of alternatives to animal products. Previously, he worked on the investment teams at the venture capital funds Lever VC and Ahimsa VC. He has an MMS from the Yale School of Management & HEC Paris, and a BSocSc from Singapore Management University. Fun fact: Abhi has completed six marathons and an Ironman. Day in the Life I work on the Farm Animal Welfare team, also known internally as the "FAW" team. Our mission is to improve the lives of animals that are unlucky enough to be confined in factory farms. We do this by making grants to organizations and individuals whose work we think will most effectively improve living conditions for these animals. My primary responsibility on the team is to make grants in my area of expertise: alternatives to animal products, including plant-based meats and cellular agriculture. Grants are typically focused on accelerating these alternatives through collaboration with governments, companies, and academia. For instance, we recently made a grant to Dansk Vegetarisk Forening to advocate for increased R&D funding for alternative (alt) protein in Denmark. Lately, my mornings have started with calls with colleagues or potential grantees in Asia. I'm currently investigating a few potential grants to advance alt protein in Japan, so I spend my morning talking to experts on Japanese climate policy and reading through Japanese policy documents like the Green Food System Strategy. Japan is a promising country to expand alt protein efforts within because it's an R&D powerhouse that is also showing more interest in alt protein innovation. After my morning calls, I reflect on potential grant recommendations for our leadership and identify what the key questions (or "cruxes") are for me. Then, I note the topic as an agenda item for discussion with my manager Lewis, who supervises the FAW team. In the early afternoon, I have a check-in call with a current grantee. During these calls, we discuss what's been going well and what hasn't, as well as resolve any questions the grantee has. For instance, this grantee says they'd like to better understand our alt protein strategy, so I summarize the outcomes we're looking for with our grantmaking: more government funding, increased industry engagement, and more high-impact academic research. After these calls, I type up my call notes into a ~five-line summary that I'll share with my manager later. After that, I head down to my neighborhood café to focus on three writing tasks: First, I finish a memo on why we should fund a lab researching how to improve animal fat alternatives. My manager left a bunch of questions on my last draft, so I address his questions and re-share it with him for discussion later. 
Second, I write a grant approval email (what Open Phil staff know as the "handoff") to a successful grantee and connect them with our Grants team, who handle all of the legal and logistical challenges involved with actually disbursing money. Without our wonderful Grants team, figuring out how to transfer funds to grantees would be pretty painful - I'm grateful for their expertise! Lastly, I send a rejection email to a potential grantee I've been i...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Day in the Life: Alex Bowles, published by Open Philanthropy on April 19, 2024 on The Effective Altruism Forum. Open Philanthropy's "Day in the Life" series showcases the wide-ranging work of our staff, spotlighting individual team members as they navigate a typical workday. We hope these posts provide an inside look into what working at Open Phil is really like. If you're interested in joining our team, we encourage you to check out our open roles. Alex Bowles is a Senior Program Associate on Open Philanthropy's Science and Global Health R&D team[1], and a member of the Global Health and Wellbeing Cause Prioritization team. His responsibilities include estimating the cost-effectiveness of research and development grants in science and global health, identifying and assessing new strategic areas for the team, and investigating new Open Phil cause areas within global health and wellbeing. Day in the Life I'm part of the ~70% of Open Phil staff who work remotely - apart from OP Togetherness Weeks and when I'm traveling for conferences, I start each day from my desk at home in Ottawa, Canada. I wear two hats at OP: I support strategy on the Science and Global Health R&D team, and I assess new potential Open Phil cause areas as part of the GHW Cause Prioritization team. Some parts of these roles are pretty separate, but many aspects overlap. The GHW Cause Prioritization team produces a lot of internal research into areas that OP might consider making grants in. This morning, I'm reading a medium-depth investigation written by my colleague Rafa about economic growth in low- and middle-income countries before meeting with the cause prio team to discuss our thoughts. After that, I'll join the weekly Science and Global Health R&D team meeting. It's a great opportunity to catch up with my team (which is scattered across three time zones and three countries), hear about new grant opportunities our program officers are investigating, and pick my colleagues' brains for specific technical or scientific opinions. There's a range of expertise on our team, so one person can tell me all I need to know (and more) about how different malaria vaccines work while another helps me think through the nitty-gritty of how particular grants might support health policy changes we care about. One place where my two roles converge is exploring new areas that the Science and Global Health R&D team is considering investigating for grantmaking opportunities. Program officers have independent discretion to make grants in areas they identify, but my exploratory work can help them and the team's leadership better understand which areas are most important and neglected. Today I'm investigating how funding hepatitis C vaccine development might fit within our frameworks. At the moment, I'm studying projections of how much existing treatments might reduce the disease burden in the coming decades, so we can better understand the burden a vaccine available in, say, ten years might address. I often spend part of my day creating an initial "back-of-the-envelope calculation" (BOTEC) to gauge the cost-effectiveness of a grant a program officer is investigating. Today is no different, as I'm currently focused on estimating the cost-effectiveness of a grant to support the development of a new approach to malaria vaccines. 
This includes thinking through the project's likelihood of success, our predictions about the potential vaccine's effectiveness, and the extent to which our funding might speed up the project. To learn more about our team's work, I highly recommend the blog of Jacob Trefethen, who leads our team and manages me. Recent posts cover health technologies that could exist in five years (but probably won't) and a voucher program that incentivizes drug companies to work on neglected tropical diseases. ^ ...
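For readers unfamiliar with the "back-of-the-envelope calculation" (BOTEC) described in the two entries above, here is a generic sketch of what such an estimate can look like. Every number and parameter name below is invented for illustration; this is not Open Phil's actual model or methodology.

```python
# A generic BOTEC for a hypothetical vaccine-development grant.
# All inputs are made up for illustration.

grant_cost          = 5e6      # hypothetical grant size, USD
p_project_succeeds  = 0.15     # chance the research program works at all
p_vaccine_deployed  = 0.40     # chance a working candidate reaches rollout
dalys_averted_peryr = 2e6      # hypothetical annual burden the vaccine would address
speedup_years       = 1.5      # years the grant plausibly accelerates deployment

expected_dalys = (p_project_succeeds * p_vaccine_deployed
                  * dalys_averted_peryr * speedup_years)
print(f"~{expected_dalys:,.0f} expected DALYs averted")
print(f"~${grant_cost / expected_dalys:,.0f} per expected DALY averted")
```

A real estimate would treat each input as uncertain and check how sensitive the bottom line is to it, but even this crude version shows where the cruxes sit: the success probabilities, the burden addressed, and how much the funding actually speeds things up.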
I like Open Phil's worldview diversification. But I don't think their current roster of worldviews does a good job of justifying their current practice. In this post, I'll suggest a reconceptualization that may seem radical in theory but is conservative in practice. Something along these lines strikes me as necessary to justify giving substantial support to paradigmatic Global Health & Development charities in the face of competition from both Longtermist/x-risk and Animal Welfare competitor causes.

Current Orthodoxy: I take it that Open Philanthropy's current "cause buckets" or candidate worldviews are typically conceived of as follows:
neartermist - incl. animal welfare
neartermist - human-only
longtermism / x-risk
We're told that how to weigh these cause areas against each other "hinge[s] on very debatable, uncertain questions." (True enough!) But my impression is that EAs often take the relevant questions to be something like, should we be speciesist? and should we [...]

Outline:
(00:35) Current Orthodoxy
(01:17) The Problem
(02:43) A Proposed Solution
(04:26) Implications

The original text contained 3 footnotes which were omitted from this narration.

First published: March 18th, 2024
Source: https://forum.effectivealtruism.org/posts/dmEwQZSbPsYhFay2G/ea-worldviews-need-rethinking
Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ways in which I'm not living up to my EA values, published by Joris P on March 18, 2024 on The Effective Altruism Forum. This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. When I was pretty new to EA, I was way too optimistic about how Wise and Optimized and Ethical and All-Knowing experienced EAs would be.
I thought Open Phil would have some magic spreadsheets with the answers to all questions in the universe.
I thought that, surely, experienced EAs had for 99% figured out what they thought was the biggest problem in the world.
I imagined all EAs to have optimized almost everything, and to basically endorse all their decisions: their giving practices, their work-life balance, the way they talked about EA to others, etc.
I've now been around the community for a few years. I'm still really grateful for and excited about EA ideas, and I love being around the people inspired by EA ideas (I even work on growing our community!). However, I now also realize that today, I am far from how Wise and Optimized and Ethical and All-Knowing Joris-from-4-years-ago expected future Joris and his peers to be. There are two things that caused me to not live up to those ideals:
I was naive about how Wise and Optimized and Ethical and All-Knowing someone could realistically be.
There are good things I could reasonably do or should have reasonably done in the past 4 years.
To make this concrete, I wanted to share some ways in which I think I'm not living up to my EA values or expectations from a few years ago. I think Joris-from-4-years-ago would've found this list helpful.[1]
I'm still not fully vegan.
Donating: I just default to the community norm of donating 10%, without having thought about it hard.
I haven't engaged for more than 30 minutes with arguments around e.g. patient philanthropy.
I left my GWWC donations to literally the last day of the year and didn't spend more than one hour on deciding where to donate.
I have a lot less certainty over the actual positive impact of the programs we run than I expected when I started this job.
I'm still as bad at math as I was in uni, meaning my botecs are just not that great.
It's so, so much harder than I expected to account for counterfactuals and to find things you can measure that are robustly good.
I still find it really hard to pitch EA.
I hope this inspires some people (especially those who I (and others) might look up to) to share how they're not perfect. What are some ways in which you're not living up to your values, or to what you-from-the-past maybe expected you would be doing by now?
^ I'll leave it up to you whether these fall in category 1 (basically unattainable) or 2 (attainable). I also do not intend to turn this into a discussion of what things EAs "should" do, which things are actually robustly good, etc.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA "Worldviews" Need Rethinking, published by Richard Y Chappell on March 18, 2024 on The Effective Altruism Forum. I like Open Phil's worldview diversification. But I don't think their current roster of worldviews does a good job of justifying their current practice. In this post, I'll suggest a reconceptualization that may seem radical in theory but is conservative in practice. Something along these lines strikes me as necessary to justify giving substantial support to paradigmatic Global Health & Development charities in the face of competition from both Longtermist/x-risk and Animal Welfare competitor causes.

Current Orthodoxy: I take it that Open Philanthropy's current "cause buckets" or candidate worldviews are typically conceived of as follows:
neartermist - incl. animal welfare
neartermist - human-only
longtermism / x-risk
We're told that how to weigh these cause areas against each other "hinge[s] on very debatable, uncertain questions." (True enough!) But my impression is that EAs often take the relevant questions to be something like, should we be speciesist? and should we only care about present beings? Neither of which strikes me as especially uncertain (though I know others disagree).

The Problem: I worry that the "human-only neartermist" bucket lacks adequate philosophical foundations. I think Global Health & Development charities are great and worth supporting (not just for speciesist presentists), so I hope to suggest a firmer grounding. Here's a rough attempt to capture my guiding thought in one paragraph: Insofar as the GHD bucket is really motivated by something like sticking close to common sense, "neartermism" turns out to be the wrong label for this. Neartermism may mandate prioritizing aggregate shrimp over poor people; common sense certainly does not. When the two come apart, we should give more weight to the possibility that (as-yet-unidentified) good principles support the common-sense worldview. So we should be especially cautious of completely dismissing commonsense priorities in a worldview-diversified portfolio (even as we give significant weight and support to a range of theoretically well-supported counterintuitive cause areas). A couple of more concrete intuitions that guide my thinking here: (1) fetal anesthesia as a cause area intuitively belongs with 'animal welfare' rather than 'global health & development', even though fetuses are human. (2) It's a mistake to conceive of global health & development as purely neartermist: the strongest case for it stems from positive, reliable flow-through effects.

A Proposed Solution: I suggest that we instead conceive of (1) Animal Welfare, (2) Global Health & Development, and (3) Longtermist / x-risk causes as respectively justified by the following three "cause buckets":
Pure suffering reduction
Reliable global capacity growth
Speculative moonshots
In terms of the underlying worldview differences, I think the key questions are something like: (i) How confident should we be in our explicit expected value estimates? How strongly should we discount highly speculative endeavors, relative to "commonsense" do-gooding? (ii) How does the total (intrinsic + instrumental) value of improving human lives & capacities compare to the total (intrinsic) value of pure suffering reduction?
[Aside: I think it's much more reasonable to be uncertain about these (largely empirical) questions than about the (largely moral) questions that underpin the orthodox breakdown of EA worldviews.] Hopefully it's clear how these play out: greater confidence in EEV lends itself to supporting moonshots to reduce x-risk or otherwise seek to improve the long-term future in a highly targeted, deliberate way. Less confidence here may support more generic methods of global capacity-building, such as improving health and (were there any ...
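The entry above turns on question (i): how much to trust explicit expected value (EEV) estimates. A toy sketch with invented numbers can show why that question does real work: under a naive EV comparison a speculative moonshot dominates, but once speculative estimates are discounted heavily enough, the reliable option wins. The options, numbers, and discounting scheme below are deliberately crude stand-ins, not a method proposed in the post.

```python
# Toy illustration only: how skepticism about explicit EV estimates can flip
# a prioritization between a speculative and a reliable intervention.

options = {
    # name: (naive expected value per $1M, how speculative the estimate is: 0..1)
    "speculative moonshot":       (100_000, 0.99),
    "reliable capacity-building": ( 20_000, 0.10),
}

def discounted_ev(naive_ev, speculativeness, skepticism):
    """Crude discount: shrink an estimate toward zero in proportion to how
    speculative it is and how skeptical we are of explicit EV estimates."""
    return naive_ev * (1 - skepticism * speculativeness)

for skepticism in (0.0, 0.5, 0.9):
    best = max(options, key=lambda name: discounted_ev(*options[name], skepticism))
    print(f"skepticism {skepticism:.1f} -> prefer: {best}")
# prints the moonshot twice, then "reliable capacity-building" at high skepticism
```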
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm glad I joined an experienced, 20+ person organization, published by michel on March 15, 2024 on The Effective Altruism Forum. This is a Draft Amnesty Week draft. It may not be polished up to my usual standards. I originally started this post for the EA forum's career week last year, but I missed the deadline. I've used Draft Amnesty Week as a nudge to fix up a few bullets and am just sharing what I got. In which: I tentatively conclude I made the right choice by joining CEA instead of doing independent alignment research or starting my own EA community building project. In December and January last year, I spent a lot of time thinking about what my next career move should be. I was debating roughly four choices:
Joining the CEA Events Team
Beginning independent research in AI strategy and governance
Supporting early stage (relatively scrappy) AI safety field-building efforts
Starting an EA community or infrastructure building project[1]
I decided to join the CEA events team, and I'm glad I did. I'm moderately sure this was the right choice in hindsight (maybe 60%), but counterfactuals are hard and who knows, maybe one of the other paths would have proved even better. Here are some benefits from CEA that I think would have been harder for me to get on the other paths.
I get extended contact with - and feedback from - very competent people. Example: I helped organize the Meta Coordination Forum and worked closely with Max Dalton and Sophie Thomson as a result. I respect both of them a lot and they both regularly gave me substantive feedback on my idea generation, emails, docs, etc.
I learn a lot of small but, in aggregate, important things that would be more effortful to learn on my own. Examples: How to organize a slack workspace, how to communicate efficiently, when and how to engage with lawyers, how to utilize virtual assistants, how to build a good team culture, how to write a GDoc that people can easily skim, when to leave comments and how to do so quickly, how to use decision making tools like BIRD, how to be realistic about impact evaluations, etc.
I have a support system. Example: I've been dealing with post-concussion symptoms for the past year, and having private healthcare has helped me address those symptoms. Example: Last year I was catastrophizing about a project I was leading on. After telling my manager about how anxious I had been about the project, we met early that week and checked in on the status of all the different work streams and clarified next steps. By the end of the week I felt much better.
I think I have a more realistic model of how organizations, in general, work. I bet this helps me predict other orgs' behavior and engage with them productively. It would probably also help me start my own org. Example: If I want Open Phil to do X, it's become clear to me that I should probably think about who at OP is most directly responsible for X, write up the case for X in an easy to skim way with a lot of reasoning transparency, and then send that doc to the person and express a willingness to meet to talk more about it. And all the while I should be nice and humble, because there's probably a lot of behind the scenes stuff I don't know about.
And the people I want to change the behavior of are probably very busy and have a ton of daily execution work to do that makes it hard for them to zoom out to the level I'm likely asking them to.
Example: I better understand the time/overhead costs of making certain info transparent and doing public comms well, so I have more realistic expectations of other orgs.
Example: If I were to start my own org, I would have a better sense of how to set a vision, how to ship MVPs and test hypotheses, as well as a more intuitive sense of when things are going well vs. poorly.
If I want to later work at a non-EA org, my expe...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Open Philanthropy Grantmaking Program: Forecasting, published by Open Philanthropy on February 20, 2024 on The Effective Altruism Forum. Written by Benjamin Tereick We are happy to announce that we have added forecasting as an official grantmaking focus area. As of January 2024, the forecasting team comprises two full-time employees: myself and Javier Prieto. In August 2023, I joined Open Phil to lead our forecasting grantmaking and internal processes. Prior to that, I worked on forecasts of existential risk and the long-term future at the Global Priorities Institute. Javier recently joined the forecasting team in a full-time capacity from Luke Muehlhauser's AI governance team, which was previously responsible for our forecasting grantmaking. While we are just now launching a dedicated cause area, Open Phil has long endorsed forecasting as an important way of improving the epistemic foundations of our decisions and the decisions of others. We have made several grants to support the forecasting community in the last few years, e.g., to Metaculus, the Forecasting Research Institute, and ARLIS. Moreover, since the launch of Open Phil, grantmakers have often made predictions about core outcomes for grants they approve. Now with increased staff capacity, the forecasting team wants to build on this work. Our main goal is to help realize the promise of forecasting as a way to improve high-stakes decisions, as outlined in our focus area description. We are excited both about projects aiming to increase the adoption rate of forecasting as a tool by relevant decision-makers, and about projects that provide accurate forecasts on questions that could plausibly influence the choices of these decision-makers. We are interested in such work across both of our portfolios: Global Health and Wellbeing and Global Catastrophic Risks. [1] We are as of yet uncertain about the most promising type of project in the forecasting focus area, and we will likely fund a variety of different approaches. We will also continue our commitment to forecasting research and to the general support of the forecasting community, as we consider both to be prerequisites for high-impact forecasting. Supported by other Open Phil researchers, we plan to continue exploring the most plausible theories of change for forecasting. I aim to regularly update the forecasting community on the development of our thinking. Besides grantmaking, the forecasting team is also responsible for Open Phil's internal forecasting processes, and for managing forecasting services for Open Phil staff. This part of our work will be less public, but we will occasionally publish insights from our own processes, like Javier's 2022 report on the accuracy of our internal forecasts. ^ It should be noted that administratively, the forecasting team is part of the Global Catastrophic Risks portfolio, and historically, our forecasting work has had closer links to that part of the organization. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
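Since the entry above mentions tracking the accuracy of internal forecasts, here is a generic illustration of one standard accuracy metric, the Brier score. The probabilities and outcomes are made up, and this is not a claim about how Open Phil actually scores its forecasts.

```python
# Generic example of scoring probabilistic forecasts against binary outcomes.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better; always guessing 0.5 scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical grant-outcome forecasts and what actually happened.
probs    = [0.8, 0.6, 0.3, 0.9, 0.5]
happened = [1,   1,   0,   1,   0]
print(f"Brier score: {brier_score(probs, happened):.3f}")  # 0.110 here
```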
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Medical Roundup #1, published by Zvi on January 17, 2024 on LessWrong. Saving up medical and health related stories from several months allowed for much better organizing of them, so I am happy I split these off. I will still post anything more urgent on a faster basis. There's lots of things here that are fascinating and potentially very important, but I've had to prioritize and focus elsewhere, so I hope others pick up various torches. Vaccination Ho! We have a new malaria vaccine. That's great. WHO thinks this is not an especially urgent opportunity, or any kind of 'emergency' and so wants to wait for months before actually putting shots into arms. So what if we also see reports like 'cuts infant deaths by 13%'? WHO doing WHO things, WHO Delenda Est and all that. What can we do about this? Also, EA and everyone else who works in global health needs to do a complete post-mortem of how this was allowed to take so long, and why they couldn't or didn't do more to speed things along. There are in particular claims that the 2015-2019 delay was due to lack of funding, despite a malaria vaccine being an Open Phil priority. Saloni Dattani, Rachel Glennerster and Siddhartha Haria write about the long road for Works in Progress. They recommend future use of advance market commitments, which seems like a no brainer first step. We also have an FDA approved vaccine for chikungunya. Oh, and also we invented a vaccine for cancer, a huge boost to melanoma treatment. Katalin Kariko and Drew Weissman win the Nobel Prize for mRNA vaccine technology. Rarely are such decisions this easy. Worth remembering that, in addition to denying me admission despite my status as a legacy, the University of Pennsylvania also refused to allow Kariko a tenure track position, calling her 'not of faculty quality,' and laughed at her leaving for BioNTech, especially when they refer to this as 'Penn's historic research team.' Did you also know that Katalin's advisor threatened to have her deported if she switched labs, and attempted to follow through on that threat? I also need to note the deep disappointment in Elon Musk, who even a few months ago was continuing to throw shade on the Covid vaccines. And what do we do more generally about the fact that there are quite a lot of takes that one has reason to be nervous to say out loud, seem likely to be true, and also are endorsed by the majority of the population? When we discovered all the vaccines. Progress continues. We need to go faster. Reflections on what happened with medical start-up Alvea. They proved you could move much faster on vaccine development than anyone would admit, but then found that there was insufficient commercial or philanthropic demand for doing so to make it worth everyone's time, so they wound down. As an individual and as a civilization, you get what you pay for. Potential Progress Researchers discover what they call an on/off switch for breast cancer. Not clear yet how to use this to help patients. London hospital uses competent execution on basic 1950s operations management, increases surgical efficiency by a factor of about five. Teams similar to a Formula 1 pit crew cut sterilization times from 40 minutes to 2. One room does anesthesia on the next patient while the other operates on the current one. 
There seems to be no reason this could not be implemented everywhere, other than lack of will? Dementia rates down 13% over the past 25 years, for unclear reasons. Sarah Constantin explores possibilities for cognitive enhancement. We have not yet tried many of the things one would try. We found a way to suppress specific immune reactions, rather than having to suppress immune reactions in general, opening up the way to potentially fully curing a whole host of autoimmune disorders. Yes, in mice, of course it's in mice, so don't ge...
Rebroadcast: this episode was originally released in January 2021.

You wake up in a mysterious box, and hear the booming voice of God: “I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it. If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box. To get into heaven, you have to answer this correctly: Which way did the coin land?”

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you're in the big world — if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box. ‘3’. Huh. Now you don't know what to believe. If God made 10 billion boxes, surely it's much more likely that you would have seen a number like 7,346,678,928?

In today's interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as ‘anthropic reasoning’ could be relevant for figuring out where we should direct our charitable giving.

Links to learn more, summary, and full transcript.

Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by ‘longtermism’ — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future. Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that's both very large relative to what's possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time.

But imagine that humanity has two possible futures ahead of it: either we're going to have a huge future like that, in which trillions of people ultimately exist, or we're going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live. If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed.

If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called ‘doomsday argument’ alone. If that's true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we're incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead.

There are many critics of this theoretical ‘doomsday argument’, and it may be the case that it logically doesn't work.
This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants.

In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.

They also discuss:
Which worldviews Open Phil finds most plausible, and how it balances them
Which worldviews Ajeya doesn't embrace but almost does
How hard it is to get to other solar systems
The famous ‘simulation argument'
When transformative AI might actually arrive
The biggest challenges involved in working on big research reports
What it's like working at Open Phil
And much more

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
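For readers who want to see the arithmetic behind the box thought experiment, here is a minimal sketch in Python. It assumes the "more observers makes a hypothesis more likely" style of updating described in the episode summary (roughly the Self-Indication Assumption); rival anthropic views would give different answers, which is part of why the doomsday argument is contested.

```python
from fractions import Fraction

# God's coin: heads -> 10 boxes, tails -> 10 billion boxes, one observer in each.
N_HEADS, N_TAILS = 10, 10_000_000_000
prior = Fraction(1, 2)  # fair coin

# Step 1: "I woke up at all." Weight each hypothesis by its number of observers,
# since tails-worlds contain far more people having an experience like yours.
w_heads = prior * N_HEADS
w_tails = prior * N_TAILS
p_heads_awake = w_heads / (w_heads + w_tails)
print(f"P(heads | I woke up)        = {float(p_heads_awake):.10f}")  # ~1e-9, so bet on tails

# Step 2: "My box is labeled 3." A specific low number is far likelier with 10 boxes
# than with 10 billion, which exactly cancels the first update.
post_heads = (w_heads * Fraction(1, N_HEADS)) / (
    w_heads * Fraction(1, N_HEADS) + w_tails * Fraction(1, N_TAILS)
)
print(f"P(heads | I woke up, box 3) = {float(post_heads):.2f}")      # 0.50, back to a coin flip
```

The two updates cancel exactly, which is why the character in the thought experiment no longer knows what to believe after reading the label on the box.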
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Social science research on animal welfare we'd like to see, published by Martin Gould on January 12, 2024 on The Effective Altruism Forum. Context and objectives This is a list of social science research topics related to animal welfare, developed by researchers on the Open Phil farm animal welfare team. We compiled this list because people often ask us for suggestions on topics that would be valuable to research. The primary audience for this document is students (undergrad, grad, high school) and researchers without significant budgets (since the topics we list here could potentially be answered using primarily desktop research).[1] Additional context: We are not offering to fund research on these topics, and we are not necessarily offering to review or advise research on these topics. In the interest of brevity, we have not provided much context for each topic. But if you are a PhD student or academic, we may be able to provide you with more detail on our motivation and our interpretation of the current literature: please email Martin Gould with your questions. The topics covered in this document are the ones we find most interesting; for other animal advocacy topic lists see here. Note that we do not attempt to cover animal welfare science in these topics, and that the topics are listed in no particular order (i.e. we don't place a higher priority on the topics listed first). In some areas, we are not fully up to date on the existing literature, so some of our questions may have been answered by research already conducted. We think it is generally valuable to use back-of-the-envelope-calculations to explore ideas and findings. If you complete research on these topics, please feel free to share it with us (email below) and with the broader animal advocacy movement (one option is to post here). We're happy to see published findings, working papers, and even detailed notes that you don't intend to formally publish. If you have anything to share or any feedback, please email Martin Gould. This post is also on the Open Phil blog here. Topics Corporate commitments By how many years do animal welfare corporate commitments speed up reforms that might eventually happen anyway due to factors like government policy, individual consumer choices, or broad moral change? How does this differ by the type of reform? (For example, cage-free vs. Better Chicken Commitment?) How does this differ by country or geographical region (For example, the EU vs. Brazil?) What are the production costs associated with specific animal welfare reforms? Here is an example of such an analysis for the European Chicken Commitment. Policy reform What are the jurisdictions most amenable to FAW policy reform over the next 5-10 years? What specific reform(s) are most tractable, and why? To what extent is animal welfare an issue that is politically polarizing (i.e. clearly associated with a particular political affiliation)? Is this a barrier to reform? If so, how might political polarization of animal welfare be reduced? How do corporate campaigns and policy reform interact with and potentially reinforce each other? What conclusions should be drawn about the optimal timing of policy reform campaigns? What would be the cost-effectiveness of a global animal welfare benchmarking project? 
(That is, comparing farm animal welfare by country and by company, as a basis to drive competition, as with similar models in human rights and global development.) Which international institutions (e.g. World Bank, WTO, IMF, World Organisation for Animal Health, UN agencies) have the most influence over animal welfare policy in emerging economies? What are the most promising ways to influence these institutions? Does this vary by geographical region (for example, Asia vs. Latin America)? Alt protein What % of PBMA (plant-ba...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winners in the Forum's Donation Election (2023), published by Lizka on December 24, 2023 on The Effective Altruism Forum. TL;DR: We ran a Donation Election in which 341 Forum users[1] voted on how we should allocate the Donation Election Fund ($34,856[2]). The winners are: Rethink Priorities - $12,847.75 Charity Entrepreneurship: Incubated Charities Fund - $11,351.11 Animal Welfare Fund (EA Funds) - $10,657.07 This post shares more information about the results: Comments from voters about their votes: patterns include referencing organizations' marginal funding posts, updating towards the neglectedness of animal welfare, appreciating strong track records, etc. Voting patterns: most people voted for 2-4 candidates (at least one of which was one of the three winners), usually in multiple cause areas Cause area stats: similar numbers of points went to cross-cause, animal welfare, risk/future-oriented, and global health candidates (ranked in that order) All candidate results, including raw point[3] totals: the Long-Term Future Fund initially placed second by raw point totals Concluding thoughts & other charities You can find some extra information in this spreadsheet. Highlights from the comments: why people voted the way they did We asked voters if they wanted to share a note about why they voted the way they did. 74 people (~20%) wrote a comment. I'm sharing a few excerpts[4] below, and more in a comment on this post (separated for the sake of space) - consider reading the longer version if you have a moment. There were some recurring patterns in different people's notes, some of which appear in these two comments explaining their authors' votes: "[AWF], because I was convinced by the post about how animal welfare dominates in non-longtermist causes, [CE], so that there can be even more excellent ways of making the world a better place by donating, [GWWC], because I wish we had unlimited money to give to all the others" "Realized I'm too partial to [global health] and biased against animal welfare, [so I decided to vote for the] most effective animal organization. Rethink's post was very convincing. CE has the most innovative ideas in GHD and it isn't close. GiveWell is GiveWell." Rethink Priorities's funding request post was mentioned a lot. People also noted specific aspects of RP's work that they appreciate, like the EA Survey, public benefits/publishing research on cause prioritization, moral weights work, and research into particularly neglected animals. There were also shoutouts to the staff: "ALLFED and Rethink Priorities both consist of highly talented and motivated individuals that are working on high-potential, high-impact projects. Both organizations have left a strong impression on me in terms of their approach to reasoning and problem solving. [...] Both organizations have recently posted extremely well-detailed [updates on their financial situation and how additional funding would help]. [...]" CE's Incubated Charities Fund (and Charity Entrepreneurship more broadly) got a lot of appreciation for their good and/or unusual ideas and track record. There were also comments like: "...direct-action global health charities need more funding now, especially in light of reductions in future funding from Open Phil. [And] there's enough potential upside to charity incubation to put a good bit of money there." 
A number of people wrote that they'd updated towards donating to animal welfare as a result of recent discussions (often explicitly because of this post). Many gave a lot of their points to the Animal Welfare Fund, sometimes referencing GWWC's evaluations of the evaluators. Some also said they wanted to vote for animal welfare to correct for what they saw as its relative neglectedness in EA or to emphasize that it has a central place in EA. One example: "I vo...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Infrastructure Fund's Plan to Focus on Principles-First EA, published by Linch on December 6, 2023 on The Effective Altruism Forum. Summary EA Infrastructure Fund (EAIF)[1] has historically had a somewhat scattershot focus within "EA meta." This makes it difficult for us to know what to optimize for or for donors to evaluate our performance. We propose that we switch towards focusing our grantmaking on Principles-First EA.[2] This includes supporting: research that aids prioritization across different cause areas; projects that build communities focused on impartial, scope-sensitive and ambitious altruism; and infrastructure, especially epistemic infrastructure, to support these aims. We hope that the tighter focus area will make it easier for donors and community members to evaluate the EA Infrastructure Fund, and decide for themselves whether EAIF is a good fit to donate to or otherwise support. Our tentative plan is to collect feedback from the community, donors, and other stakeholders until the end of this year. Early 2024 will focus on refining our approach and helping ease transition for grantees. We'll begin piloting our new vision in Q2 2024. Note: The document was originally an internal memo written by Caleb Parikh, which Linch Zhang adapted into an EA Forum post. Below, we outline a tentative plan. We are interested in gathering feedback from community members, particularly donors and EAIF grantees, to see how excited they'd be about the new vision. Introduction and background context I (Caleb)[3] think the EA Infrastructure Fund needs a more coherent and transparent vision than it is currently operating under. EA Funds' EA Infrastructure Fund was started about 7 years ago under CEA. The EA Infrastructure Fund (formerly known as the EA Community Fund or EA Meta Fund) has given out 499 grants worth about 18.9 million dollars since the start of 2020. Throughout its various iterations, the fund has had a large impact on the community and I am proud of a number of the grants we've given out. However, the terminal goal of the fund has been somewhat conceptually confused, which likely led to a focus and allocation of resources that often seemed scattered and inconsistent. For example, EAIF has funded various projects that are associated with meta EA. Sometimes, these are expansive, community-oriented endeavors like local EA groups and podcasts on effective altruism topics. However, we've also funded more specialized projects for EA-adjacent communities. The projects include rationality meetups, fundraisers for effective giving in global health, and AI Safety retreats. Furthermore, in recent years, EAIF also functioned as a catch-all grantmaker for EA or EA-adjacent projects that aren't clearly under the purview of other funds. As an example, it has backed early-stage global health and development projects. I think EAIF has historically served a valuable function. However, I currently think it would be better for EAIF to have a narrower focus. As the lead for EA Funds, I have found the bottom line of EAIF quite unclear, which has made it challenging for me to assess its performance and grantmaking quality.
This lack of clarity has also posed challenges for fund managers in evaluating grant proposals, as they frequently face thorny philosophical questions, such as determining the comparative value of a neartermist career versus a longtermist career. Furthermore, the lack of conceptual clarity makes it difficult for donors to assess our effectiveness or how well we match their donation objectives. This problem is exacerbated by us switching back to a more community-funded model, in contrast to our previous reliance on significant institutional donors like Open Phil[4]. I expect most small and medium-sized individual donors to have less time or resources to...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates to Open Phil's career development and transition funding program, published by Bastian Stern on November 29, 2023 on The Effective Altruism Forum. We've recently made a few updates to the program page for our career development and transition funding program (recently renamed, previously the "early-career funding program"), which provides support - in the form of funding for graduate study, unpaid internships, self-study, career transition and exploration periods, and other activities relevant to building career capital - for individuals who want to pursue careers that could help to reduce global catastrophic risks (especially risks from advanced artificial intelligence and global catastrophic biological risks) or otherwise improve the long-term future. The main updates are as follows: We've broadened the program's scope to explicitly include later-career individuals, which is also reflected in the new program name. We've added some language to clarify that we're open to supporting a variety of career development and transition activities, including not just graduate study but also unpaid internships, self-study, career transition and exploration periods, postdocs, obtaining professional certifications, online courses, and other types of one-off career-capital-building activities. Earlier versions of the page stated that the program's primary focus was to provide support for graduate study specifically, which was our original intention when we first launched the program back in 2020. We haven't changed our views about the impact of that type of funding and expect it to continue to account for a large fraction of the grants we make via this program, but we figured we should update the page to clarify that we're in fact open to supporting a wide range of other kinds of proposals as well, which also reflects what we've already been doing in practice. This program now subsumes what was previously called the Open Philanthropy Biosecurity Scholarship; for the time being, candidates who would previously have applied to that program should apply to this program instead. (We may decide to split out the Biosecurity Scholarship again as a separate program at a later point, but for practical purposes, current applicants can ignore this.) Some concrete examples of the kinds of applicants we're open to funding, in no particular order (copied from the program page): A final-year undergraduate student who wants to pursue a master's or a PhD program in machine learning in order to contribute to technical research that helps mitigate risks from advanced artificial intelligence. An individual who wants to do an unpaid internship at a think tank focused on biosecurity, with the aim of pursuing a career dedicated to reducing global catastrophic biological risk. A former senior ML engineer at an AI company who wants to spend six months on self-study and career exploration in order to gain context on and investigate career options in AI risk mitigation. An individual who wants to attend law school or obtain an MPP, with the aim of working in government on policy issues relevant to improving the long-term future. 
A recent physics PhD who wants to spend six months going through a self-guided ML curriculum and working on interpretability projects in order to transition to contributing to technical research that helps mitigate risks from advanced artificial intelligence. A software engineer who wants to spend the next three months self-studying in order to gain relevant certifications for a career in information security, with the longer-term goal of working for an organization focused on reducing global catastrophic risk. An experienced management consultant who wants to spend three months exploring different ways to apply their skill set to reducing global catastrophic risk and apply...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Phil Should Allocate Most Neartermist Funding to Animal Welfare, published by Ariel Simnegar on November 19, 2023 on The Effective Altruism Forum. Thanks to Michael St. Jules for his comments. Key Takeaways The evidence that animal welfare dominates in neartermism is strong. Open Philanthropy (OP) should scale up its animal welfare allocation over several years to approach a majority of OP's neartermist grantmaking. If OP disagrees, they should practice reasoning transparency by clarifying their views: How much weight does OP's theory of welfare place on pleasure and pain, as opposed to nonhedonic goods? Precisely how much more does OP value one unit of a human's welfare than one unit of another animal's welfare, just because the former is a human? How does OP derive this tradeoff? How would OP's views have to change for OP to prioritize animal welfare in neartermism? Summary Rethink Priorities (RP)'s moral weight research endorses the claim that the best animal welfare interventions are orders of magnitude (1000x) more cost-effective than the best neartermist alternatives. Avoiding this conclusion seems very difficult: Rejecting hedonism (the view that only pleasure and pain have moral value) is not enough, because even if pleasure and pain are only 1% of what's important, the conclusion still goes through. Rejecting unitarianism (the view that the moral value of a being's welfare is independent of the being's species) is not enough. Even if just for being human, one accords one unit of human welfare 100x the value of one unit of another animal's welfare, the conclusion still goes through. Skepticism of formal philosophy is not enough, because the argument for animal welfare dominance can be made without invoking formal philosophy. By analogy, although formal philosophical arguments can be made for longtermism, they're not required for longtermist cause prioritization. Even if OP accepts RP's conclusion, they may have other reasons why they don't allocate most neartermist funding to animal welfare. Though some of OP's possible reasons may be fair, if anything, they'd seem to imply a relaxation of this essay's conclusion rather than a dismissal. It seems like these reasons would also broadly apply to AI x-risk within longtermism. However, OP didn't seem put off by these reasons when they allocated a majority of longtermist funding to AI x-risk in 2017, 2019, and 2021. I request that OP clarify their views on whether or not animal welfare dominates in neartermism. The Evidence Endorses Prioritizing Animal Welfare in Neartermism GiveWell estimates that its top charity (Against Malaria Foundation) can prevent the loss of one year of life for every $100 or so. We've estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent. If you value chicken life-years equally to human life-years, this implies that corporate campaigns do about 10,000x as much good per dollar as top charities. … If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x). Holden Karnofsky, "Worldview Diversification" (2016) "Worldview Diversification" (2016) describes OP's approach to cause prioritization. 
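To make the arithmetic in the quoted Karnofsky passage easier to follow, here is a back-of-the-envelope sketch; the inputs are the rough figures quoted above (200 hens per dollar, two years of 25%-improved life, ~$100 per human life-year), not precise estimates.

```python
# Inputs quoted above (rough estimates, not precise data)
hens_per_dollar = 200            # hens spared cage confinement per $1 of corporate campaigns
years_per_hen = 2                # years of improved life per hen
improvement = 0.25               # fraction of a life-year's value gained per improved year
cost_per_human_life_year = 100   # GiveWell-style figure for top charities, ~$100

hen_life_years_per_dollar = hens_per_dollar * years_per_hen * improvement   # = 100
cost_per_hen_life_year = 1 / hen_life_years_per_dollar                      # = $0.01

# How the comparison changes as you weight a human life-year more than a hen life-year
for human_weight in (1, 10, 100):
    ratio = cost_per_human_life_year / (cost_per_hen_life_year * human_weight)
    print(f"humans valued {human_weight}x: campaigns ~{ratio:,.0f}x top charities per dollar")
```

Running this reproduces the quoted numbers: roughly 10,000x at equal weights, falling to 1,000x and 100x if humans are valued 10x or 100x more per life-year.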
At the time, OP's research found that if the interests of animals are "at least 1-10% as important" as those of humans, then "animal welfare looks like an extraordinarily outstanding cause, potentially to the point of dominating other options". After the better part of a decade, the latest and most rigorous research funded by OP has endorsed a stronger claim: Any significant moral weight for animals implies that OP should prioritize animal welfare in ne...
Thanks to Michael St. Jules for his comments. Key Takeaways The evidence that animal welfare dominates in neartermism is strong. Open Philanthropy (OP) should scale up its animal welfare allocation over several years to approach a majority of OP's neartermist grantmaking. If OP disagrees, they should practice reasoning transparency by clarifying their views: How much weight does OP's theory of welfare place on pleasure and pain, as opposed to nonhedonic goods? Precisely how much more does OP value one unit of a human's welfare than one unit of another animal's welfare, just because the former is a human? How does OP derive this tradeoff? How would OP's views have to change for OP to prioritize animal welfare in neartermism? Summary Rethink Priorities (RP)'s moral weight research endorses the claim that the best animal welfare interventions are orders of magnitude (1000x) more cost-effective than the best neartermist alternatives. Avoiding this conclusion seems very difficult: Rejecting hedonism (the view that only pleasure and pain have moral [...] ---Outline:(00:09) Key Takeaways(02:46) The Evidence Endorses Prioritizing Animal Welfare in Neartermism(06:32) Objections(06:35) Animal Welfare Does Not Dominate in Neartermism(07:07) RP's Project Assumptions are Incorrect(09:21) Endorsing Overwhelming Non-Hedonism(14:13) Endorsing Overwhelming Hierarchicalism(16:09) It's Strongly Intuitive that Helping Humans > Helping Chickens(16:46) Skepticism of Formal Philosophy(17:58) Even if Animal Welfare Dominates, it Still Shouldn't Receive a Majority of Neartermist Funding(18:12) Worldview Diversification Opposes Majority Allocations to Controversial Cause Areas(19:43) OP is Already a Massive Animal Welfare Funder(20:16) Animal Welfare has Faster Diminishing Marginal Returns than Global Health(21:36) Increasing Animal Welfare Funding would Reduce OP's Influence on Philanthropists(23:42) Request for Reasoning Transparency from OP(26:13) Conclusion--- First published: November 19th, 2023 Source: https://forum.effectivealtruism.org/posts/btTeBHKGkmRyD5sFK/open-phil-should-allocate-most-neartermist-funding-to-animal --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Phil releases RFPs on LLM Benchmarks and Forecasting, published by Lawrence Chan on November 11, 2023 on The AI Alignment Forum. As linked at the top of Ajeya's "do our RFPs accelerate LLM capabilities" post, Open Philanthropy (OP) recently released two requests for proposals (RFPs): An RFP on LLM agent benchmarks: how do we accurately measure the real-world, impactful capabilities of LLM agents? An RFP on forecasting the real-world impacts of LLMs: how can we understand and predict the broader real-world impacts of LLMs? Note that the first RFP is significantly more detailed and has a narrower scope than the second one, and OP recommends you apply for the LLM benchmark RFP if your project may be a fit for both. Brief details for each RFP below, though please read the RFPs for yourself if you plan to apply. Benchmarking LLM agents on consequential real-world tasks Link to RFP: https://www.openphilanthropy.org/rfp-llm-benchmarks We want to fund benchmarks that allow researchers starting from very different places to come to much greater agreement about whether extreme capabilities and risks are plausible in the near-term. If LLM agents score highly on these benchmarks, a skeptical expert should hopefully become much more open to the possibility that they could soon automate large swathes of important professions and/or pose catastrophic risks. And conversely, if they score poorly, an expert who is highly concerned about imminent catastrophic risk should hopefully reduce their level of concern for the time being. In particular, they're looking for benchmarks with the following three desiderata: Construct validity: the benchmark accurately captures a potential real-world, impactful capability of LLM agents. Consequential tasks: the benchmark features tasks that will have massive economic impact or can pose massive risks. Continuous scale: scores on the benchmark improve relatively smoothly as LLM agents improve (that is, they don't go from ~0% performance to >90% like many existing LLM benchmarks have). Also, OP will do a virtual Q&A session for this RFP: We will also be hosting a 90-minute webinar to answer questions about this RFP on Wednesday, November 29 at 10 AM Pacific / 1 PM Eastern (link to come). Studying and forecasting the real-world impacts of systems built from LLMs Link to RFP: https://www.openphilanthropy.org/rfp-llm-impacts/ This RFP is significantly less detailed, and primarily consists of a list of projects that OP may be willing to fund: To this end, in addition to our request for proposals to create benchmarks for LLM agents, we are also seeking proposals for a wide variety of research projects which might shed light on what real-world impacts LLM systems could have over the next few years. Here's the full list of projects they think could make a strong proposal: Conducting randomized controlled trials to measure the extent to which access to LLM products can increase human productivity on real-world tasks. For example: Polling members of the public about whether and how much they use LLM products, what tasks they use them for, and how useful they find them to be. In-depth interviews with people working on deploying LLM agents in the real world. Collecting "in the wild" case studies of LLM use, for example by scraping Reddit (e.g.
r/chatGPT), asking people to submit case studies to a dedicated database, or even partnering with a company to systematically collect examples from consenting customers. Estimating and collecting key numbers into one convenient place to support analysis. Creating interactive experiences that allow people to directly make and test their guesses about what LLMs can do. Eliciting expert forecasts about what LLM systems are likely to be able to do in the near future and what risks they might pose. Synthesizing, summarizing, and ...
I'm a grantmaker who previously spent a decade as a professional investor. I've recently helped some Open Phil, GiveWell, and Survival and Flourishing Fund grantees with their cash and foreign exchange (FX) management. In the EA community, we seem collectively quite bad at this. My aim with this post is to help others 80/20 their cash and FX management: for 20% of the effort (these 4 items below), we can capture 80% of the benefit of corporate best practices. This will often be a highly impactful use of your time: I think that for the median organization, implementing these suggestions will take 15 to 30 hours of staff time, but will be about as valuable as raising 5% more money. My suggestions are: have more than one bank; invest your cash in a government-guaranteed money market account, earning ~5%; hold international currencies in the same proportions as your spending in those currencies; watch out for [...] ---Outline:(01:09) Have more than one bank(03:04) Invest your cash in a government guaranteed money market account, earning ~5%(06:59) Hold FX in the same proportions as your spending(08:58) Watch out for FX fees--- First published: October 14th, 2023 Source: https://forum.effectivealtruism.org/posts/akSr4YXHKZcK8EnQe/cash-and-fx-management-for-ea-organizations --- Narrated by TYPE III AUDIO.
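As a rough illustration of the second and third suggestions, here is a toy sketch; every figure in it (budget, runway, yield, currency mix) is a made-up placeholder rather than a recommendation.

```python
# Toy sketch: estimate interest forgone on idle reserves, and size currency
# buckets in proportion to projected spending. Placeholder numbers throughout.
annual_budget_usd = 2_000_000
months_of_reserves = 9
reserves = annual_budget_usd * months_of_reserves / 12        # $1.5M held as a buffer

money_market_yield = 0.05   # ~5% on a government-backed money market fund (rate varies)
checking_yield = 0.00
forgone = reserves * (money_market_yield - checking_yield)
print(f"Idle reserves of ${reserves:,.0f} forgo ~${forgone:,.0f}/year in interest")

# Hold each currency roughly in proportion to spending in it, so FX moves
# shift your costs and your holdings together rather than against each other.
spend_share_by_currency = {"USD": 0.60, "GBP": 0.25, "EUR": 0.15}   # hypothetical mix
for ccy, share in spend_share_by_currency.items():
    print(f"{ccy}: hold ~${reserves * share:,.0f} equivalent")
```

On the placeholder numbers, the forgone interest alone is in the tens of thousands of dollars a year, which is why the post compares the exercise to raising several percent more money.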
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy is hiring for multiple roles across our Global Catastrophic Risks teams, published by Open Philanthropy on September 30, 2023 on The Effective Altruism Forum. It's been another busy year at Open Philanthropy; after nearly doubling the size of our team in 2022, we've added over 30 new team members so far in 2023. Now we're launching a number of open applications for roles in all of our Global Catastrophic Risks (GCR) cause area teams (AI Governance and Policy, Technical AI Safety, Biosecurity & Pandemic Preparedness, GCR Cause Prioritization, and GCR Capacity Building). The application, job descriptions, and general team information are available here. Notably, you can apply to as many of these positions as you'd like with a single application form! We're hiring because our GCR teams feel pinched and really need more capacity. Program Officers in GCR areas think that growing their teams will lead them to make significantly more grants at or above our current bar. We've had to turn down potentially promising opportunities because we didn't have enough time to investigate them; on the flip side, we're likely currently allocating tens of millions of dollars suboptimally in ways that more hours could reveal and correct. On the research side, we've had to triage important projects that underpin our grantmaking and inform others' work, such as work on the value of Open Phil's last dollar and deep dives into various technical alignment agendas. And on the operational side, maintaining flexibility in grantmaking at our scale requires significant creative logistical work. Both last year's reduction in capital available for GCR projects (in the near term) and the uptick in opportunities following the global boom of interest in AI risk make our grantmaking look relatively more important; compared to last year, we're now looking at more opportunities in a space with less total funding. GCR roles we're now hiring for include: Program associates to make grants in technical AI governance mechanisms, US AI policy advocacy, general AI governance, technical AI safety, biosecurity & pandemic preparedness, EA community building, AI safety field building, and EA university groups. Researchers to identify and evaluate new areas for GCR grantmaking, conduct research on catastrophic risks beyond our current grantmaking areas, and oversee a range of research efforts in biosecurity. We're also interested in researchers to analyze issues in technical AI safety and (separately) the natural sciences. Operations roles embedded within our GCR grantmaking teams: the Biosecurity & Pandemic Preparedness team is looking for an infosec specialist, an ops generalist, and an executive assistant (who may also support some other teams); the GCR Capacity Building team is looking for an ops generalist. Most of these hires have multiple possible seniority levels; whether you're just starting in your field or have advanced expertise, we encourage you to apply. If you know someone who would be great for one of these roles, please refer them to us. We welcome external referrals and have found them extremely helpful in the past. We also offer a $5,000 referral bonus; more information here. How we're approaching these hires You only need to apply once to opt into consideration for as many of these roles as you're interested in. 
A checkbox on the application form will ask which roles you'd like to be considered for. We've also made efforts to streamline work tests and use the same tests for multiple roles where possible; however, some roles do use different work tests, so it's possible you'll still have to take different work tests for different roles, especially if you're interested in roles across a wide array of skillsets (e.g., both research and operations). You may also have interviews with mu...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Princeton course on longtermism, published by Calvin Baker on September 2, 2023 on The Effective Altruism Forum. This semester (Fall 2023), Prof Adam Elga and I will be co-instructing Longtermism, Existential Risk, and the Future of Humanity, an upper div undergraduate philosophy seminar at Princeton. (Yes, I did shamelessly steal half of our title from The Precipice.) We are grateful for support from an Open Phil course development grant and share the reading list here for all who may be interested. Part 1: Setting the stage Week 1: Introduction to longtermism and existential risk Core Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury. Read introduction, chapter 1, and chapter 2 (pp. 49-56 optional); chapters 4-5 optional but highly recommended. Optional Roser (2022) "The Future is Vast: Longtermism's perspective on humanity's past, present, and future" Our World in Data Karnofsky (2021) 'This can't go on' Cold Takes (blog) Kurzgesagt (2022) "The Last Human - A Glimpse into the Far Future" Week 2: Introduction to decision theory Core Weisberg, J. (2021). Odds & Ends. Read chapters 8, 11, and 14. Ord, T., Hillerbrand, R., & Sandberg, A. (2010). "Probing the improbable: Methodological challenges for risks with low probabilities and high stakes." Journal of Risk Research, 13(2), 191-205. Read sections 1-2. Optional Weisberg, J. (2021). Odds & Ends chapters 5-7 (these may be helpful background for understanding chapter 8, if you don't have much background in probability). Titelbaum, M. G. (2020) Fundamentals of Bayesian Epistemology chapters 3-4 Week 3: Introduction to population ethics Core Parfit, Derek. 1984. Reasons and Persons. Oxford: Oxford University Press. Read sections 4.16.120-23, 125, and 127 (pp. 355-64; 366-71, and 377-79). Parfit, Derek. 1986. "Overpopulation and the Quality of Life." In Applied Ethics, ed. P. Singer, 145-164. Oxford: Oxford University Press. Read sections 1-3. Optional Remainders of Part IV of Reasons and Persons and "Overpopulation and the Quality of Life" Greaves (2017) "Population Axiology" Philosophy Compass McMahan (2022) "Creating People and Saving People" section 1, first page of section 4, and section 8 Temkin (2012) Rethinking the Good 12.2 pp. 416-17 and section 12.3 (esp. pp. 422-27) Harman (2004) "Can We Harm and Benefit in Creating?" Roberts (2019) "The Nonidentity Problem" SEP Frick (2022) "Context-Dependent Betterness and the Mere Addition Paradox" Mogensen (2019) "Staking our future: deontic long-termism and the non-identity problem" sections 4-5 Week 4: Longtermism: for and against Core Greaves, Hilary and William MacAskill. 2021. "The Case for Strong Longtermism." Global Priorities Institute Working Paper No.5-2021. Read sections 1-6 and 9. Curran, Emma J. 2023. "Longtermism and the Complaints of Future People". Forthcoming in Essays on Longtermism, ed. H. Greaves, J. Barrett, and D. Thorstad. Oxford: OUP. Read section 1. Optional Thorstad (2023) "High risk, low reward: A challenge to the astronomical value of existential risk mitigation." Focus on sections 1-3. Curran, E. J. (2022). "Longtermism, Aggregation, and Catastrophic Risk" (GPI Working Paper 18-2022). Global Priorities Institute. 
Beckstead (2013) "On the Overwhelming Importance of Shaping the Far Future" Chapter 3 "Toby Ord on why the long-term future of humanity matters more than anything else, and what we should do about it" 80,000 Hours podcast Frick (2015) "Contractualism and Social Risk" sections 7-8 Part 2: Philosophical problems Week 5: Fanaticism Core Bostrom, N. (2009). "Pascal's mugging." Analysis, 69 (3): 443-445. Russell, J. S. "On two arguments for fanaticism." Noûs, forthcoming. Read sections 1, 2.1, and 2.2. Temkin, L. S. (2022). "How Expected Utility Theory Can Drive Us Off the Rails." In L. S. ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (Linkpost) Alexander Berger is now the sole CEO of Open Philanthropy, published by Yadav on August 14, 2023 on The Effective Altruism Forum. I haven't seen this anywhere on the Forum, and I've been curious to see what Holden and Open Phil do. Alexander Berger is now the sole CEO of Open Phil, while Holden is the 'Director of AI Strategy' and Emily Oehlsen is the Managing Director of Open Phil. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on cause area focus working group, published by Bastian Stern on August 10, 2023 on The Effective Altruism Forum. Prompted by the FTX collapse, the rapid progress in AI, and increased mainstream acceptance of AI risk concerns, there has recently been a fair amount of discussion among EAs whether it would make sense to rebalance the movement's portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes. In March 2023, Open Philanthropy's Alexander Berger invited Claire Zabel (Open Phil), James Snowden (Open Phil), Max Dalton (CEA), Nicole Ross (CEA), Niel Bowerman (80k), Will MacAskill (GPI), and myself (Open Phil, staffing the group) to join a working group on this and related questions. In the end, the group only ended up having two meetings, in part because it proved more difficult than expected to surface key action-relevant disagreements. Prior to the first session, participants circulated relevant memos and their initial thoughts on the topic. The group also did a small amount of evidence-gathering on how the FTX collapse has impacted the perception of EA among key target audiences. At the end of the process, working group members filled in an anonymous survey where they specified their level of agreement with a list of ideas/hypotheses that were generated during the two sessions. This included many proposals/questions for which this group/its members aren't the relevant decision-makers, e.g. proposals about actions taken/changes made by various organisations. The idea behind discussing these wasn't for this group to make any sort of direct decisions about them, but rather to get a better sense of what people thought about them in the abstract, in the hope that this might sharpen the discussion about the broader question at issue. Some points of significant agreement: Overall, there seems to have been near-consensus that relative to the status quo, it would be desirable for the movement to invest more heavily in cause-area-specific outreach, at least as an experiment, and less (in proportional terms) in outreach that uses EA/EA-related framings. At the same time, several participants also expressed concern about overshooting by scaling back on forms of outreach with a strong track-record and thereby "throwing out the baby with the bathwater", and there seems to have been consensus that a non-trivial fraction of outreach efforts that are framed in EA terms are still worth supporting. Consistently with this, when asked in the final survey to what extent the EA movement should rebalance its portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes, responses generally ranged from 6-8 on a 10-point scale (where 5=stick with the status quo allocation, 0=rebalance 100% to outreach using EA framings, 10=rebalance 100% to outreach framed in terms of constituent causes), with one respondent selecting 3/10. There was consensus that it would be good if CEA replaced one of its (currently) three annual conferences with a conference that's explicitly framed as an x-risk or AI-risk focused conference. This was the most concrete recommendation to come out of this working group.
My sense from the discussion was that this consensus was mainly driven by people agreeing that there would be value of information to be gained from trying this; I perceived more disagreement about how likely it is that this would prove a good permanent change. In response to a corresponding prompt ("… at least one of the EAGs should get replaced by an x-risk or AI-risk focused conference …"), answers ranged from 7-9 (mean 7.9), on a scale where 0=ve...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Does a Marginal Grant at LTFF Look Like? Funding Priorities and Grantmaking Thresholds at the Long-Term Future Fund, published by Linch on August 10, 2023 on The Effective Altruism Forum. The Long-Term Future Fund (LTFF) makes small, targeted grants with the aim of improving the long-term trajectory of humanity. We are currently fundraising to cover our grantmaking budget for the next 6 months. We would like to give donors more insight into how we prioritize different projects, so they have a better sense of how we plan to spend their marginal dollar. Below, we've compiled fictional but representative grants to illustrate what sort of projects we might fund depending on how much we raise for the next 6 months, assuming we receive grant applications at a similar rate and quality to the recent past. Our motivations for presenting this information are a) to provide transparency about how the LTFF works, and b) to move the EA and longtermist donor communities towards a more accurate understanding of what their donations are used for. Sometimes, when people donate to charities (EA or otherwise), they may wrongly assume that their donations go towards funding the average, or even more optimistically, the best work of those charities. However, it is usually more useful to consider the marginal impact for the world that additional dollars would buy. By offering illustrative examples of the sort of projects we might fund at different levels of funding, we hope to give potential donors a better sense of what their donations might buy, depending on how much funding has already been committed. We hope that this post will help improve the quality of thinking and discussions about charities in the EA and longtermist communities. For donors who believe that the current marginal LTFF grants are better than marginal funding of all other organizations, please consider donating! Compared to the last 3 years, we now have both a) unusually high quality and quantity of applications and b) unusually low amount of donations, which means we'll have to raise our bar substantially if we do not receive additional donations. This is an especially good time to donate, as donations are matched 2:1 by Open Philanthropy (OP donates $2 for every $1 you donate). That said, if you instead believe that marginal funding of another organization is (between 1x and 3x, depending on how you view marginal OP money) better than current marginal LTFF grants, then please do not donate to us, and instead donate to them and/or save the money for later. Background on the LTFF We are committed to improving the long-term trajectory of civilization, with a particular focus on reducing global catastrophic risks. We specialize in funding early stage projects rather than established organizations. From March 2022 to March 2023, we received 878 applications and funded 263 as grants, worth ~$9.1M dollars total (average $34.6k/grant). To our knowledge, we have made more small grants in this time period than any other longtermist- or EA- motivated funder. Other funders in this space include Open Philanthropy, Survival and Flourishing Fund, and recently Lightspeed Grants and Manifund. Historically, ~40% of our funding has come from Open Phil. However, we are trying to become more independent of Open Phil. 
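One way to read the 2:1 match and the "between 1x and 3x" range above is as a simple breakeven calculation, sketched below; the key input (how much good Open Phil's matching dollars would have done otherwise) is a judgment call the post leaves to the donor rather than something it specifies.

```python
def breakeven_multiplier(match_ratio: float, op_counterfactual_value: float) -> float:
    """How many times better another org's marginal dollar must be before donating
    there beats giving $1 to the matched fund.

    match_ratio: matched dollars per donated dollar (2.0 for the 2:1 match above).
    op_counterfactual_value: value per dollar (relative to a marginal LTFF grant)
        that Open Phil's matching money would have produced if not used for the match.
    """
    dollars_landing_at_fund = 1 + match_ratio
    value_op_would_have_created_anyway = match_ratio * op_counterfactual_value
    return dollars_landing_at_fund - value_op_would_have_created_anyway

# If OP's marginal dollar would otherwise do nothing, your $1 moves $3 of value: a 3x bar.
print(breakeven_multiplier(2.0, 0.0))   # 3.0
# If OP's marginal dollar is exactly as good as an LTFF grant, only your own $1 counts: a 1x bar.
print(breakeven_multiplier(2.0, 1.0))   # 1.0
```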
As a temporary stopgap measure, Open Phil is matching donations to LTFF 2:1 instead of granting to us directly. 100% of money we fundraise for LTFF qua LTFF goes to grantees; we fundraise separately and privately for operational costs. We try to be very willing to fund weird things that the grantmakers' inside views believe are really impactful for the long-term future. You can read more about our work at our website here, or in our accompanying payout report here. Methodology for this analysis At the LTFF, we assign ea...
Prompted by the FTX collapse, the rapid progress in AI, and increased mainstream acceptance of AI risk concerns, there has recently been a fair amount of discussion among EAs whether it would make sense to rebalance the movement's portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes. In March 2023, Open Philanthropy's Alexander Berger invited Claire Zabel (Open Phil), James Snowden (Open Phil), Max Dalton (CEA), Nicole Ross (CEA), Niel Bowerman (80k), Will MacAskill (GPI), and myself (Open Phil, staffing the group) to join a working group on this and related questions. In the end, the group only ended up having two meetings, in part because it proved more difficult than expected to surface key action-relevant disagreements. Prior to the first session, participants circulated relevant memos and their initial thoughts on the topic. The group also did a small [...] --- First published: August 10th, 2023 Source: https://forum.effectivealtruism.org/posts/3kMQTjtdWqkxGuWxB/update-on-cause-area-focus-working-group --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How can we improve Infohazard Governance in EA Biosecurity?, published by Nadia Montazeri on August 5, 2023 on The Effective Altruism Forum. Or: "Why EA biosecurity epistemics are whack" The effective altruism (EA) biosecurity community focuses on reducing the risks associated with global biological catastrophes (GCBRs). This includes preparing for pandemics, improving global surveillance, and developing technologies to mitigate the risks of engineered pathogens. While the work of this community is important, there are significant challenges to developing good epistemics, or practices for acquiring and evaluating knowledge, in this area. One major challenge is the issue of infohazards. Infohazards are ideas or information that, if widely disseminated, could cause harm. In the context of biosecurity, this could mean that knowledge of specific pathogens or their capabilities could be used to create bioweapons. As a result, members of the EA biosecurity community are often cautious about sharing information, particularly in online forums where it could be easily disseminated. The issue of infohazards is not straightforward. Even senior biosecurity professionals may have different thresholds for what they consider to be an infohazard. This lack of consensus can make it difficult for junior members to learn what is appropriate to share and discuss. Furthermore, it can be challenging for senior members to provide feedback on the appropriateness of specific information without risking further harm if that information is disseminated to a wider audience. At the moment, all EA biosecurity community-building efforts are essentially gate-kept by Open Phil, whose staff are particularly cautious about infohazards, even compared to experts in the field at the Center for Health Security. Open Phil staff time is chronically scarce, making it impossible to copy and critique their heuristics on infohazards, threat models, and big-picture biosecurity strategy from 1:1 conversations. Challenges for cause and intervention prioritisation These challenges can lead to a lack of good epistemics within the EA biosecurity community, as well as a deference culture where junior members defer to senior members without fully understanding the reasoning behind their decisions. This can result in a failure to adequately assess the risks associated with GCBRs and make well-informed decisions. The lack of open discourse on biosecurity risks in the EA community is particularly concerning when compared to the thriving online discourse on AI alignment, another core area of longtermism for the EA movement. While there are legitimate reasons for being cautious about sharing information related to biosecurity, this caution may lead to a lack of knowledge sharing and limited opportunities for junior members of the community to learn from experienced members. In the words of a biosecurity researcher who commented on this draft: "Because of this lack of this discussion, it seems that some junior biosecurity EAs fixate on the "gospel of EA biosecurity interventions" - the small number of ideas seen as approved, good, and safe to think about. These ideas seem to take up most of the mind space for many junior folks thinking about what to do in biosecurity. I've been asked "So, you're working in biosecurity, are you going to do PPE or UVC?" one too many times. 
There are many other interesting defence-dominant interventions, and I get the sense that even some experienced folks are reluctant to explore this landscape." Another example is the difficulty of comparing biorisk and AI risk without engaging in potentially infohazardous concrete threat models. While both are considered core cause areas of longtermism, it is challenging to determine how to prioritise these risks without evaluating the likelihood of a catast...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Funds organisational update: Open Philanthropy matching and distancing, published by calebp on August 2, 2023 on The Effective Altruism Forum. We want to communicate some changes that are happening at EA Funds, particularly on the EA Infrastructure Fund and the Long-Term Future Fund. In summary: EA Funds (particularly the EAIF and LTFF) and Open Philanthropy have historically had overlapping staff, and Open Phil has supported EA Funds, but we (staff at EA Funds and Open Philanthropy) are now trying to increase the separation between EA Funds and Open Philanthropy. In particular: The current chairs of the LTFF and the EAIF, who have also joined as staff members at Open Philanthropy, are planning to step down from their respective chair positions over the next several months. Max Daniel is going to step down as the EAIF's chair on August 2nd, and Asya Bergal is planning to step down as the LTFF's chair in October. To help transition EA Funds away from reliance on Open Philanthropy's financial support, Open Philanthropy is planning to match donations to the EA Infrastructure and Long-Term Future Fund at 2:1 rates, up to $3.5M each, over the next six months. The EAIF and LTFF have substantial funding gaps - we are looking to raise an additional $3.84M for the LTFF and $4.83M for the EAIF over the next six months. By default, I expect the LTFF to have ~$720k and the EAIF to have ~$400k. Our relationship with Open Philanthropy EA Funds started in 2017 and was largely developed during CEA's time at Y Combinator. It spun out of CEA in 2020, though both CEA and EA Funds are part of the Effective Ventures Foundation. Last year, EA Funds moved over $35M towards high-impact projects through the Animal Welfare Fund (AWF), EA Infrastructure Fund (EAIF), Global Health and Development Fund (GHDF), and Long-Term Future Fund (LTFF). Over the last two years, the EAIF and LTFF used some overlapping resources with Open Philanthropy in the following ways: Over the last year, Open Philanthropy has contributed a substantial proportion of EAIF and LTFF budgets and has covered our entire operations budget.[1] They also made a sizable grant in February 2022. (You can see more detail on Open Philanthropy's website.) The chairs of the EAIF and LTFF both joined the Longtermist EA Community Growth team at Open Philanthropy and have worked in positions at EA Funds and Open Philanthropy simultaneously. (Asya Bergal joined the LTFF in June 2020, has been chair since February 2021, and joined Open Philanthropy in April 2021; Max Daniel joined the EAIF in March 2021, has been chair since mid-2021, and joined Open Philanthropy in November 2022.) As a board member of the Effective Ventures Foundation (UK), Claire Zabel, who is also the Senior Program Officer for EA Community Growth (Longtermism) at Open Philanthropy and supervises both Asya and Max, has regularly met with me throughout my tenure at EA Funds to hear updates on EA Funds and offer advice on various topics related to EA Funds (both day-to-day issues and higher-level organisation strategy). That said, I think it is worth noting that: The majority of funding for the LTFF has come from non-Open Philanthropy sources.
Open Philanthropy as an organisation has limited visibility into our activities, though certain Open Philanthropy employees, particularly Max Daniel and Asya Bergal, have a lot of visibility into certain parts of EA Funds. Our grants supporting our operations and LTFF/EAIF grantmaking funds have had minimal restrictions. Since the shutdown of the FTX Future Fund, Open Phil and I have both felt more excited about building a grantmaking organisation that is legibly independent from Open Phil. Earlier this year, Open Phil staff reached out to me proposing some steps to make this happen, and have worked with me closely ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Takeaways from the Metaculus AI Progress Tournament, published by Javier Prieto on July 27, 2023 on The Effective Altruism Forum. In 2019, Open Philanthropy commissioned a set of forecasts on AI progress from Metaculus. The forecasting questions had time horizons between 6 months and >6 years. As of June 2023, 69 of the 111 questions had been resolved unambiguously. In this post, I analyze the accuracy of these forecasts as a function of question (sub)category, crowd size, and forecasting horizon. Unless otherwise indicated, my analyses are about Metaculus' proprietary aggregate forecast ("the Metaculus prediction") evaluated at the time the question closed. Related work Feel free to skip to the next section if you're already familiar with these analyses. This analysis published 2 years ago (July 2021) looked at 64 resolved AI questions and concluded there was weak but ultimately inconclusive evidence of bias towards faster progress. A more recent analysis from March 2023 found that Metaculus had a worse Brier score on (some) AI questions than on average across all questions and presented a few behavioral correlates of accuracy within AI questions, e.g. accuracy was poorer on questions with more updates and when those updates were less informative in a certain technical sense (see post for details). Metaculus responded to the previous post with a more comprehensive analysis that included all resolved AI questions (152 in total, 64 of which were binary and 88 continuous). They show that performance is significantly better than chance for both question types and marginally better than was claimed in the previous analysis (which relied on a smaller sample of questions), though still worse than the average for all questions on the site. The analysis I present below has some overlaps with those three but it fills an important gap by studying whether there's systematic over- or under-optimism in Metaculus's AI progress predictions using data from a fairly recent tournament that had monetary incentives and thus (presumably) should've resulted in more careful forecasts. Key takeaways NB: These results haven't been thoroughly vetted by anyone else. The conclusions I draw represent my views, not Open Phil's. Progress on benchmarks was underestimated, while progress on other proxies (compute, bibliometric indicators, and, to a lesser extent, economic indicators) was overestimated. [more] This is consistent with a picture where AI progresses surprisingly rapidly on well-defined benchmarks but the attention it receives and its "real world" impact fail to keep up with performance on said benchmarks. However, I see a few problems with this picture: It's unclear to me how some of the non-benchmark proxies are relevant to AI progress, e.g. The TOP500 compute benchmark is mostly about supercomputers that (AFAICT) are mostly used to run numerical simulations, not to accelerate AI training and inference. In fact, some of the top performers don't even have GPUs. The number of new preprints in certain ML subfields over short (~6-month) time horizons may be more dependent on conference publication cycles than underlying growth. Most of these forecasts came due before or very soon after the release of ChatGPT and GPT-4 / Bing, a time that felt qualitatively different from where we are today. 
Metaculus narrowly beats chance and performs worse in this tournament than on average across all continuous questions on the site despite the prize money. This could indicate that these questions are inherently harder, or that they drove less or lower-quality engagement. [more] There's no strong evidence that performance was significantly worse on questions with longer horizons (
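For readers who want to see the scoring mechanics behind claims like "narrowly beats chance", here is a minimal, self-contained sketch of the Brier score for resolved binary questions. The forecast/outcome pairs below are invented for illustration; this is not the author's analysis code, just the standard formula the post's accuracy claims rest on.

```python
import statistics

# Minimal sketch (illustrative data, not the tournament's): each record is a
# (forecast_probability, resolved_outcome) pair, where the forecast is the
# aggregate probability at question close and the outcome is 1 if the
# question resolved "yes", else 0.
forecasts = [
    (0.80, 1),
    (0.30, 0),
    (0.65, 1),
    (0.20, 1),  # a miss: low probability on a question that resolved "yes"
]

def brier_score(pairs):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better; a constant 50% forecast scores exactly 0.25."""
    return statistics.mean((p - outcome) ** 2 for p, outcome in pairs)

score = brier_score(forecasts)
chance = 0.25  # Brier score of always forecasting 0.5
print(f"Brier score: {score:.3f} (chance baseline: {chance})")
print("Better than chance" if score < chance else "No better than chance")
```

"Beating chance" in this sense just means landing below the 0.25 baseline, which is why a score only slightly under it can still be described as narrowly better than chance.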
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Correctly Calibrated Trust, published by ChanaMessinger on June 24, 2023 on The Effective Altruism Forum. This post comes from finding out that Asya Bergal was having thoughts about this and was maybe going to write a post, thoughts I was having along similar lines, and a decision to combine energy and use the strategy fortnight as an excuse to get something out the door. A lot of this is written out of notes I took from a call with her, so she gets credit for a lot of the concrete examples and the impetus for writing a post shaped like this. Interested in whether this resonates with people's experience! Short version: [Just read the bold to get a really short version] There's a lot of “social sense of trust” in EA, in my experience. There's a feeling that people, organizations and projects are broadly good and reasonable (often true!) that's based on a combination of general vibes, EA branding and a few other specific signals of approval, as well as an absence of negative signals. I think that it's likely common to overweight those signals of approval and the absence of disapproval. Especially post-FTX, I'd like us to be well calibrated on what the vague intuition we download from the social web is telling us, and place trust wisely. [“Trust” here is a fuzzy and under-defined thing that I'm not going to nail down - I mean here something like a general sense that things are fine and going well] Things like getting funding, being highly upvoted on the forum, being on podcasts, being high status and being EA-branded are fuzzy and often poor proxies for trustworthiness and for relevant people's views on the people, projects and organizations in question. Negative opinions (anywhere from “that person not so great” to “that organization potentially quite sketch, but I don't have any details”) are not necessarily that likely to find their way to any given person for a bunch of reasons, and we don't have great solutions for collecting and acting on character evidence that doesn't come along with specific bad actions. It's easy to overestimate what you would know if there's a bad thing to know. If it's decision relevant or otherwise important to know how much to trust a person or organization, I think it's a mistake to rely heavily on the above indicators, or on the “general feeling” in EA. Instead, get data if you can, and ask relevant people their actual thoughts - you might find them surprisingly out of step with what the vibe would indicate. I'm pretty unsure what we can or should do as a community about this, but I have a few thoughts at the bottom, and having a post about it as something to point to might help. Longer version: I think you'll get plenty out of this if you read the headings and read more under each heading if something piques your curiosity. Part 1: What fuzzy proxies are people using and why would they be systematically overweighted? (I don't know how common these mistakes are, or that they apply to you, the specific reader of the post. I expect them to bite harder if you're newer or less connected, but I also expect that it's easy to be somewhat biased in the same directions even if you have a lot of context. I'm hoping this serves as contextualization for the former and a reminder / nudge for the latter.)
Getting funding from OP and LTFF: Seems easy to expect that if someone got funding from Open Phil or the Long Term Future Fund, that's a reasonable signal about the value of their work or the competence or trustworthiness or other virtues of the person running it. It obviously is Bayesian evidence, but I expect this to be extremely noisy. These organisations engage in hits-based philanthropy - as I understand it, they don't expect most of the grants they make to be especially valuable (but the amount and way this is true varies by funder - Linch describes...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive rates Open-Phil-funded charity NTI “too rich”, published by Sanjay on June 18, 2023 on The Effective Altruism Forum. Exec summary Under SoGive's methodology, charities holding more than 1.5 years' expenditure are typically rated “too rich”, in the absence of a strong reason to judge otherwise. (more) Our level of confidence in the appropriateness of this policy depends on fundamental ethical considerations, and could be “clearly (c.95%) very well justified” or “c.50% to c.90% confident in this policy, depending on the charity” (more) We understand that the Nuclear Threat Initiative (NTI) holds > 4 years of spend (c$85m), as at the most recently published Form 990, well in excess of our warning threshold. (more) We are now around 90% confident that NTI's reserves are well in excess of our warning threshold, indeed >3x annual spend, although there are some caveats. (more) Our conversation with NTI about this provides little reason to believe that we should deviate from our default rating of “too rich”. (more) It is possible that NTI could show us forecasts of their future income and spend that might make us less likely to be concerned about the value of donations to NTI, although this seems unlikely since they have already indicated that they do not wish to share this. (more) We do not typically recommend that donors donate to NTI. However we do think it's valuable for donors to communicate that they are interested in supporting their work, but are avoiding donating to NTI because of their high reserves. (more) Although this post is primarily to help donors decide whether to donate to NTI, readers may find it interesting for understanding SoGive's approach to charities which are too rich, and how this interacts with different ethical systems. We thank NTI for agreeing to discuss this with us knowing that there was a good chance that we might publish something on the back of the discussion. We showed them a draft of this post before publishing; they indicated that they disagree with the premise of the piece, but declined to indicate what specifically they disagreed with. 0. Intent of this post Although this post highlights the fact that NTI has received funding from Open Philanthropy (Open Phil), the aim is not to put Open Philanthropy on the spot or demand any response from them. Rather, we have argued that it is often a good idea for donors to “coattail” (i.e. copy) donations made by Open Phil. For donors doing this, including donors supported by SoGive, we think it's useful to know which Open Phil grantees we might give lower or higher priority to. 1. Background on SoGive's methodology for assessing reserves The SoGive ratings scale has a category called “too rich”. It is used for charities which we deem to have a large enough amount of money that it no longer makes sense for donors to provide them with funds. We set this threshold at 18 months of spend (i.e. if the amount of unrestricted reserves is one and a half times as big as its annual spend then we typically deem the charity “too rich”). To be clear, this allows the charity carte blanche to hold as much money as it likes as long as it indicates that it has a non-binding plan for that money. So, having generously ignored the designated reserves, we then notionally apply the (normally severe) stress of all the income disappearing overnight. 
Our threshold considers the scenario where the charity has such large reserves that it could go for one and a half years without even having to take management actions such as downsizing its activities. In this scenario, we think it is likely better for donors to send their donations elsewhere, and allow the charity to use up its reserves. Originally we considered a different, possibly more lenient policy. We considered that charities should be considered too rich if they...
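To make the threshold mechanics above concrete, here is a minimal sketch of the reserves rule as the post describes it (unrestricted reserves above 1.5 times annual spend, i.e. 18 months, triggers a "too rich" rating). The example figures are made up for illustration and are not NTI's actual accounts.

```python
# Minimal sketch of the reserves rule described above. Assumes "reserves"
# means unrestricted reserves and "spend" means annual expenditure; the
# 1.5x threshold (18 months of spend) comes from the post itself.

TOO_RICH_THRESHOLD = 1.5  # reserves / annual spend

def reserves_rating(unrestricted_reserves: float, annual_spend: float) -> str:
    """Flag a charity as 'too rich' if reserves exceed 1.5 years of spend."""
    ratio = unrestricted_reserves / annual_spend
    if ratio > TOO_RICH_THRESHOLD:
        return f"too rich ({ratio:.1f} years of spend in reserve)"
    return f"within threshold ({ratio:.1f} years of spend in reserve)"

# Example with made-up numbers of roughly the scale mentioned in the post:
print(reserves_rating(unrestricted_reserves=85_000_000, annual_spend=20_000_000))
# prints something like: too rich (4.2 years of spend in reserve)
```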
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Large epistemological concerns I should maybe have about EA a priori, published by Luise on June 7, 2023 on The Effective Altruism Forum. Note: I originally wrote this as a private doc, but I thought maybe it's valuable to publish. I've only minimally edited it. Also, I now think the epistemological concerns listed below aren't super clearly carved and have a lot of overlap. The list was never meant to be a perfect carving, just to motion at the shape of my overall concerns, but even so, I'd write it differently if I was writing it today. Motivation: For some time now, I've wanted nothing more than to finish university and just work on EA projects I love. I'm about to finish my third year of university and could do just that. A likely thing I would work on is alignment field-building, e.g., helping to run the SERI MATS program again. (In this doc, I will use alignment field-building as the representative of all the community building/operations-y projects I'd like to work on, for simplicity.) However, in recent months, I have become more careful about how I form opinions. I am more truthseeking and more epistemically modest (but also more hopeful that I can do more than blind deferral in complex domains). I now no longer endorse the epistemics (used here broadly as “ways of forming beliefs”) that led me to alignment field-building in the first place. For example, I think this in part looked like “chasing cool, weird ideas that feel right to me” and “believing whatever high-status EAs believe”. I am now deeply unsure about many assumptions underpinning the plan to do alignment field-building. I think I need to take some months to re-evaluate these assumptions. In particular, here are the questions I feel I need to re-evaluate: 1. What should my particular takes about particular cause areas (chiefly alignment) and about community building be? My current takes often feel immodest and/or copied from specific high-status people. For example, my takes on which alignment agendas are good are entirely copied from a specific Berkeley bubble. My takes on the size of the “community building multiplier” are largely based on quite immodest personal calculations, disregarding that many “experts” think the multiplier is lower. I don't know what the right amount of immodesty and copying from high-status people is, but I'd like to at least try to get closer. 2. Is the “EA viewpoint” on empirical issues (e.g., on AI risk) correct (because we are so smart)? Up until recently I just assumed (a part of) EA is right about large empirical questions like “How effectively-altruistic is ‘Systemic Change'?”, “How high are x-risks?” and “Is AI an x-risk?”. (“Empirical” as opposed to “moral”.) First, this was maybe a naïve kind of tribalistic support, later because of the “superior epistemics” of EAs. The poster version of this is “Just believe whatever Open Phil says”. Here's my concern: In general, people adopt stories they like on big questions, e.g., the capitalism-is-cancer-and-we-need-to-overhaul-the-system story or the AI-will-change-everything-tech-utopia story. People don't seek out all the cruxy information and form credences to actually get closer to the truth. I used to be fine just backing “a plausible story of how things are”, as I suspect many EAs are. But now I want to back the correct story of how things are.
I'm wondering if the EA/Open Phil worldview is just a plausible story. This story probably contains a lot of truthseeking and truth on lower-level questions, such as “How effective is deworming?”. But on high-level questions such as “How big a deal is AGI?”, maybe it is close to impossible not to just believe in a story and instead do the hard truthseeking thing. Maybe that would be holding EA/Open Phil to an impossible standard. I simply don't know currently if EA/Open Phil ep...
This is partly based on my experiences working as a Program Officer leading Open Phil's Longtermist EA Community Growth team, but it's a hypothesis I have about how some longtermists could have more of an impact by their lights, not an official Open Phil position. Context: I originally wrote this in July 2022 as a memo for folks attending a retreat I was going to. I find that I refer to it pretty frequently and it seems relevant to ongoing discussions about how much meta effort done by EAs should focus on engaging more EAs vs. other non-EA people. I am publishing it with light-ish editing, and some parts are outdated, though for the most part I more strongly hold most of the conclusions than I did when I originally wrote it. Tl;dr: I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction (“we” or “us” in this post, though I know it won't apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the ‘most important century' hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach. A lot of EAs who prioritize existential risk reduction are making increasingly awkward and convoluted rhetorical maneuvers to use “EAs” or “longtermists” as the main label for people we see as aligned with our goals and priorities. I suspect this is suboptimal and, in the long term, infeasible. In particular, I'm concerned that this is a reason we're failing to attract and effectively welcome some people who could add a lot of value. The strongest counterargument I can think of right now is that I know of relatively few people who are doing full-time work on existential risk reduction on AI and biosecurity who have been drawn in by just the “existential risk reduction” frame [this seemed more true in 2022 than 2023]. This is in the vein of Neel Nanda's "Simplify EA Pitches to "Holy Shit, X-Risk"" and Scott Alexander's “Long-termism vs. Existential Risk”, but I want to focus more on the hope of attracting people to do priority work even if their motivations are neither longtermist nor neartermist EA, but instead mostly driven by reasons unrelated to EA. EA and longtermism: not a crux for doing the most important work. Right now, my priority in my professional life is helping humanity navigate the imminent creation of potential transformative technologies, to try to make the future better for sentient beings than it would otherwise be. I think that's likely the most important thing anyone can do these days. And I don't think EA or longtermism is a crux for this prioritization anymore. A lot of us (EAs who currently prioritize x-risk reduction) were “EA-first” — we came to these goals first via broader EA principles and traits, like caring deeply about others; liking rigorous research, scope sensitivity, and expected value-based reasoning; and wanting to meet others with similar traits. Next, we were exposed to a cluster of philosophical and empirical arguments about [...] --- First published: June 2nd, 2023 Source: https://forum.effectivealtruism.org/posts/cP7gkDFxgJqHDGdfJ/ea-and-longtermism-not-a-crux-for-saving-the-world --- Narrated by TYPE III AUDIO. Share feedback on this narration.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA and Longtermism: not a crux for saving the world, published by ClaireZabel on June 3, 2023 on The Effective Altruism Forum. This is partly based on my experiences working as a Program Officer leading Open Phil's Longtermist EA Community Growth team, but it's a hypothesis I have about how some longtermists could have more of an impact by their lights, not an official Open Phil position. Context: I originally wrote this in July 2022 as a memo for folks attending a retreat I was going to. I find that I refer to it pretty frequently and it seems relevant to ongoing discussions about how much meta effort done by EAs should focus on engaging more EAs vs. other non-EA people. I am publishing it with light-ish editing, and some parts are outdated, though for the most part I more strongly hold most of the conclusions than I did when I originally wrote it. Tl;dr: I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction (“we” or “us” in this post, though I know it won't apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the ‘most important century' hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach. A lot of EAs who prioritize existential risk reduction are making increasingly awkward and convoluted rhetorical maneuvers to use “EAs” or “longtermists” as the main label for people we see as aligned with our goals and priorities. I suspect this is suboptimal and, in the long term, infeasible. In particular, I'm concerned that this is a reason we're failing to attract and effectively welcome some people who could add a lot of value. The strongest counterargument I can think of right now is that I know of relatively few people who are doing full-time work on existential risk reduction on AI and biosecurity who have been drawn in by just the “existential risk reduction” frame [this seemed more true in 2022 than 2023]. This is in the vein of Neel Nanda's "Simplify EA Pitches to "Holy Shit, X-Risk"" and Scott Alexander's “Long-termism vs. Existential Risk”, but I want to focus more on the hope of attracting people to do priority work even if their motivations are neither longtermist nor neartermist EA, but instead mostly driven by reasons unrelated to EA. EA and longtermism: not a crux for doing the most important work Right now, my priority in my professional life is helping humanity navigate the imminent creation of potential transformative technologies, to try to make the future better for sentient beings than it would otherwise be. I think that's likely the most important thing anyone can do these days. And I don't think EA or longtermism is a crux for this prioritization anymore. A lot of us (EAs who currently prioritize x-risk reduction) were “EA-first” — we came to these goals first via broader EA principles and traits, like caring deeply about others; liking rigorous research, scope sensitivity, and expected value-based reasoning; and wanting to meet others with similar traits. Next, we were exposed to a cluster of philosophical and empirical arguments about the importance of the far future and potential technologies and other changes that could influence it. 
Some of us were “longtermists-second”; we prioritized making the far future as good as possible regardless of whether we thought we were in an exceptional position to do this, and that existential risk reduction would be one of the core activities for doing it. For most of the last decade, I think that most of us have emphasized EA ideas when trying to discuss X-risk with people outside our circles. And locally, this worked pretty well; some people (a whole bunch, actually) found these ideas compelling and ended up prioritizing similarly. I think t...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should Rational Animations invite viewers to read content on LessWrong?, published by Writer on May 27, 2023 on LessWrong. When I introduced Rational Animations, I wrote: What I won't do is aggressively advertise LessWrong and the EA Forum. If the channel succeeds, I will organize fundraisers for EA charities. If I adapt an article for YT, I will link it in the description or just credit the author. If I use quotes from an author on LW or the EA Forum, I will probably credit them on-screen. But I will never say: "Come on LW! Plenty of cool people there!" especially if the channel becomes big. Otherwise, "plenty of cool people" becomes Reddit pretty fast. If the channel becomes big, I will also refrain from posting direct links to LW and the EA Forum. Remind me if I ever forget. And let me know if these rules are not conservative enough. In my most recent communications with Open Phil, we discussed the fact that a YouTube video aimed at educating on a particular topic would be more effective if viewers had an easy way to fall into an "intellectual rabbit hole" to learn more. So a potential way to increase Rational Animations' impact is to increase the number of calls to action to read the sources of our videos. LessWrong and the EA Forum are good candidates for people to go learn more. Suppose we adapt an article from the sequences. One option is to link to readthesequences.com, another is to link to the version of the article hosted on LessWrong. It seems to me that linking to the version hosted on LessWrong would dramatically increase the chance of people falling into an "intellectual rabbit hole" and learning a lot more and eventually starting to contribute. Here are what I think are the main cons and pros of explicitly inviting people to read stuff on LessWrong: Cons: 1. If lots of people join at once we may get a brief period of lower-quality posts and comments until moderation catches up. 2. If we can't onboard people fast enough, the quality of what's on the site will become a lot lower over time, and people producing excellent content will be driven away. Pro: If people read LessWrong, they will engage with important topics such as reducing existential risk from AI. Eventually, some of them might be able to contribute in important ways, such as by doing alignment research. I'm more optimistic now about trying to invite people than I was in 2021, mainly thanks to recent adjustments in moderation policy and solutions such as the Rejected Content Section, which, in my understanding, are meant at least in part for dealing with the large influx of new users resulting from the increased publicity around AGI x-risk. I think it likely (~ 70%) that such moderation norms are strong enough filters that they would select for good contributions and for users that can contribute in positive ways. Among the cons, I think number 2 is worse and more likely to happen without a strong and deliberate moderation effort. That said, it's probably relatively easy and safe to run experiments. I don't think many people would make a new account as a result of an isolated call to action to read an article hosted on LessWrong. I have high confidence that the number of new accounts would be between 100 and 1000 per million views, given the three conditions in this market.
You should treat the market as providing a per-video upper bound, since the conditions describe some very aggressive publicity. Market conditions: In a video, Rational Animations invites viewers to read a LessWrong article or sequence. It's an explicit invitation made in the narration of the video rather than, e.g., only as in-video text. The article/sequence is linked in three places: at the end of the video (e.g., in place of or near the Patreon link), at the top of the video description, and in the pinned comment. The video accrues...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Supreme Court Upholds Prop 12!, published by Rockwell on May 11, 2023 on The Effective Altruism Forum. The United States Supreme Court just released its decision on the country's most pivotal farmed animal welfare case—NATIONAL PORK PRODUCERS COUNCIL ET AL. v. ROSS, SECRETARY OF THE CALIFORNIA DEPARTMENT OF FOOD AND AGRICULTURE, ET AL. —upholding California's Prop 12, the strongest piece of farmed animal legislation in the US. In 2018, California residents voted by ballot measure to ban the sale of pig products that come from producers that use gestation crates, individual crates the size of an adult pig's body that mother pigs are confined to 24/7 for the full gestation of their pregnancies, unable to turn around. In response, the pork industry sued and the case made its way to the nation's highest court. If the Supreme Court had not upheld Prop 12, years of advocacy efforts would have been nullified and advocates would no longer be able to pursue state-level legislative interventions that improve welfare by banning the sale of particularly cruelly produced animal products. It would have been a tremendous setback for the US animal welfare movement. Instead, today is a huge victory. Groups like HSUS spearheaded efforts to uphold Prop 12, even in the face of massive opposition. The case exemplified the extent to which even left-leaning politicians side with animal industry over animal welfare, as even the Biden administration sided with the pork industry. Today is a monumental moment for farmed animal advocacy. Congratulations to everyone who worked to make this happen! Read more about it: Summary and analysis from Lewis Bollard (Senior Program Officer for Farm Animal Welfare at Open Phil) here on Twitter. Victory announcement by the Humane Society of the United States here. New York Times coverage here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's surprised me as an entry-level generalist at Open Phil & my recommendations to early career professionals, published by Sam Anschell on March 30, 2023 on The Effective Altruism Forum. Disclaimer The opinions expressed in this post are my own and do not represent Open Philanthropy. Valentine's day 2022 was my first day of work at Open Phil. As a 24-year-old who had spent two post-grad years as a poker dealer/cardroom union representative, I had little in the way of white collar context or transferable skills. Recently, a few undergraduates and early career professionals have reached out to learn what the job is like & how they can get further involved in EA. In this post I'll try to provide the advice that I would have benefited from hearing a couple years ago. I'm hoping to widen the aperture of possibilities to early career professionals who are excited to use their time and talents to do good. I know how difficult it can be to land an EA job - it took years of on-and-off applying before I got an offer. It's normal to face a string of rejections and it's valid to feel frustrated by that, but I think the benefits to individuals and organizations when a hire is made are so great that continuing to apply is worth it. I encourage anyone who is struggling to get their foot in the door to read Aaron's Epistemic Stories - I found it really motivating. TLDR: Before starting this job, I underestimated the $ value of person-hours at EA orgs. I may have done this because: There's a disconnect between salary and social value generated (even though salaries at EA orgs are generous). Most for-profit companies value their average staff member's contributions at about 2x their salary, and I suspect EA orgs value their average staff member's contributions at more like 8x+ their salary. It could be uncomfortable to think that time at an EA org would be very valuable, both because of what it would imply for labor/leisure tradeoffs and because it could lead to imposter syndrome. It can be easy to mentally compartmentalize work at EA orgs as creating a similar level of social impact to work at nonprofits in general, despite believing that EA interventions are much more cost-effective than the average nonprofit's interventions. Due to this underestimate, I now think I should have focused on working directly on EA projects and spending more time applying for EA jobs earlier. Here are some of my recommendations to early career professionals: Don't feel like you have to put multiple years into a job before leaving to show you're not a job-hopper. EA orgs understand the desire to contribute to work you find meaningful as soon as you can! I suspect people apply to too few jobs given how unpleasant it can be to job hunt, and I strongly encourage you to keep putting yourself out there. I applied to a few hundred jobs before landing this one, as did many of my friends who work at EA orgs. Not getting any jobs despite many applications isn't a sign that you're a bad applicant. Doing even unsexy work for an organization that you're strongly mission-aligned with is more motivating than you might expect. I write about impactful ways that anyone can spend time at the end of this post. 
A ~Year in the Life: What I did at work. It's hard to look at a job description and get a sense of what the day-to-day looks like (and see whether one might be qualified for the job). Success in my and many other entry-level jobs seems to be a product of enthusiasm, dependability (which I'd define as the independence/organization skills to manage a task so that the person who assigns it doesn't need to follow up), and good judgment (when to check in, what tone to use in emails, etc.). In my day-to-day as a business operations generalist (assistant level), I've: Helped manage the physical office space: Purchased, ren...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on March 10, 2023 on The Effective Altruism Forum. We are pleased to announce the 2023 Open Philanthropy AI Worldviews Contest. The goal of the contest is to surface novel considerations that could influence our views on AI timelines and AI risk. We plan to distribute $225,000 in prize money across six winning entries. This is the same contest we preannounced late last year, which is itself the spiritual successor to the now-defunct Future Fund competition. Part of our hope is that our (much smaller) prizes might encourage people who already started work for the Future Fund competition to share it publicly. The contest deadline is May 31, 2023. All work posted for the first time on or after September 23, 2022 is eligible. Use this form to submit your entry. Prize Conditions and Amounts Essays should address one of these two questions: Question 1: What is the probability that AGI is developed by January 1, 2043? Question 2: Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system? Essays should be clearly targeted at one of the questions, not both. Winning essays will be determined by the extent to which they substantively inform the thinking of a panel of Open Phil employees. There are several ways an essay could substantively inform the thinking of a panelist: An essay could cause a panelist to change their central estimate of the probability of AGI by 2043 or the probability of existential catastrophe conditional on AGI by 2070. An essay could cause a panelist to change the shape of their probability distribution for AGI by 2043 or existential catastrophe conditional on AGI by 2070, which could have strategic implications even if it doesn't alter the panelist's central estimate. An essay could clarify a concept or identify a crux in a way that made it clearer what further research would be valuable to conduct (even if the essay doesn't change anybody's probability distribution or central estimate). We will keep the composition of the panel anonymous to avoid participants targeting their work too closely to the beliefs of any one person. The panel includes representatives from both our Global Health & Wellbeing team and our Longtermism team. Open Phil's published body of work on AI broadly represents the views of the panel. Panelist credences on the probability of AGI by 2043 range from ~10% to ~45%. Conditional on AGI being developed by 2070, panelist credences on the probability of existential catastrophe range from ~5% to ~50%. We will award a total of six prizes across three tiers: First prize (two awards): $50,000 Second prize (two awards): $37,500 Third prize (two awards): $25,000 Eligibility Submissions must be original work, published for the first time on or after September 23, 2022 and before 11:59 pm EDT May 31, 2023. All authors must be 18 years or older. Submissions must be written in English. No official word limit — but we expect to find it harder to engage with pieces longer than 5,000 words (not counting footnotes and references). Open Phil employees and their immediate family members are ineligible. 
The following groups are also ineligible: people who are residing in, or nationals of, Puerto Rico, Quebec, or countries or jurisdictions that prohibit such contests by law; and people who are specifically sanctioned by the United States or based in a US-sanctioned country (North Korea, Iran, Russia, Myanmar, Afghanistan, Syria, Venezuela, and Cuba at time of writing). You can submit as many entries as you want, but you can only win one prize. Co-authorship is fine. See here for additional details and fine print. Submission: Use this form to submit your entries. We strongl...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More Centralisation?, published by DavidNash on March 6, 2023 on The Effective Altruism Forum. Summary: I think EA is under-centralised. There are few ‘large' EA organisations, but most EA opportunities are 1-2 person projects. This is setting up most projects to fail without proper organisational support and does not provide good incentives for experienced professionals to work on EA projects. EA organisations with good operations could incubate smaller projects before spinning them out. Levels of Centralisation: We could imagine different levels of centralisation for a movement ranging from fully decentralised to fully centralised. Fully decentralised: everyone works on their own project, no organisations bigger than 1 person. Fully centralised: everyone works inside the same organisation (e.g. the civil service). It seems that EA tends more towards the decentralised model: there are relatively few larger organisations with ~50 or more people (Open Phil, GiveWell, Rethink Priorities, EVF), there are some with ~5-20 people and a lot of 1-2 person projects. I think EA would be much worse if it was one large organisation, but there is probably a better balance to be found between the two extremes than we have at the moment. I think being overly decentralised may be setting up most people to fail. Why would being overly decentralised be setting people up to fail? Being an independent researcher/organiser is harder without support systems in place, and trying to coordinate this outside of an organisation is more complicated. These support systems include: having a manager; having colleagues to bounce ideas off / moral support; having professional HR/operations support; health insurance; and being an employee rather than a contractor/grant recipient that has to worry about receiving future funding (although there are similar concerns about being fired). When people are setting up their own projects it can take up a large proportion of their time in the first year just doing operations to run that project, unrelated to the actual work they want to do. This can include spending a lot of the first year just fundraising for the second year. How a lack of centralisation might affect EA overall: Being a movement with lots of small project work will appeal more to those with a higher risk tolerance, potentially pushing away more experienced people who would want to work on these projects, but within a larger organisation. Having a lot of small organisations will lead to a lot of duplication of operation/administration work. It will be harder to have good governance for lots of smaller organisations; some choose to not have any governance structures at all unless they grow. There is less competition for employees if the choice is between 3 or 4 operationally strong organisations or being in a small org. What can change? Organisations with good operations and governance could support more projects internally - one example of this already is the Rethink Priorities Special Projects Program. These projects can be supported until they have enough experience and internal operations to survive and thrive independently. Programs that are mainly around giving money to individuals could be converted into internal programs, something more similar to the Research Scholars Program, or Charity Entrepreneurship's Incubation Program. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Show Open - Phil Martelli meets with the media
Read the full transcript. What is Effective Altruism? Which parts of the Effective Altruism movement are good and not so good? Who outside of the EA movement are doing lots of good in the world? What are the psychological effects of thinking constantly about the trade-offs of spending resources on ourselves versus on others? To what degree is the EA movement centralized intellectually, financially, etc.? Does the EA movement's tendency to quantify everything, to make everything legible to itself, cause it to miss important features of the world? To what extent do EA people rationalize spending resources on inefficient or selfish projects by reframing them in terms of EA values? Is a feeling of tension about how to allocate our resources actually a good thing? Ajeya Cotra is a Senior Research Analyst at Open Philanthropy, a grantmaking organization that aims to do as much good as possible with its resources (broadly following effective altruist methodology); she mainly does research relevant to Open Phil's work on reducing existential risks from AI. Ajeya discovered effective altruism in high school through the book The Life You Can Save, and quickly became a major fan of GiveWell. As a student at UC Berkeley, she co-founded and co-ran the Effective Altruists of Berkeley student group, and taught a student-led course on EA. Listen to her 80,000 Hours podcast episode or visit her LessWrong author page for more info. Michael Nielsen was on the podcast back in episode 016. You can read more about him there!