welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: My mistakes on the path to impact, published by Denise_Melchin on the effective altruism forum. Doing a lot of good has been a major priority in my life for several years now. Unfortunately I made some substantial mistakes which have lowered my expected impact a lot, and I am on a less promising trajectory than I would have expected a few years ago. In the hope that other people can learn from my mistakes, I thought it made sense to write them up here! I will attempt to list the mistakes which lowered my impact most over the past several years in this post and then analyse their causes. Writing this post and previous drafts has also been very personally useful to me, and I can recommend undertaking such an analysis. Please keep in mind that my analysis of my mistakes is likely at least a bit misguided and not fully comprehensive. It would have been nice to condense the post a bit more and structure it better, but having already spent a lot of time on it and wanting to move on to other projects, I thought it would be best not to let the perfect be the enemy of the good! To put my mistakes into context, I will give a brief outline of what happened in my career-related life in the past several years before discussing what I consider to be my main mistakes. Background I came across the EA Community in 2012, a few months before I started university. Before that point my goal had always been to become a researcher. Until early 2017, I did a mathematics degree in Germany and received a couple of scholarships. I did a lot of ‘EA volunteering' over the years, mostly community building and large-scale grantmaking. I also did two unpaid internships at EA orgs, one during my degree and one after graduating, in summer 2017. After completing my summer internship, I started to try to find a role at an EA org. I applied to ~7 research and grantmaking roles in 2018. I got to the last stage 4 times, but received no offers. The closest I got was receiving a three-month trial offer as a Research Analyst at Open Phil, but it turned out they were unable to provide visas. In 2019, I worked as a Research Assistant for a researcher at an EA-aligned university institution on a grant for a few hundred hours. I stopped as there seemed to be no route to a secure position and the role did not seem like a good fit. In late 2019 I applied for jobs suitable for STEM graduates with no experience. I also stopped doing most of my EA volunteering. In January 2020 I began to work in an entry-level data analyst role in the UK Civil Service, which I have been really happy with. In November, after 6.5 months of full-time-equivalent work, I received a promotion to a more senior role with management responsibility and a significant pay rise. First I am going to discuss what I think I did wrong from a first-order practical perspective. Afterwards I will explain which errors in my decision-making process I consider the likely culprits for these mistakes - the patterns of behaviour which need to be changed to avoid similar mistakes in the future. A lot of the following seems pretty silly to me now, and I struggle to imagine how I ever fully bought into the mistakes and systematic errors in my thinking in the first place. But here we go! What did I get wrong? I did not build broad career capital nor keep my options open.
During my degree, I mostly focused on EA community building efforts as well as making good donation decisions. I made few attempts to build skills for the type of work I was most interested in doing (research) or skills that would be particularly useful for higher earning paths (e.g. programming), especially later on. My only internships were at EA organisations in research roles. I also stopped trying to do well in my degree later on, and stopped my previously-substantial involvement in political work. In my firs...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Growth and the case against randomista development, published by HaukeHillebrandt, John G. Halstead on the effective altruism forum. Update, 3/8/2021: I (Hauke) gave a talk at Effective Altruism Global on this post: Summary Randomista development (RD) is a form of development economics which evaluates and promotes interventions that can be tested by randomised controlled trials (RCTs). It is exemplified by GiveWell (which primarily works in health) and the randomista movement in economics (which primarily works in economic development). Here we argue for the following claims, which we believe to be quite weak: Prominent economists make plausible arguments which suggest that research on and advocacy for economic growth in low- and middle-income countries is more cost-effective than the things funded by proponents of randomista development. Effective altruists have devoted too little attention to these arguments. Assessing the soundness of these arguments should be a key focus for current-generation-focused effective altruists over the next few years. We hope to start a conversation on these questions, and potentially to cause a major reorientation within EA. We also believe the following stronger claims: 4. Improving health is not the best way to increase growth. 5. A ~4 person-year research effort will find donation opportunities working on economic growth in LMICs which are substantially better than GiveWell's top charities from a current-generation human welfare-focused point of view. However, economic growth is not all that matters. GDP misses many crucial determinants of human welfare, including leisure time, inequality, foregone consumption from investment, public goods, social connection, life expectancy, and so on. A top priority for effective altruists should be to assess the best way to increase human welfare outside of the constraints of randomista development, i.e. allowing interventions that have not been or cannot be tested by RCTs. We proceed as follows: We define randomista development and contrast it with research and advocacy for growth-friendly policies in low- and middle-income countries. We show that randomista development is overrepresented in EA, and that, in contradistinction, research on and advocacy for growth-friendly economic policy (we refer to this as growth throughout) is underrepresented. We then show why some prominent economists believe that, a priori, growth is much more effective than most RD interventions. We present a quantitative model that tries to formalize these intuitions and allows us to compare global development interventions with economic growth interventions. The model suggests that under plausible assumptions a hypothetical growth intervention can be thousands of times more cost-effective than typical RD interventions such as cash transfers. However, when these assumptions are relaxed and the comparison is with the very good RD interventions, growth interventions are on a similar level of effectiveness as RD interventions. We consider various possible objections and qualifications to our argument. Acknowledgements Thanks to Stefan Schubert, Stephen Clare, Greg Lewis, Michael Wiebe, Sjir Hoeijmakers, Johannes Ackva, Gregory Thwaites, Will MacAskill, Aidan Goth, Sasha Cooper, and Carl Shulman for comments. Any mistakes are our own. Opinions are ours, not those of our employers.
Marinella Capriati at GiveWell commented on this piece, and the piece does not represent her views or those of GiveWell. 1. Defining Randomista Development We define randomista development (RD) as an approach to development economics which investigates, evaluates and recommends only interventions which can be tested by randomised controlled trials (RCTs). RD can take low-risk or more “hits-based” forms. Effective altruists have especially focused on the low-risk for...
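The summary above mentions a quantitative model comparing a hypothetical growth intervention with typical RD interventions such as cash transfers. As a minimal toy sketch of the underlying intuition only (this is not the authors' model; every parameter value below is an illustrative assumption), one can compare expected income gains per dollar:

```python
# Toy sketch of the growth-vs-randomista intuition (NOT the authors' model;
# every number below is an assumption chosen only to show the structure).

def growth_advocacy_value(cost, p_success, population, income_per_capita,
                          growth_boost, years):
    """Expected extra income generated per dollar spent on growth advocacy."""
    extra_income = population * income_per_capita * growth_boost * years
    return p_success * extra_income / cost

def cash_transfer_value(transfer_efficiency=0.8):
    """Extra recipient income per dollar donated (assumed pass-through rate)."""
    return transfer_efficiency

if __name__ == "__main__":
    advocacy = growth_advocacy_value(
        cost=10_000_000,          # assumed advocacy budget ($)
        p_success=0.05,           # assumed chance the policy change happens
        population=50_000_000,    # assumed country population
        income_per_capita=2_000,  # assumed GDP per capita ($/year)
        growth_boost=0.01,        # assumed extra annual income: +1%
        years=20,                 # assumed persistence of the effect
    )
    print(f"Growth advocacy: ~${advocacy:,.0f} of extra income per $1 spent")
    print(f"Cash transfers:  ~${cash_transfer_value():.2f} per $1 donated")
```

With these made-up numbers the growth route looks roughly two orders of magnitude better per dollar, but the ratio is extremely sensitive to the assumed probability of success and to how large and persistent the growth effect is, which is why the post calls for a dedicated research effort rather than treating the comparison as settled.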
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Announcing my retirement, published by Aaron Gertler on the effective altruism forum. A few sharp-eyed readers noticed my imminent departure from CEA in our last quarterly report. Gold stars all around! My last day as our content specialist — and thus, my last day helping to run the Forum — is December 10th. The other moderators will continue to handle the basics, and we're in the process of hiring my replacement. (Let me know if anyone comes to mind!) Managing this place was fun. It wasn't always fun, but — on the whole, a good time. I've enjoyed giving feedback to a few hundred people, organizing some interesting AMAs, running a writing contest, building up the Digest, hosting workshops for EA groups around the world, and deleting a truly staggering number of comments advertising escort services (I'll spare you the link). More broadly, I've felt a continual sense of admiration for everyone who cares about the Forum and tries to make it better — by reading, voting, posting, crossposting, commenting, tagging, Wiki-editing, bug-reporting, and/or moderating. Collectively, you've put in tens of thousands of hours of work to develop our strange, complicated, unique website, with scant compensation besides karma. (Now that I'm leaving, it's time to be honest — despite the rumors, our karma isn't the kind that gets you a better afterlife.) Thank you for everything you've done to make this job what it was. What's next? In January, I'll join Open Philanthropy as their communications officer, working to help their researchers publish more of their work. I'll also be joining Effective Giving Quest as their first partnered streamer. Wish me luck: moderating this place sometimes felt like herding cats, but it's nothing compared to Twitch chat. My Forum comments will be less frequent, but probably spicier. thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: My current impressions on career choice for longtermists, published by Holden Karnofsky on the effective altruism forum. This post summarizes the way I currently think about career choice for longtermists. I have put much less time into thinking about this than 80,000 Hours, but I think it's valuable for there to be multiple perspectives on this topic out there. Edited to add: see below for why I chose to focus on longtermism in this post. While the jobs I list overlap heavily with the jobs 80,000 Hours lists, I organize them and conceptualize them differently. 80,000 Hours tends to emphasize "paths" to particular roles working on particular causes; by contrast, I emphasize "aptitudes" one can build in a wide variety of roles and causes (including non-effective-altruist organizations) and then apply to a wide variety of longtermist-relevant jobs (often with options working on more than one cause). Example aptitudes include: "helping organizations achieve their objectives via good business practices," "evaluating claims against each other," "communicating already-existing ideas to not-yet-sold audiences," etc. (Other frameworks for career choice include starting with causes (AI safety, biorisk, etc.) or heuristics ("Do work you can be great at," "Do work that builds your career capital and gives you more options.") I tend to feel people should consider multiple frameworks when making career choices, since any one framework can contain useful insight, but risks being too dogmatic and specific for individual cases.) For each aptitude I list, I include ideas for how to explore the aptitude and tell whether one is on track. Something I like about an aptitude-based framework is that it is often relatively straightforward to get a sense of one's promise for, and progress on, a given "aptitude" if one chooses to do so. This contrasts with cause-based and path-based approaches, where there's a lot of happenstance in whether there is a job available in a given cause or on a given path, making it hard for many people to get a clear sense of their fit for their first-choice cause/path and making it hard to know what to do next. This framework won't make it easier for people to get the jobs they want, but it might make it easier for them to start learning about what sort of work is and isn't likely to be a fit. I've tried to list aptitudes that seem to have relatively high potential for contributing directly to longtermist goals. I'm sure there are aptitudes I should have included and didn't, including aptitudes that don't seem particularly promising from a longtermist perspective now but could become more so in the future. In many cases, developing a listed aptitude is no guarantee of being able to get a job directly focused on top longtermist goals. Longtermism is a fairly young lens on the world, and there are (at least today) a relatively small number of jobs fitting that description. However, I also believe that even if one never gets such a job, there are a lot of opportunities to contribute to top longtermist goals, using whatever job and aptitudes one does have. To flesh out this view, I lay out an "aptitude-agnostic" vision for contributing to longtermism. Some longtermism-relevant aptitudes "Organization building, running, and boosting" aptitudes[1] Basic profile: helping an organization by bringing "generally useful" skills to it. 
By "generally useful" skills, I mean skills that could help a wide variety of organizations accomplish a wide variety of different objectives. Such skills could include: Business operations and project management (including setting objectives, metrics, etc.) People management and management coaching (some manager jobs require specialized skills, but some just require general management-associated skills) Executive leadership (setting and enfo...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation, published by EA applicant on the effective altruism forum. (I am writing this post under a pseudonym because I don't want potential future non-EA employers to find this with a quick google search. Initially my name could be found on the CV linked in the text, but after this post was shared much more widely than I had expected, I got cold feet and removed it.) In the past 12 months, I applied for 20 positions in the EA community. I didn't get any offer. At the end of this post, I list all those positions, and how much time I spent in the application process. Before that, I write about why I think more posts like this could be useful. Please note: The positions were all related to long-termism, EA movement building, or meta-activities (e.g. grant-making). To stress this again, I did not apply for any positions in e.g. global health or animal welfare, so what I'm going to say might not apply to these fields. Costs of applications Applying has considerable time-costs. Below, I estimate that I spent 7-8 weeks of full-time work in application processes alone. I guess it would be roughly twice as much if I factored in things like searching for positions, deciding which positions to apply for, or researching visa issues. (Edit: Some organisations reimburse for time spent in work tests/trials. I got paid in 4 of the 20 application processes. I might have gotten paid in more processes if I had advanced further). At least for me, handling multiple rejections was mentally challenging. Additionally, the process may foster resentment towards the EA community. I am aware the following statement is super inaccurate and no one is literally saying that, but sometimes this is the message I felt I was getting from the EA community: “Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren't that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constrained. (20 applications later.) Yeah, when we said that we need people, we meant capable people. Not you. You suck.” Why I think more posts like this would have been useful for me Overall, I think it would have helped me to know just how competitive jobs in the EA community (long-termism, movement building, meta-stuff) are. I think I would have been more careful in selecting the positions I applied for and I would probably have started exploring other ways to have an impactful career earlier. Or maybe I would have applied to the same positions, but with lower expectations and less of a feeling of being a total loser that will never contribute anything towards making the world a better place after being rejected once again.
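A quick back-of-envelope calculation (the per-application odds below are assumptions for illustration, not figures from the post) shows why a streak like this can happen even to competitive applicants when dozens of people apply per role:

```python
# Illustrative only: probability of zero offers across n independent
# applications, for a few assumed per-application success rates.

def prob_all_rejected(n_applications: int, p_success: float) -> float:
    """Chance of receiving no offer in n applications, assuming independence."""
    return (1 - p_success) ** n_applications

if __name__ == "__main__":
    for p in (0.02, 0.05, 0.10):  # assumed chance of an offer per application
        print(f"p(offer per application) = {p:.0%}: "
              f"P(0 offers in 20 applications) = {prob_all_rejected(20, p):.0%}")
```

Even at an assumed 5% chance of an offer per application, twenty straight rejections happen more than a third of the time, so under these simplifying assumptions a long run of rejections is weak evidence about an applicant's ability.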
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: EAF's ballot initiative doubled Zurich's development aid, published by Jonas Vollmer on the effective altruism forum. Summary In 2016, the Effective Altruism Foundation (EAF), then based in Switzerland, launched a ballot initiative asking to increase the city of Zurich's development cooperation budget and to allocate it more effectively. In 2018, we coordinated a counterproposal with the city council that preserved the main points of our original initiative and had a high chance of success. In November 2019, the counterproposal passed with a 70% majority. Zurich's development cooperation budget will thus increase from around $3 million to around $8 million per year. The city will aim to allocate it “based on the available scientific research on effectiveness and cost-effectiveness.” This seems to be the first time that Swiss legislation on development cooperation mentions effectiveness requirements. The initiative cost around $25,000 in financial costs and around $190,000 in opportunity costs. Depending on the assumptions, it raised a present value of $20–160 million in development funding. EAs should consider launching similar initiatives in other Swiss cities and around the world. Initial proposal and signature collection In spring 2016, the Effective Altruism Foundation (EAF), then still based in Basel, Switzerland, launched a ballot initiative asking for the city of Zurich's development cooperation budget to be increased and to be allocated more effectively. (For information on EAF's current focus, see this article.) We chose Zurich due to its large budget and leftist/centrist majority. I published an EA Forum post introducing the initiative and a corresponding policy paper (see English translation). (Note: In the EA Forum post, I overestimated the publicity/movement-building benefits and the probability that the original proposal would pass. I overemphasized the quantitative estimates, especially the point estimates, which don't adequately represent the uncertainty. I underestimated the success probability of a favorable counterproposal. Also, the policy paper should have had a greater focus on hits-based, policy-oriented interventions because I think these have a chance of being even more cost-effective than more “straightforward” approaches and also tend to be viewed more favorably by professionals.) We hired people and coordinated volunteers (mostly animal rights activists we had interacted with before) to collect the required 3,000 signatures (plus 20% safety margin) over six months to get a binding ballot vote. Signatures had to be collected in person in handwritten form. For city-level initiatives, people usually collect about 10 signatures per hour, and paying people to collect signatures costs about $3 per signature on average. Picture: Start of signature collection on 25 May 2016. Picture: Submission of the initiative at Zurich's city hall on 22 November 2016. The legislation we proposed (see the appendix) focused too strongly on Randomized Controlled Trials (RCTs) and demanded too much of a budget increase (from $3 million to $87 million per year). We made these mistakes because we had internal disagreements about the proposal and did not dedicate enough time to resolving them.
This led to negative initial responses from the city council and influential charities (who thought the budget increase was too extreme, were pessimistic about the odds of success, and disliked the RCT focus), implying a
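For readers who want to see where a present-value figure in the range quoted in the summary can come from, here is a minimal sketch. The discount rates and durations below are assumptions of mine, not EAF's; the summary itself notes that the $20–160 million figure depends on the assumptions used.

```python
# Rough reproduction sketch of a present-value figure like the one quoted
# above. Discount rates and durations are my assumptions, not EAF's.

def present_value(annual_amount, discount_rate, years):
    """Present value of a constant annual cash flow (standard annuity sum)."""
    return sum(annual_amount / (1 + discount_rate) ** t for t in range(1, years + 1))

if __name__ == "__main__":
    annual_increase = 5_000_000  # Zurich budget rising from ~$3M to ~$8M per year
    for rate, years in [(0.10, 10), (0.05, 20), (0.02, 50)]:
        pv = present_value(annual_increase, rate, years)
        print(f"discount {rate:.0%}, {years:>2} years: PV ≈ ${pv / 1e6:,.0f}M")
```

Against roughly $215,000 of combined financial and opportunity costs, present values in that range imply on the order of one hundred to several hundred dollars of development funding per dollar spent, which is the comparison the summary is drawing.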
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Is effective altruism growing? An update on the stock of funding vs. people, published by Benjamin_Todd on the effective altruism forum. This is a cross-post from 80,000 Hours. See part 2 on the allocation across cause areas. In 2015, I argued that funding for effective altruism – especially within meta and longtermist areas – had grown faster than the number of people interested in it, and that this was likely to continue. As a result, there would be a funding ‘overhang', creating skill bottlenecks for the roles needed to deploy this funding. A couple of years ago, I wondered if this trend was starting to reverse. There hadn't been any new donors on the scale of Good Ventures (the main partner of Open Philanthropy), which meant that total committed funds were growing slowly, giving the number of people a chance to catch up. However, the spectacular asset returns of the last few years and the creation of FTX seem to have shifted the balance back towards funding. Now the funding overhang seems even larger in both proportional and absolute terms than in 2015. In the rest of this post, I make some rough guesses at total committed funds compared to the number of interested people, to see how the balance of funding vs. talent might have changed over time. This will also serve as an update on whether effective altruism is growing – with a focus on what I think are the two most important metrics: the stock of total committed funds, and of committed people. This analysis also made me make a small update in favour of giving now vs. investing to give later. Here's a summary of what's coming up: How much funding is committed to effective altruism (going forward)? Around $46 billion. How quickly have these funds grown? About 37% per year since 2015, with much of the growth concentrated in 2020–2021. How much is being donated each year? Around $420 million, which is just over 1% of committed capital, and has grown maybe about 21% per year since 2015. How many committed community members are there? About 7,400 active members and 2,600 ‘committed' members, growing 10–20% per year 2018–2020, and growing faster than that 2015–2017. Has the funding overhang grown or shrunk? Funding seems to have grown faster than the number of people, so the overhang has grown in both proportional and absolute terms. What might be the implications for career choice? Skill bottlenecks have probably increased for people able to think of ways to spend lots of funding effectively, run big projects, and evaluate grants. To caveat, all of these figures are extremely rough, and are mainly estimated off the top of my head. I haven't checked them with the relevant donors, so they might not endorse these estimates. However, I think they're better than what exists currently, and thought it was important to try to give some kind of rough update on how my thinking has changed. There are likely some significant mistakes; I'd be keen to see a more thorough version of this analysis. Overall, please treat this more like notes from a podcast than a carefully researched article. Which growth metrics matter?
Broadly, the future[1] impact of effective altruism depends on the total stock of: The quantity of committed funds The number of committed people (adjusted for skills and influence) The quality of our ideas (which determine how effectively funding and labour can be turned into impact) (In economic growth models, this would be capital, labour, and productivity.) You could consider other resources like political capital, reputation, or public support as well, though we can also think of these as being a special type of labour. In this post, I'm going to focus on funding and labour. (To do an equivalent analysis for ideas, which could easily matter more, we could try to estimate whether the expected return of our best way...
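As a quick consistency check on the headline figures quoted above, here is a small sketch. Treating the window as 2015 to 2021 (six years of compounding) is my assumption; the post stresses that all of its numbers are extremely rough.

```python
# Sanity check of the quoted growth rates. Treating the window as
# 2015 -> 2021 (six years of compounding) is an assumption; the post only
# gives rough, off-the-top-of-the-head figures.

def implied_start(value_now: float, annual_growth: float, years: int) -> float:
    """Back out the starting value consistent with compound annual growth."""
    return value_now / (1 + annual_growth) ** years

if __name__ == "__main__":
    years = 6
    funds_2021 = 46e9       # ~$46 billion committed (from the post)
    donations_2021 = 420e6  # ~$420 million donated per year (from the post)
    print(f"Implied 2015 committed funds at 37%/yr: "
          f"${implied_start(funds_2021, 0.37, years) / 1e9:.1f}B")
    print(f"Implied 2015 annual donations at 21%/yr: "
          f"${implied_start(donations_2021, 0.21, years) / 1e6:.0f}M")
```

On these assumptions the quoted rates imply roughly $7 billion of committed funds and roughly $130 million of annual donations in 2015, so committed funds have compounded considerably faster than annual deployment over the period.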
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Announcing "Naming What We Can"!, published by GidonKadosh, EdoArad, Davidmanheim, ShayBenMoshe, sella, Guy Raveh, Asaf Ifergan on the effective altruism forum. We hereby announce a new meta-EA institution - "Naming What We Can". Vision We believe in a world where every EA organization and any project has a beautifully crafted name. We believe in a world where great minds are free from the shackles of the agonizing need to name their own projects. Goal To name and rename every EA organization, project, thing, or person. To alleviate any suffering caused by name-selection decision paralysis. Mission Using our superior humor and language articulation prowess, we will come up with names for stuff. About us We are a bunch of revolutionaries who believe in the power of correct naming. We translated over a quintillion distinct words from English to Hebrew. Some of us have read all of Unsong. One of us even read the whole bible. We spent countless fortnights debating the ins and outs of our own org's title - we Name What We Can. What Do We Do? We're here for the service of the EA community. Whatever you need to rename - we can name. Although we also rename whatever we can. Even if you didn't ask. Examples As a demonstration, we will now see some examples where NWWC has a much better name than the one currently used. 80,000 Hours => 64,620 Hours. Better fits the data and more equal toward women, two important EA virtues. Charity Entrepreneurship => Charity Initiatives. (We don't know anyone who can spell entrepreneurship on their first try. Alternatively, own all of the variations: Charity Enterpeneurship, Charity Entreprenreurshrip, Charity Entrepenurship, Charity Entepenoorship, .) Global Priorities Institute => Glomar Priorities Institute. We suggest including the dimension of time, making our globe a glome. OpenPhil => Doing Right Philanthropy. Going by Dr. Phil would give a lot more clicks. EA Israel => זולתנים יעילים בארץ הקודש (Hebrew for "Effective Altruists in the Holy Land"). ProbablyGood => CrediblyGood. Because in EA we usually use credence rather than probability. EA Hotel => Centre for Enabling EA Learning & Research. Giving What We Can => Guilting Whoever We Can. Because people give more when they are feeling guilty about being rich. Cause Prioritization => Toby Ordering. Max Dalton => Max Delta. This represents the endless EA effort to maximize our ever-marginal utility. Will MacAskill => will McAskill. Evidently a more common use: Peter singer & steven pinker should be the same person, to avoid confusion. OpenAI => ProprietaryAI. Followed by ClosedAI, UnalignedAI, MisalignedAI, and MalignantAI. FHI => Bostrom's Squad. GiveWell => Don'tGivePlayPumps. We feel that the message could be stronger this way. Doing Good Better => Doing Right Right. Electronic Arts, also known as EA, should change its name to Effective Altruism. They should also change all of their activities to Effective Altruism activities. Impact estimation Overall, we think the impact of the project will be net negative in expectation (see our Guesstimate model). That is because we think that the impact is likely to be somewhat positive, but there is a really small tail risk that we will cause the termination of the EA movement. However, as we are risk-averse we can mostly ignore high tails in our impact assessment so there is no need to worry.
Call to action As a first step, we offer our services freely here on this very post! This is done to test the fit of the EA community to us. All you need to do is to comment on this post and ask us to name or rename whatever you desire. Additionally, we hold a public recruitment process here on this very post! If you want to apply to NWWC as a member, comment on this post with a name suggestion of your choosing! Due to our current lack of diversity in our team, we particularly encourage women, people of color, ...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Major UN report discusses existential risk and future generations (summary), published by finm, Avital Balwit on the effective altruism forum. Co-written with Avital Balwit. Introduction and Key Points On September 10th, the Secretary General of the United Nations released a report called “Our Common Agenda”. This report seems highly relevant for those working on longtermism and existential risk, and appears to signal unexpectedly strong interest from the UN. It explicitly uses longtermist language and concepts, and suggests concrete proposals for institutions to represent future generations and manage catastrophic and existential risks. In this post we've tried summarising the report for an EA audience. Some notable features of the report: It explicitly discusses “future generations”, “long-termism”, and “existential risk” It highlights biorisks, nuclear weapons, advanced technologies, and environmental disasters/climate change as extreme or even existential risks It recommends the “regulation of artificial intelligence to ensure that this is aligned with shared global values” It proposes several instruments for protecting future generations: A Futures Lab for futures impact assessments and “regularly reporting on megatrends and catastrophic risks” A Special Envoy for Future Generations to assist on “long-term thinking and foresight” and explore various international mechanisms for representing future generations, including... Repurposing the Trusteeship Council to represent the interests of future generations (a major but long-inactive organ of the UN) A Declaration on Future Generations It proposes instruments for addressing major risks: An Emergency Platform to convene key actors in response to complex global crises A Strategic Foresight and Global Risk Report to be released every 5 years It also calls for a 2023 Summit of the Future to discuss topics including these proposals addressing major risks and future generations Other topics discussed which might be of interest: Protecting and regulating the ‘digital commons' and an internet-enabled ‘infodemic' The governance of outer space Lethal autonomous weapons Improving pandemic response and preparedness Developing well-being indices to complement GDP Context A year ago, on the 75th anniversary of the formation of the UN, member nations asked the Secretary General, António Guterres, to produce a report with recommendations to advance the agenda of the UN. This report is his response. The report also coincides with Guterres' re-election for his second term as Secretary General, which will begin in January 2022 and will likely last 5 years. The report was informed by consultations, listening exercises, and input from outside experts. Toby Ord (author of The Precipice) was asked to contribute to the report as such an ‘outside expert'. Among other things he underlined that ‘future generations' does not (just) mean ‘young people', and that international institutions should begin to address risks even more severe than COVID-19, up to and including existential risks. All of the new instruments and institutions described in the report are proposals made to the General Assembly of member nations. It remains to be seen how many of them will ultimately be implemented, and in what eventual form.
Summary of the Report The report is divided into five main sections, with sections 3 and 4 being of greatest relevance from an EA or longtermist perspective. The first section situates the report in the context of the pandemic, suggesting that now is an unusually “pivotal moment” between “breakdown” and “breakthrough”. It highlights major past successes (the Montreal Protocol, the eradication of smallpox) and notes how the UN was established in the aftermath of WWII to “save succeeding generations” from war. It then calls for a “new globa...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Don't Be Bycatch, published by AllAmericanBreakfast on the effective altruism forum. It's a common story. Someone who's passionate about EA principles, but has little in the way of resources, tries and fails to do EA things. They write blog posts, and nothing happens. They apply to jobs, and nothing happens. They do research, and don't get that grant. Reading articles no longer feels exciting, but like a chore, or worse: a reminder of their own inadequacy. Anybody who comes to this place, I heartily sympathize, and encourage them to disentangle themselves from this painful situation any way they can. Why does this happen? Well, EA has two targets. 1. Subscribers to EA principles who the movement wants to become big donors or effective workers. 2. Big donors and effective workers who the movement wants to subscribe to EA principles. I won't claim what weight this community and its institutions give to (1) vs. (2). But when we set out to catch big fish, we risk turning the little fish into bycatch. The technical term for this is churn. Part of the issue is the planning fallacy. When we're setting out, we underestimate how long and costly it will be to achieve an impact, and overestimate what we'll accomplish. The higher above average you aim for, the more likely you are to fall short. And another part is expectation-setting. If the expectation right from the get-go is that EA is about quickly achieving big impact, almost everyone will fail, and think they're just not cut out for it. I wish we had a holiday that was the opposite of Petrov Day, where we honored somebody who went a little bit out of their comfort zone to try and be helpful in a small and simple way. Or whose altruistic endeavor was passionate, costly, yet ineffective, and who tried it anyway, changed their mind, and valued it as a learning experience. EA organizations and writers are doing us a favor by presenting a set of ideas that speak to us. They can't be responsible for addressing all our needs. That's something we need to figure out for ourselves. EA is often criticized for its "think global" approach. But EA is our local, our global local. How do we help each other to help others? From one little fish in the sEA to another, this is my advice: Don't aim for instant success. Aim for 20 years of solid growth. Alice wants to maximize her chance of a 1,000% increase in her altruistic output this year. Zahara's trying to maximize her chance of a 10% increase in her altruistic output. They're likely to do very different things to achieve these goals. Don't be like Alice. Be like Zahara. Start small, temporary, and obvious. Prefer the known, concrete, solvable problem to the quest for perfection. Yes, running an EA book club or, gosh darn it, picking up trash in the park is a fine EA project to cut our teeth on. If you donate 0% of your income, donating 1% of your income is moving in the right direction. Offer an altruistic service to one person. Interview one person to find out what their needs are. Ask, don't tell. When entrepreneurs do market research, it's a good idea to avoid telling the customer about the idea. Instead, they should ask the customer about their needs and problems. How do they solve their problems right now? Then they can go back to the Batcave and consider whether their proposed solution would be an improvement.
Let yourself become something, just do it a little more gradually. It's good to keep your options open, but EA can be about slowing and reducing the process of commitment, increasing the ability to turn and bend. It doesn't have to be about hard stops and hairpin turns. It's OK to take a long time to make decisions and figure things out. Build each other up. Do zoom calls. Ask each other questions. Send a message to a stranger whose blog posts you like. Form relationships, and...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Reducing long-term risks from malevolent actors, published by David_Althaus, Tobias_Baumann on the effective altruism forum. Summary Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history. (More) Malevolent individuals in positions of power could negatively affect humanity's long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. (More) Malevolent humans with access to advanced technology—such as whole brain emulation or other forms of transformative AI—could cause serious existential risks and suffering risks. (More) We therefore consider interventions to reduce the expected influence of malevolent humans on the long-term future. The development of manipulation-proof measures of malevolence seems valuable, since they could be used to screen for malevolent humans in high-impact settings, such as heads of government or CEOs. (More) We also explore possible future technologies that may offer unprecedented leverage to mitigate against malevolent traits. (More) Selecting against psychopathic and sadistic tendencies in genetically enhanced, highly intelligent humans might be particularly important. However, risks of unintended negative consequences must be handled with extreme caution. (More) We argue that further work on reducing malevolence would be valuable from many moral perspectives and constitutes a promising focus area for longtermist EAs. (More) What do we mean by malevolence? Before we make any claims about the causal effects of malevolence, we first need to explain what we mean by the term. To this end, consider some of the arguably most evil humans in history—Hitler, Mao, and Stalin—and the distinct personality traits they seem to have shared.[1] Stalin repeatedly turned against former comrades and friends (Hershman & Lieb, 1994, ch. 15, ch. 18), gave detailed instructions on how to torture his victims, ordered their loved ones to watch (Glad, 2002, p. 13), and deliberately killed millions through various atrocities. Likewise, millions of people were tortured and murdered under Mao's rule, often according to his detailed instructions (Dikötter, 2011; 2016; Chang & Halliday, 2007, ch. 8, ch. 23). He also took pleasure in watching acts of torture and imitating what his victims went through (Chang & Halliday, 2007, ch. 48). Hitler was not only responsible for the death of millions; he also engaged in personal sadism. On his specific instructions, the plotters of the 1944 assassination attempt were hanged by piano wires and their agonizing deaths were filmed (Glad, 2002). According to Albert Speer, “Hitler loved the film and had it shown over and over again” (Toland, 1976, p. 818). Hitler, Mao, and Stalin—and most other dictators—also poured enormous resources into the creation of personality cults, manifesting their colossal narcissism (Dikötter, 2019). (The section Malevolent traits of Hitler, Mao, Stalin, and other dictators in Appendix B provides more evidence.) Many scientific constructs of human malevolence could be used to summarize the relevant psychological traits shared by Hitler, Mao, Stalin, and other malevolent individuals in positions of power.
We focus on the Dark Tetrad traits (Paulhus, 2014) because they seem especially relevant and have been studied extensively by psychologists. The Dark Tetrad comprises the following four traits—the more well-known Dark Triad (Paulhus & Williams, 2002) refers to the first three traits: Machiavellianism is characterized by manipulating and deceiving others to further one's own interests, indifference to morality, and obsession with achieving power or wealth. Narcissism involves an inflated sense of one's importance and abilities, an excessive need for admiration, a lack of emp...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Problem areas beyond 80,000 Hours' current priorities, published by Ardenlk on the effective altruism forum. Why we wrote this post At 80,000 Hours we've generally focused on finding the most pressing issues and the best ways to address them. But even if some issue is 'the most pressing'—in the sense of being the highest impact thing for someone to work on if they could be equally successful at anything—it might easily not be the highest impact thing for many people to work on, because people have various talents, experience, and temperaments. Moreover, the more people involved in a community, the more reason there is for them to spread out over different issues. There will eventually be diminishing returns as more people work on the same set of issues, and both the value of information and the value of capacity building from exploring more areas will be greater if more people are able to take advantage of that work. We're also pretty uncertain which problems are the highest impact things to work on—even for people who could work on anything equally successfully. For example, maybe we should be focusing much more on preventing great power conflict than we have been. After all, the first plausible existential risk to humanity was the creation of the atom bomb; it's easy to imagine that wars could incubate other, even riskier technological advancements. Or maybe there is some dark horse cause area—like research into surveillance—that will turn out to be way more important for improving the future than we thought. Perhaps for these reasons, many of our advisors guess that it would be ideal if 5-20% of the effective altruism community's resources were focused on issues that the community hasn't historically been as involved in, such as the ones listed below. We think we're currently well below this fraction, so it's plausible some of these areas might be better for some people to go into right now than our top priority problem areas. Who is best suited to work on these other issues? Pioneering a new problem area from an effective altruism perspective is challenging, and in some ways harder than working on a priority area, where there is better training and infrastructure. Working on a less-researched problem can require a lot of creativity and critical thinking about how you can best have a positive impact by working on the issue. For example, it likely means working out which career options within the area are the most promising for direct impact, career capital, and exploration value, and then pursuing them even if they differ from what most other people in the area tend to value or focus on. You might even eventually need to 'create your own job' if pre-existing positions in the area don't match your priorities. The ideal person would therefore be self-motivated, creative, and willing to chart the waters for others, as well as have a strong interest or relevant experience in one of these less-explored issues. We compiled the following lists by combining suggestions from 6 of our advisors with our own ideas, judgement, and research. We were looking for issues that might be very important, especially for improving the long-term future, and which might be currently neglected by people thinking from an effective altruism perspective. If something was suggested twice, we took that as a presumption in favor of including it. 
We're very uncertain about the value of working on any one of these problems, but we think it's likely that there are issues on these lists (and especially the first one) that are as pressing as our highest priority problem areas. What are the pros and cons of working in each of these areas? Which are less tractable than they appear, or more important? Which are already being covered adequately by existing groups we don't know enough about? What potentia...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: The case of the missing cause prioritisation research, published by weeatquince on the effective altruism forum. Introduction / summary In 2011 I came across Giving What We Can, which shortly blossomed into effective altruism. Call me a geek if you like but I found it exciting, like really exciting. Here were people thinking super carefully about the most effective ways to have an impact, to create change, to build a better world. Suddenly a boundless opportunity to do vast amounts of good opened up before my eyes. I had only just got involved and, by giving to fund bednets, had already magnified my impact on the world 100 times. And this was just the beginning. Obviously bednets were not the most effective charitable intervention, they were just the most effective we had found to date – with just a tiny amount of research. Imagine what topics could be explored next: the long run effects of interventions, economic growth, political change, geopolitics, conflict studies, etc. We could work out how to compare charities of vastly different cause areas, or how to do good beyond donations (some people were already starting to talk about career choices). Some people said we should care about animals (or AI risk); I didn't buy it (back then), but imagine: we could work out what different value sets lead to different causes and the best charities for each. As far as I could tell the whole field of optimising for impact seemed vastly under-explored. This wasn't too surprising – most people don't seem to care that much about doing charitable giving well and anyway it was only just coming to light how truly bad our intuitions were at making charitable choices (with the early 2000s aid skepticism movement). Looking back, I was optimistic. Yet in some regards my optimism was well-placed. In terms of spreading ideas, my small group of geeky uni friends went on to create something remarkable, to shift £m if not £bn of donations to better causes, to help 1000s, maybe 100,000s, of people make better career decisions. I am no longer surprised if a colleague, Tinder date or complete stranger has heard of effective altruism (EA) or gives money to AMF (a bednet charity). However, in terms of the research I was so excited about, of developing the field of how to do good, there has been minimal progress. After nearly a decade, bednets and AI research still seem to be at the top of everyone's Christmas donations wish list. I think I assumed that someone had got this covered, that GPI or FHI or whoever would have answers, or at least progress on cause research sometime soon. But last month, whilst trying to review my career, I decided to look into this topic, and, oh boy, there just appears to be a massive gaping hole. I really don't think it is happening. I don't particularly want to shift my career to do cause prioritisation research right now. So I am writing this piece in the hope that I can either have you, my dear reader, persuade me this work is not of utmost importance, or have me persuade you to do this work (so I don't have to). A. The importance of cause prioritisation research What is your view on the effective altruism community and what it has achieved? What is the single most important idea to come out of the community? Feel free to take a moment to reflect. (Answers on a postcard, or comment).
It seems to me (predictably given the introduction) that far and away the most valuable thing EA has done is the development of and promotion of cause prioritisation as a concept. This idea seems (shockingly and unfortunately) unique to EA.[1] It underpins all EA thinking, guides where EA aligned foundations give and leads to people seriously considering novel causes such as animal welfare or longtermism. This post mostly focuses on the current progress of and neglectedness of this work ...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Lessons from my time in Effective Altruism, published by richard_ngo on the effective altruism forum. I'll start with an overview of my personal story, and then try to extract more generalisable lessons. I got involved in EA around the end of 2014, when I arrived at Oxford to study Computer Science and Philosophy. I'd heard about EA a few years earlier via posts on Less Wrong, and so already considered myself EA-adjacent. I attended a few EAGx conferences, became friends with a number of EA student group organisers, and eventually steered towards a career in AI safety, starting with a masters in machine learning at Cambridge in 2017-2018. I think it's reasonable to say that, throughout that time, I was confidently wrong (or at least unjustifiably confident) about a lot of things. In particular: I dismissed arguments about systemic change which I now find persuasive, although I don't remember how - perhaps by conflating systemic change with standard political advocacy, and arguing that it's better to pull the rope sideways. I endorsed earning to give without having considered the scenario which actually happened, of EA getting billions of dollars of funding from large donors. (I don't know if this possibility would have changed my mind, but I think that not considering it meant my earlier belief was unjustified.) I was overly optimistic about utilitarianism, even though I was aware of a number of compelling objections; I should have been more careful to identify as "utilitarian-ish" rather than rounding off my beliefs to the most convenient label. When thinking about getting involved in AI safety, I took for granted a number of arguments which I now think are false, without actually analysing any of them well enough to raise red flags in my mind. After reading about the talent gap in AI safety, I expected that it would be very easy to get into the field - to the extent that I felt disillusioned when given (very reasonable!) advice, e.g. that it would be useful to get a PhD first. As it turned out, though, I did have a relatively easy path into working on AI safety - after my masters, I did an internship at FHI, and then worked as a research engineer on DeepMind's safety team for two years. I learned three important lessons during that period. The first was that, although I'd assumed that the field would make much more sense once I was inside it, that didn't really happen: it felt like there were still many unresolved questions (and some mistakes) in foundational premises of the field. The second was that the job simply wasn't a good fit for me (for reasons I'll discuss later on). The third was that I'd been dramatically underrating “soft skills” such as knowing how to make unusual things happen within bureaucracies. Due to a combination of these factors, I decided to switch career paths. I'm now a PhD student in philosophy of machine learning at Cambridge, working on understanding advanced AI with reference to the evolution of humans. By now I've written a lot about AI safety, including a report which I think is the most comprehensive and up-to-date treatment of existential risk from AGI. I expect to continue working in this broad area after finishing my PhD as well, although I may end up focusing on more general forecasting and futurism at some point. 
Lessons I think this has all worked out well for me, despite my mistakes, but often more because of luck (including the luck of having smart and altruistic friends) than my own decisions. So while I'm not sure how much I would change in hindsight, it's worth asking what would have been valuable to know in worlds where I wasn't so lucky. Here are five such things. 1. EA is trying to achieve something very difficult. A lot of my initial attraction towards EA was because it seemed like a slam-dunk case: here's an obvious i...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: EA needs consultancies, published by lukeprog on the effective altruism forum. Problem EA organizations like Open Phil and CEA could do a lot more if we had access to more analysis and more talent, but for several reasons we can't bring on enough new staff to meet these needs ourselves, e.g. because our needs change over time, so we can't make a commitment that there's much future work of a particular sort to be done within our organizations.[1] This also contributes to there being far more talented EAs who want to do EA-motivated work than there are open roles at EA organizations.[2] A partial solution? In the public and private sectors, one common solution to this problem is consultancies. They can be think tanks like the National Academies or RAND,[3] government contractors like Booz Allen or General Dynamics, generalist consulting firms like McKinsey or Deloitte, niche consultancies like The Asia Group or Putnam Associates, or other types of service providers such as UARCs or FFRDCs. At the request of their clients, these consultancies (1) produce decision-relevant analyses, (2) run projects (including building new things), (3) provide ongoing services, and (4) temporarily "loan" their staff to their clients to help with a specific project, provide temporary surge capacity, provide specialized expertise that it doesn't make sense for the client to hire themselves, or fill the ranks of a new administration.[4] (For brevity, I'll call these "analyses," "projects," "ongoing services," and "talent loans," and I'll refer to them collectively as "services.") This system works because even though demand for these services can fluctuate rapidly at each individual client, in aggregate across many clients there is a steady demand for the consultancies' many full-time employees, and there is plenty of useful but less time-sensitive work for them to do between client requests. Current state of EA consultancies Some of these services don't require EA talent, and can thus be provided for EA organizations by non-EA firms, e.g. perhaps accounting firms. But what about analyses and services that require EA talent, e.g. because they benefit from lots of context about the EA community, or because they benefit from habits of reasoning and moral intuitions that are far more common in the EA community than elsewhere?[5] Rethink Priorities (RP) has demonstrated one consultancy model: producing useful analyses specifically requested by EA organizations like Open Philanthropy across a wide range of topics.[6] If their current typical level of analysis quality can be maintained, I would like to see RP scale as quickly as they can. I would also like to see other EAs experiment with this model.[7] BERI offers another consultancy model, providing services that are difficult or inefficient for clients to handle themselves through other channels (e.g. university administration channels). There may be a few other examples, but I think not many.[8] Current demand for these services All four models require sufficient EA client demand to be sustainable. 
Fortunately, my guess is that demand for ≥RP-quality analysis from Open Phil alone (but also from a few other EA organizations I spoke to) will outstrip supply for the foreseeable future, even if RP scales as quickly as they can and several RP clones capable of ≥RP-quality analysis are launched in the next couple years.[9] So, I think more EAs should try to launch RP-style "analysis" consultancies now. However, for EAs to get the other three consultancy models off the ground, they probably need clearer evidence of sufficiently large and steady aggregate demand for those models from EA organizations. At least at first, this probably means that these models will work best for services that demand relatively "generalist" talent, perhaps corresponding ...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: The Cost of Rejection, published by Daystar Eld on the effective altruism forum. For those that don't know, I've worked as a therapist for the rationality and EA community for over two years now, first part time, then full time in early 2020. I often get asked about my observations and thoughts on what sorts of issues are particularly prevalent or unique to the community, and while any short answer to that would be oversimplifying the myriad of issues I've treated, I do feel comfortable saying that "concern with impact" is a theme that runs pretty wide and deep no matter what people come to sessions to talk about. Seeing how this plays out in various different ways has motivated me to write on some aspects of it, starting with this broad generalization; rejection hurts. Specifically, rejection from a job that's considered high impact (which, for many, implicitly includes all jobs with EA organizations) hurts a lot. And I think that hurt has a negative impact that goes beyond the suffering involved. In addition to basing this post off of my own observations, I've written it with the help of/on behalf of clients who have been affected by this, some of whom reviewed and commented on drafts. I. Premises There are a few premises that I'm taking for granted that I want to list out in case people disagree with any specific ones: The EA population is growing, as are EA organizations in number and size. This seems overall to be a very good thing. In absolute numbers, EA organizations are growing slower or at pace with the overall EA population. Even with massive increases in funding this seems inevitable, and also probably good? There are many high impact jobs outside of EA orgs that we would want people in the community to have. (By EA orgs I specifically mean organizations headed by and largely made up of people who self-identify as Effective Altruists, not just those using evidence-and-reason-to-do-the-most-good) ((Also there's a world in which more people self-identify as EAs and therefore more organizations are considered EA and by that metric it's bad that EA orgs are growing slower than overall population, but that's also not what I mean)) Even with more funding being available, there will continue to be many more people applying to EA jobs than getting them. I don't have clear numbers for this, but asking around at a few places got me estimates between ~47-124 applications for specific positions (one of which noted that ~¾ of them were from people clearly within and familiar with the EA community), and hundreds of applications for specific grants (at least once breaking a thousand). This is good for the organizations and community as a whole, but has bad side effects, such as: Rejection hurts, and that hurt matters. For many people, rejection is easily accepted as part of trying new things, shooting for the moon, and challenging oneself to continually grow. For many others, it can be incredibly demoralizing, sometimes to the point of reducing motivation to continue even trying to do difficult things. So when I say the hurt matters, I don't just mean that it's suffering and we should try to reduce suffering wherever we can. 
I also mean that as the number of EAs grows faster than the number of positions in EA orgs, the knock-on effects of rejection will slow community and org growth, particularly since: The number of EAs who receive rejections from EA orgs will likely continue to grow, both absolutely and proportionally. Hence, this article. II. Models There are a number of models I have for all of this that could be totally wrong. I think it's worth spelling them out a bit more so that people can point to more bits and let me know if they are important, or why they might not be as important as I think they are. Difficulty in Self Organization First, I think it's import...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Concerns with ACE's Recent Behavior, published by Hypatia on the effective altruism forum. Epistemic Status: I feel pretty confident that the core viewpoint expressed in this post is correct, though I'm less confident in some specific claims. I have not shared a draft of this post with ACE, and so it's possible I've missed important context from their perspective. EDIT: ACE board member Eric Herboso has responded with his personal take on this situation. He believes some points in this post are wrong or misleading. For example, he disputes my claim that ACE (as an organization) attempted to cancel a conference speaker. EDIT: Jakub Stencel from Anima International has posted a response. He clarifies a few points and offers some context regarding the CARE conference situation. Background In the past year, there has been some concern in EA surrounding the negative impact of “cancel culture”[1] and worsening discourse norms. Back in October, Larks wrote a post criticizing EA Munich's decision to de-platform Robin Hanson. The post was generally well-received, and there have been other posts on the forum discussing potential risks from social-justice oriented discourse norms. For example, see The Importance of Truth-Oriented Discussions in EA and EA considerations regarding increasing political polarization. I'm writing this post because I think some recent behavior from Animal Charity Evaluators (ACE) is a particularly egregious example of harmful epistemic norms in EA. This behavior includes: Making (in my view) poorly reasoned statements about anti-racism and encouraging supporters to support or donate to anti-racist causes and organizations of dubious effectiveness Attempting to cancel an animal rights conference speaker because of his views on Black Lives Matter, withdrawing from that conference because the speaker's presence allegedly made ACE staff feel unsafe, and issuing a public statement supporting its staff and criticizing the conference organizers Penalizing charities in reviews for having leadership and/or staff who are deemed to be insufficiently progressive on racial equity, and stating it won't offer movement grants funding to those who disagree with its views on diversity, equity, and inclusion[2]. Because I'm worried that this post could hurt my future ability to get a job in EAA, I'm choosing to remain anonymous. My goal here is to: a) Describe ACE's behavior in order to raise awareness and foster discussion, since this doesn't seem to have attracted much attention, and b) Give a few reasons why I think ACE's behavior has been harmful, though I'll be brief since I think similar points have been better made elsewhere. I also want to be clear that I don't think ACE is the only bad actor here, as other areas of the EAA community have also begun to embrace harmful social-justice derived discourse norms[3].
However, I'm focusing my criticism on ACE here because: It positions itself as an effective altruism organization, rather than a traditional animal advocacy organization It is well known and generally respected by the EA community It occupies a powerful position within the EAA movement, directing millions of dollars in funding each year and conducting a large fraction of the movement's research And before I get started, I'd also like to make a couple caveats: I think ACE does a lot of good work, and in spite of this recent behavior, I think its research does a lot to help animals. I'm also not trying to “cancel” ACE or any of its staff. But I do think the behavior outlined in this post is bad enough that ACE supporters should be vocal about their concerns and consider withholding future donations. I am not suggesting that racism, discrimination, inequality, etc. shouldn't be discussed, or that addressing these important problems isn't EA-worthy. The EA commu...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Introducing Probably Good: A New Career Guidance Organization, published by omernevo, sella on the effective altruism forum. We're excited to announce the launch of Probably Good, a new organization that provides career guidance intended to help people do as much good as possible. Context For a while, we have felt that there was a need for a more generalist careers organization than 80,000 Hours — one which is more agnostic regarding different cause areas and might provide a different entry point into the community to people who aren't a good fit for 80K's priority areas. Following 80,000 Hours' post about what they view as gaps in the careers space, we contacted them about how a new organization could effectively fill some of those gaps. After a few months of planning, asking questions, writing content, and interviewing experts, we're almost ready to go live (we aim to start putting our content online in 1-2 months) and would love to hear more from the community at large. How You Can Help The most important thing we'd like from you is feedback. Please comment on this post, send us personal messages on the Forum, email us (omer at probablygood dot org, sella at probablygood dot org), or set up a conversation with us via videoconference. We would love to receive as much feedback as we can get. We're particularly interested in hearing about things that you, personally, would actually read // use // engage with, but would appreciate absolutely any suggestions or feedback. Probably Good Overview The most updated version of the overview is here. Following is the content of the overview at the time this announcement is posted. Overview Probably Good is a new organization that provides career guidance intended to help people do as much good as possible. We will start by focusing on online content and a small number of 1:1 consultations. We will later consider other forms of career guidance such as a job board, scaling up the 1:1 consultations, more in-depth research, etc. Our approach to guidance is focused on how to help each individual maximize their career impact based on their values, personal circumstances, and motivations. This means that we will accommodate a wide range of preferences (for example, different cause areas), as long as they're consistent with our principles, and try to give guidance in accordance with those preferences. Therefore, we'll be looking at a wide range of impactful careers under different views on what to optimize for or under various circumstantial constraints, such as how to maximize impact within specific career paths, within specific geographic regions, through earning to give, or within more specific situations (e.g. making an impact from within a large corporation). There are other organizations in this space, the most well-known being 80,000 Hours. We think our approach is complementary to 80,000 Hours' current approach: Their guidance mostly focuses on people aiming to work on their priority problem areas, and we would be able to guide high quality candidates who aren't. We would direct candidates to 80,000 Hours or other specialized organizations (such as Animal Advocacy Careers) if they're a better fit for their principles and priority paths. This characterization of our target audience is very broad; this has two main motivations. 
First, as part of our experimental approach: we are interested in identifying which cause areas currently have the most unserved demand. By providing preliminary value in multiple areas of expertise, we hope to more efficiently identify where our investment would be most useful, and we may specialize (in a more informed manner) in the future. The second motivation for this is that one possibility for specialization is as a “router” interface - helping individuals make preliminary decisions tailored to the...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: What Makes Outreach to Progressives Hard, published by Cullen_OKeefe on the effective altruism forum. This post summarizes some of my conclusions on things that can make EA outreach to progressives hard, as well as some tentative recommendations on techniques for making such outreach easier. To be clear, this post does not argue or assume that outreach to progressives is harder than outreach to other political ideologies.[1] Rather, the point of this post is to highlight identifiable, recurring memes/thought patterns that cause Progressives to reject or remain skeptical of EA. My Background (Or, Why I am Qualified to Talk About This) Nothing in here is based on systematic empirical analysis. It should therefore be treated as highly uncertain. My analysis here draws on two sources: Reflecting on my personal journey as someone who transitioned from a very social-justice-y worldview to a more EA-aligned one (and therefore understands the former well), who is still solidly left-of-center, and who still retains contacts in the social justice (SJ) world; and My largely failed attempts as former head of Harvard Law School Effective Altruism to get progressive law students to make very modest giving commitments to GiveWell charities. Given that the above all took place in America, this post is most relevant to American political dynamics (especially at elite universities), and may very well be inapplicable elsewhere.[2] Readers may worry that I am being a bit uncharitable here. However, I am not trying to present the best progressive objections to EA (so as to discover the truth), but rather the most common ones (so as to persuade people better). In other words, this post is about marketing and communications, not intellectual criticisms. Since I think many of the common progressive objections to EA are bad, I will attempt to explain them in (what I take to be) their modal or undifferentiated form, not steelman them. Relatedly, when I say "progressives" through the rest of this post, I am mainly referring to the type of progressive who is skeptical of EA, not all progressives. There are many amazing progressive EAs, who do not see these two ideologies to be in conflict whatsoever. And many non-EA progressives will believe few of these things. Nevertheless, I do think I am pointing to a real set of memes that are common—but definitely not universal—among the American progressive left as of 2021. This is sufficient for understanding the messaging challenges facing EAs within progressive institutions. Reasons Progressives May Not Like EA Legacy of Paternalistic International Aid Many progressives have a strong prior against international aid, especially private international aid. Progressives are steeped in—and react to—stories of paternalistic international aid,[3] much in the way that EAs are steeped in stories of ineffective aid (e.g., Playpumps). Interestingly, EAs and progressives will often (in fact, almost always) agree on what types of aid are objectionable. However, we tend to take very different lessons away from this. EAs will generally take away the lesson that we have to be super careful about which interventions to fund, because funding the wrong intervention can be ineffective or actively harmful. 
We put the interests of our intended beneficiaries first by demanding that charities demonstrably advance their beneficiaries' interests as cost-effectively as possible. Progressives tend to take a very different lesson from this. They tend to see this legacy as objectionable due to the very nature of the relationship between aid donors and recipients. Roughly, they may believe that the power differential between wealthy donors from the Global North and aid recipients in developing countries makes unobjectionable foreign aid either impossible or, at the very least, extr...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Snapshot of a career choice 10 years ago, published by Julia_Wise on the effective altruism forum. Here's a little episode from EA's history, about how much EA career advice has changed over time. Ten years ago, I wrote an angsty LessWrong post called “Career choice for a utilitarian giver.” (“Effective altruism” and “earning to give” didn't exist as terms at that point.) At the time, there was a lot less funding in EA, and the emphasis was very much on donation rather than direct work. Donation was the main way I was hoping to have an impact. I was studying to become a social worker, but I had become really worried that I should try for some higher-earning career so I could donate more. I thought becoming a psychiatrist was my best career option, since it paid significantly more than the social work career I was on track towards, and I thought I could be good at it. I prioritized donation very highly, and I estimated that going into medicine would allow me to earn enough to save 2500 lives more than I could by staying on the same path. (That number is pretty far wrong, but it's what I thought at the time.) The other high-earning options I could think of seemed to require quantitative skills I didn't have, or a level of ambition and drive I didn't have. A few people did suggest that I might work on movement building, but for some reason it didn't seem like a realistic option to me. There weren't existing projects that I could slot into, and I'm not much of an entrepreneurial type. The post resulted in me talking to a career advisor from a project that would eventually become 80,000 Hours. The advisor and I talked about how I might switch fields and try to get into medical school. I was trying not to be swayed by the sunk cost of the social work education I had already completed, but I also just really didn't want to go through medical school and residency. My strongest memory of that period is lying on the grass at my grad school, feeling awful about not being willing to put the years of work into earning more money. There were lives at stake. I was going to let thousands of people die from malaria because I didn't want to work hard and go to medical school. I felt horribly guilty. And I also thought horrible guilt was not going to be enough to motivate me through eight years of intense study and residency. After a few days of crisis, I decided to stop thinking about it all the time. I didn't exactly make a conclusive decision, but I didn't take any steps to get into med school, and after a few more weeks it was clear to me that I had no real intention to change paths. So I continued to study social work, with the belief that I was doing something seriously wrong. (To be clear, nobody was telling me I should feel this way, but I wasn't living up to my own standards.) In the meantime, I started writing Giving Gladly and continued hosting dinners at my house where people could discuss this kind of thing. The Boston EA group grew out of that. It didn't occur to me that I could work for an EA organization without moving cities. But four years later, CEA was looking for someone to do community work in EA and was willing to consider someone remote. Because of my combination of EA organizing, writing, and experience in social work, I turned out to be a good fit. I was surprised that they were willing to hire someone remote. 
Although I struggled at first to work out what exactly I should be doing, over time it was clear to me that I could be much more useful here than either in social work or earning to give. I don't think there's a clear moral of the story about what this means other people should do, but here are some reflections: I look back on this and think, wow, we had very little idea how to make good use of a person like me. I wonder how many other square pegs are ou...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Lessons from Running Stanford EA and SERI, published by kuhanj on the effective altruism forum. Introduction and summary Who knew a year of work could turn a 1-organizer EA group into one of the largest EA groups in the world? Especially considering that the person spearheading this growth had little experience running much of anything relevant, and very suboptimal organization skills (It's me, welp). I definitely didn't, when I started running Stanford EA in mid-2019. But I did know it was worth a shot; many universities are absolutely insane opportunities for multiplying your impact--where else can you find such dense clusters of people with the values, drive, talent, time, and career flexibility to dedicate their careers to tackling the world's most pressing, difficult, large-scale problems? Stanford EA had effectively one real organizer (Jake McKinnon), and our only real programming was weekly discussions (which weren't very well attended) and the one-off talk for a few years. This was the case until 2019, when Jake started prioritizing succession, spending lots of time talking to a few new intrigued-but-not-yet-highly-involved members (like me!) to get more involved about the potential impact we could have doing community building and for the world more broadly. Within a year, Stanford EA grew to be one of the largest groups in EA. That first year of work turned Stanford EA into a very large group, and in the second year since, I've been super proud of what our team has accomplished: Getting SERI (the Stanford Existential Risks Initiative) off the ground (which wouldn't have been possible without our faculty directors and Open Phil's support), which has inspired other x-risk initiatives at Cambridge and (coming this year) at Harvard/MIT. Running all of CEA's Virtual Programs for their first global iterations, introducing over 500 people to key concepts in EA Getting ~10 people to go from little knowledge of EA to being super dedicated to pursuing EA-guided careers, and boosting the networks, motivation, and knowledge of 5+ more who were already dedicated (At Stanford, and hopefully much more outside of Stanford since we ran a lot of global programming) Running a global academic conference, together with other SERI organizers. Running a large majority of all x-risk/longtermist internships in the world this year, together with other SERI organizers (though this is in part due to FHI being unable to run their internship this summer) Founding the Stanford Alt. Protein Project, which recently ran a well-received, nearly 100-person class on alternative proteins, and has also set up connections/grants between three Stanford professors and the Good Food Institute to conduct alternative protein research. Helping several other EA groups get off the ground, and running intro to EA talks and fellowships with them I say this, not to brag, but because I think it shows several important things: There's an incredible amount of low-hanging fruit in this area. The payoffs to doing good community-building work are huge. See also these posts for additional evidence and discussion. Maybe you think EAs are over-determined? I don't think so; perhaps half of the hardcore EAs in our group don't seem to have been (and this proportion could be even higher with more and better community building). You (yes, you!) can do similar things. 
We're not that special--we're mostly an unorganized team of students who care a lot. We still have so much to learn, but I think we got some things right. What's the sEAcret sauce? I try to distill it in this post, as a mix of mindsets, high-level goals, and tactics. Here's the short version/summary: EA groups have lots of room for growth and improvement, as evidenced by the rapid growth of Stanford EA (despite it still being very suboptimally run and lots o...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: The motivated reasoning critique of effective altruism, published by Linch on the effective altruism forum. Epistemic status: Half-baked at best I have often been skeptical of the value of a) critiques against effective altruism and b) fully general arguments that seem like they can apply to almost anything. However, as I am also a staunch defender of hypocrisy, I will now hypocritically attempt to make the case for applying a fully general critique to effective altruism. In this post, I will claim that: Motivated reasoning inhibits our ability to acquire knowledge and form reasoned opinions. Selection bias in who makes which arguments significantly exacerbates the problem of motivated reasoning Effective altruism should not be assumed to be above these biases. Moreover, there are strong reasons to believe that incentive structures and institutions in effective altruism exacerbate rather than alleviate these biases. Observed data and experiences in effective altruism support this theory; they are consistent with an environment where motivated reasoning and selection biases are rampant. To the extent that these biases (related to motivated reasoning) are real, we should expect the harm done to our ability to form reasoned opinions to also seriously harm the project of doing good. I will use the example of cost-effectiveness analyses as a jumping board for this argument. (I understand that effective altruism, especially outside of global health and development, has largely moved away from explicit expected value calculations and cost-effectiveness analyses. However, I do not believe this change invalidates my argument (see Appendix B)). I also list a number of tentative ways to counteract motivated reasoning and selection bias in effective altruism: Encourage and train scientific/general skepticism in EA newcomers. Try marginally harder to accept newcomers, particularly altruistically motivated ones with extremely high epistemic standards As a community, fund and socially support external (critical) cost-effectiveness analyses and impact assessments of EA orgs Within EA orgs, encourage and reward dissent of various forms Commit to individual rationality and attempts to reduce motivated reasoning Maybe encourage a greater number of people to apply and seriously consider jobs outside of EA or EA-adjacent orgs Maintain or improve the current culture of relatively open, frequent, and vigorous debate Foster a bias towards having open, public discussions of important concepts, strategies, and intellectual advances Motivated reasoning: What it is, why it's common, why it matters By motivated reasoning, I roughly mean what Julia Galef calls “soldier mindset” (H/T Rob Bensinger): In directionally motivated reasoning, often shortened to "motivated reasoning", we disproportionately put our effort into finding evidence/reasons that support what we wish were true. Or, from Wikipedia: emotionally biased reasoning to produce justifications or make decisions that are most desired rather than those that accurately reflect the evidence I think motivated reasoning is really common in our world. As I said in a recent comment: My impression is that my interactions with approximately every entity that perceives themself as directly doing good outside of EA is that they are not seeking truth, and this systematically corrupts them in important ways. 
Non-random examples that come to mind include public health (on covid, vaping, nutrition), bioethics, social psychology, developmental econ, climate change, vegan advocacy, religion, US Democratic party, and diversity/inclusion. Moreover, these problems aren't limited to particular institutions: these problems are instantiated in academia, activist groups, media, regulatory groups and "mission-oriented" companies. What does motivated reasoning loo...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?, published by Luisa_Rodriguez on the effective altruism forum. Epistemic transparency: Confidence in conclusions varies throughout. I give rough indicators of my confidence at the section level by indicating the amount of time I spent researching/thinking about each particular subtopic, plus a qualitative description of the types of sources I rely on. In general, I consider it a first step toward understanding this threat from civilizational collapse — not a final or decisive one. Acknowledgements This research was funded by the Forethought Foundation. It was written by Luisa Rodriguez under the supervision of Arden Koehler and Lewis Dartnell. Thanks to Arden Koehler, Max Daniel, Michael Aird, Matthew van der Merwe, Rob Wiblin, Howie Lempel, and Kit Harris, who provided valuable comments. Thanks also to William MacAskill for providing guidance and feedback on the larger project. Summary In this post, I explore the probability that if various kinds of catastrophe caused civilizational collapse, this collapse would fairly directly lead to human extinction. I don't assess the probability of those catastrophes occurring in the first place, the probability they'd lead to indefinite technological stagnation, or the probability that they'd lead to non-extinction existential catastrophes (e.g., unrecoverable dystopias). I hope to address the latter two outcomes in separate posts (forthcoming). My analysis is organized into case studies: I take three possible catastrophes, defined in terms of the direct damage they would cause, and assess the probability that they would lead to extinction within a generation. There is a lot more someone could do to systematically assess the probability that a catastrophe of some kind would lead to human extinction, and what I've written up is certainly not conclusive. But I hope my discussion here can serve as a starting point as well as lay out some of the main considerations and preliminary results. Note: Throughout this document, I'll use the following language to express my best guess at the likelihood of the outcomes discussed: [table defining these likelihood terms omitted] Case 1: I think it's exceedingly unlikely that humanity would go extinct (within ~a generation) as a direct result of a catastrophe that causes the deaths of 50% of the world's population, but causes no major infrastructure damage (e.g. damaged roads, destroyed bridges, collapsed buildings, damaged power lines, etc.) or extreme changes in the climate (e.g. cooling). The main reasons for this are: Although civilization's critical infrastructure systems (e.g. food, water, power) might collapse, I expect that several billion people would survive without critical systems (e.g. industrial food, water, and energy systems) by relying on goods already in grocery stores, food stocks, and fresh water sources. After a period of hoarding and violent conflict over those supplies and other resources, I expect those basic goods would keep a smaller number of remaining survivors alive for somewhere between a year and a decade (which I call the grace period, following Lewis Dartnell's The Knowledge).
After those supplies ran out, I expect several tens of millions of people to survive indefinitely by hunting, gathering, and practicing subsistence agriculture (having learned during the grace period any necessary skills they didn't possess already). Case 2: I think it's very unlikely that humanity would go extinct as a direct result of a catastrophe that caused the deaths of 90% of the world's population (leaving 800 million survivors), major infrastructure damage, and severe climate change (e.g. nuclear winter/asteroid impact). While I expect that millions would starve to death in the wake of something like a globa...
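As a rough illustration of the grace-period arithmetic above, here is a minimal sketch. Every input is a hypothetical placeholder rather than a figure from the post, which itself only commits to a range of roughly a year to a decade.

```python
# Purely illustrative sketch of the "grace period" logic described above.
# All numbers are hypothetical placeholders, not figures from the post.

months_of_stored_food = 6      # assumed: stored food equal to ~6 months of normal global consumption
survivor_fraction = 0.5        # Case 1: 50% of the population dies
drawdown_fraction = 0.5        # assumed: share of survivors still drawing on the stores
                               # after hoarding and conflict shrink the group further

grace_period_months = months_of_stored_food / (survivor_fraction * drawdown_fraction)
print(f"Grace period under these assumptions: ~{grace_period_months:.0f} months")
# -> ~24 months, i.e. on the order of a year or two, within the post's one-to-ten-year range.
```

The point of the sketch is only that plausible combinations of stored food and surviving population land inside the stated range; different assumptions shift the answer but not the order of magnitude.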
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Seven things that surprised us in our first year working in policy - Lead Exposure Elimination Project, published by Jack, LuciaC on the effective altruism forum. Following the interest in our post announcing the launch of Lead Exposure Elimination Project (LEEP) eight months ago, we are now sharing an update unpacking seven findings that have surprised us in our experiences so far. We hope these will be relevant to others interested in policy change or starting a new project or charity. For those who are not familiar with LEEP, we are a Charity Entrepreneurship-incubated NGO advocating for lead paint regulation in countries with large and growing burdens of lead poisoning from paint. The reasons we focus on reducing lead exposure are outlined in our introduction post. In short, we believe the problem to be neglected and tractable, and that the intervention has the potential to improve lives with a high level of cost-effectiveness. 1. The speed of progress with government has been less of a limiting factor than expected In our first target country, Malawi, we had a number of uncertainties about how quickly progress could be made. We were unsure if we would be able to get in touch with the relevant government officials, if they would be willing to engage, and whether our advocacy would lead to action in a reasonable timeframe. We found that stakeholders were far more willing to engage than we had expected. Even without existing connections or introductions, government officials replied to our emails and agreed to meetings. Beyond getting initial meetings, the tractability of achieving change was also higher than expected. After we carried out a study demonstrating high levels of lead in paint, the Malawi Bureau of Standards agreed to begin monitoring and enforcement for lead content of paint (using pre-existing but unimplemented standards), and have since confirmed that they have begun. This change occurred within three months of beginning advocacy in Malawi - significantly faster than our expected timeframe of 1-2 years. We also found a surprising willingness to cooperate from the local paint industry. Since presenting to the paint manufacturers our findings of lead in paint and the benefits of switching to non-lead, they have engaged with us and, with our support, are identifying non-lead alternative ingredients. We will be carrying out a repeat paint study in a few months to measure how this progress relates to levels of lead paint available on the market. There are a number of factors that we think contributed to this faster traction and high level of stakeholder engagement. One is the new country-specific data that we were able to generate through a small paint sampling study. We believe that this data provided an effective opener to communications and also convincingly demonstrated that lead paint is a problem in Malawi. Our government contacts confirmed that this Malawi-specific evidence was key for their decision to take action. Generating new country-specific data through small-scale local studies seems to be an effective advocacy strategy that may be cross-applicable to other areas of policy. Other reasons why stakeholder engagement has been greater than expected might be specific to lead paint regulation advocacy.
For example, lead paint regulation is not particularly expensive for governments to implement or for paint manufacturers to comply with, reducing the barrier to action for both stakeholder groups. Also, there is a strong and established evidence-base for the harms of childhood lead poisoning, increasing consensus on the issue. As well as this, there is a growing awareness of a global movement towards lead paint regulation, including examples of neighbouring countries and the support of respected international bodies such as the WHO. This may facilitate ...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Why I'm concerned about Giving Green, published by alexrjl on the effective altruism forum. Disclosure I am a forecaster, and occasional independent researcher. I also work in a volunteer capacity for SoGive, which has included some analysis of the climate space in order to provide advice to some individual donors interested in this area. This work has involved ~20 hours of conference calls over the last year with donors and organisations, one of which was the Clean Air Task Force, although for the last few months my primary focus has been internal work on moral weights. I began the research for this piece in a personal capacity, and the opinions below are my own, not those of SoGive. I received input on some early drafts, for which I am extremely grateful, from Sanjay Joshi (SoGive's founder), as well as Aaron Gertler and Linch Zhang; however, I again want to emphasise that the opinions expressed, and especially any mistakes in the below, are mine alone. I'm also very grateful to Giving Green for taking the time to have a call with me about my thinking here. I provided a copy of the post to them in advance, and they have indicated that they'll be providing a response to the below. Overview Big potential I think that Giving Green has the potential to be incredibly impactful, not just on the climate but also on the EA/Effective Giving communities. Many people, especially young people, are extremely concerned about climate change, and very excited to act to prevent it. Meta-analysis of climate charities therefore has the chance to have large first-order effects, by redirecting donations to the most effective organisations within the climate space. It also, if done well, has the potential to have large second-order effects, by introducing people to the huge multiplier on their impact that cost-effectiveness research can have, and through that to the wider EA movement. I note that at least one current CEA staff member took this exact path into EA. With this said, I am concerned about some aspects of Giving Green in its current form, and having discussed these concerns with them, felt it was worth publishing the below. Concerns about research quality Giving Green's evaluation process involves substantial evidence collection and qualitative evaluation, but eschews quantitative modelling, in favour of a combination of metrics which do not have a simple relationship to cost-effectiveness. In three cases, detailed below, I have reservations about the strength of Giving Green's recommendations. Giving Green also currently recommends the Clean Air Task Force, which I enthusiastically endorse, but which Founders Pledge had identified as promising before Giving Green's founding, and Tradewater, which I have not evaluated. What this boils down to is that in every case where I investigated an original recommendation made by Giving Green, I was concerned by the analysis to the point where I could not agree with the recommendation. Despite the unusual approach, especially compared to standard EA practice, the research and methodology are presented by Giving Green in a way which implies a level of concreteness comparable to major existing charity evaluators such as GiveWell.
As well as the quantitative aspect mentioned above, major evaluators are notable for the high degree of rigour in their modelling, with arguments being carefully connected to concrete outcomes, and explicit consideration of downside risks and ways that they could be wrong. One important part of the more usual approach is that it makes research much easier to critique, as causal reasoning is laid out explicitly, and key assumptions are identified and quantified. When research lacks this style, not only does the potential for error increase, but it becomes much more difficult and time-intensive to critique, meaning errors...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Good news on climate change, published by John G. Halstead, jackva on the effective altruism forum. This post is about how much warming we should expect on current policy and assuming emissions stop at 2100. We argue the risk of extreme warming (>6 degrees) conditional on these assumptions now looks much lower than it once did. Crucially, the point of this post is about the direction of an update, not an absolute assessment of risk -- indeed, the two of us disagree a fair amount on the absolute risk, but strongly agree on the direction and relative magnitude of the update. The damage of climate change depends on three things: how much we emit; the warming we get, conditional on emissions; and the impact of a given level of warming. The late and truly great economist Martin Weitzman argued for many years that the catastrophic risk from climate change was greater than commonly recognised. In 2015, Weitzman, along with Gernot Wagner, an economist now at New York University, released Climate Shock, which argued that the chance of more than 6 degrees of warming is worryingly high. Using the International Energy Agency's estimate of the most likely level of emissions on current policy, and the IPCC's estimate of climate sensitivity, Wagner and Weitzman estimated that the chance of more than 6 degrees is 11%, on current policy.[1] In recent years, the chance of more than 6 degrees of warming on current policy has fallen quite substantially for two reasons: emissions now look likely to be lower, and the right tails of climate sensitivity have become thinner. 1. Good news on emissions For a long time the climate policy and impacts community was focused on one possible ‘business as usual' emissions scenario known as Representative Concentration Pathway 8.5 (RCP8.5), a worst case against which climate action would be compared. Each representative concentration pathway can be paired with a socioeconomic story of how the world will develop in key areas such as population, income, inequality and education. These are known as ‘shared socioeconomic pathways' (SSPs). The latest IPCC report outlines five shared socioeconomic pathways. The only one that is compatible with RCP8.5 is a high economic growth fossil fuel-powered future called Shared Socioeconomic Pathway 5 (SSP5). In combination, SSP5 and RCP8.5 are called ‘SSP5-8.5'. On SSP5-8.5, we would emit a further 2.2 trillion tonnes of carbon by 2100, on top of the 0.65 trillion tonnes we have emitted so far.[2] For reference, we currently put about 10 billion tonnes of carbon into the atmosphere from fossil fuel burning and industry.[3] The other emissions pathways are shown below: IPCC, Climate Change 2021: The Physical Science Basis, Assessment Report 6, Summary for Policymakers: Figure SPM.4 However, for a variety of reasons, SSP5-RCP8.5 now looks increasingly unlikely as a ‘business as usual' emissions pathway. There are several reasons for this. Firstly, the costs of renewables and batteries have declined extremely quickly. Historically, models have been too pessimistic on cost declines for solar, wind and batteries: out of nearly 3,000 Integrated Assessment Models, none projected that solar investment costs (different to the levelised costs shown below) would decline by more than 6% per year between 2010 and 2020.
In fact, they declined by 15% per year.[4] This means that renewables will play an increasing role in energy supply in the future. In part for this reason, energy systems models now suggest that high fossil fuel futures are much less likely. For example, the chart below shows emissions on current policies and pledged policies, according to the International Energy Agency. Source: Hausfather and Peters, ‘Emissions – the ‘business as usual' story is misleading', Nature, 2020. The chart above from Hausfather and Peters (2020) relies...
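To make the gap between the modelled and observed solar cost declines concrete, here is a short compounding check. It assumes ten years of compounding at the two annual rates quoted above; the exact figures in the underlying studies may differ slightly.

```python
# Compounding the two annual decline rates quoted above over the 2010-2020 decade.
years = 10
modelled_decline = 0.06   # steepest annual decline projected by the integrated assessment models
observed_decline = 0.15   # annual decline actually observed, per the post

modelled_cost = (1 - modelled_decline) ** years   # fraction of the 2010 cost remaining in 2020
observed_cost = (1 - observed_decline) ** years

print(f"2020 cost as a share of 2010 cost: modelled ~{modelled_cost:.0%}, observed ~{observed_cost:.0%}")
# Modelled: ~54% of the 2010 cost remains; observed: ~20%, i.e. roughly a five-fold fall.
```

The difference between a roughly two-fold and a roughly five-fold cost fall over a single decade is a large part of why high fossil fuel futures now look less likely than the models assumed.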
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Small and Vulnerable, published by deluks917 on the effective altruism forum. Anyone who is dedicating the majority of their time or money to Effective Altruism needs to ask themselves why. Why not focus on enjoying life and spending your time doing what you love most? Here is my answer: I have a twin sister but neither of us had many other friends growing up. From second to fifth grade we had none. From sixth to eighth we had one friend. As you might guess I was bullied quite badly. Multiple teachers contributed to this. Despite having no friends my parents wanted us to be normal. They pressured me to play sports with the boys in the neighborhood. I was unable to play with an acceptable level of skill and was not invited to the games anyway. But we were still forced to go 'play outside' after school. We had to find ways to kill time. Often we literally rode our bicycles in a circle in a parking lot. We were forced to 'play outside' for hours most days and even longer on weekends. I was not even allowed to bring a book outside though sometimes I would hide them outside at night and find them the next day. Until high school, I had no access to the internet. After dinner, I could watch TV, read and play video games. These were the main sources of joy in my childhood. Amazingly my mom made fun of her children for being weirdos. My sister used to face a wall and stim with her fingers when she was overwhelmed. For some reason, my mom interpreted this as 'OCD'. So she made up a song titled 'OCD! Do you mean me?' It had several verses! This is just one, especially insane, example. My dad liked to 'slap me around'. He usually did not hit me very hard but he would slap me in the face all the time. He also loved to call me 'boy' instead of my name. He claims he got this idea from Tarzan. It took me years to stop flinching when people raised their hands or put them anywhere near my face. I have struggled with gender since childhood. My parents did not tolerate even minor gender nonconformity like growing my hair out. I would get hit reasonably hard if I insisted on something as 'extreme' as crossing my legs 'like a girl' in public. I recently started HRT and already feel much better. My family is a lot of the reason I delayed transitioning. If you go by the checklist I have quite severe ADHD. 'Very often' seemed like an understatement for most of the questions. My ADHD was untreated until recently. I could not focus on school or homework so trying to do my homework took way too much time. I was always in trouble in school and considered a very bad student. It definitely hurts when authority figures constantly, and often explicitly, treat you like a fuck up and a failure who can't be trusted. But looking back it seems amazing I was considered such a bad student. I love most of the subjects you study in school! When I finally got access to the internet I spent hours per day reading Wikipedia articles. I still spend a lot of time listening to lectures on all sorts of subjects, especially history. Why were people so cruel to a little child who wanted to learn things? Luckily things improved in high school. Once I had more freedom and distance from my parents my social skills improved a huge amount. In high school, I finally had internet access which helped an enormous amount.
My parents finally connected our computer at home to the internet because they thought my sister and I needed it for school. I also had access to the computers in the high school library. By my junior year in high school, I was not really unpopular. Ironically my parents' overbearing pressure to be a 'normal kid' probably prevented me from having a social life until I got a little independence. Sadly I was still constantly in trouble in school throughout my high school years. The abuse at home was very bad. But,...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Some quick notes on "effective altruism", published by Jonas Vollmer on the effective altruism forum. Introduction I have some concerns about the "effective altruism" branding of the community. I recently posted them as a comment, and some people encouraged me to share them as a full post instead, which I'm now doing. I think this conversation is most likely not particularly useful or important to have right now, but there's some small chance it could be pretty valuable. This post is based on my personal intuition and anecdotal evidence. I would put more trust in well-run surveys of the right kinds of people or other more reliable sources of evidence. "Effective Altruism" sounds self-congratulatory and arrogant to some people: Calling yourself an "altruist" is basically claiming moral superiority, and anecdotally, my parents and some of my friends didn't like it for that reason. People tend to dislike it if others are very public with their altruism, perhaps because they perceive them as a threat to their own status (see this article, or do-gooder derogation against vegetarians). Other communities and philosophies, e.g., environmentalism, feminism, consequentialism, atheism, neoliberalism, longtermism don't sound as arrogant in this way to me. Similarly, calling yourself "effective" also has an arrogant vibe, perhaps especially among professionals in relevant areas. E.g., during the Zurich ballot initiative, officials at the city of Zurich unpromptedly asked me why I consider them "ineffective", indicating that the EA label basically implied to them that they were doing a bad job. I've also heard other professionals in different contexts react similarly. Sometimes I also get sarcastic "aaaah, you're the effective ones, you figured it all out, I see" reactions. "Effective altruism" sounds like a strong identity: Many people want to keep their identity small, but EA sounds like a particularly strong identity: It's usually perceived as both a moral commitment, a set of ideas, and a community. By contrast, terms like "longtermism" are somewhat weaker and more about the ideas per se. Perhaps partly because of this, at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don't self-identify as "effective altruists", despite self-identifying, e.g., as feminists, utilitarians, or atheists. I don't think the terminology was the primary concern for everyone, but it may play a role for several individuals. In general, it feels weirdly difficult to separate agreement with EA ideas from the EA identity. The way we use the term, being an EA or not is often framed as a binary choice, and it's often unclear whether one identifies as part of the community or agrees with its ideas. Some further, less important points: "Effective altruism" sounds more like a social movement and less like a research/policy project. The community has changed a lot over the past decade, from "a few nerds discussing philosophy on the internet" with a focus on individual action to larger and respected institutions focusing on large-scale policy change, but the name still feels reminiscent of the former. A lot of people don't know what "altruism" means. "Effective altruism" often sounds pretty awkward when translated to other languages. That said, this issue also affects a lot of the alternatives. 
We actually care about cost-effectiveness or efficiency (i.e., impact per unit of resource input), not just about effectiveness (i.e., whether impact is non-zero). This sometimes leads to confusion among people who first hear about the term. Taking action on EA issues doesn't strictly require altruism. While I think it's important that key decisions in EA are made by people with a strong moral motivation, involvement in EA should be open to a lot of people, even if th...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: All Possible Views About Humanity's Future Are Wild, published by Holden Karnofsky on the effective altruism forum. This is a linkpost. Audio version is here. Summary: In a series of posts starting with this one, I'm going to argue that the 21st century could see our civilization develop technologies allowing rapid expansion throughout our currently-empty galaxy. And thus, that this century could determine the entire future of the galaxy for tens of billions of years, or more. This view seems "wild": we should be doing a double take at any view that we live in such a special time. I illustrate this with a timeline of the galaxy. (On a personal level, this "wildness" is probably the single biggest reason I was skeptical for many years of the arguments presented in this series. Such claims about the significance of the times we live in seem "wild" enough to be suspicious.) But I don't think it's really possible to hold a non-"wild" view on this topic. I discuss alternatives to my view: a "conservative" view that thinks the technologies I'm describing are possible, but will take much longer than I think, and a "skeptical" view that thinks galaxy-scale expansion will never happen. Each of these views seems "wild" in its own way. Ultimately, as hinted at by the Fermi paradox, it seems that our species is simply in a wild situation. Before I continue, I should say that I don't think humanity (or some digital descendant of humanity) expanding throughout the galaxy would necessarily be a good thing - especially if this prevents other life forms from ever emerging. I think it's quite hard to have a confident view on whether this would be good or bad. I'd like to keep the focus on the idea that our situation is "wild." I am not advocating excitement or glee at the prospect of expanding throughout the galaxy. I am advocating seriousness about the enormous potential stakes. My view This is the first in a series of pieces about the hypothesis that we live in the most important century for humanity. In this series, I'm going to argue that there's a good chance of a productivity explosion by 2100, which could quickly lead to what one might call a "technologically mature"[1] civilization. That would mean that: We'd be able to start sending spacecraft throughout the galaxy and beyond. These spacecraft could mine materials, build robots and computers, and construct very robust, long-lasting settlements on other planets, harnessing solar power from stars and supporting huge numbers of people (and/or our "digital descendants"). See Eternity in Six Hours for a fascinating and short, though technical, discussion of what this might require. I'll also argue in a future piece that there is a chance of "value lock-in" here: whoever is running the process of space expansion might be able to determine what sorts of people are in charge of the settlements and what sorts of societal values they have, in a way that is stable for many billions of years.[2] If that ends up happening, you might think of the story of our galaxy[3] like this. I've marked major milestones along the way from "no life" to "intelligent life that builds its own computers and travels through space." Thanks to Ludwig Schubert for the visualization.
Many dates are highly approximate and/or judgment-prone and/or just pulled from Wikipedia (sources here), but plausible changes wouldn't change the big picture. The ~1.4 billion years to complete space expansion is based on the distance to the outer edge of the Milky Way, divided by the speed of a fast existing human-made spaceship (details in spreadsheet just linked); IMO this is likely to be a massive overestimate of how long it takes to expand throughout the whole galaxy. See footnote for why I didn't use a logarithmic axis.[4] ??? That's crazy! According to me, there's a dec...
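The ~1.4 billion year figure above can be roughly reproduced with a back-of-envelope calculation. The distance and spacecraft speed below are assumptions of this sketch (the post's linked spreadsheet has its own inputs): a galaxy-crossing distance of about 75,000 light-years and a cruise speed around that of Voyager 1, roughly 17 km/s.

```python
# Back-of-envelope reproduction of the "~1.4 billion years" expansion figure.
# Inputs are assumptions of this sketch, not taken from the post's spreadsheet:
# distance to the far edge of the Milky Way ~75,000 light-years,
# spacecraft speed ~17 km/s (roughly Voyager 1's cruise speed).

LIGHT_YEAR_KM = 9.461e12
SECONDS_PER_YEAR = 3.156e7

distance_km = 75_000 * LIGHT_YEAR_KM
speed_km_s = 17

travel_years = distance_km / speed_km_s / SECONDS_PER_YEAR
print(f"Travel time: ~{travel_years / 1e9:.1f} billion years")
# -> ~1.3 billion years, the same order of magnitude as the ~1.4 billion quoted above.
```

The exact answer depends on which edge of the galaxy and which spacecraft one picks, which is also why the author flags the figure as likely an overestimate of the true expansion time.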
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Some personal thoughts on EA and systemic change, published by CarlShulman on the effective altruism forum. DavidNash requested that I repost my comment below, on what to make of discussions about EA neglecting systemic change, as a top-level post. These are my off-the-cuff thoughts and no one else's. In summary (to be unpacked below): Actual EA is able to do assessments of systemic change interventions including electoral politics and policy change, and has done so a number of times. The great majority of critics of EA invoking systemic change fail to present the simple sort of quantitative analysis given above for the interventions they claim excel, and frequently when such analysis is done the intervention does not look competitive by EA lights. Nonetheless, my view is that historical data do show that the most efficient political/advocacy spending, particularly aiming at candidates and issues selected with an eye to global poverty or the long term, does have higher returns than GiveWell top charities (even ignoring nonhumans and future generations or future technologies); one can see the systemic change critique as a position in intramural debates among EAs about the degree to which one should focus on highly linear, giving-as-consumption type interventions. EAs who are willing to consider riskier and less linear interventions are mostly already pursuing fairly dramatic systemic change, in areas with budgets that are small relative to political spending (unlike foreign aid). As funding expands in focused EA priority issues, eventually diminishing returns there will equalize with returns for broader political spending, and activity in the latter area could increase enormously: since broad political impact per dollar is flatter over a large range, political spending should either be a very small or very large portion of EA activity. In full: Actual EA is able to do assessments of systemic change interventions including electoral politics and policy change, and has done so a number of times. Empirical data on the impact of votes, the effectiveness of lobbying and campaign spending work out without any problems of fancy decision theory or increasing marginal returns. E.g.
Andrew Gelman's data on US Presidential elections shows that, given polling and forecasting uncertainty, a marginal vote in a swing state averages something like a 1 in 10 million chance of swinging an election over multiple elections (and one can save to make campaign contributions). 80,000 Hours has a page (there have been a number of other such posts and discussion; note that 'worth voting' and 'worth buying a vote through campaign spending or GOTV' are two quite different thresholds) discussing this data and approaches to valuing differences in political outcomes between candidates; these suggest that a swing state vote might be worth tens of thousands of dollars of income to rich country citizens. But if one thinks that charities like AMF do 100x or more good per dollar by saving the lives of the global poor so cheaply, then these are compatible with a vote being worth only a few hundred dollars. If one thinks that some other interventions, such as gene drives for malaria eradication, animal advocacy, or existential risk interventions, are much more cost-effective than AMF, that would lower the value further, except insofar as one could identify strong variation in more highly-valued effects. Experimental data on the effects of campaign contributions suggest a cost of a few hundred dollars per marginal vote (see, e.g., Gerber's work on GOTV experiments). Prediction markets and polling models give a good basis for assessing the chance of billions of dollars of campaign funds swinging an election. If there are increasing returns to scale from large-scale spending, small donors can convert their funds into a smal...
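As a rough illustration of the expected-value reasoning described above (the specific numbers below are hypothetical placeholders I chose to land in the ranges the post mentions, not Gelman's or 80,000 Hours' actual figures):

```python
# Hypothetical illustration of the expected-value-of-a-vote reasoning described above.
# None of these numbers come from Gelman or 80,000 Hours; they are placeholders chosen
# to fall in the ranges the post mentions.

p_swing = 1 / 10_000_000          # chance a swing-state vote decides the election
value_gap_usd = 300e9             # assumed value difference between candidates,
                                  # in rich-country income terms

expected_value_of_vote = p_swing * value_gap_usd
print(f"Expected value of one swing-state vote: ${expected_value_of_vote:,.0f}")
# -> $30,000, i.e. "tens of thousands of dollars of income to rich country citizens"

# If a charity like AMF is assumed to do ~100x as much good per dollar,
# the same vote is worth ~1/100 as much in AMF-equivalent dollars.
amf_multiplier = 100
print(f"In AMF-equivalent dollars: ${expected_value_of_vote / amf_multiplier:,.0f}")
# -> $300, i.e. "only a few hundred dollars"
```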
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Reality is often underpowered, published by Gregory_Lewis on the effective altruism forum. Introduction When I worked as a doctor, we had a lecture by a paediatric haematologist, on a condition called Acute Lymphoblastic Leukaemia. I remember being impressed that very large proportions of patients were being offered trials randomising them between different treatment regimens, currently in clinical equipoise, to establish which had the edge. At the time, one of the areas of interest was, given the disease tended to have a good prognosis, whether one could reduce treatment intensity to reduce the long term side-effects of the treatment whilst not adversely affecting survival. On a later rotation I worked in adult medicine, and one of the patients admitted to my team had an extremely rare cancer,[1] with a (recognised) incidence of a handful of cases worldwide per year. It happened that the world authority on this condition worked as a professor of medicine in London, and she came down to see them. She explained to me that treatment for this disease was almost entirely based on first principles, informed by a smattering of case reports. The disease unfortunately had a bleak prognosis, although she was uncertain whether this was because it was an aggressive cancer to which current medical science has no answer, or whether there was an effective treatment out there if only it could be found. I aver that many problems EA concerns itself with are closer to the second story than the first. That in many cases, sufficient data is not only absent in practice but impossible to obtain in principle. Reality is often underpowered for us to wring the answers from it we desire. Big units of analysis, small samples The main driver of this problem for ‘EA topics' is that the outcomes of interest have units of analysis for which the whole population (let alone any sample from it) is small-n: e.g. outcomes at the level of a whole company, or a whole state, or whole populations. For these big unit of analysis/small sample problems, RCTs face formidable in principle challenges: Even if by magic you could get (e.g.) all countries on earth to agree to randomly allocate themselves to policy X or Y, this is merely a sample size of ~200. If you're looking at companies relevant to cage-free campaigns, or administrative regions within a given state, this can easily fall another order of magnitude. These units of analysis tend to be highly heterogeneous, almost certainly in ways that affect the outcome of interest. Although the key ‘selling point' of the RCT is that it implicitly controls for all confounders (even ones you don't know about), this statistical control is a (convex) function of sample size, and isn't hugely impressive at ~ 100 per arm: it is well within the realms of possibility for the randomisation to happen to give arms with unbalanced allocation of any given confounding factor. ‘Roughly' (in expectation) balanced intervention arms are unlikely to be good enough in cases where the intervention is expected to have much less effect on the outcome than other factors (e.g. wealth, education, size, whatever); thus an effect size that favours one arm or the other can be alternatively attributed to one of these. Supplementing this raw randomisation by explicitly controlling for confounders you suspect (cf. block randomisation, propensity matching, etc.)
has limited value when you don't know all the factors which plausibly ‘swamp' the likely intervention effect (i.e. you don't have a good predictive model for the outcome but-for the intervention tested). In any case, they tend to trade off against the already scarce resource of sample size. These ‘small sample' problems aren't peculiar to RCTs, but endemic to all other empirical approaches. The wealth of econometric and quasi-experimental methods (e.g. IVs, ...
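To make the "not hugely impressive at ~100 per arm" point above concrete, here is a minimal power-calculation sketch (my own illustration, not from the post), using the standard normal approximation for a two-arm comparison with a small standardized effect size:

```python
# Minimal sketch (not from the post): statistical power of a two-arm comparison
# with ~100 units per arm, using the standard normal approximation.
from scipy.stats import norm

def approx_power(effect_size_d: float, n_per_arm: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sample comparison of means with equal arm sizes."""
    z_crit = norm.ppf(1 - alpha / 2)
    # Non-centrality term: d * sqrt(n/2) for two equal arms
    z_effect = effect_size_d * (n_per_arm / 2) ** 0.5
    return float(norm.cdf(z_effect - z_crit))

# A "small" standardized effect (d = 0.2) with 100 units per arm:
print(f"Power: {approx_power(0.2, 100):.0%}")      # roughly 29% - badly underpowered
# The same effect with 100,000 units per arm (closer to individual-level RCTs):
print(f"Power: {approx_power(0.2, 100_000):.0%}")  # essentially 100%
```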
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Big List of Cause Candidates, published by NunoSempere on the effective altruism forum. Many thanks to Ozzie Gooen for suggesting this project, to Marta Krzeminska for editing help and to Michael Aird and others for various comments. In the last few years, there have been many dozens of posts about potential new EA cause areas, causes and interventions. Searching for new causes seems like a worthy endeavour, but on their own, the submissions can be quite scattered and chaotic. Collecting and categorizing these cause candidates seemed like a clear next step. We —Ozzie Gooen of the Quantified Uncertainty Research Institute and I— might later be interested in expanding this work and eventually using it for forecasting —e.g., predicting whether each candidate would still seem promising after much more rigorous research. At the same time, we feel like this list itself can be useful already. Further, as I kept adding more and more cause candidates, I realized that aiming for completeness was a fool's errand, or at least too big a task for an individual working alone. Below is my current list with a simple categorization, as well as an occasional short summary which paraphrases or quotes key points from the posts linked. See the last appendix for some notes on nomenclature. If there are any entries I missed (and there will be), please say so in the comments and I'll add them. I also created the "Cause Candidates" tag on the EA Forum and tagged all of the listed posts there. They are also available in a Google Sheet. Animal Welfare and Suffering Pointer: This cause has its various EA Forum tags (farmed animal welfare, wild animal welfare, meat alternatives), where more cause candidates can be found. Brian Tomasik et al.'s Essays on Reducing Suffering are also a gift that keeps on giving for this and other cause areas. 1.Wild Animal Suffering Caused by Fires Related categories: Politics: System change, targeted change, policy reform. Wild animal suffering caused by fires and ways to prevent it: a noncontroversial intervention (@Animal_Ethics) An Animal Ethics grantee designed a protocol aimed at helping animals during and after fires. The protocol contains specific suggestions, but the path to turning these into policy is unclear. 2. Invertebrate Welfare Invertebrate Welfare Cause Profile (@Jason Schukraft) The scale of direct human impact on invertebrates (@abrahamrowe) "In this post, we apply the standard importance-neglectedness-tractability framework to invertebrate welfare to determine, as best we can, whether this is a cause area that is worth prioritizing. We conclude that it is." Note: See also Brian Tomasik's Do Bugs Feel Pain. 3. Humane Pesticides Humane Pesticides as the Most Marginally Effective Cause (@JeffMJordan) Improving Pest Management for Wild Insect Welfare (@Wild_Animal_Initiative) The post argues that insects experience consciousness, and that there are a lot of them, so we should give them significant moral weight (comments contain a discussion on this point). The post goes on to recommend subsidization of less-painful pesticides, an idea initially suggested by Brian Tomasik, who "estimates this intervention to cost one dollar per 250,000 less-painful deaths." The second post goes into much more depth. 4. Diet Change Is promoting veganism neglected and if so what is the most effective way of promoting it? 
(@samuel072) Animal Equality showed that advocating for diet change works. But is it cost-effective? (@Peter_Hurford, @Marcus_A_Davis) Cost-effectiveness analysis of a program promoting a vegan diet (@nadavb, @sella, @GidonKadosh, @MorHanany) Measuring Change in Diet for Animal Advocacy (@Jacob_Peacock) The first post is a stub. The second post looks at a reasonably high-powered study on individual outreach. It concludes that, based on reasonable assum...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Killing the ants, published by Joe_Carlsmith on the effective altruism forum. (Cross-posted from Hands and Cities) I. The ants Recently, my housemates and I started seeing a lot of ants in the house. They marched in long lines along the edges of the basement and the bathrooms. A few showed up in the drawers. My girlfriend put out some red pepper, which was supposed to deter them from one of their routes, but they cut a line straight through. We thought maybe they were sheltering from the rain, which had become more frequent. We had had ants before; we'd talked, then, about whether to do something about it; but we hadn't, and eventually they disappeared. We thought maybe this would happen again. It didn't. Over weeks, the problem got worse. There were hundreds of ants in the upstairs bathroom. They started to show up much more in the kitchen. We threw out various things, sealed various things. They showed up in beds. Kitchen drawers were now ant territory. We talked about what to do. We were reluctant to kill them, which was part of why we had waited. But a number of people in the house felt that the situation was getting out of hand, and that we were on track for something much harder to control. I thought of a house I had stayed at, where the ants swarmed over the coffee maker every morning, and efforts (I'm not sure how extreme) to get rid of them had failed. The most effective killing method is to poison the colony as a whole. The ants are lured into a sugary liquid that also contains borax, which is poisonous for ants, but relatively safe for humans. They then track the poison back to the colony. We talked about how bad this would be for the ants — and in particular, the fact that the poison is slow-acting. Crushing them directly, we thought, might be more humane; though it would also be more time-consuming, and less likely to solve the problem. Eventually, though without resolving all disagreements amongst housemates, we put out the poison baits (my girlfriend also tried cloves, coffee grounds, and lemon juice around that time, as well as luring the ants to some peanut butter and honey outside, away from the house). The ants in the kitchen disappeared. There are still a few in the upstairs bathroom; and inside the clear plastic baits, you can see ant bodies, in the syrup. II. Owning it At one point, on the topic of the ants, I said, in passing, something like: “may we be forgiven.” My girlfriend responded seriously, saying something like: “We won't be. There's no forgiveness.” Something about her response made me realize that the choice to kill the ants had had, for me, a quality of unreality. I had exerted some limited advocacy, in the direction of some hazy set of norms, but with no real sense of responsibility for what I was doing. There was something performative and disengaged about it — a type of disengagement in which one, for example, “feels bad” about killing the ants — and the question of whether we were doing the “right thing” was part of that. I was looking at the concepts. I was hoping for some kind of conformity, some kind of “pass” from the moral “authorities.” But I wasn't looking down my arm, at the world I was creating, and the ants that were dying as a result. I wasn't owning it. Regardless of whether our choice was right or wrong (I'm still not sure), we chose for these ants to die. We killed them. 
What we got, when we chose, was not a “good job” or “bad job” from the universe: what we got was this world, and not another. And this world was right there, in front of me, whether we should be “forgiven” or no. Not owning the choice was made easier, I think, by the fact that the death of the ants would mostly occur offscreen; outside of my “zone”, and not, directly, by my own hand. Indeed, I had declined to crush the ants myself, and I hadn't bee...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Cultured meat predictions were overly optimistic, published by Neil_Dullaghan on the effective altruism forum. In a 2021 MotherJones article, Sinduja Rangarajan, Tom Philpott, Allison Esperanza, and Alexis Madrigal compiled and visualized 186 publicly available predictions about timelines for cultured meat (made primarily by cultured meat companies and a handful of researchers). I added 11 additional predictions ACE had collected, and 76 other predictions I found in the course of a forthcoming Rethink Priorities project. Check out our dataset Of the 273 predictions collected, 84 have resolved - nine resolving correctly, and 75 resolving incorrectly. Additionally, another 40 predictions should resolve at the end of the year and look to be resolving incorrectly. Overall, the state of these predictions suggest very systematic overconfidence. Cultured meat seems to have been perpetually just a few years away since as early as 2010 and this track record plausibly should make us skeptical of future claims from producers that cultured meat is just a few years away. Here I am presenting the results of predictions that have resolved, keeping in mind they are probably not a representative sample of publicly available predictions, nor assembled from a systematic search. Many of these are so vaguely worded that it's difficult to resolve them positively or negatively with high confidence. Few offer confidence ratings, so we can't measure calibration. Below is the graphic made in the MotherJones article. It is interactive in the original article. The first sale of a ~70% cultured meat chicken nugget occurred in a restaurant in Singapore on 2020 December 19th for S$23 (~$17 USD) for two nugget dishes at the 1880 private member's club, created by Eat Just at a loss to the company (Update 2021 Oct 15:" 1880 has now stopped offering the chicken nuggets, owing to “delays in production,” but hopes to put them back on menus by the end of the year." (Aronoff, 2021). We have independently tried to acquire the products ourselves from the restaurant and via delivery but have been unsuccessful so far). 65 predictions made on cultured meat being available on the market or in supermarkets specifically can now be resolved. 56 were resolved negatively and in the same direction - overly optimistic (update: the original post said 52). None resolved negatively for being overly pessimistic. These could resolve differently depending on your exact interpretation but I don't think there is an order of magnitude difference in interpretations. The nine that plausibly resolved positively are listed below (I also listed nine randomly chosen predictions that resolved negatively). In 2010 "At least another five to 10 years will pass, scientists say, before anything like it will be available for public consumption". (A literal reading of this resolves correct, even though one might interpret the meaning as a product will be available soon after ten years) Mark Post of Maastricht University & Mosa Meat in 2014 stated he “believes a commercially viable cultured meat product is achievable within seven years." (It's debatable if the Eat Just nugget is commercially viable as it is understood to be sold at a loss for the company). 
Peter Verstrate of Mosa Meat in 2016 predicted that premium-priced cultured products should be available in 5 years (ACE 2017). Mark Post in 2017 "says he is happy with his product, but is at least three years from selling one" (A literal reading of this resolves correct, even though one might interpret the meaning as a product will be available soon after three years). Bruce Friedrich of the Good Food Institute in March 2018 predicted “clean-meat products will be available at a high price within two to three years”. Unnamed scientists in December 2018 “say that you can buy it [meat in a labor...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Make a $100 donation into $200 (or more), published by WilliamKiely on the effective altruism forum. Latest Update: On Nov 24 at 1:01pm PT, the matching fund pool was increased to $600,000. Check the realtime dashboard to see how much is still available to allocate. Of the first $250,000 in matching funds, more than 82% went to nonprofits you all donated to: Donation Match Terms This year, starting on November 1, Every.org is offering a very attractive $250,000 true counterfactual donation match. (Realtime dashboard of remaining funds.) /@william.kiely?c=gg25 Every.org will match the first donation you make to each US 501(c)(3) nonprofit you give to 1:1 up to $100 per donor per nonprofit. Currently, Every.org will contribute an extra $10 to your donation if you click to share your donation after donating. This might change (what it was originally). The Match Terms in Every.org's words: A donor can support multiple nonprofits, but only the first donation they make to each of those nonprofits will be matched. If someone makes two $50 donations to the same organization, then only the first $50 would be matched. If someone makes a $1000 donation, then only the first $100 is matched. If someone makes ten $100 donations to different organizations, then all ten donations will be matched. Steps to Participate: 1. Join with:/@william.kiely?c=gg25 (If you're a new user, this will give you and me $25 in giving credit in addition to the match described above (Update: I believe this new user incentive was removed by Nov 24), plus help me track how many EAs participate in the match so I can share the information with the community.) 2. Check the live dashboard to see if there are remaining matching funds. 3. If so, donate $100[1] to a nonprofit of your choice (to get your donation automatically matched 1:1). 4. After donating, click one of the links to share your donation (to get the extra share incentive, currently +$10). Repeat steps 3 and 4 for every nonprofit you want to support! FAQ Answers Everyone can participate, regardless of country, even if you already joined last year. Fees are low, so donate by card if it's easier for you. Or if you'd prefer to eliminate all fees you can do so by connecting your bank account. Tax receipts: You can get these easily in your account on your My Giving page. If this sounds familiar... It's because 198 of you participated in a previous donation match sponsored by the same Every.org after seeing the post Make a $10 donation into $35 in December 2020. We successfully directed $4,950 in matching funds to highly effective nonprofits during that match. It was quite popular because it only took ~3 minutes for each person to direct $25 in matching funds. I'm hopeful that even more of you will participate in Every.org's current match since it's just as easy and yet the limits are much higher. [1] You can donate less than $100 and still get matched, but note that you will forfeit your ability to get the full match for that nonprofit, even if you donate again. Per the terms: "If someone makes two $50 donations to the same organization, then only the first $50 would be matched." thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
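A minimal sketch of the match terms as stated above (my own illustration; the function name and structure are mine, not Every.org's actual implementation, and the $10 share incentive is treated as a flat add-on):

```python
# Sketch of the stated match terms: only the first donation to each nonprofit is
# matched 1:1, up to $100 per donor per nonprofit, plus a $10 bonus for sharing.
# Function and structure are illustrative, not Every.org's actual implementation.

def match_amount(donation: float, first_to_this_nonprofit: bool, shared: bool = False) -> float:
    matched = min(donation, 100.0) if first_to_this_nonprofit else 0.0
    share_bonus = 10.0 if shared else 0.0
    return matched + share_bonus

print(match_amount(100, first_to_this_nonprofit=True, shared=True))   # 110.0
print(match_amount(1000, first_to_this_nonprofit=True))               # 100.0 (only first $100 matched)
print(match_amount(50, first_to_this_nonprofit=False))                # 0.0 (second gift to same org)
```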
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: 2018-2019 Long-Term Future Fund Grantees: How did they do?, published by NunoSempere on the effective altruism forum. Introduction At the suggestion of Ozzie Gooen, I looked at publicly available information around past LTF grantees. We've been investigating the potential to have more evaluations of EA projects, and the LTFF grantees seemed to represent some of the best examples, as they passed a fairly high bar and were cleanly delimited. For this project, I personally investigated each proposal without consulting many others. This work was clearly limited by not reaching out to others directly, but requesting external involvement would have increased costs significantly. We were also partially interested in finding how much we could figure out with this limitation. Background During the first two rounds (round 1, round 2) of the LTF fund, under the leadership of Nick Beckstead, grants went mostly to established organizations, and didn't have informative write-ups. The next few rounds, under the leadership of Habryka et al., have more informative write-ups, and a higher volume of grants, which are generally more speculative. At the time, some of the grants were scathingly criticised in the comments. The LTF at this point feels like a different, more active beast than under Nick Beckstead. I evaluated its grants from the November 2018 and April 2019 rounds, meaning that the grantees have had at least two years to produce some legible output. Commenters pointed out that the 2018 LTFF is pretty different from the 2021 LTFF, so it's not clear how much to generalize from the projects reviewed in this post. Despite the trend towards longer writeups, the reasoning for some of these grants is sometimes opaque to me, or the grant makers sometimes have more information than I do, and choose not to publish it. Summary
By outcome:
Flag | Number of grants | Funding ($)
More successful than expected | 6 (26%) | $178,500 (22%)
As successful as expected | 5 (22%) | $147,250 (18%)
Not as successful as hoped for | 3 (13%) | $80,000 (10%)
Not successful | 3 (13%) | $110,000 (13%)
Very little information | 6 (26%) | $287,900 (36%)
Total | 23 | $803,650
Not included in the totals or in the percentages are 5 grants, worth a total of $195,000, which I tagged as 'didn't evaluate' because of a perceived conflict of interest. Method I conducted a brief Google, LessWrong and EA forum search of each grantee, and attempted to draw conclusions from the search. However, quite a large fraction of grantees don't have much of an internet presence, so it is difficult to see whether the fact that nothing is findable under a quick search is because nothing was produced, or because nothing was posted online. Overall, one could spend a lot of time with an evaluation. I decided to not do that, and go for an “80% of value in 20% of the time”-type evaluation. Grantee evaluation examples A private version of this document goes by grantees one by one, and outlines what public or semi-public information there is about each grant, what my assessment of the grant's success is, and why. I did not evaluate the grants where I had personal information which people gave me in a context in which the possibility of future evaluation wasn't at play. I shared it with some current LTFF fund members, and some reported finding it at least somewhat useful.
However, I don't intend to make that version public, because I imagine that some people will perceive evaluations as unwelcome, unfair, stressful, an infringement of their desire to be left alone, etc. Researchers who didn't produce an output despite getting a grant might feel bad about it, and a public negative review might make them feel worse, or have other people treat them poorly. This seems undesirable because I imagine that most grantees were taking risky bets with a high expected value, even i...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Is EA Growing? EA Growth Metrics for 2018, published by Peter Wildeford on the effective altruism forum. Is EA growing? Rather than speculating from anecdotes, I decided to collect some data. This is a continuation of the analysis started last year. For each trend, I collected the raw data and also highlighted in green where the highest point was reached (though this may be different from the period with the largest growth depending on which derivative you are looking at). You can download the raw data behind these tables here. Implications This year, I decided to separate growth stats into a few different categories, looking at how growth changes when we talk about people learning about EA through reading; increasing their commitment through joining a newsletter, joining a Facebook group, joining the EA Forum, or subscribing to a podcast; increasing engagement by committing -- self-identifying as EA on the EA Survey and/or taking a pledge; and having an impact by doing something, like donating or changing their careers[33]. When looking at this, it appears that there has been a decline of people searching and looking for EA (at least in the ways we track), with the exception of 80,000 Hours pageviews and EA Reddit page subscriptions which continued to grow but at a much lower pace. When we look at the rate of change, we can see a fairly clear decline across all metrics: We can also see that when it comes to driving initial EA readership and engagement, 80,000 Hours is very clearly leading the pack while other sources of learning about EA are declining a bit: In fact, the two sources of learning about EA that seem to best represent natural search -- Google interest and Wikipedia pageviews -- appear somewhat correlated and are now both declining together. However, there are more people consuming EA in closer ways (what I termed “joining”) -- while growth rate in the EA Newsletter and 80K Newsletter has slowed down, the EA FB is more active, the EA Reddit and total engagement from 80K continues to grow, and new avenues like Vox's Future Perfect and 80K's podcast have opened up. However, this view of growth can change depending on which derivative you look at. Looking at the next derivative makes clear that there was a large explosion of interest in 2017 in the EA Reddit and the EA Newsletter that wasn't repeated in 2018: Additionally, Founder's Pledge continues to grow and OFTW has had explosive growth, though GWWC has stalled out a bit. The EA Survey has also recovered from a sluggish 2017 to break records in 2018. Looking at the rate of change shows Founder's Pledge clearly increasing, GWWC decreasing, and OFTW's having fairly rapid growth in 2018 after a slowdown in 2017. Lastly, the part we care about most seems to be doing the strongest -- while tracking the actual impact of the EA movement is really hard and very sensitive to outliers, nearly every doing/impact metric we do track was at its strongest in either 2017 or 2018, with only GiveWell and 80K seeing a slight decline in 2018 relative to 2017. However, looking at the actual rate of change shows a bleaker picture that we may be approaching a plateau. 
Conclusion Like last year, it still remains a bit difficult to infer broad trends given that a decline for one year might be the start of a true plateau or decline (as appears to be the case for GWWC) or may just be a one-time blip prior to a bounce back (as appears to be the case for the EA Survey[34]). Overall, the decline in people first discovering EA (reading) and the growth of donations / career changes (doing) make sense, as they are likely the result of the intentional effort across several groups and individuals in EA over the past few years to focus on high-fidelity messaging and growing the impact of pre-existing EAs and deliberate decisions to stop mas...
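To illustrate the "which derivative you are looking at" framing used throughout the post, here is a minimal sketch with hypothetical numbers (not the post's actual data): a metric can be at its all-time high in levels while its growth rate, and especially its acceleration, tell a less rosy story.

```python
# Illustration of "which derivative you are looking at" (hypothetical numbers,
# not the post's actual data): levels, year-over-year change, and the change in
# that change for some growth metric.
yearly_totals = {2015: 100, 2016: 180, 2017: 300, 2018: 330}

years = sorted(yearly_totals)
levels = [yearly_totals[y] for y in years]
first_diff = [b - a for a, b in zip(levels, levels[1:])]           # growth per year
second_diff = [b - a for a, b in zip(first_diff, first_diff[1:])]  # acceleration

print("Levels:      ", levels)        # [100, 180, 300, 330] -> still at its highest point
print("Growth/year: ", first_diff)    # [80, 120, 30]         -> growth slowed sharply in 2018
print("Acceleration:", second_diff)   # [40, -90]             -> the second derivative turned negative
```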
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Why I find longtermism hard, and what keeps me motivated, published by Michelle_Hutchinson on the effective altruism forum. [Cross-posted from the 80,000 Hours blog] I find working on longtermist causes to be — emotionally speaking — hard: There are so many terrible problems in the world right now. How can we turn away from the suffering happening all around us in order to prioritise something as abstract as helping make the long-run future go well? A lot of people who aim to put longtermist ideas into practice seem to struggle with this, including many of the people I've worked with over the years. And I myself am no exception — the pull of suffering happening now is hard to escape. For this reason, I wanted to share a few thoughts on how I approach this challenge, and how I maintain the motivation to work on speculative interventions despite finding that difficult in many ways. This issue is one aspect of a broader issue in EA: figuring out how to motivate ourselves to do important work even when it doesn't feel emotionally compelling. It's useful to have a clear understanding of our emotions in order to distinguish between feelings and beliefs we endorse and those that we wouldn't — on reflection — want to act on. What I've found hard First, I don't want to claim that everyone finds it difficult to work on longtermist causes for the same reasons that I do, or in the same ways. I'd also like to be clear that I'm not speaking for 80,000 Hours as an organisation. My struggles with the work I'm not doing tend to centre around the humans suffering from preventable diseases in poor countries. That's largely to do with what I initially worked on when I came across effective altruism. For other people, it's more salient that they aren't actively working to prevent the barbarity of some factory farming practices. I'm not going to talk about all of the ways in which people might find it hard to focus on the long-run future — for the purposes of this article, I'm going to focus specifically on my own experience. I feel a strong pull to help people now A large part of the suffering in the world today simply shouldn't exist. People are suffering and dying for want of cheap preventative measures and cures. Diseases that rich countries have managed to totally eradicate still plague millions around the world. There's strong evidence for the efficacy of cheap interventions like insecticide-treated anti-malaria bed nets. Yet many of us in rich countries are well off financially, and spend a significant proportion of our income on non-necessity goods and services. In the face of this absurd and preventable inequity, it feels very difficult to believe that I shouldn't be doing anything to ameliorate it. Likewise, it often feels hard to believe that I shouldn't be helping people geographically close to me — such as homeless people in my town, or people who are being illegitimately incarcerated in my country. It's hard to deal with there being visible and preventable suffering that I'm not doing anything to combat. For me, putting off helping people alive today in favour of helping those in the future is even harder than putting off helping those in my country in favour of those on the other side of the world. This is in part due to the sense that if we don't take actions to improve the future, there are others coming after us who can. 
By contrast, if we don't take action to help today's global poor, those coming after us cannot step in and take our place. The lives we fail to save this year are certain to be lost and grieved for. Another reason this is challenging is that wealth seems to be sharply increasing over time. This means that we have every reason to believe that people in the future will be far richer than people today, and it would seem to follow that people in the future d...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Introducing the Legal Priorities Project, published by jonasschuett on the effective altruism forum. We're excited to introduce a new EA organization: the Legal Priorities Project. About us The Legal Priorities Project is an independent, global research project founded by researchers from Harvard University. Our mission is to conduct legal research that tackles the world's most pressing problems. This currently leads us to focus on the protection of future generations. The project is led by Christoph Winter (ITAM/Harvard); the other founding team members are Cullen O'Keefe (OpenAI/GovAI), Eric Martínez (MIT), and Jonas Schuett (Goethe University Frankfurt). For more information about our team, visit our website. The idea was born at the EA group at Harvard Law School in Fall 2018. Since then, we raised two rounds of funding from Ben Delo on the advice of Effective Giving, built a highly motivated and mission-aligned core team, registered as a 501(c)(3) nonprofit, hosted a seminar at Harvard Law School, and organized our first summer research fellowship. Besides that, we worked on our research agenda and a number of other research projects. We're currently assessing the desirability and feasibility of having a formal affiliation with a university. We are considering founding a center or institute at a leading law school in the US or UK within the coming two years. Our research We aim to establish “legal priorities research” as a new research field. At the meta-level, we determine which problems legal scholars should work on in order to tackle the world's most pressing problems. At the object-level, we conduct legal research on the identified problems. Our approach to legal priorities research is influenced by the longtermism paradigm. Consequently, we are currently focusing on the following cause areas: (1) improving the governance of advanced artificial intelligence, (2) mitigating risks from synthetic biology, (3) mitigating extreme risks from climate change, and (4) improving institutional design and decision-making. Legal priorities research can be viewed as a subset of global priorities research. While global priorities research is located at the intersection of philosophy and economics, legal priorities research focuses primarily on legal studies, although it is still highly interdisciplinary. We are currently working on a research agenda for legal priorities research. The agenda will be divided by cause areas and will contain a list of promising research projects for legal scholars. We hope to publish the agenda in December 2020. Sign up to our newsletter if you want to receive an email when it gets published. We are also working on a number of object-level research projects. Please get in touch if you want to collaborate with us on future research projects. You may also want to fill out our expression of interest form. Further information Website: legalpriorities.org Email: hello@legalpriorities.org LinkedIn: linkedin.com/company/legalpriorities Twitter: twitter.com/legalpriority Facebook: facebook.com/legalpriorities thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Can I have impact if I'm average?, published by Fabienne on the effective altruism forum. A friend of mine who is into EA said a few days ago that he thinks most people cannot have an impact, because to have an impact you need to be among the 0.1%-1% best in your field. I have encountered this thought in quite a few people in/interested in EA, some of whom say that this thought has dragged them down a lot. When I led an EA career workshop for students who received the Studienstiftung scholarship, one of my participants who had just realised he could have a lot more impact if he switched career paths, said to me something along the lines of: “Oh man, what I have been doing was worthless”. I replied: “Ehm no it wasn't? :) You seem to have improved lives noticeably. The fact that there are better opportunities than what you did does not take away that value. In fact, it's because improving even a single life is valuable that the best opportunities are so incredibly valuable.” 80k (probably rightly so) seeks to focus on the top 1%, but that does not mean that you cannot have (a lot of) impact if you are less good at what you do. Here is what I think is going on when people despair about their impact. I think our ability to feel what “unusually high impact” means is very limited. Our head knows that there is a big difference between saving a few people and saving a multitude of people, but our heart doesn't quite get it. So what some people in EA then seem to do is this: They assign the value level “maximally valuable” to the most impactful thing someone could do - so far, so good. But then when they encounter a lower level of impact (such as saving one life), they reduce their value judgment by however lower the impact is compared to the highest possible impact. This leaves them with an inappropriately low judgement of value for this impact, because our judgement of value for the highest impact possible was way too low to start with. It's the opposite to what people outside of EA tend to do - (correctly) give a lot of value to saving one life but not scaling this judgement up appropriately. I think it's possible to avoid both mistakes - at least, I think that I am able to avoid them both. I think underestimating the value of significant but non-maximal impact is a problem. For one thing, it's a misconception and misconceptions are rarely helpful. Second, I think this is probably bad for the mental health and productivity of our movement, because it de-motivates and saddens people. Third, it probably affects not only people who have “average” talent, whatever that means, but also those who are in fact excellent at something but who think of themselves as average. There seem to be a lot of people like this in EA. Fourth, I think it's bad for public relations because it can make people feel useless and can come across as arrogant. How do we fix this misconception? I hope that this post helps a bit with that - what follows are some other ideas. Perhaps the idea I'm presenting here, or related ones, could be included in the mental health workshops at EA conferences together with CBT and ACT methods for those who want more help emotionally distancing themselves from this or other unhelpful thoughts. Movement builders could watch out for this misunderstanding and correct it, like I did at the workshop I mentioned above. 
Maybe EA-related websites could include the idea somewhere, such as in their FAQs. I don't know how much these things would help, but my personal experience with clarifying this misconception to people has been positive. Thanks to Rob Long for helping me improve this post! thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: A new strategy for broadening the appeal of effective giving (GivingMultiplier.org), published by Lucius_Caviola on the effective altruism forum. In this post, I introduce an ongoing research project with the aim of bringing effective giving to a wider range of altruists. The strategy combines 1) donation bundling (splitting between your favorite and an effective charity), 2) asymmetrical matching (offering higher matching rates for allocating more to an effective charity), 3) a form of donor coordination (to provide the matching). After conducting a series of experiments, we will test our strategy in the real-world using our new website GivingMultiplier.org. This project is a collaboration with Prof Joshua Greene and is supported by an EA Funds grant. Background It is difficult to motivate people to give more effectively. Presenting people with information about charity effectiveness can increase effective giving to some extent (Caviola, Schubert, et al., 2020a; 2020b). However, the effect is limited because most people prefer to give to their favorite charity even when they know that other charities are more effective (Berman et al., 2018). This is because people are motivated by ‘warm glow' of giving (Andreoni, 1990), which isn't a good proxy for effectiveness. Another issue is that most people aren't motivated to proactively seek out information about the most effective charities. But making people care more about effectiveness is difficult. In multiple studies I have found that presenting people with moral arguments makes little to no difference. (Though moral arguments might work for some people and under the right circumstances, cf. Lindauer et al., 2020; Schwitzgebel et al., 2020.) Therefore, the approach we take here is to work with people's preferences instead of trying to change them. The strategy Below is a short summary of the set of techniques our strategy relies on. In our experiments, 2,000 (Amazon MechanicalTurk) participants made probabilistically implemented decisions involving real money. If you are interested in more details about our studies and results, you can find an early working draft here. 1) Donation bundling We found that donations to effective charities can be increased by up to 75% when people are offered the option to split their donation between their favorite and a highly effective charity (Study 1). We call this technique donation bundling. Most donors find such bundle options appealing because they enjoy nearly all the warm-glow of giving exclusively to their favorite charity, but also gain the satisfaction of giving more effectively and fairly (Study 2). Likewise, we find that third-parties perceive bundle donors as both highly warm and highly competent, as compared to donors who give exclusively to an emotionally appealing charity (warm, but less competent) or exclusively to a highly effective charity (competent, but less warm) (Study 3). 2) Asymmetrical matching The bundling technique can be enhanced by offering matching funds in an asymmetrical way, i.e. the matching rate increases as more is allocated to the effective charity. In our studies, participants were offered higher matching rates, the more they would give to the effective charity as opposed to their favorite charity. 
For example, they might get a 10% matching rate for giving 50% to their favorite and 50% to the effective charity, but a 20% matching rate for giving 100% to the effective charity. We found that asymmetrical matching can increase donations to effective charities by an additional 55% (Study 4). A key advantage of offering donation matching is that it provides people with no prior interest in effective giving a reason to visit the site and choose to support a highly effective charity. 3) Matching as donor coordination Where does the matching funding come from? We ...
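A minimal sketch of the asymmetrical matching idea using the example rates above (the linear ramp between the 10% and 20% rates is my own illustrative assumption, not necessarily GivingMultiplier's actual schedule):

```python
# Sketch of donation bundling with asymmetrical matching, using the example from the
# text: a 10% match when 50% goes to the effective charity, rising to 20% at 100%.
# The linear ramp between those two points is an illustrative assumption.

def split_donation(total: float, share_to_effective: float) -> dict:
    if not 0.5 <= share_to_effective <= 1.0:
        raise ValueError("This example only covers splits of 50% or more to the effective charity")
    # Match rate rises linearly from 10% (at a 50/50 split) to 20% (all to the effective charity).
    match_rate = 0.10 + 0.20 * (share_to_effective - 0.5)
    return {
        "to_favorite": total * (1 - share_to_effective),
        "to_effective": total * share_to_effective,
        "matching_funds": total * match_rate,
    }

print(split_donation(100, 0.5))   # 50/50 split: $10 of matching funds
print(split_donation(100, 1.0))   # all to the effective charity: $20 of matching funds
```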
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: SHIC Will Suspend Outreach Operations, published by cafelow on the effective altruism forum. A Q1 update and 2018 in review By Baxter Bullock and Catherine Low Since launching in 2016, Students for High-Impact Charity (SHIC), a project of Rethink Charity, has focused on educational outreach for high school students (primarily ages 16-18) through our interactive content. In January 2018, we began implementing instructor-led workshops, mostly in the Greater Vancouver area. Below, we summarize our experiences of 2018 and explain why we are choosing to suspend our outreach operations. Summary 2018 saw strong uptake, but difficulty securing long-term engagement - Within a year of instructor-led workshops, we presented 106 workshops, reaching 2,580 participants at 40 (mostly high school) institutions. We experienced strong student engagement and encouraging feedback from both teachers and students. However, we struggled in getting students to opt into advanced programming, which was our behavioral proxy for further engagement. By the end of April, SHIC outreach will function in minimal form, requiring very little staff time - Over the next two months, our team will gradually wind down delivered workshops at schools. We plan on maintaining a website with resources and fielding inquiries through a contact form for those who are looking for information on how best to implement EA education. The most promising elements of SHIC may be incorporated into other high-impact projects - The SHIC curriculum could likely be repurposed for other high-impact projects within the wider Rethink Charity umbrella. For example, it could be a tool for engaging potential high-net-worth donors, or as content to provide local group leaders. We believe in the potential of educational outreach and hope to revisit this in the future - While we acknowledge the possibility that poor attendance at advanced workshops is indicative of general interest level in our program and/or EA in general, it's also possible that the methods we used to facilitate long term engagement were inadequate. We think that under the right circumstances, educational outreach could be more fruitful. SHIC will release an exhaustive evaluation of our experience with educational outreach in the coming months. 2018 in review In late 2017 we made a strategic shift towards a high-fidelity model of student engagement through instructor-led workshops. We tested this model throughout 2018, with our instructors visiting schools in Greater Vancouver, Canada[1]. Most students (56%) participated in a single-session workshop lasting approximately 80 minutes. These workshops consisted of a giving game[2], followed by an overview of the core ideas of effective altruism[3], including coverage of key cause areas. The remaining 44% of participants participated in multi-session (typically three), in-depth workshops which usually included a giving game, interactive explorations of the topics mentioned above, a cause prioritization activity, and a discussion of effective career paths. Our goal for the second half of 2018 was to identify high-potential students from our school visits, and engage them further with supplementary advanced workshops at a central location in Vancouver. 
To gauge interest initially, we began with an opt-in approach for all interested students who provided an email address in order to obtain more information. We ran a workshop in November which primarily consisted of an in-depth activity on cause prioritization, and a workshop in December focused on effectively creating online fundraisers for the holidays. Our results The metrics we identified to gauge our success were: Teachers and school uptake Student survey results indicating shifts of opinion and/or behavior The number of students who continue to engage with the material...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Avoiding Munich's Mistakes: Advice for CEA and Local Groups, published by Larks on the effective altruism forum. If all mankind minus one, were of one opinion, and only one person were of the contrary opinion, mankind would be no more justified in silencing that one person, than he, if he had the power, would be justified in silencing mankind. John Stuart Mill, On Liberty, p23 We strive to base our actions on the best available evidence and reasoning about how the world works. We recognise how difficult it is to know how to do the most good, and therefore try to avoid overconfidence, to seek out informed critiques of our own views, to be open to unusual ideas, and to take alternative points of view seriously. ... We are a community united by our commitment to these principles, not to a specific cause. Our goal is to do as much good as we can, and we evaluate ways to do that without committing ourselves at the outset to any particular cause. We are open to focusing our efforts on any group of beneficiaries, and to using any reasonable methods to help them. If good arguments or evidence show that our current plans are not the best way of helping, we will change our beliefs and actions. Excerpted from The Guiding Principles of Effective Altruism Introduction This post argues that Cancel Culture is a significant danger to the potential of the EA project, discusses the mistakes that were made by EA Munich and CEA in their deplatforming of Robin Hanson, and provides advice on how to avoid such issues in the future. As ever, I encourage you to use the navigation pane to jump to the parts of the article that are most relevant to you. In particular, if you are already convinced, you might skip the 'examples' and 'quotes' sections. Background The Nature of Cancel Culture In the past couple of years, there's been much damage done to the norms around free speech and inquiry, in substantial part due to what's often called cancel culture. Of relevance to the EA community is that there have been an increasing number of highly public threats and attacks on scientists and public intellectuals, where researchers are harassed online, disinvited from conferences, have their papers retracted, and are fired, because of mass online mobs reacting to an accusation over slight wording on topics of race, gender, and other issues of identity, or guilt-by-association with other people who have also been attacked by such mobs. This is colloquially called ‘cancelling', after the hashtags that have formed saying #CancelX or #xisoverparty, where X is some person, company or other entity, hashtags which are commonly trending on Twitter. While such mobs cannot attack every person who speaks in public, they can attack any person who speaks in public, leading to chilling effects where nobody wants to talk about the topics that can lead to cancelling. Cancel Culture essentially involves the following steps: A victim, often a researcher, says or does something that irks someone online. This critic then harshly criticises the person using attacks that are hard to respond to in our culture - the accusation of racism is a common one. The goal of this attack is to signal to a larger mob that they should pile on, with the hope of causing massive damage to the person's private and professional lives.
Many more people then join in the attack online, including (often) contacting their employer. People who defend the victim are attacked as also being guilty of a similar crime. Seeing this dynamic, many associates of the victim prefer to sever their relationship, rather than be subject to this abuse. This may also include their employer, for whom the loss of one employee seems a relatively small cost for maintaining PR. The online crowd may swiftly move on; however, the victim now lives under a cloud of suspicion that is hard to...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Thoughts on whether we're living at the most influential time in history, published by Buck on the effective altruism forum. (thanks to Claire Zabel, Damon Binder, and Carl Shulman for suggesting some of the main arguments here, obviously all flaws are my own; thanks also to Richard Ngo, Greg Lewis, Kevin Liu, and Sidney Hough for helpful conversation and comments.) Will MacAskill has a post, Are we living at the most influential time in history?, about what he calls the “hinge of history hypothesis” (HoH), which he defines as the claim that “we are living at the most influential time ever.” Whether HoH is true matters a lot for how longtermists should allocate their effort. In his post, Will argues that we should be pretty skeptical of HoH. EDIT: Will recommends reading this revised article of his instead of his original post. I appreciate Will's clear description of HoH and its difference from strong longtermism, but I think his main argument against HoH is deeply flawed. The comment section of Will's post contains a number of commenters making some of the same criticisms I'm going to make. I'm writing this post because I think the rebuttals can be phrased in some different, potentially clearer ways, and because I think that the weaknesses in Will's argument should be more widely discussed. Summary: I think Will's arguments mostly lead to believing that you aren't an “early human” (a human who lives on Earth before humanity colonizes the universe and flourishes for a long time) rather than believing that early humans aren't hugely influential, so you conclude that either humanity doesn't have a long future or that you probably live in a simulation. I sometimes elide the distinction between the concepts of “x-risk” and “human extinction”, because it doesn't matter much here and the abbreviation is nice. (This post has a lot of very small numbers in it. I might have missed a zero or two somewhere.) EDIT: Will's new post Will recommends reading this revised article of his instead of his original post. I believe that his new article doesn't make the assumption about the probability of civilization lasting for a long time, which means that my criticism "This argument implies that the probability of extinction this century is almost certainly negligible" doesn't apply to his new post, though it still applies to the EA Forum post I linked. I think that my other complaints are still right. The outside-view argument This is the argument that I have the main disagreement with. Will describes what he calls the “outside-view argument” as follows: 1. It's a priori extremely unlikely that we're at the hinge of history 2. The belief that we're at the hinge of history is fishy 3. Relative to such an extraordinary claim, the arguments that we're at the hinge of history are not sufficiently extraordinarily powerful Given 1, I agree with 2 and 3; my disagreement is with 1, so let's talk about that. Will phrases his argument as: The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n. The unconditional prior probability over whether this is the most influential century would then depend on one's priors over how long Earth-originating civilization will last for. 
However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that's the claim we can focus our discussion on. I have several disagreements with this argument. This argument implies that the probability of exti...
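To make the structure of the prior quoted above concrete, here is a minimal back-of-envelope sketch. Only the 1/n structure is taken from the argument; the prior over how long civilization lasts is invented purely for illustration and does not come from Will's or Buck's posts.

```python
# Illustrative sketch of the prior Will describes (numbers are made up).
# Conditional on civilization lasting n centuries, the prior that this
# particular century is the most influential is 1/n. The unconditional
# prior then depends on one's prior over n.

# Hypothetical prior over how many centuries Earth-originating
# civilization lasts (purely for illustration).
prior_over_lifespan = {
    10: 0.50,           # civilization lasts ~10 centuries
    1_000: 0.30,        # ~1,000 centuries
    100_000_000: 0.20,  # an enormous future
}

# P(this is the most influential century) = sum over n of P(N = n) * 1/n
p_hinge = sum(p / n for n, p in prior_over_lifespan.items())
print(f"Unconditional prior that this century is the hinge: {p_hinge:.6f}")

# The answer is dominated by the short-lifespan worlds: the long-future
# worlds contribute almost nothing, which is one way of seeing why
# conditioning on a long future makes HoH look astronomically unlikely
# under this prior.
```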
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Opinion: Digital marketing is under-utilized in EA, published by JSWinchell on the effective altruism forum. In this post I will make the case that digital marketing is under-utilized by EA orgs as well as provide some example use cases. My hope is that this post leads to EA orgs testing the below or similar strategies. A large part of what Effective Altruism is trying to do is to change people's beliefs and behaviors. Digital advertising is one tool for achieving this goal. The fact that corporations, governments, and nonprofits repeatedly invest millions of dollars in digital marketing programs is evidence of their efficacy. A couple of notes: I work at Google/YouTube helping large advertisers run Google and YouTube Ads. For that reason this post does not touch on Facebook/Instagram/Twitter/TikTok, but I am sure there are large opportunities there as well. This post is focused on paid advertising. Cost estimates are based on previous experience and industry benchmarks, but costs vary based on the geography, season, tactic, etc. If you plan on running any of the strategies described below, please reach out to me so we can coordinate with other charities that are planning on running similar strategies. If your EA org would like to explore running a digital marketing campaign, please don't hesitate to reach out to me at j.s.winchell@gmail.com. Search Ads Every year, millions of people ask Google questions related to charity, poverty, animal welfare, AI safety, etc. If we can direct those people to EA websites, they will get EA answers to their questions. Google gives registered charities $10K/month in free advertising credits. Of a sample of ~10 EA charities, only one was fully using this Google Ads Grant. With a little extra knowledge, spending the Google Ads Grant becomes much easier than described in previous posts (1, 2). If you work at an EA org and would like help spending your full Google Ads Grant, please fill out this survey. Image/display Ads If your organization has a target audience and you know of websites that audience visits, display ads could be a very cost-effective way for your charity to achieve its goals. Example use case: Founders Pledge wants to spread the word about their pledge. They identify three websites frequented by founders and advertise on those websites. A standard benchmark for an image/display ad is $2 per 1,000 impressions (an impression is when an ad is served). This strategy would break even at one pledge per 1,000,000,000 impressions served. (Assumes advertising cost of $2 per 1,000 impressions and an average pledge size of $2M, which is based on ~$3B pledged and ~1,500 pledgers). While the optimal number of impressions served will be much less than 1,000,000,000, it is very likely higher than 0. In addition to sending founders to the Founders Pledge website, this tactic would also increase awareness of the pledge, thereby making it easier for their outreach team to sign on new members. Video Ads YouTube Ads are a powerful and inexpensive way to deliver visual and audio messages to targeted audiences. You can target users using any combination of the following: Geography (radius targeting, zip/postal code, state) Household income (top 10%, 11-20%, etc.) Search history on Google and YouTube (e.g.
users that searched for “best charity” or “factory farming” in the last 30 days) Types of websites visited (e.g. users that have visited the websites of large nonprofits) YouTube channels being watched at the time the impression serves (contextual targeting) Demographics (age, gender) Example use cases: Org A wants to boost enrollment in EA University groups. They have a student member record a simple 6-second selfie video inviting students to the group. They target 18-24 year-olds within a 10-mile radius of their target universities. They could...
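The break-even figure in the Founders Pledge example above can be reproduced with a few lines of arithmetic. The following is a minimal sketch using only the assumptions stated in the post ($2 per 1,000 impressions, roughly $3B pledged across ~1,500 pledgers); the variable names are illustrative and do not come from any Founders Pledge or Google tooling.

```python
# Rough break-even check for the Founders Pledge display-ad example.
# All inputs are the post's stated assumptions, not measured results.

cost_per_1000_impressions = 2.0   # USD, standard display benchmark
total_pledged = 3_000_000_000     # ~$3B pledged
number_of_pledgers = 1_500        # ~1,500 pledgers
avg_pledge_value = total_pledged / number_of_pledgers  # ~$2M per pledge

# Impressions you could buy for the value of one average pledge:
break_even_impressions = avg_pledge_value / cost_per_1000_impressions * 1000

print(f"Average pledge value: ${avg_pledge_value:,.0f}")
print(f"Break-even impressions per pledge: {break_even_impressions:,.0f}")
# -> 1,000,000,000 impressions: the campaign breaks even if one pledge
#    results from every billion impressions served.
```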
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Collection of good 2012-2017 EA forum posts, published by saulius on the effective altruism forum. I feel that older EA forum posts are not read nearly as much as they should be. Hence, I collected the ones that seemed to be the most useful and still relevant today. I recommend going through this list in the same way you would go through the frontpage of this forum: reading the titles and clicking on the ones that seem interesting and relevant to you. Note that you can hover over links to see more details about each post. Also note that many of these posts have lower karma scores than most posts posted nowadays. This is in large part because until September 2018, all votes were worth only one karma point, and before September 2014 there was no karma system at all. Furthermore, the forum readership used to be lower. Hence, I don't advise paying too much attention to karma when choosing which of these posts to read. Most of these posts had a significantly higher karma than other posts posted around the same time. To create this list, I skimmed through the titles (and sometimes the contents) of all posts posted between 2012 and 2017. I relied on my intuition to decide which posts to include. Undoubtedly, I missed some good ones. Please feel free to point them out in the comments. Also note that in some cases the information in these posts might be outdated, or no longer reflect the opinions of their authors. Communication Supportive Scepticism See also: Supportive scepticism in practice Some Thoughts on Public Discourse Six Ways To Get Along With People Who Are Totally Wrong If you don't have good evidence one thing is better than another, don't pretend you do You have a set amount of "weirdness points". Spend them wisely. General reasoning In defence of epistemic modesty Integrity for consequentialists Beware surprising and suspicious convergence Cause-prioritization Why I'm skeptical about unproven causes (and you should be too) Follow up: Where I've Changed My Mind on My Approach to Speculative Causes How we can make it easier to change your mind about cause areas Cause prioritization for downside-focused value systems Five Ways to Handle Flow-Through Effects Post series by Michael Dickens Long-term Future AI Safety Literature reviews by Larks: 2016, 2017, 2018, 2019 Will we eventually be able to colonize other stars? Notes from a preliminary review The timing of labour aimed at reducing existential risk Cognitive Science/Psychology As a Neglected Approach to AI Safety Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk Why might the future be good? Personal thoughts on careers in AI policy and strategy My current thoughts on MIRI's "highly reliable agent design" work Improving disaster shelters to increase the chances of recovery from a global catastrophe Why long-run focused effective altruism is more common sense Advice on how to think about altruism Effective Altruism is a Question (not an ideology) Response: Effective Altruism is an Ideology, not (just) a Question Cheerfully Effective Altruism is Not a Competition Personal consumption changes as charity (a suggestion about how to decide whether to buy more expensive but more ethical products) Aim high, even if you fall short An embarrassment of riches Parenthood and effective altruism How much does it cost to have a child?
EA is the best choice I've ever made Room for Other Things: How to adjust if EA seems overwhelming On everyday altruism and the local circle Helping other altruists Effective altruism as the most exciting cause in the world For more advice on how to think about altruism, see the excellent blogs Minding our way (by Nate Soares) and Giving Gladly (by Julia Wise) Movement strategy Keeping the effective altruist movement welcoming The perspectives...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Objections to Value-Alignment between Effective Altruists, published by CarlaZoeC on the effective altruism forum. With this post I want to encourage an examination of value-alignment between members of the EA community. I lay out reasons to believe strong value-alignment between EAs can be harmful in the long run. The EA mission is to bring more value into the world. This is a rather uncertain endeavour and many questions about the nature of value remain unanswered. Errors are thus unavoidable, which means the success of EA depends on having good feedback mechanisms in place to ensure mistakes can be noticed and learned from. Strong value-alignment can weaken feedback mechanisms. EAs prefer to work with people who are value-aligned because they set out to maximise impact per resource expended. It is efficient to work with people who agree. But a value-aligned group is likely intellectually homogeneous and prone to breed implicit assumptions or blind spots. I also noticed particular tendencies in the EA community (elaborated in section: homogeneity, hierarchy and intelligence), which generate additional cultural pressures towards value-alignment, make the problem worse over time and lead to a gradual deterioration of the corrigibility mechanisms around EA. Intellectual homogeneity is efficient in the short-term, but counter-productive in the long run. Value-alignment allows for short-term efficiency, but the true goal of EA – to be effective in producing value in the long term – might not be met. Disclaimer All of this is based on my experience of EA over the timeframe 2015-2020. Experiences differ and I share this to test how generalisable my experiences are. I used to hold my views lightly and I still give credence to other views on developments in EA. But I am getting more, not less worried over time, particularly because other members have expressed similar views and worries to me but have not spoken out about them because they fear losing respect or funding. This is precisely the erosion of critical feedback mechanisms that I point out here. I have a solid but not unshakable belief that the theoretical mechanism I outline is correct, but I do not know to what extent it takes effect in EA. But I'm also not sure whether those who will disagree with me will know to what extent this mechanism is at work in their own community. What I am sure of, however (on the basis of feedback from people who have read this post pre-publication), is that my impressions of EA are shared by others within the community, and that they are the reason why some have left EA or never quite dared to enter. This alone is reason for me to share this - in the hope that a healthy approach to critique and a willingness to change in response to feedback from the external world is still intact. I recommend that the impatient reader skip forward to the section on Feedback Loops and Consequences. Outline I will outline reasons that lead EAs to prefer value-alignment and search for definitions of value-alignment. I then describe cultural traits of the community which play a role in amplifying this preference and finally evaluate what effect value-alignment might have on EA's feedback loops and goals. Axiomaticity Movements make explicit and obscure assumptions. They make explicit assumptions: they stand for something and exist with some purpose.
An explicit assumption is, by my definition, one that was examined and consciously agreed upon. EA explicitly assumes that one should maximise the expected value of one's actions with respect to a goal. Goals differ between members but mostly do not diverge greatly. They may be a reduction of suffering, the maximisation of hedons in the universe, or the fulfilment of personal preferences, among others. But irrespective of individual goals, EAs mostly agree that resources sh...
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: List of ways in which cost-effectiveness estimates can be misleading, published by saulius on the effective altruism forum. In my cost-effectiveness estimate of corporate campaigns, I wrote a list of all the ways in which my estimate could be misleading. I thought it could be useful to have a more broadly-applicable version of that list for cost-effectiveness estimates in general. It could maybe be used as a checklist to see if no important considerations were missed when cost-effectiveness estimates are made or interpreted. The list below is probably very incomplete. If you know of more items that should be added, please comment. I tried to optimize the list for skimming. How cost estimates can be misleading Costs of work of others. Suppose a charity purchases a vaccine. This causes the government to spend money distributing that vaccine. It's unclear whether the costs of the government should be taken into account. Similarly, it can be unclear whether to take into account the costs that patients have to spend to travel to a hospital to get vaccinated. This is closely related to concepts of leverage and perspective. More on it can be read in Byford and Raftery (1998), Karnofsky (2011), Snowden (2018), and Sethu (2018). It can be unclear whether to take into account the fixed costs from the past that will not have to be spent again. E.g., costs associated with setting up a charity that are already spent and are not directly relevant when considering whether to fund that charity going forward. However, such costs can be relevant when considering whether to found a similar charity in another country. Some guidelines suggest annualizing fixed costs. When fixed costs are taken into account, it's often unclear how far to go. E.g., when estimating the cost of distributing a vaccine, even the costs of roads that were built partly to make the distribution easier could be taken into account. Not taking future costs into account. E.g., an estimate of corporate campaigns may take into account the costs of winning corporate commitments, but not future costs of ensuring that corporations will comply with these commitments. Future costs and effects may have to be adjusted for the possibility that they don't occur. Not taking past costs into account. In the first year, a homelessness charity builds many houses. In the second year, it finds homeless people to live in those houses. In the first year, the impact of the charity could be calculated as zero. In the second year, it could be calculated to be unreasonably high. But the charity wouldn't be able to sustain the cost-effectiveness of the second year. Not adjusting past or future costs for inflation. Not taking overhead costs into account. These are costs associated with activities that support the work of a charity. It can include operational, office rental, utilities, travel, insurance, accounting, administrative, training, hiring, planning, managerial, and fundraising costs. Not taking costs that don't pay off into account. Nothing But Nets is a charity that distributes bednets that prevent mosquito-bites and consequently malaria. One of their old blog posts, Sauber (2008), used to claim that "If you give $100 of your check to Nothing But Nets, you've saved 10 lives." 
While it may be true that it costs around $10 or less[1] to provide a bednet, and some bednets save lives, costs of bednets that did not save lives should be taken into account as well. According to GiveWell's estimates, it currently costs roughly $3,500 for a similar charity (Against Malaria Foundation) to save one life by distributing bednets. Wiblin (2017) describes a survey in which respondents were asked "How much do you think it would cost a typical charity working in this area on average to prevent one child in a poor country from dying unnecessarily, by ...
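To illustrate the point above about costs that don't pay off, here is a small back-of-envelope comparison between cost per bednet and cost per life saved. It uses only the two figures quoted in the post (the roughly $10 bednet cost and GiveWell's rough $3,500-per-life estimate for the Against Malaria Foundation); the calculation itself is an editorial sketch, not from the original post.

```python
# Contrast "cost per bednet" with "cost per life saved", using the
# figures quoted above. Everything derived from them is back-of-envelope.

cost_per_bednet = 10.0          # ~$10 or less per bednet (from the post)
cost_per_life_saved = 3_500.0   # GiveWell's rough estimate for AMF

# The misleading calculation: treating every net as if it saves a life.
lives_per_100_dollars_naive = 100 / cost_per_bednet          # = 10 "lives"

# The meaningful calculation: spreading costs over nets that do not avert
# a death as well as the ones that do.
lives_per_100_dollars_actual = 100 / cost_per_life_saved     # ~0.03 lives

print(f"Naive claim: {lives_per_100_dollars_naive:.0f} lives per $100")
print(f"Including nets that don't save lives: "
      f"{lives_per_100_dollars_actual:.3f} lives per $100")
```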
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Introducing High Impact Athletes, published by Marcus Daniell on the effective altruism forum. Hi all, A while back I posted on here asking if there were any other pro athlete aspiring EAs. The response (while not including other pro athletes) was amazing, and the conversations and contacts that manifested from this forum were myriad. Thank you deeply for being such an awesome community! Now I am very pleased to say that High Impact Athletes has launched. We are an EA aligned non-profit run by pro athletes. HIA aims to channel donations to the most effective, evidence-based charities in the world in the areas of Global Health & Poverty and Environmental Impact. We will harness the wealth, fame, and social influence of professional athletes to bring as many new people in to the effective altruism framework as possible and create the biggest possible snowball of donations to the places where they can do the most good. You can poke around on the website to learn more at/ Feedback is welcomed, and even more welcome is a follow on any of the socials. I'm terrible at social media and could use all the help I can get to build an audience. Instagram: high.impact.athletes Twitter: HIAorg Facebook: @HIAorg On that note, if anyone is interested in helping out with the social media side of things or knows anyone who would be please do get in touch either on here or at marcus@highimpactathletes.com Thank you once again, you're all awesome. Cheers, Marcus Daniell thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Parenting: Things I wish I could tell my past self, published by Michelle_Hutchinson on the effective altruism forum. I have a baby who's nearly 10 months old. I've been thinking about what I'd like to be able to go back and tell myself before I embarked on this journey. I suspect that some of the differences between how I experienced it and what I had read in books correlate with ways that other effective altruists might also experience things. I also generally felt that finding decent no-nonsense information about parenting was hard, and that the signal-to-noise ratio when googling for answers was peculiarly bad. Probably the most useful advice I got was from EA friends with kids. So I thought it might be useful to jot down some thoughts for other EAs likely to have kids soon (or hoping to support others who are!). Note that these are just my experiences. I've been surprised how easy it is, when it comes to mothering, to hear ‘this is how I did it' as ‘if you're not doing the same you're doing it wrong'. I mean no such implication! Your mileage may vary on all of the below. Things I was surprised about: Not changing much as a person: The biggest uncertainty I had starting out was how much my interests and priorities would change when I had a baby. Various people I talked to confidently expected they would substantially change once the baby came along, for example that I would find being at home looking after a baby more interesting than it sounded in the abstract. A lot of the advice I read on the internet likewise indicated that people tended to want more maternity leave than they expected, and to be more inclined to go part time after having children. For those reasons, I roughly planned to take 3 months of maternity leave, but to be prepared for actually wanting more leave. In the actual event, I was really surprised by how little my inclinations changed. Far from wanting more maternity leave than I expected, I was keen throughout to be in touch with my colleagues and hear how things were going in the office, and wanted to get back to doing bits of work really quite soon after having Leo. This seemed in marked contrast with the other mothers I was meeting at baby groups, who had expected to want to hear about what was happening in their offices, but actually weren't at all interested once the baby came along. I think I did too much assuming that when I had a baby I'd turn into a different kind of person, and not enough simply thinking about ‘given the kind of person I am, how do I expect having a baby to interface with that?'. Also, I did too much looking at the average of how people change, rather than noticing that people react in widely differing ways, which include ‘not changing much at all'. Overall it's rather a relief to feel I'm still the same person, but now with a cute small person to spend time with. Finding childcare was harder than I expected. When I got to it, I wanted to go back to work before three months. My husband had committed to finishing various pieces of work before starting paternity leave (3 months in). For that reason, we were keen to arrange some child care for Leo when he was younger than three months. That turned out to be more difficult than I expected.
Nurseries don't tend to take kids that young, and the agency we wrote to had trouble finding us someone who would work short term (and took a while to get back to us at each step). We got a recommendation for someone on care.com, which almost worked out, except they found out their current contract precluded them from also working for us. The process also felt intimidating, at a time when we were already learning a lot of new things, which slowed down how well we did at it. I think I should have approached it more with the mindset of ‘we need to hire someone, and hiring is hard!' than I d...