Podcasts about the EA Forum

  • 13 podcasts
  • 451 episodes
  • 15m average episode duration
  • 1 new episode per month
  • Latest episode: Dec 12, 2024




Latest podcast episodes about the EA Forum

Effective Altruism Forum Podcast
“EA Forum audio: help us choose the new voice” by peterhartree, TYPE III AUDIO


Dec 12, 2024 · 1:44


We're thinking about changing our narrator's voice. There are three new voices on the shortlist. They're all similarly good in terms of comprehension, emphasis, error rate, etc. They just sound different—like people do. We think they all sound similarly agreeable. But, thousands of listening hours are at stake, so we thought it'd be worth giving listeners an opportunity to vote—just in case there's a strong collective preference.

Listen and vote

Please listen here: https://files.type3.audio/ea-forum-poll/

And vote here: https://forms.gle/m7Ffk3EGorUn4XU46

It'll take 1-10 minutes, depending on how much of the sample you decide to listen to. We'll collect votes until Monday December 16th. Thanks!

Outline:
(00:47) Listen and vote
(01:11) Other feedback?

The original text contained 1 footnote which was omitted from this narration.

First published: December 10th, 2024. Source: https://forum.effectivealtruism.org/posts/Bhd5GMyyGbusB22Hp/ea-forum-audio-help-us-choose-the-new-voice

Narrated by TYPE III AUDIO.

Effective Altruism Forum Podcast
“I'm grateful for you” by Sarah Cheng


Dec 5, 2024 · 2:40


I recently wrote up some EA Forum-related strategy docs for a CEA team retreat, which meant I spent a bunch of time reflecting on the Forum and why I think it's worth my time to work on it. Since it's Thanksgiving here in the US, I wanted to share some of the gratitude that I felt.

Effective Altruism Forum Podcast
“Why you should allocate more of your donation budget to effective giving organisations” by Luke Moore


Nov 15, 2024 · 29:27


This post is written in my personal capacity, but is based on insights that I've gained through my work as Effective Giving Global Coordinator and Incubator at Giving What We Can since I took on the role in June 2023.

Tl;dr

In my view the average reader of the EA Forum should be giving more to meta-charities like effective giving (EG) organisations. EG organisations play a crucial role in directing funds to highly impactful charities, but many are facing significant funding constraints and/or a lack of diversified funding. Supporting these meta-charities can have a multiplier effect on your donations, potentially leading to extraordinary growth in effective giving. Consider allocating a portion of your donation budget to EG organisations this giving season.

Introduction

When I first heard about EA from a TED talk by Peter Singer in 2017, I was inspired by the idea that we could carefully use evidence [...]

Outline:
(00:21) Tl;dr
(00:57) Introduction
(02:56) Why EG orgs are funding constrained
(05:28) Why should you donate to EG organisations?
(05:38) The multiplier effect
(07:01) Positive indirect impact
(07:43) Potential for significant growth
(08:35) Addressing future funding constraints
(09:08) The impact of additional funding
(10:28) Why you might not want to donate to EG organisations
(11:29) Where to give?
(11:48) Giving What We Can
(14:49) Effektiv Spenden
(16:42) Founders Pledge
(18:28) Ge Effektivt
(20:09) Giving Multiplier
(21:49) The Life You Can Save
(22:46) Other established EG organisations
(25:22) New EG organisations
(28:55) Call to Action

First published: November 8th, 2024. Source: https://forum.effectivealtruism.org/posts/fMcpbGRWBtq3QBEyA/why-you-should-allocate-more-of-your-donation-budget-to-1

Narrated by TYPE III AUDIO.

Effective Altruism Forum Podcast
“Reflections and lessons from Effective Ventures” by Zachary Robinson


Oct 29, 2024 · 97:28


I became the CEO of EV US in January 2023. I worked alongside the EV US and UK teams, former EV UK CEO Howie Lempel, and current EV UK CEO Rob Gledhill to recover and reform Effective Ventures and improve the robustness of the EA ecosystem in the aftermath of FTX's collapse. Amidst these efforts, I and others learned or fortified lessons that I think aren't unique to EV and could be valuable to the wider EA community. Being able to look at hard problems, discuss them with candor, and update based on what we learn are values that I admire, and I see them as a positive and necessary mechanism for doing good. I want to act on those values here.

Goals of this post

Provide an update to create communal knowledge: Clarify what reforms have taken place at EV in recent years and the reasoning behind [...]

Outline:
(00:50) Goals of this post
(01:48) Some high-level notes on FTX-related reflection
(06:04) Scope of this post
(11:53) Summary of reforms and actions taken at EV
(14:54) Background on EV
(18:42) Reforms and other actions taken
(18:46) Hiring CEOs (and other non-board personnel) for EV US and EV UK
(24:00) Changes to EV US and EV UK board
(30:04) FTX-related investigations
(38:43) Instituting financial reforms
(41:23) Improving donor due diligence
(45:36) Adopting a restrictive communications policy
(57:19) Streamlining whistleblowing policies
(59:44) Updating anti-harassment and misconduct policies
(01:03:04) Improving the COI policy
(01:06:22) Clarifying the level of separation between the EV US and EV UK entities
(01:10:44) Initiating EV shut down
(01:15:11) Some of the lessons EA can learn from the EV experience
(01:15:31) Brief summary of these lessons
(01:16:10) The lessons
(01:16:14) Organizational governance and compliance can have serious implications
(01:17:08) It's important to carefully think through and explicitly communicate your organizational risk strategy, and pay attention to it as your organization develops. This is particularly true for a fiscal sponsor
(01:19:10) EA organizations often underrate experience relative to “intelligence” and “value alignment”
(01:22:09) Vetting external counsel is important
(01:23:02) Crisis prep is underrated relative to crisis response
(01:24:43) Invest in capacity building early
(01:26:34) Communicate early (and have the resources to do so)
(01:27:31) Acknowledgments
(01:29:12) Appendix: other content about and reflections on FTX on the EA Forum
(01:29:47) Information
(01:31:24) Reflections
(01:33:18) Investigation-related
(01:33:55) Statements

The original text contained 12 footnotes which were omitted from this narration. The original text contained 1 image which was described by AI.

First published: October 28th, 2024. Source: https://forum.effectivealtruism.org/posts/AuSah98NtR5qv8zQA/reflections-and-lessons-from-effective-ventures-1

Narrated by TYPE III AUDIO.

The Nonlinear Library
EA - Dispelling the Anthropic Shadow by Eli Rose


Sep 9, 2024 · 1:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dispelling the Anthropic Shadow, published by Eli Rose on September 9, 2024 on The Effective Altruism Forum. This is a linkpost for Dispelling the Anthropic Shadow by Teruji Thomas. Abstract: There are some possible events that we could not possibly discover in our past. We could not discover an omnicidal catastrophe, an event so destructive that it permanently wiped out life on Earth. Had such a catastrophe occurred, we wouldn't be here to find out. This space of unobservable histories has been called the anthropic shadow. Several authors claim that the anthropic shadow leads to an 'observation selection bias', analogous to survivorship bias, when we use the historical record to estimate catastrophic risks. I argue against this claim. Upon a first read, I found this paper pretty persuasive; I'm at >80% that I'll later agree with it entirely, i.e. I'd agree that "the anthropic shadow effect" is not a real thing and earlier arguments in favor of it being a real thing were fatally flawed. This was a significant update for me on the issue. Anthropic shadow effects are one of the topics discussed loosely in social settings among EAs (and in general open-minded nerdy people), often in a way that assumes the validity of the concept[1]. To the extent that the concept turns out to be completely not a thing - and for conceptual rather than empirical reasons - I'd find that an interesting sociological/cultural fact. 1. ^ It also has a tag on the EA Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - The EA Hub is retiring by Vaidehi Agarwalla


Sep 1, 2024 · 1:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Hub is retiring, published by Vaidehi Agarwalla on September 1, 2024 on The Effective Altruism Forum. The EA Hub will retire on the 6th of October 2024. In 2022 we announced that we would stop further feature development and maintain the EA Hub until it is ready to be replaced by a new platform. In July this year, the EA Forum launched its People Directory, which offers a searchable directory of people involved in the EA movement, similar to what the Hub provided in the past. We believe the Forum has now become better positioned to fulfil the EA Hub's mission of connecting people in the EA community online. The Forum team is much better resourced and users have many reasons to visit the Forum (e.g. for posts and events), which is reflected in it having more traffic and users. The Hub's core team has also moved on to other projects. EA Hub users will be informed of this decision via email. All data will be deleted after the website has been shut down. We recommend that you use the EA Forum's People Directory to continue to connect with other people in the effective altruism community. The feature is still in beta mode and the Forum team would very much appreciate feedback about it here. We would like to thank the many people who volunteered their time in working on the EA Hub. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - AMA with AIM's Director of Recruitment (and MHI co-founder), Ben Williamson by Ben Williamson


Aug 20, 2024 · 3:21


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA with AIM's Director of Recruitment (and MHI co-founder), Ben Williamson, published by Ben Williamson on August 20, 2024 on The Effective Altruism Forum.

As AIM's Director of Recruitment, I'm running an AMA to answer any questions you may have about applying for our programs, as well as any questions that may be of interest from my other experience (such as co-founding Maternal Health Initiative). Ambitious Impact (formerly Charity Entrepreneurship) currently has applications open until September 15th for two of our programs:

  • Charity Entrepreneurship Incubation Programme
  • AIM Founding to Give

You can read more about both programs in this earlier EA Forum post. Please consider applying!

Why a personal AMA?

Answers to questions can often be subjective. I do not want to claim to speak for every member of AIM's team. As such, I want to make clear that I will be answering in a personal capacity. I think this has a couple of notable benefits:

  • My answers can be a little more candid since I don't have to worry (as much) that I'll say something others may significantly disagree with
  • Application season is busy for us! This saves coordination time in getting agreement on how to respond to any tricky questions

It also means that people can ask me questions through this AMA that go beyond AIM's recruitment process and application round...

A little about me

I've been working at AIM since April 2024. Before that, I co-founded the Maternal Health Initiative with Sarah Eustis-Guthrie. We piloted a training program with the Ghana Health Service to improve the quality of postpartum family planning counseling in the country. In March, we made the decision to shut down the organisation as we do not believe that postpartum family planning is likely to be as cost-effective as other family planning or global health interventions. You can read more about that decision in a recent piece for Asterisk magazine, as well as an earlier EA Forum post. I started Maternal Health Initiative through the 2022 Charity Entrepreneurship Incubation Program. I spent the year prior to this founding and running Effective Self-Help, a project researching the best interventions for individuals to increase their wellbeing and productivity. My job history before that is far more potted and less relevant - from waiting tables and selling hiking shoes to teaching kids survival skills and planting vineyards.

Things you could ask me

  • Any questions you may have about what AIM looks for in candidates for our programs and how we select people
  • Questions about getting into entrepreneurship - why to pursue it; how to test fit; paths to upskilling; lessons I've learned from my own (mis)adventures
  • Questions about Maternal Health Initiative - what we did; lessons I learned; how it feels to shut down
  • More general questions about building a career in impactful work if something in my experience suggests I might be a good person to ask!

How the AMA works

1. You post a comment here[1]
2. You wait patiently while I'm on holiday until August 28th[2]
3. I reply to comments on August 29th and 30th

1. ^ If you have a question you'd like to ask in private, you can email me: ben@ charityentrepreneurship [dot] com
2. ^ This was coincidental rather than planned but has the wonderful benefit of ensuring I avoid spending a week refreshing this article feverishly waiting for questions...

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - FarmKind's Illusory Offer by jefftk


Aug 9, 2024 · 4:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FarmKind's Illusory Offer, published by jefftk on August 9, 2024 on LessWrong.

While the effective altruism movement has changed a lot over time, one of the parts that makes me most disappointed is the steady creep of donation matching. It's not that donation matching is objectively very important, but the early EA movement's principled rejection of a very effective fundraising strategy made it clear that we were committed to helping people understand the real impact of their donations. Over time, as people have specialized into different areas of EA, with community-building and epistemics being different people from fundraising, we've become less robust against the real-world incentives of "donation matching works". Personally, I would love to see a community-wide norm against EA organizations setting up donation matches. Yes, they bring in money, but at the cost of misleading donors about their impact and unwinding a lot of what we, as a community, are trying to build. [1]

To the extent that we do have them, however, I think it's important that donors understand how the matching works. And not just in the sense of having the information available on a page somewhere: if most people going through your regular flow are not going to understand roughly what the effect of their choices are, you're misleading people.

Here's an example of how I don't think it should be done: I come to you with an offer. I have a pot with $30 in it, which will go to my favorite charity unless we agree otherwise. If you're willing to donate $75 to your favorite charity and $75 to mine, then I'm willing to split my $30 pot between the two charities. How should you think about this offer? As presented, your options are:

  • Do nothing, and $30 goes from the pot to my favorite charity.
  • Take my offer, and:
    • $75 goes from your bank account to your favorite charity
    • $75 goes from your bank account to my favorite charity
    • $15 leaves the pot for your favorite charity
    • $15 leaves the pot for my favorite charity

While this looks nice and symmetrical, satisfying some heuristics for fairness, I think it's clearer to (a) factor out the portion that happens regardless and (b) look at the net flows of money. Then if you take the offer:

  • $150 leaves your bank account
  • $90 goes to your favorite charity
  • $60 goes to my favorite charity

If I presented this offer and encouraged you to take it because of my "match", that would be misleading. While at a technical level I may be transferring some of my pot to your favorite charity, it's only happening after I'm assured that a larger amount will go to mine: you're not actually influencing how I spend my pot in any real sense. Which is why I'm quite disappointed that Charity Entrepreneurship, after considering these arguments, decided to build FarmKind: This is essentially a white-labeled GivingMultiplier. [2] It's not exactly the same, in part because it has a more complex function for determining the size of the match, [3] but it continues to encourage people to give by presenting the illusion that the donor is influencing the matcher to help fund the donor's favorite charity. While setting up complex systems can cause people to donate more than they would otherwise, we should not be optimizing for short-term donations at the expense of donor agency.
I shared a draft of this post with FarmKind and GivingMultiplier for review before publishing, and before starting this post I left most of these points as comments on the EA Forum announcement. [1] I think participating in existing donation match systems is generally fine, and often a good idea. I've used employer donation matching and donated via Facebook's Giving Tuesday match, and at a previous employer fundraised for GiveWell's top charities through their matching system. In the latter case, in my fundraising I explicitly ...
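A minimal sketch of the net-flow arithmetic described in this episode, written in Python for illustration (the $30 pot, the $75 + $75 gift, and the 50/50 pot split come from the example above; the function and variable names are my own):

```python
# Worked example of the matching offer described above.
# Baseline: the matcher's $30 pot goes to their favorite charity if you do nothing.
# Taking the offer: you give $75 to each charity and the pot is split 50/50.

POT = 30.0        # matcher's pot
YOUR_GIFT = 75.0  # what you give to each of the two charities if you accept

def flows_if_you_decline():
    """Gross flows when you do nothing: the whole pot goes to the matcher's charity."""
    return {"your_charity": 0.0, "matchers_charity": POT, "out_of_your_pocket": 0.0}

def flows_if_you_accept():
    """Gross flows when you take the offer (pot split evenly between the charities)."""
    return {
        "your_charity": YOUR_GIFT + POT / 2,      # 75 + 15 = 90
        "matchers_charity": YOUR_GIFT + POT / 2,  # 75 + 15 = 90
        "out_of_your_pocket": 2 * YOUR_GIFT,      # 150
    }

def net_effect_of_accepting():
    """Net change relative to doing nothing -- the comparison the post recommends."""
    baseline, offer = flows_if_you_decline(), flows_if_you_accept()
    return {key: offer[key] - baseline[key] for key in baseline}

print(net_effect_of_accepting())
# {'your_charity': 90.0, 'matchers_charity': 60.0, 'out_of_your_pocket': 150.0}
```

Computing the net effect this way reproduces the post's numbers: accepting moves $90 to your favorite charity and $60 to the matcher's favorite charity, at a cost of $150 to you.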

Effective Altruism Forum Podcast
“Wild Animal Initiative has urgent need for more funding and more donors” by Cameron Meyer Shorb


Aug 7, 2024 · 25:41


Our room for more funding is bigger and more urgent than ever before. Our organizational strategy will be responsive both to the total amount raised and to how many people donate, so smaller donors will have an especially high impact this year. Good Ventures recently decided to phase out funding for several areas (GV blog, EA Forum post), including wild animal welfare. That's a pretty big shock to our movement. We don't know what exactly the impact will be, except that it's complicated. The purpose of this post is to share what we know and how we're thinking about things — primarily to encourage people to donate to Wild Animal Initiative this year, but also for anyone else who might be interested in the state of the wild animal welfare movement more broadly.

Summary

Track record

Our primary goal is to support the growth of a [...]

Outline:
(00:53) Summary
(03:44) What we've accomplished so far
(03:48) Background
(04:12) Strategy
(05:39) Progress
(11:56) Why we need more funding, especially from smaller donors
(12:02) We always have room to grow, but now we need to make sure we don't shrink.
(13:11) We raised less last year than we did the year before.
(13:59) We're losing our biggest donor.
(16:37) Another donor could make up the funds, but not the security.
(18:25) Smaller donors have a bigger role to play.
(20:04) What we'll do with your donation
(24:15) Conclusion

The original text contained 1 footnote which was omitted from this narration.

First published: August 6th, 2024. Source: https://forum.effectivealtruism.org/posts/idhTjyNTsyxobijyJ/wild-animal-initiative-has-urgent-need-for-more-funding-and

Narrated by TYPE III AUDIO.

The Nonlinear Library
EA - Wild Animal Initiative has urgent need for more funding and more donors by Cameron Meyer Shorb


Aug 6, 2024 · 21:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wild Animal Initiative has urgent need for more funding and more donors, published by Cameron Meyer Shorb on August 6, 2024 on The Effective Altruism Forum.

Our room for more funding is bigger and more urgent than ever before. Our organizational strategy will be responsive both to the total amount raised and to how many people donate, so smaller donors will have an especially high impact this year. Good Ventures recently decided to phase out funding for several areas (GV blog, EA Forum post), including wild animal welfare. That's a pretty big shock to our movement. We don't know what exactly the impact will be, except that it's complicated. The purpose of this post is to share what we know and how we're thinking about things - primarily to encourage people to donate to Wild Animal Initiative this year, but also for anyone else who might be interested in the state of the wild animal welfare movement more broadly.

Summary

Track record

Our primary goal is to support the growth of a self-sustaining interdisciplinary research community focused on reducing wild animal suffering. Wild animal welfare science is still a small field, but we're really happy with the momentum it's been building. Some highlights of the highlights:

  • We generally get a positive response from researchers (particularly in animal behavior science and ecology), who tend to see wild animal welfare as a natural extension of their interest in conservation (unlike EAs, who tend to see those two as conflicting with each other).
  • Wild animal welfare is increasingly becoming a topic of discussion at scientific conferences, and was recently the subject of the keynote presentation at one.
  • Registration for our first online course filled to capacity (50 people) within a few hours, and just as many people joined the waitlist over the next few days.

Room for more funding

This is the first year in which our primary question is not how much more we can do, but whether we can avoid major budget cuts over the next few years. We raised less in 2023 than we did in 2022, so we need to make up for that gap. We're also going to lose our biggest donor because Good Ventures is requiring Open Philanthropy to phase out their funding for wild animal welfare. Open Phil was responsible for about half of our overall budget. The funding from their last grant to us will last halfway through 2026, but we need to decide soon how we're going to adapt.

To avoid putting ourselves back in the position of relying on a single funder, our upcoming budgeting decisions will depend on not only how much money we raise, but also how diversified our funding is. That means gifts from smaller donors will have an unusually large impact. (The less you normally donate, the more disproportionate your impact will be, but the case still applies to basically everyone who isn't a multi-million-dollar foundation.) Specifically, our goal is to raise $240,000 by the end of the year from donors giving $10k or less.

Impact of marginal donations

We're evaluating whether we need to reduce our budget to a level we can sustain without Open Philanthropy. The more we raise this year - and the more donors who pitch in to make that happen - the less we'll need to cut. Research grants and staff-associated costs make up the vast majority of our budget, so we'd need to make cuts in one or both of those areas. Donations would help us avoid layoffs and keep funding external researchers.

What we've accomplished so far

Background

If you're not familiar with Wild Animal Initiative, we're working to accelerate the growth of wild animal welfare science. We do that through three interconnected programs: We make grants to scientists who take on relevant projects, we conduct our own research on high-priority questions, and we do outreach through conferences and virtual events.

Strategy...

The Nonlinear Library
LW - Fear of centralized power vs. fear of misaligned AGI: Vitalik Buterin on 80,000 Hours by Seth Herd


Aug 5, 2024 · 11:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fear of centralized power vs. fear of misaligned AGI: Vitalik Buterin on 80,000 Hours, published by Seth Herd on August 5, 2024 on LessWrong.

Vitalik Buterin wrote an impactful blog post, My techno-optimism. I found this discussion of one aspect on 80,000 Hours much more interesting. The remainder of that interview is nicely covered in the host's EA Forum post. My techno-optimism apparently appealed to both sides, e/acc and doomers. Buterin's approach to bridging that polarization was interesting. I hadn't understood before the extent to which anti-AI-regulation sentiment is driven by fear of centralized power. I hadn't thought about this risk before since it didn't seem relevant to AGI risk, but I've been updating to think it's highly relevant.

[this is automated transcription that's inaccurate and comically accurate by turns :)]

Rob Wiblin (the host) (starting at 20:49): what is it about the way that you put the reasons to worry that that ensured that kind of everyone could get behind it

Vitalik Buterin: [...] in addition to taking you know the case that AI is going to kill everyone seriously I the other thing that I do is I take the case that you know AI is going to take create a totalitarian World Government seriously [...]

[...] then it's just going to go and kill everyone but on the other hand if you like take some of these uh you know like very naive default solutions to just say like hey you know let's create a powerful org and let's like put all the power into the org then yeah you know you are creating the most like most powerful big brother from which There Is No Escape and which has you know control over the Earth and and the expanding light cone and you can't get out right and yeah I mean this is something that like uh I think a lot of people find very deeply scary I mean I find it deeply scary um it's uh it is also something that I think realistically AI accelerates right

One simple takeaway is to recognize and address that motivation for anti-regulation and pro-AGI sentiment when trying to work with or around the e/acc movement. But a second is whether to take that fear seriously. Is centralized power controlling AI/AGI/ASI a real risk?

Vitalik Buterin is from Russia, where centralized power has been terrifying. This has been the case for roughly half of the world. Those that are concerned with risks of centralized power (including Western libertarians) are worried that AI increases that risk if it's centralized. This puts them in conflict with x-risk worriers on regulation and other issues.

I used to hold both of these beliefs, which allowed me to dismiss those fears:

1. AGI/ASI will be much more dangerous than tool AI, and it won't be controlled by humans
2. Centralized power is pretty safe (I'm from the West like most alignment thinkers).

Now I think both of these are highly questionable. I've thought in the past that fears of AI are largely unfounded. The much larger risk is AGI. And that is an even larger risk if it's decentralized/proliferated. But I've been progressively more convinced that Governments will take control of AGI before it's ASI, right?
They don't need to build it, just show up and inform the creators that as a matter of national security, they'll be making the key decisions about how it's used and aligned.[1] If you don't trust Sam Altman to run the future, you probably don't like the prospect of Putin or Xi Jinping as world-dictator-for-eternal-life. It's hard to guess how many world leaders are sociopathic enough to have a negative empathy-sadism sum, but power does seem to select for sociopathy. I've thought that humans won't control ASI, because it's value alignment or bust. There's a common intuition that an AGI, being capable of autonomy, will have its own goals, for good or ill. I think it's perfectly coherent for it...

The Nonlinear Library
EA - Take the 2024 EA Forum user survey by Sarah Cheng


Aug 1, 2024 · 1:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take the 2024 EA Forum user survey, published by Sarah Cheng on August 1, 2024 on The Effective Altruism Forum. One year ago, the EA Forum team ran a survey for EA Forum users. The results have been very helpful for us to clarify our strategy, understand our impact, and decide what to prioritize working on in the past 12 months. From today through August 20[1], we're running an updated version of the survey for 2024. All questions are optional, as is a new section about other CEA programs. We estimate the survey will take you 7-15 minutes to complete (less if you skip questions, longer if you write particularly detailed answers). Your answers will be saved in your browser, so you can pause and return to it later. We appreciate you taking the time to fill out the survey, and we read every response. Even if you do not use the Forum, your response is helpful for us to compare different populations (and you will see a significantly shorter survey). All of us who work on the Forum do so because we want to have a positive impact, and it's important that we have a clear understanding of what (if any) work on the Forum fulfills that goal. We spend our time on other projects as well[2], and your responses will inform what our team works on in the next 12 months. 1. ^ By default, we plan to close the survey on Tuesday August 20, but we may decide to leave it open longer. 2. ^ Let us know if you would be interested in partnering with us! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Video AMA and transcript: Beast Philanthropy's Darren Margolias by Toby Tremlett


Jul 31, 2024 · 75:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Video AMA and transcript: Beast Philanthropy's Darren Margolias, published by Toby Tremlett on July 31, 2024 on The Effective Altruism Forum.

This is a video and transcript of an AMA with Darren Margolias, the Executive Director of Beast Philanthropy. The interviewer is CEA's Emma Richter. See the questions Darren is answering in the AMA announcement post. If you'd like to support the Beast Philanthropy x GiveDirectly collaboration, you can donate here. The first $150,000 of donations will be matched. GiveDirectly shares their thinking on donation matching here.

Video AMA - Emma interviews Darren

Short transcript

This transcript was cleaned up and shortened by ChatGPT. To my eye it seems to accurately represent Emma's questions and Darren's answers, but I've included the full, un-edited transcript at the end of this post so that you can cross-reference, get accurate quotes from Darren, or look for more detail. All errors in the GPT summary are my (Toby's) own (by a transitive property).

Emma: Hello everyone, I'm Emma from CEA. Welcome to our Forum AMA with Darren Margolias, the executive director of Beast Philanthropy. I'll be asking Darren the questions you posted on the EA Forum. For some context, Beast Philanthropy is a YouTube channel and organization started by Mr. Beast, the most watched YouTuber in the world. During Darren's four years at Beast Philanthropy, he's grown the channel to over 25 million subscribers and expanded the scope of what they do, from running food banks to building houses, funding direct cash transfers, and curing diseases. Speaking of cash transfers, Beast Philanthropy recently collaborated with GiveDirectly to make a video and transfer $200,000 to Ugandans in poverty. Darren, thanks for joining us. It's really exciting to have you and to ask all these questions. I'll start with some intro context questions. Could you give us a quick overview of what Beast Philanthropy does and your role?

Darren: Beast Philanthropy started out with Jimmy's initial idea to create a channel that would generate revenue to fund a food bank and feed our local community. We hoped to spread it across the country and maybe even other countries. We quickly realized there was much more to what we were doing than we initially contemplated, and it's grown far beyond that. As the executive director, I'm the person who gets all the blame when things go wrong.

Emma: Yeah, fair enough. So I'm curious, what first got you interested in working as an executive director for a charity?

Darren: I was actually a real estate developer. In 2002, a friend found some kittens under her front deck and couldn't find a shelter that wouldn't euthanize them if they weren't adopted within a week. She decided to find them homes, and I said I'd pay for all the bedding and everything. Out of that, we started an animal rescue organization, which has grown into the biggest no-cage, no-kill facility in the southeastern United States. It's been very successful. Along the way, I realized that the endless pursuit of making more money wasn't fulfilling. In 2008, I had a realization that I wanted to do more in my life than work hard and buy stuff that didn't make me happy. The animal project brought me fulfillment, so I decided to sell my real estate portfolio and start doing things that mattered to me. We built the animal charity, then I started another charity for severely affected autistic kids. One day, Jimmy's CEO called and asked me to meet Jimmy. I didn't know who Mr. Beast was at the time.

Darren: He told me they wanted to get every dog in a dog shelter adopted and were considering our shelter. When I got to North Carolina, I started my presentation right away, but Jimmy interrupted and said, "We've already approved the video. It's being done at your shelter. You're here for another reason." He th...

The Nonlinear Library
EA - Data from the 2023 EA Forum user survey by Sarah Cheng


Jul 26, 2024 · 14:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Data from the 2023 EA Forum user survey, published by Sarah Cheng on July 26, 2024 on The Effective Altruism Forum.

The purpose of this post is to share data from a survey that the EA Forum team ran last year. Though we used this survey as one of many sources of information for internal analyses, I did not include any particular takeaways from this survey data in this post. I leave that as an exercise for the reader.

Overview

In August 2023, the EA Forum team ran a survey to learn more about how people use the Forum and how the Forum impacted them. We got 609 valid responses. Thank you to everyone who responded - we really appreciate you taking the time. The results have been important for helping us understand the ways that the Forum creates value and disvalue that are otherwise hard for us to track. We've used it to evaluate the impact of the Forum and the marginal impact of our work, update our team's strategy, and prioritize the work we've done in the past 12 months.

The person who ran the survey and wrote up the analysis is no longer at CEA, but I figured people might be interested in the results of the survey, so I'm sharing some of the data in this post. Most of the information here comes from that internal analysis, but when I use "I" that is me (Sarah) editorializing. This post is not comprehensive, and does not include all relevant data. I did not spend time double checking any of the information from that analysis. We plan to run another (updated) survey soon for 2024.

Some Forum usage data, for context

The Forum had 4.5k monthly active and 13.7k annually active logged in users in the 12 months ending on Sept 4 2023. We estimate that the total number of users was closer to 20-30k (since about 50% of traffic is logged out). Here's a breakdown of usage data for logged-in users in those 12 months:

  • 13.7k distinct logged in users
  • 8.5k users with 3 distinct days of activity
  • 5.7k users with 10 distinct days of activity
  • 3.1k users who were active during of all months
  • 1.7k users who were active during of all weeks
  • 388 users who were active during on of all days
  • 4.4k distinct commenters
  • 171 distinct post authors

It's important to note that August 2022 - August 2023 was a fairly unusual time for EA, so while you can (and we have) used this survey data to estimate things like "the value the Forum generates per year", you might think that August 2023 - August 2024 is a more typical year, and so the data from the next survey may be more representative.

Demographic reweighting[1]

Rethink Priorities helped us with the data analysis, which included adjusting the raw data by weighting the responses to try to get a more representative view of the results. All charts below include both the raw and weighted[2] data. The weighting factors were:

1. Whether the respondent had posted (relative to overall Forum usage)
2. Whether the respondent had commented (relative to overall Forum usage)
3. How frequently the respondent used the Forum (relative to overall Forum usage)
4. The respondent's EA engagement level (relative to the Forum statistics from the 2020 EA Survey)
5. The respondent's gender (relative to the Forum statistics from the 2020 EA Survey)

Some effects of the reweighting:

  • A significantly higher proportion of respondents have posted or commented on the Forum, relative to actual overall Forum usage, so reweighting decreases those percentages and the percentages of other actions (such as voting on karma).
  • A plurality (around 45%) of respondents said they visit the Forum about 1-2 times a week. This is more frequent than the overall Forum population, so reweighting decreases things like the percentage of users who applied for a job due to the Forum, and the mean rating of "significantly changed your thinking".
  • Overall, respondents tended to be more highly engaged than the ...
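The post doesn't spell out the exact reweighting procedure Rethink Priorities used, but the basic idea of weighting survey responses toward known population proportions can be sketched as follows. This is an illustrative Python example only: the proportions, ratings, and the single "has commented" factor are hypothetical, not figures from the survey.

```python
from collections import Counter

# Each (hypothetical) respondent: (has_commented, 1-5 rating for
# "significantly changed your thinking"). Commenters are overrepresented here.
respondents = [
    (True, 5), (True, 4), (True, 5), (True, 3),
    (False, 2), (False, 3),
]

# Hypothetical population shares, e.g. taken from overall Forum usage data.
population_share = {True: 0.3, False: 0.7}

# Shares actually observed in the survey sample.
counts = Counter(has_commented for has_commented, _ in respondents)
sample_share = {group: n / len(respondents) for group, n in counts.items()}

# Weight each respondent so the weighted sample matches the population shares.
weights = [population_share[group] / sample_share[group] for group, _ in respondents]

raw_mean = sum(rating for _, rating in respondents) / len(respondents)
weighted_mean = sum(w * r for w, (_, r) in zip(weights, respondents)) / sum(weights)

print(f"raw mean rating:      {raw_mean:.2f}")    # pulled up by overrepresented commenters
print(f"weighted mean rating: {weighted_mean:.2f}")  # closer to the broader user base
```

Downweighting the overrepresented group lowers the mean rating, which matches the direction of the effects described above; the real analysis combined five factors rather than one.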

The Nonlinear Library
EA - Forum update: User database, card view, and more (Jul 2024) by Sarah Cheng


Jul 25, 2024 · 4:11


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forum update: User database, card view, and more (Jul 2024), published by Sarah Cheng on July 25, 2024 on The Effective Altruism Forum.

Highlights since our last feature update:

  • People directory
  • Google Doc import
  • Card view for the Frontpage
  • User profile updates
  • Updates to sequences

We've also hosted 3 events, significantly improved the site speed, and made many more improvements across the Forum. As always, we'd love feedback on these changes.

People directory

We've built a filterable database where anyone can find Forum users they might want to hire, collaborate with, or read. You can filter the database by role, organization, interests and more. If you want people to easily find you, consider adding more information to your Forum profile.

Google Doc import

You can now import posts directly from Google Docs. Footnotes and images will be imported along with the text. How it works:

1. Write and format your post in Google Docs
2. Share the Google doc with eaforum.posts@gmail.com
3. Copy the link to your doc, click "Import Google doc" on a new Forum post, and paste the link

For more details, check out Will Howard's quick take.

Card view for the Frontpage

Card view is a new way to view the Frontpage, allowing you to see post images and a preview of the content. Try it out by changing to card view in the dropdown to the right of "Customize feed".

User profile updates

We've redesigned the Edit profile page, and moved "Display name" to this page (from Account settings). Feel free to update your display name to add a GWWC pledge diamond. We've also re-enabled topic interests. This helps possible collaborators or hiring managers find you in our new People directory, as well as letting you make your profile a bit more information dense and personalized.

Updates to sequences

You can now get notified when posts are added to a sequence. We've also updated the sequence editor, including adding the ability to delete sequences.

Other updates

Forum performance is greatly improved: We've made several behind the scenes changes which have sped up the Forum loading times. This should make your experience of using the Forum a lot smoother.

Improved on-site audio experience: While the previous version was static, attached to the top of the post, the new audio player sticks to the bottom of the page while the reader scrolls through the article. Once you have clicked the speaker icon to open the audio player, you can click the play button next to any header to start playing the audio from there.

We've fixed our Twitter bot: Our Twitter bot tweets posts when they hit 40 karma. It's been out of commission for a while, but now it's back up and running! We encourage you to follow, share, and retweet posts that you think are valuable.

Updated UI for post stats: We've simplified the UI and added high level metrics to your post stats. Before and after:

Self-service account deletion: There is now a section at the bottom of your account settings allowing you to permanently delete your personal data from the Forum.

Events since March

Draft Amnesty Week: We hosted a Draft Amnesty Week to encourage people to publish posts that had been languishing in draft or procrastinated states. We had around 50 posts published. We also made some design changes to how we show events on the Forum, which will make future events more visible and navigable.

"Ways the world is getting better" banner: This banner was on the Forum for a week. Users could add emojis to the banner, which would show a piece of good news when someone hovered over them. When you clicked the emojis, you would be linked to an article explaining the good news.

AI Welfare Debate Week: Our first interactive debate week had over 500 Forum users voting on our interactive banner, around 30 posts, and much discussion in comments and quick takes. We received a lot of enco...

The Nonlinear Library
EA - Applications Now Open for AIM's 2025 Programs by CE


Jul 25, 2024 · 10:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications Now Open for AIM's 2025 Programs, published by CE on July 25, 2024 on The Effective Altruism Forum.

In short: Applications for AIM's two upcoming programs - the Charity Entrepreneurship Incubation Program and AIM's new Founding to Give Program - are now open. You can apply to multiple programs with one joint application form. The deadline to apply for all programs is September 15th.

Over the past five years, AIM has incubated 40 highly effective nonprofits and secured over $3.9 million in seed grants. These organizations now reach over 35 million people and have the potential to improve the lives of more than 1 billion animals through their interventions. We are incredibly proud of the success achieved by these organizations and their dedicated founders. We believe founding an organization is the most impactful career path for fast-moving, entrepreneurial individuals. In our search for more impact-driven individuals to found field-leading organizations, we are excited to announce that our applications are now open.

Dates:

  • Charity Entrepreneurship Incubation Program: February-March 2025 (8 weeks); July - August 2025 (8 weeks)
  • AIM Founding to Give: 6th of January - 28th of March 2025 (12 weeks)
  • AIM Research Fellowship - Expression of Interest (dates TBD, likely early 2025)

About the Charity Entrepreneurship Incubation Program

Why apply? Put simply, we believe that founding a charity is likely one of the most impactful and exciting career options for people who are a good fit. In just a few years of operation, our best charities have gone from an idea to organizations with dozens of staff improving the lives of millions of people and animals every year. We provide the training, funding, research, and mentorship to ensure that people with the right aptitudes and a relentless drive to improve the world can start a charity, no matter their background or prior experience. This could be you! Our initial application form takes as little as 30 minutes - take a look at our applicant resources and apply now.

Who is this program for? Individuals who want to make a huge impact with their careers. Charity entrepreneurs are ambitious, fast-moving, and prioritize impact above all. They are focused on cost-effectiveness and are motivated to pilot and scale an evidence-backed intervention. We have found that those from consulting backgrounds, for-profit entrepreneurship, effective NGOs, or recent graduates perform well in this program.

What we offer:

  • 2-month full-time training with two weeks in-person in London.
  • Stipend of £1900/month during (and potentially up to 2 months after) the program.
  • Incredibly talented individuals to co-found your new project with.
  • Possibility to apply for $100,000 - $200,000 seed funding (~80% of projects get funded).
  • Membership of the AIM alumni network, connecting you to mentorship, funders, and a community of other founders.

The ideas: We are excited to announce our top charity ideas for the upcoming CE incubator. These ideas are the results of a seven-stage research process. To be brief, we have sacrificed nuance. In the upcoming weeks, full reports will be announced in our newsletter, published on our website, and posted on the EA Forum.

  • Cage-free in the Middle East: An organization focused on good-cop cage-free corporate campaigning in neglected countries in the Middle East (United Arab Emirates, Saudi Arabia, and Egypt).
  • Keel Bone Fractures: A charity working on how farmers can reduce the prevalence of keel bone fractures (KBF) in cage-free layer hens, ideally through outreach to certifiers to update their certification standards to include an outcome-based limit on KBF.
  • Fish Welfare East Asia - We continue to recommend an organization that works with farmers in neglected, high-priority countries in East Asia (Philippines, Taiwan, and Indo...

The Nonlinear Library
EA - AI companies are not on track to secure model weights by Jeffrey Ladish


Jul 19, 2024 · 28:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI companies are not on track to secure model weights, published by Jeffrey Ladish on July 19, 2024 on The Effective Altruism Forum. This post is a write-up of my talk from EA Global: Bay Area 2024, which has been lightly edited for clarity. Speaker background Jeffrey is the Executive Director of Palisade Research, a nonprofit that studies AI capabilities to better understand misuse risks from current systems and how advances in hacking, deception, and persuasion will affect the risk of catastrophic AI outcomes. Palisade is also creating concrete demonstrations of dangerous capabilities to advise policymakers and the public of the risks from AI. Introduction and context Do you want the good news first or the bad news? The bad news is what my talk's title says: I think AI companies are not currently on track to secure model weights. The good news is, I don't think we have to solve any new fundamental problems in science in order to solve this problem. Unlike AI alignment, I don't think that we have to go into territory that we've never gotten to before. I think this is actually one of the most tractable problems in the AI safety space. So, even though I think we're not on track and the problem is pretty bad, it's quite solvable. That's exciting, right? I'm going to talk about how difficult I think it is to secure companies or projects against attention from motivated, top state actors. I'm going to talk about what I think the consequences of failing to do so are. And then I'm going to talk about the so-called incentive problem, which is, I think, one of the reasons why this is so thorny. Then, let's talk about solutions. I think we can solve it, but it's going to take some work. I was already introduced, so I don't need to say much about that. I was previously at Anthropic working on the security team. I have some experience working to defend AI companies, although much less than some people in this room. And while I'm going to talk about how I think we're not yet there, I want to be super appreciative of all the great people working really hard on this problem already - people at various companies such as RAND and Pattern Labs. I want to give a huge shout out to all of them. So, a long time ago - many, many years ago in 2022 [audience laughs] - I wrote a post with Lennart Heim on the EA Forum asking, "What are the big problems information security might help solve?" One we talked about is this core problem of how to secure companies from attention from state actors. At the time, Ben Mann and I were the only security team members at Anthropic, and we were part time. I was working on field-building to try to find more people working in this space. Jarrah was also helping me. And there were a few more people working on this, but that was kind of it. It was a very nervous place to be emotionally. I was like, "Oh man, we are so not on track for this. We are so not doing well." Note from Jeffrey: I left Anthropic in 2022, and I gave this talk in Feb 2024, ~5 months ago. My comments about Anthropic here reflect my outside understanding at the time and don't include recent developments on security policy. Here's how it's going now. RAND is now doing a lot of work to try to map out what is really required in this space. The security team at Anthropic is now a few dozen people, with Jason Clinton leading the team. 
He has a whole lot of experience at Google. So, we've gone from two part-time people to a few dozen people - and that number is scheduled to double soon. We've already made a tremendous amount of progress on this problem. Also, there's a huge number of events happening. At DEF CON, we had about 100 people and Jason ran a great reading group to train security engineers. In general, there have been a lot more people coming to me and coming to 80,000 Hours really intere...

The Nonlinear Library
EA - Abandoning functionalism: Some intuition pumps by Alfredo Parra


Jul 12, 2024 · 28:11


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abandoning functionalism: Some intuition pumps, published by Alfredo Parra on July 12, 2024 on The Effective Altruism Forum. There seems to be a widely-held view in popular culture that no physicist really understands quantum mechanics. The meme probably gained popularity after Richard Feynman famously stated in a lecture (transcribed in the book "The Character of Physical Law") "I think I can safely say that nobody understands quantum mechanics", though many prominent physicists have expressed a similar sentiment. Anyone aware of the overwhelming success of quantum mechanics will recognize that the seeming lack of understanding of the theory is primarily about how to interpret its ontology, and not about how to do the calculations or run the experiments, which clearly many physicists understand extremely well. But even the ontological confusion is debatable. With the proliferation of interpretations of quantum mechanics - each varying in terms of, among others, which classical intuitions should be abandoned - at least some physicists seem to think that there isn't anything weird or mysterious about the quantum world. So I suspect there are plenty of physicists who would politely disagree that it's not possible to really understand quantum mechanics. Sure, it might take them a few decades of dedicated work in theoretical physics and a certain amount of philosophical sophistication, but there surely are physicists out there who (justifiably) feel like they grok quantum mechanics both technically and philosophically, and who feel deeply satisfied with the frameworks they've adopted. Carlo Rovelli (proponent of the relational interpretation) and Sean Carroll (proponent of the many-worlds interpretation) might be two such people. This article is not about the controversial relationship between quantum mechanics and consciousness. Instead, I think there are some lessons to learn in terms of what it means and feels like to understand a difficult topic and to find satisfying explanations. Maybe you will relate to my own journey. See, for a long time, I thought of consciousness as a fundamentally mysterious aspect of reality that we'd never really understand. How could we? Is there anything meaningful we can say about why consciousness exists, where it comes from, or what it's made of? Well, it took me an embarrassingly long time to just read some books on philosophy of mind, but when I finally did some 10 years ago, I was captivated: What if we think in terms of the functions the brain carries out, like any other computing system? What if the hard problem is just ill-defined? Perhaps philosophical zombies can teach us meaningful things about the nature of consciousness? Wow. Maybe we can make progress on these questions after all! Functionalism in particular - the position that any information system is conscious if it computes the appropriate outputs given some inputs - seemed a particularly promising lens. The floodgates of my curiosity were opened. I devoured as much content as I could on the topic - Dennett, Dehaene, Tononi, Russell, Pinker; I binge-read Brian Tomasik's essays and scoured the EA Forum for any posts discussing consciousness. Maybe we can preserve our minds by uploading their causal structure? Wow, yes! Could sufficiently complex digital computers become conscious? Gosh, scary, but why not? 
Could video game characters matter morally? I shall follow the evidence wherever it leads me. The train to crazy town had departed, and I wanted to have a front-row seat. Alas, the excitement soon started to dwindle. Somehow, the more I learned about consciousness, the more confused and dissatisfied I felt. Many times in the past I'd learned about a difficult topic (for instance, in physics, computer science, or mathematics) and, sure, the number of questions would mul...

The Nonlinear Library
EA - Thank You For Your Time: Understanding the Experiences of Job Seekers in Effective Altruism by Julia Michaels


Jul 10, 2024 · 24:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thank You For Your Time: Understanding the Experiences of Job Seekers in Effective Altruism, published by Julia Michaels on July 10, 2024 on The Effective Altruism Forum.

Summary

The purpose of this research is to understand the experiences of job seekers who are looking for high-impact roles, particularly roles at organizations aligned with the Effective Altruism movement (hereafter referred to as "EA job seekers" and "EA organizations," respectively).[1] Organizations such as 80,000 Hours, Successif, High Impact Professionals, and others already provide services to support professionals in planning for and transitioning into high-impact careers. However, it's less clear how successful job seekers are at landing roles that qualify as high impact. Anecdotally, prior EA Forum posts suggest that roles at EA organizations are very competitive, even for well-qualified individuals, with an estimated range of ~47-124 individuals rejected for every open role. Given that some EA thought leaders (Ben West, for example) have suggested working in high-impact roles as a pathway to increasing one's individual lifetime impact (in addition to "earning to give"), it seems important to understand how well this strategy is working. By surveying and speaking with job seekers who identify as effective altruists directly, I have identified common barriers and possible solutions that might be picked up by job seekers, support organizations (such as those listed above), and EA organizations.

The tl;dr summary of findings:

  • A majority of EA job seekers are recent graduates or early career professionals with

The Nonlinear Library
EA - High Impact Engineers is Transitioning to a Volunteer-Led Model by Jessica Wen

The Nonlinear Library

Play Episode Listen Later Jul 2, 2024 7:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High Impact Engineers is Transitioning to a Volunteer-Led Model, published by Jessica Wen on July 2, 2024 on The Effective Altruism Forum. Summary After over 2 years of operations, High Impact Engineers (HI-Eng) is reverting to a volunteer-led organisational model due to a middling impact outcome and a lack of funding. We wanted to thank all our subscribers, supporters, and contributors for being the driving force behind HI-Eng's achievements, which you can read about in our Impact Report. What is High Impact Engineers? High Impact Engineers (HI-Eng for short, pronounced high-enj) is an organisation dedicated to helping (physical - i.e. non-software) engineers increase their ability to have an outsized positive impact through their work. Why Is HI-Eng Winding Down? In December 2023, we sent out a community survey and solicited case studies and testimonials to evaluate our impact, which we wrote up in our Impact Report. As shown in the report, there is some evidence of behavioural and attitudinal changes in our members towards more impactful career outcomes due to interactions with our programmes, as well as some ongoing career transitions that we supported to some extent, but even after consultations with grantmakers and other community builders, we found it difficult to determine if this amount of impact would meet the bar for ongoing funding. As a result, we decided to (re-)apply for funding from the major EA funds (i.e. EAIF and Open Philanthropy), and they ended up deciding to not fund High Impact Engineers. Since our runway from the previous funding round was so short, we decided against trying to hire someone else to take over running HI-Eng, and the team is moving on to new opportunities. However, we still believe that engineers in EA are a valuable and persistently underserved demographic, and that this latent potential can be realised by providing a hub for engineers in EA to meet other like-minded engineers and find relevant resources. Therefore, we decided to maintain the most valuable and impactful programmes through the help of volunteers. Lessons Learnt There are already many resources available for new community builders (e.g. the EA Groups Resource Centre, this, this, this, and this EA Forum post, and especially this post by Sofia Balderson), so we don't believe that there is much we can add that hasn't already been said. However, here are some lessons we think are robustly good: 1. Having a funding cycle of 6 months is too short. 2. If you're looking to get set up and running quickly, getting a fiscal sponsor is great. We went with the Players Philanthropy Fund, but there are other options (including Rethink Priorities and maybe your national EA group). 3. Speak to other community builders, and ask for their resources! They're often more than happy to give you a copy of their systems, processes and documentation (minus personal data). 4. Pay for monthly subscriptions to software when setting up, even if it's cheaper to get an annual subscription. You might end up switching to a different software further down the line, and it's easier (and cheaper) to cancel a monthly subscription. 5. Email each of your subscriptions' customer service to ask for a non-profit discount (if you have non-profit status). They can save you up to 50% of the ticket price. 
(Jessica will write up her own speculative lessons learnt in a future forum post). What Will HI-Eng Look Like Going Forward? Jessica will continue managing HI-Eng as a volunteer, and is currently implementing the following changes in our programmes: Email newsletter: the final HI-Eng newsletter was sent in May. Future impactful engineering opportunities can be found on the 80,000 Hours job board or the EA Opportunities board. Any other impactful engineering jobs can be submitted to these boards ( submission...

The Nonlinear Library
EA - Why so many "racists" at Manifest? by Austin

The Nonlinear Library

Play Episode Listen Later Jun 18, 2024 9:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why so many "racists" at Manifest?, published by Austin on June 18, 2024 on The Effective Altruism Forum. Manifest 2024 is a festival that we organized last weekend in Berkeley. By most accounts, it was a great success. On our feedback form, the average response to "would you recommend to a friend" was a 9.0/10. Reviewers said nice things like "one of the best weekends of my life" and "dinners and meetings and conversations with people building local cultures so achingly beautiful they feel almost like dreams" and "I've always found tribalism mysterious, but perhaps that was just because I hadn't yet found my tribe." Arnold Brooks running a session on Aristotle's Metaphysics. More photos of Manifest here. However, a recent post on The Guardian and review on the EA Forum highlight an uncomfortable fact: we invited a handful of controversial speakers to Manifest, whom these authors call out as "racist". Why did we invite these folks? First: our sessions and guests were mostly not controversial - despite what you may have heard Here's the schedule for Manifest on Saturday: (The largest & most prominent talks are on the left. Full schedule here.) And here's the full list of the 57 speakers we featured on our website: Nate Silver, Luana Lopes Lara, Robin Hanson, Scott Alexander, Niraek Jain-sharma, Byrne Hobart, Aella, Dwarkesh Patel, Patrick McKenzie, Chris Best, Ben Mann, Eliezer Yudkowsky, Cate Hall, Paul Gu, John Phillips, Allison Duettmann, Dan Schwarz, Alex Gajewski, Katja Grace, Kelsey Piper, Steve Hsu, Agnes Callard, Joe Carlsmith, Daniel Reeves, Misha Glouberman, Ajeya Cotra, Clara Collier, Samo Burja, Stephen Grugett, James Grugett, Javier Prieto, Simone Collins, Malcolm Collins, Jay Baxter, Tracing Woodgrains, Razib Khan, Max Tabarrok, Brian Chau, Gene Smith, Gavriel Kleinwaks, Niko McCarty, Xander Balwit, Jeremiah Johnson, Ozzie Gooen, Danny Halawi, Regan Arntz-Gray, Sarah Constantin, Frank Lantz, Will Jarvis, Stuart Buck, Jonathan Anomaly, Evan Miyazono, Rob Miles, Richard Hanania, Nate Soares, Holly Elmore, Josh Morrison. Judge for yourself; I hope this gives a flavor of what Manifest was actually like. Our sessions and guests spanned a wide range of topics: prediction markets and forecasting, of course; but also finance, technology, philosophy, AI, video games, politics, journalism and more. We deliberately invited a wide range of speakers with expertise outside of prediction markets; one of the goals of Manifest is to increase adoption of prediction markets via cross-pollination. Okay, but there sure seemed to be a lot of controversial ones… I was the one who invited the majority (~40/60) of Manifest's special guests; if you want to get mad at someone, get mad at me, not Rachel or Saul or Lighthaven; certainly not the other guests and attendees of Manifest. My criteria for inviting a speaker or special guest was roughly, "this person is notable, has something interesting to share, would enjoy Manifest, and many of our attendees would enjoy hearing from them". Specifically: Richard Hanania - I appreciate Hanania's support of prediction markets, including partnering with Manifold to run a forecasting competition on serious geopolitical topics and writing to the CFTC in defense of Kalshi. 
(In response to backlash last year, I wrote a post on my decision to invite Hanania, specifically) Simone and Malcolm Collins - I've enjoyed their Pragmatist's Guide series, which goes deep into topics like dating, governance, and religion. I think the world would be better with more kids in it, and thus support pronatalism. I also find the two of them to be incredibly energetic and engaging speakers IRL. Jonathan Anomaly - I attended a talk Dr. Anomaly gave about the state-of-the-art on polygenic embryonic screening. I was very impressed that something long-considered scien...

The Nonlinear Library
EA - Announcing AI Welfare Debate Week (July 1-7) by Toby Tremlett

The Nonlinear Library

Play Episode Listen Later Jun 18, 2024 5:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AI Welfare Debate Week (July 1-7), published by Toby Tremlett on June 18, 2024 on The Effective Altruism Forum. July 1-7 will be AI Welfare Debate Week on the EA Forum. We will be discussing the debate statement: "AI welfare [1] should be an EA priority[2]". The Forum team will be contacting authors who are well-versed in this topic to post, but we also welcome posts, comments, quick takes and link-posts from any Forum user who is interested. Feedback on this post may influence the exact wording of the debate statement and its footnotes[3], but it will be held fixed enough to ensure that someone writing a post immediately after this announcement is published can be confident that it will still be relevant to the debate on July 1st. We will be experimenting with a banner where users can mark how strongly they agree or disagree with the debate statement, and a system which uses the posts we record as changing your mind to produce a list of the most influential posts. Should AI welfare be an EA priority? AI welfare - the capacity of digital minds to feel pleasure, pain, happiness, suffering, satisfaction, frustration, or other morally significant welfare states - appears in many of the best and worst visions of the future. If we consider the value of the future from an impartial welfarist perspective, and if digital minds of comparable moral significance to humans are far easier to create than humans, then the majority of future moral patients may be digital. Even if they don't make up the majority of minds, the total number of digital minds in the future could be vast. The most tractable period to influence the future treatment of digital minds may be limited. We may have decades or less to advocate against the creation of digital minds (if that were the right thing to do), and perhaps not much longer than that to advocate for proper consideration of the welfare or rights of digital minds if they are created. Therefore, gaining a better understanding of the likely paths in front of us, including the ways in which the EA community could be involved, is crucial. The sooner, the better. My hopes for this debate Take these all with a pinch of salt, the debate is for you, these are my (Toby's) opinions. I'd like to see discussion focus on digital minds and AI welfare rather than AI in general. There will doubtless be valuable discussion comparing artificial welfare to other causes, but the most interesting arguments are likely to focus on the merits or demerits of this cause. In other words, it'd be less interesting (for me at least) to see familiar arguments that one cause should dominate EA funding or that another cause should not be funded by EA, even though both arguments would be ways to push towards agree or disagree on the debate statement. I'd rather we didn't spend too high a percentage of the debate on the question of whether AI will ever be sentient, although we will have to decide how to deal with the uncertainty here. FAQs How does the banner work? The banner will show the distribution of the EA Forum's opinion on the debate question. Users can place their icon anywhere on the axis to indicate their opinion, and can move it as many times as they like during the week. How are the "most influential posts" calculated? Under the banner, you'll be able to see a leaderboard of "most influential posts". 
These are ranked based on a metric I'm calling "delta-points". You get delta points when someone changes their mind - moving their marker along the agree/disagree line which will appear at the bottom of your post, if it is tagged "digital minds debate week". Do I have to write in the style of a debate? No. The aim of this debate week is to elicit interesting content which changes the audience's mind. This could be in the form of a debate-style argument for a...
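As a rough illustration of how a tally like this could work, here is a minimal Python sketch. It assumes, purely for illustration (this is not the Forum team's actual implementation), that each reader's marker sits on a -1.0 to 1.0 disagree/agree axis and that a post earns the absolute distance a reader moves after crediting it:

```python
from collections import defaultdict

def tally_delta_points(mind_changes):
    """Rank posts by how much marker movement they are credited with.

    mind_changes: iterable of (post_id, old_position, new_position) tuples,
    where positions lie on a -1.0 (disagree) to 1.0 (agree) axis.
    The scoring rule (absolute distance moved) is an illustrative assumption,
    not the Forum team's actual formula.
    """
    scores = defaultdict(float)
    for post_id, old_pos, new_pos in mind_changes:
        scores[post_id] += abs(new_pos - old_pos)  # "delta-points" for this post
    # Highest-scoring posts first, i.e. the "most influential posts" list.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: two readers credit "ai-welfare-101" with moving them, one credits "skeptic-case".
changes = [
    ("ai-welfare-101", -0.2, 0.5),
    ("ai-welfare-101", 0.1, 0.3),
    ("skeptic-case", 0.4, -0.1),
]
print(tally_delta_points(changes))
```

Summing absolute movement rewards posts that shift opinions in either direction, which fits the stated aim of surfacing mind-changing content rather than content that only pushes toward one side.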

Effective Altruism Forum Podcast
“Why so many ‘racists’ at Manifest?” by Austin

Effective Altruism Forum Podcast

Play Episode Listen Later Jun 18, 2024 10:50


Manifest 2024 is a festival that we organized last weekend in Berkeley. By most accounts, it was a great success. On our feedback form, the average response to “would you recommend to a friend” was a 9.0/10. Reviewers said nice things like “one of the best weekends of my life” and “dinners and meetings and conversations with people building local cultures so achingly beautiful they feel almost like dreams” and “I've always found tribalism mysterious, but perhaps that was just because I hadn't yet found my tribe.” Arnold Brooks running a session on Aristotle's Metaphysics. More photos of Manifest here. However, a recent post on The Guardian and review on the EA Forum highlight an uncomfortable fact: we invited a handful of controversial speakers to Manifest, whom these authors call out as “racist”. Why did we invite these folks? First: our sessions and guests were mostly not controversial — [...] ---Outline:(01:01) First: our sessions and guests were mostly not controversial — despite what you may have heard(03:03) Okay, but there sure seemed to be a lot of controversial ones…(06:03) Bringing people together with prediction markets(07:31) Anyways, controversy bad(08:57) Aside: Is Manifest an Effective Altruism event?--- First published: June 18th, 2024 Source: https://forum.effectivealtruism.org/posts/34pz6ni3muwPnenLS/why-so-many-racists-at-manifest --- Narrated by TYPE III AUDIO.

The Nonlinear Library
EA - My experience at the controversial Manifest 2024 by Maniano

The Nonlinear Library

Play Episode Listen Later Jun 17, 2024 8:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My experience at the controversial Manifest 2024, published by Maniano on June 17, 2024 on The Effective Altruism Forum. My experience at the recently controversial conference/festival on prediction markets Background I recently attended the triple whammy of rationalist-adjacent events of LessOnline, Summer Camp, and Manifest 2024. For the most part I had a really great time, and had more interesting conversations than I can count. The overlap between the attendees of each event was significant, and the topics discussed were pretty similar. The average attendee for these events is very smart, well-read, and most likely working in tech, consulting, or finance. People were extremely friendly, and in general the space initially felt like a high-trust environment approaching that of an average EAGlobal conference (which also has overlap with the rational-ish communities, especially when it comes to AI risks), even if the number of EA people there was fairly low - the events were very rationalist-coded. Nominally, Manifest was about prediction markets. However, the organizers had selected for multiple quite controversial speakers and presenters, who in turn attracted a significant number of attendees who were primarily interested in these controversial topics, the most prominent of which was eugenics. This human biodiversity (HBD) or "scientific racism" curious crowd engaged in a tiring game of carefully testing the waters with new people they interacted with, trying to gauge both how receptive their conversation partner is to racially incendiary topics and to what degree they are "one of us". The ever-changing landscape of euphemisms for I-am-kinda-racist-but-in-a-high-IQ-way has seemed to converge on a stated interest in "demographics" - or in less sophisticated cases, the use of edgy words like "based", "fag", or "retarded" is more than enough to do the trick. If someone asks you what you think of Bukele, you can already guess where he wants to steer the conversation. The Guardian article I While I was drafting this post, The Guardian released an article on Lightcone, who hosted these events at Lighthaven, a venue that a certain lawsuit claims was partially bought with FTX money (which Oliver Habryka from Lightcone denies). The article detailed some of the scientific racism special guests these past three events had. In the past, The Guardian has released a couple of articles on EA that were a bit hit-piece-y, or tried to connect nasty things that are not really connected to EA at all to EA, framing them as representative of the entire movement. Sometimes the things presented were relevant to other loosely EA-connected communities, or some of the people profiled had tried to interact with the EA community at some point (like in the case of the Collinses, who explicitly do not identify as EA despite what The Guardian says. The Collinses' attempt to present their case for pro-natalism on the EA Forum was met mostly with downvotes), but a lot of the time the things presented were non-central at best. Despite this, this article doesn't really feel like a hit-piece to me. Some of the things in it I might object to (describing Robin Hanson as misogynistic in particular registers as a bit unfair to me, even if he has written some things in bad taste), but for the most part I agree with how it describes Manifest. What is up with all the racists?
II The article names some people who are quite connected to eugenics, HBD, or are otherwise highly controversial. They missed quite a few people[1], including a researcher who has widely collaborated with the extreme figure Emil O. W. Kirkegaard, the personal assistant of the anti-democracy, anti-equality figure Curtis Yarvin, and the highly controversial rationalist Michael Vassar, who has been described as "a cult leader" involved in some people ...

The Nonlinear Library
EA - EA EDA: Looking at Forum trends across 2023 by JWS

The Nonlinear Library

Play Episode Listen Later Jun 12, 2024 18:37


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA EDA: Looking at Forum trends across 2023, published by JWS on June 12, 2024 on The Effective Altruism Forum. tl;dr: AI Safety became the biggest thing in EA Forum discussion last year, and overall Forum engagement trended downwards. What that means for the wider EA movement/philosophy is up for interpretation. If you have your own questions, let me know and I'll dive in (or try to share the data). 1. Introduction This is a follow-up to my previous post using the EA Forum API to analyse trends in the Forum. Whereas that last post was a zoomed-out look at Forum use for as far back as the API goes, this is a specific look at the Forum aggregations and trends in 2023 alone. Needless to say, last year was a tumultuous year for EA, and the Forum is one of (if not the) primary online hubs for the Community to discuss issues and self-organise. I hoped to see if any of these trends could be spotted in the data available, and also see where the data led me on some more general questions. I'm sharing this now in a not-quite-perfect state but I'd rather post and see the discussion it promotes than have it languishing in my drafts for much longer, and as noted in section 3.4.2 if you have a query that I can dive into, just ask! 2. Methodology (For more detail on the general method, see the previous post) On Monday 6th May I ran two major queries to the EA Forum API: 1) The first scraped all posts in Forum history. I then subselected these to find only posts that were in the 2023 calendar year. 2) I ran a secondary query for all of these postIds to find all comments on these posts, and again filtered to only count comments made in 2023. Any discrepancy with ground truth might be because of mistakes on my part during the data collection. Furthermore, my data is a snapshot of how the 2023 Forum looked on May 6th this year, so any Forum engagement that was deleted (or users who deleted their account) at the point of collection will not be sampled. I'll leave more specific methods to the relevant graphs and tables below. I used Python entirely for this, and am happy to talk about the method in more coding detail for those interested. I'm trying to resuscitate my moribund GitHub profile this summer, and this code may make its way up there. 3. Results 3.1 - Overall Trends in 2023 3.1.1 - Posts and Comments over time This graph shows a rolling 21-day mean of total posts and comments made in 2023, indexed to 1.0 at the start,[1] so be aware it is a lagging indicator. Both types of engagement show a decline over the course of the year, though the beginning of 2023 was when the Community was still reeling from the FTX scandal, and the Forum seemed to be the primary online place to discuss this. This was causing so much discussion that the Forum team decided to move Community discussions off the front page, so while I've indexed to 1.0 at the beginning for the graph, it's worth noting that January/February 2023 were very unusual times for the Forum. There is also a different story to be told for the individual engagement types. Posts seem to drop from the beginning of the year, tick up in the spring (due to April Fools'), and then drop away towards the end of the year. Comments, on the other hand, rapidly drop away, presumably as a result of engagement burning out after the FTX-Bostrom-Doing EA Better-FLI-Sexual Harassment-OCB perfect storm.
They then settle to some sort of baseline around May, and then pick up again sometimes in spurts due to highly-engaging posts. I think the September-October one is due to the Nonlinear controversy, the December Spike is the response from Tracing and Nonlinear themselves. There didn't seem to be any clear candidate for the spikes in the Summer though.[2] 3.1.2 - Which topics were popular This is just an overview, I have more topic results to sha...
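For readers curious what the aggregation described above looks like in practice, here is a minimal pandas sketch of a rolling 21-day mean indexed to 1.0 at the start of the series. The helper name, column handling, and example timestamps are illustrative assumptions, not JWS's actual code:

```python
import pandas as pd

def indexed_rolling_activity(timestamps, window_days=21):
    """Daily counts -> rolling mean -> series indexed to 1.0 at its start.

    `timestamps` is any iterable of post (or comment) publication times;
    the example below uses made-up dates, not the real 2023 Forum data.
    """
    s = pd.Series(1, index=pd.to_datetime(list(timestamps)))
    daily = s.resample("D").sum()                      # posts per calendar day
    rolling = daily.rolling(f"{window_days}D").mean()  # lagging 21-day mean
    return rolling / rolling.iloc[0]                   # index to 1.0 at the start

# Illustrative usage, restricted to the 2023 calendar year as in the post.
posts_2023 = ["2023-01-01", "2023-01-02", "2023-01-02", "2023-01-05"]
print(indexed_rolling_activity(posts_2023).tail())
```

Because the window looks backwards over the previous 21 days, the resulting curve lags the raw activity, which is why the post flags it as a lagging indicator.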

The Nonlinear Library
EA - My first EAG: a mix of feelings by Lovkush

The Nonlinear Library

Play Episode Listen Later Jun 12, 2024 11:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My first EAG: a mix of feelings, published by Lovkush on June 12, 2024 on The Effective Altruism Forum. TLDR I had a mix of feelings before and throughout EAG London 2024. Overall, the experience was excellent and I am more motivated and excited about my next steps in EA and AI safety. However, I am actually unsure if I will attend EAG next year, because I am yet to exhaust other means of networking, especially since I live in London. Why might this be useful for you? This is a narrative that is different to most others. Depending on your background/personality, this will reduce the pressure to optimise every aspect of your time at EAG. I am not saying to do no optimization, but that there is a different balance for different people. If you have not been to an EAG, this provides a flavour of the interactions and feelings - both positive and negative - that are possible. My background I did pure maths from undergraduate to PhD, then lectured maths for foundation year students for a few years, then moved to industry and have been a data scientist at Shell for three years. I took the GWWC pledge in 2014, but I had not actively engaged with the community or chosen a career based on EA principles. A few years ago I made an effort to apply EA principles to my career. I worked through the 80000 Hours career template with AI safety being the obvious top choice, took the AI Safety Fundamentals course, applied to EAG London (and did not get accepted, which was reasonable), and also tried volunteering for SoGive for a couple of months. Ultimately the arguments for AI doom overwhelmed me and put me into defeatist mindset ('How can you out-think a god-like super intelligence?') so I just put my head in the sand instead of contributing. In 2023, with ChatGPT and the prominence of AI, my motivation to contribute came back. I did take several actions, but spread out over several months: I finally learned enough PyTorch to train my first CNN and RNN. I attended an EA hackathon for software engineers and contributed to Stampy. The contributions were minimal though: shock-horror, the coding one does as a data scientist is not the same as what software engineers do! I applied to some AI safety roles (Epoch AI Analyst, Quantum Leap founding learning engineer, Cohere AI Data Trainer) I joined a Mech Interp Discord and within that a reading group for Mathematics for Machine Learning. I go into these details to illustrate a key way I differ from the prototypical EA: I am not particularly agentic! Somebody more rational would have created more concrete plans, accountability systems, and explored more thoroughly the options and actions available. Despite being familiar with rationality / EA for several years, I had not absorbed the ideas enough to apply them in my life. I was a Bob who waits for opportunities to arise, and thus ends up making little progress. The breakthrough came when I got accepted into ML4Good. I have written my thoughts on that experience, but the relevant thing is it gave me a huge boost in motivation and confidence to work on AI safety. Preparing for EAG I actually did not plan to attend EAG London! My next steps in AI Safety were clear (primarily upskilling by getting hands-on experience on projects) and I was unsure what I could bring to the table for other participants. 
However, three weeks before EAG, somebody in my ML4Good group chat asked who was going, so I figured I may as well apply and see what happens. Given I am writing this, I was accepted! When reading the recommended EA Forum posts for EAG first-timers, I was taken aback by how practical and strategic these people were. This had a two-sided effect for me: it was intimidating and made me question how valuable I could be to other EAG participants, but it did also help me be more agentic and help me push mys...

The Nonlinear Library
EA - 170 to 700 billion fish raised and fed live to mandarin fish in China per year by MichaelStJules

The Nonlinear Library

Play Episode Listen Later Jun 8, 2024 41:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 170 to 700 billion fish raised and fed live to mandarin fish in China per year, published by MichaelStJules on June 8, 2024 on The Effective Altruism Forum. Summary 1. Around 1.2 to 1.9 trillion fish fry have been produced by artificial propagation (breeding) annually in China, and it seems those farmed for direct human consumption and pre-harvest mortality can only account for at most around 460 billion of them (more). 2. I estimate that 170 billion to 700 billion animals (my 80% credible interval) - probably almost all artificially propagated fish - are fed live to mandarin fish (or Chinese perch), Siniperca chuatsi, in China annually, with 9 to 55 billion of them (my 80% credible interval) alive at any time (more, Guesstimate model). 3. By contrast, the number of farmed fish produced globally for direct human consumption is around 111 billion per year, and 103 billion alive at a time, with another 35 to 150 billion raised and stocked per year (Šimčikas, 2020). 4. It's unclear how bad their deaths are as live feed to mandarin fish, but I'd guess they die by suffocation, digestion (stomach acid, enzymes), or mechanical injury, e.g. crushing, after being swallowed live, and probably a common way for aquatic animals to die by predation by fish in the wild (more). 5. It's unclear if there's much we can do to help these feed fish. There's been some progress in substituting artificial diets (including dead animal protein) for live feed for mandarin fish, but this has been a topic of research for over 20 years. Human diet interventions would need to be fairly targeted to be effective. I give a shallow overview of some possible interventions and encourage further investigation (more). Acknowledgements Thanks to Vasco Grilo, Saulius Šimčikas and Max Carpendale for feedback. All errors are my own. Fish fry production in China One of the early developmental stages of fish is the fry stage (Juvenile fish - Wikipedia). Šimčikas (2019, EA Forum), in his appendix section, raised the question of why hundreds of billions of fish fry were produced artificially (via artificial breeding, i.e. artificial propagation) in China in each of multiple years, yet only "28-92 billion" farmed fish were produced in China in 2015, "according to an estimate from Fishcount". He found that if the apparent discrepancy were due to pre-slaughter mortality, then this would indicate unusually low survival rates. He left open the reason for the apparent discrepancy and recommended further investigation. Before going into potential explanations for the discrepancy, I share some more recent numbers for the artificial propagation of fish: 1.9143 trillion fish fry in China in 2013 (Li & Xia, 2018) and 1.252 trillion freshwater fry and 167 million marine fish fry in China in 2019 (Hu et al., 2021). The 2019 numbers seem substantially lower than in 2013, so the trend may have reversed, one of these numbers is inaccurate, there's high variance across years or one of the years was unusual. Li and Xia (2018) also plot the trend over time up to 2013, along with total freshwater aquaculture: 28 to 92 billion farmed fish produced in China in 2015 (Fishcount) from 1 to 2 trillion artificially propagated fish fry, would suggest a pre-slaughter/pre-harvest survival rate of 1.4% to 9.2% (from the fry stage on). 
Survival rates are typically at least 20% for the most commonly farmed species, including carps and tilapias (Animal Charity Evaluators, 2020, table 4), for which China accounts for most production. And Šimčikas (2019, EA Forum) notes: Since hatchery-produced juveniles are already past the first stage of their lives in which pre-slaughter mortality is the highest, mortality during the grow-out period shouldn't be that high. Under fairly generous assumptions for an explanation based on pre-harvest mortality, using ...
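The implied survival-rate range quoted above follows from simple division; here is a small worked check in Python, pairing the extremes of the two cited ranges (the pairing is my own illustrative choice):

```python
# Sanity check on the implied fry-to-harvest survival rate, using the figures
# cited above: 28-92 billion farmed fish produced in China in 2015, from
# roughly 1-2 trillion artificially propagated fish fry.
farmed_fish_low, farmed_fish_high = 28e9, 92e9
fry_low, fry_high = 1e12, 2e12

lowest_rate = farmed_fish_low / fry_high    # fewest fish from the most fry
highest_rate = farmed_fish_high / fry_low   # most fish from the fewest fry

print(f"Implied survival rate: {lowest_rate:.1%} to {highest_rate:.1%}")
# Roughly 1.4% to 9.2%, well below the 20%+ cited as typical for carps and
# tilapias, which is what motivates looking for another explanation such as
# fry being raised as live feed.
```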

The Nonlinear Library
EA - In favor of an AI-powered translation button on the EA Forum by Alix Pham

The Nonlinear Library

Play Episode Listen Later Jun 7, 2024 0:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In favor of an AI-powered translation button on the EA Forum, published by Alix Pham on June 7, 2024 on The Effective Altruism Forum. It'd be great if anyone landing on the forum could have the opportunity to seamlessly translate its content into their language with the click of a button. A bit like Wikipedia, but for all languages, and powered by AI; a bit like copy-pasting the URL into Google Translate, but without copy-pasting, and with better translation. I don't know to what extent it's easy to implement into the Forum, but I would have expected it wouldn't be much more complicated than the AI-generated readings of the posts. But I could be wrong about this, of course. Image from Freepik Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - Now THIS is forecasting: understanding Epoch's Direct Approach by Elliot Mckernon

The Nonlinear Library

Play Episode Listen Later May 4, 2024 34:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Now THIS is forecasting: understanding Epoch's Direct Approach, published by Elliot Mckernon on May 4, 2024 on LessWrong. Happy May the 4th from Convergence Analysis! Cross-posted on the EA Forum. As part of Convergence Analysis's scenario research, we've been looking into how AI organisations, experts, and forecasters make predictions about the future of AI. In February 2023, the AI research institute Epoch published a report in which its authors use neural scaling laws to make quantitative predictions about when AI will reach human-level performance and become transformative. The report has a corresponding blog post, an interactive model, and a Python notebook. We found this approach really interesting, but also hard to understand intuitively. While trying to follow how the authors derive a forecast from their assumptions, we wrote a breakdown that may be useful to others thinking about AI timelines and forecasting. In what follows, we set out our interpretation of Epoch's 'Direct Approach' to forecasting the arrival of transformative AI (TAI). We're eager to see how closely our understanding of this matches others'. We've also fiddled with Epoch's interactive model and include some findings on its sensitivity to plausible changes in parameters. The Epoch team recently attempted to replicate DeepMind's influential Chinchilla scaling law, an important quantitative input to Epoch's forecasting model, but found inconsistencies in DeepMind's presented data. We'll summarise these findings and explore how an improved model might affect Epoch's forecasting results. This is where the fun begins (the assumptions) The goal of Epoch's Direct Approach is to quantitatively predict the progress of AI capabilities. The approach is 'direct' in the sense that it uses observed scaling laws and empirical measurements to directly predict performance improvements as computing power increases. This stands in contrast to indirect techniques, which instead seek to estimate a proxy for performance. A notable example is Ajeya Cotra's Biological Anchors model, which approximates AI performance improvements by appealing to analogies between AIs and human brains. Both of these approaches are discussed and compared, along with expert surveys and other forecasting models, in Zershaaneh Qureshi's recent post, Timelines to Transformative AI: an investigation. In their blog post, Epoch summarises the Direct Approach as follows: The Direct Approach is our name for the idea of forecasting AI timelines by directly extrapolating and interpreting the loss of machine learning models as described by scaling laws. Let's start with scaling laws. Generally, these are just numerical relationships between two quantities, but in machine learning they specifically refer to the various relationships between a model's size, the amount of data it was trained with, its cost of training, and its performance. These relationships seem to fit simple mathematical trends, and so we can use them to make predictions: if we make the model twice as big - give it twice as much 'compute' - how much will its performance improve? Does the answer change if we use less training data? And so on. 
If we combine these relationships with projections of how much compute AI developers will have access to at certain times in the future, we can build a model which predicts when AI will cross certain performance thresholds. Epoch, like Convergence, is interested in when we'll see the emergence of transformative AI (TAI): AI powerful enough to revolutionise our society at a scale comparable to the agricultural and industrial revolutions. To understand why Convergence is especially interested in that milestone, see our recent post 'Transformative AI and Scenario Planning for AI X-risk'. Specifically, Epoch uses an empirically measured scaling ...
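To make the idea of a scaling law concrete, here is the functional form of the Chinchilla law that Epoch attempted to replicate, L(N, D) = E + A/N^alpha + B/D^beta, with the approximate constants reported by Hoffmann et al. (2022). Treat the numbers as indicative only; Epoch's report fits and extrapolates its own variant:

```python
def chinchilla_loss(n_params, n_tokens,
                    E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Predicted training loss L(N, D) = E + A / N**alpha + B / D**beta.

    E approximates the irreducible loss; the other constants are roughly the
    fits reported by Hoffmann et al. (2022) and are indicative only.
    """
    return E + A / n_params**alpha + B / n_tokens**beta

# How much does predicted loss fall if we double model size at fixed data?
base = chinchilla_loss(70e9, 1.4e12)      # roughly Chinchilla-scale: 70B params, 1.4T tokens
doubled = chinchilla_loss(140e9, 1.4e12)
print(f"loss {base:.3f} -> {doubled:.3f}")
```

In this functional form, doubling parameters at fixed data only shrinks the A/N^alpha term, leaving the data term and the irreducible loss E untouched, which is why forecasts built on it have to track projected data and compute growth together rather than model size alone.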

The Nonlinear Library
EA - Today is World Malaria Day (April 25) by tobytrem

The Nonlinear Library

Play Episode Listen Later Apr 25, 2024 3:07


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Today is World Malaria Day (April 25), published by tobytrem on April 25, 2024 on The Effective Altruism Forum. Malaria is massive. Our World in Data writes: "Over half a million people died from the disease each year in the 2010s. Most were children, and the disease is one of the leading causes of child mortality." Or, as Rob Mather, CEO of the Against Malaria Foundation (AMF), phrases it: the equivalent of seven jumbo jets full of children die of malaria each day. But I don't see malaria in the news that much. This is partly because it was eradicated from Western countries over the course of the 20th century, both because of intentional interventions such as insecticide, and because of the draining of swamp lands and building of better housing. But it's also because malaria is a slow catastrophe, like poverty, and climate change. We've dealt with it to varying degrees throughout history, and though it is an emergency to anyone affected by it, to the rest of us, it's a tropical disease which has been around forever. It can be hard to generate urgency when a problem has existed for so long. But there is a lot that we can do. Highly effective charities work on malaria; the Against Malaria Foundation (AMF) distributes insecticide-treated bed-nets, and a Malaria Consortium program offers seasonal malaria chemoprevention treatment - both are GiveWell Top Charities. Two malaria vaccines, RTS,S and the cheaper R21[1], have been developed in recent years[2]. Malaria is preventable. Though malaria control and eradication is funded by international bodies such as The Global Fund, there isn't nearly enough money being spent on it. AMF has an immediate funding gap of $185.78m. That's money for nets they know are needed. And though vaccines are being rolled out, progress has been slower than it could be, and the agencies distributing them have been criticised for lacking urgency. Malaria is important, malaria is neglected, malaria is tractable. If you want to do something about malaria today, consider donating to GiveWell's recommendations: AMF, or the Malaria Consortium: Related links I recommend Why we didn't get a malaria vaccine sooner, an article in Works in Progress. WHO's World Malaria Day 2024 announcement. The Our World in Data page on malaria. Audio AMA, with Rob Mather, CEO of AMF (transcript). From SoGive, an EA Forum discussion of the cost-effectiveness of malaria vaccines, with cameos from 1DaySooner and GiveWell. For more info, see GiveWell's page on malaria vaccines. The story of Tu Youyou, a researcher who helped develop an anti-malarial drug in Mao's China. What is an Emergency? The Case for Rapid Malaria Vaccination, from Marginal Revolution. More content on the Forum's Malaria tag. ^ R21 offers up to 75% reduction of symptomatic malaria cases when delivered at the right schedule. ^ Supported by Open Philanthropy and GiveWell. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Space settlement and the time of perils: a critique of Thorstad by Matthew Rendall

The Nonlinear Library

Play Episode Listen Later Apr 14, 2024 6:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Space settlement and the time of perils: a critique of Thorstad, published by Matthew Rendall on April 14, 2024 on The Effective Altruism Forum. Given the rate at which existential risks seem to be proliferating, it's hard not to suspect that unless humanity comes up with a real game-changer, in the long run we're stuffed. David Thorstad has recently argued that this poses a major challenge to longtermists who advocate prioritising existential risk. The more likely an x-risk is to destroy us, Thorstad notes, the less likely there is to be a long-term future. Nor can we solve the problem by mitigating this or that particular x-risk - we would have to reduce all of them. The expected value of addressing x-risks may not be so high after all. There would still be an argument for prioritising them if we are passing through a 'time of perils' after which existential risk will sharply fall. But this is unlikely to be the case. Thorstad raises a variety of intriguing questions which I plan to tackle in a later post, picking up in part on Owen Cotton-Barratt's insightful comments here. In this post I'll focus on a particular issue - his claim that settling outer space is unlikely to drive the risk of human extinction low enough to rescue the longtermist case. Like other species, ours seems more likely to survive if it is widely distributed. Some critics, however, argue that space settlements would still be physically vulnerable, and even writers sympathetic to the project maintain they would remain exposed to dangerous information. Certainly many, perhaps most, settlements would remain vulnerable. But would all of them? First let's consider physical vulnerability. Daniel Deudney and Phil (Émile) Torres have warned of the possibility of interplanetary or even interstellar conflict. But once we or other sentient beings spread to other planets, it would render travel between them time-consuming. On the one hand, that would seem to preclude any United Federation of Planets to keep the peace, as Torres notes, but it would also make warfare difficult and - very likely - pointless, just as it once was between Europe and the Americas. It's certainly possible, as Thorstad notes, that some existential threat could doom us all before humanity gets to this point, but it doesn't seem like a cert. Deudney seems to anticipate this objection, and argues that 'the volumes of violence relative to the size of inhabited territories will still produce extreme saturation….[U]ntil velocities catch up with the enlarged distances, solar space will be like the Polynesian diaspora - with hydrogen bombs.' But if islands are far enough apart, the fact that weapons could obliterate them wouldn't matter if there were no way to deliver the weapons. It would still matter, but less so, if it took a long time to deliver the weapons, allowing the targeted island to prepare. Ditto, it would seem, for planets. Suppose that's right. We might still not be out of the woods. Deudney warns that 'giant lasers and energy beams employed as weapons might be able to deliver destructive levels of energy across the distances of the inner solar system in times comparable to ballistic missiles across terrestrial distances.' 
But he goes on to note that 'the distances in the outer solar system and beyond will ultimately prevent even this form of delivering destructive energy at speeds that would be classified as instantaneous.' That might not matter so much if the destructive energy reached its target in the end. Still, I'd be interested whether any EA Forum readers know whether interstellar death rays of this kind are feasible at all. There's also the question of why war would occur. Liberals maintain that economic interdependence promotes peace, but as critics have long pointed out, it also gives states something to fight abou...

The Nonlinear Library
LW - Things Solenoid Narrates by Solenoid Entity

The Nonlinear Library

Play Episode Listen Later Apr 13, 2024 3:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things Solenoid Narrates, published by Solenoid Entity on April 13, 2024 on LessWrong. I spend a lot of time narrating various bits of EA/longtermist writing. The resulting audio exists in many different places. Surprisingly often, people who really like one thing don't know about the other things. This seems bad.[1] A few people have requested a feed to aggregate 'all Solenoid's narrations.' Here it is. (Give it a few days to be up on the big platforms.) I'll update it ~weekly.[2] And here's a list of things I've made or am working on, shared in the hope that more people will discover more things they like: Human Narrations Astral Codex Ten Podcast ~920 episodes so far including all non-paywalled ACX posts and SSC archives going back to 2017, with some classic posts from earlier. Archive. Patreon. LessWrong Curated Podcast Human narrations of all the Curated posts. Patreon. AI Safety Fundamentals Narrations of most of the core resources for AISF's Alignment and Governance courses, and a fair few of the additional readings. Alignment, Governance 80,000 Hours Many pages on their website, plus their updated career guide. EA Forum Curated podcast This is now AI narrated and seems to be doing perfectly well without me, but lots of human narrations of classic EA forum posts can be found in the archive, at the beginning of the feed. Metaculus Journal I'm not making these now, but I previously completed many human narrations of Metaculus' 'fortified essays'. Radio Bostrom: I did about half the narration for Radio Bostrom, creating audio versions of some of Bostrom's key papers. Miscellaneous: Lots of smaller things. Carlsmith's Power-seeking AI paper, etc. AI Narrations Last year I helped TYPE III AUDIO to create high-quality AI narration feeds for EA Forum and LessWrong, and many other resources. Every LessWrong post above 30 karma is included on this feed. Spotify Every EA Forum post above 30 karma is included on this feed: Spotify Also: ChinAI AI Safety Newsletter Introduction to Utilitarianism Other things that are like my thing Eneasz is an absolute unit. Carlsmith is an amazing narrator of his own writing. There's a partially complete (ahem) map of the EA/Longtermist audio landscape here. There's an audiobook of The Sequences, which is a pretty staggering achievement. The Future I think AI narration services are already sharply reducing the marginal value of my narration work. I expect non-celebrity[3] human narration to be essentially redundant within 1-2 years. AI narration has some huge advantages too, there's no denying it. Probably this is a good thing. I dance around it here. Once we reach that tipping point, I'll probably fall back on the ACX podcast and LW Curated podcast, and likely keep doing those for as long as the Patreon income continues to justify the time I spend. ^ I bear some responsibility for this, first because I generally find self-promotion cringey[4] and enjoy narration because it's kind of 'in the background', and second because I've previously tried to maintain pseudonymity (though this has become less relevant considering I've released so much material under my real name now.) ^ It doesn't have ALL episodes I've ever made in the past (just a lot of them), but going forward everything will be on that feed. 
^ As in, I think they'll still pay Stephen Fry to narrate stuff, or authors themselves (this is very popular.) ^ Which is not to say I don't have a little folder with screenshots of every nice thing anyone has ever said about my narration... Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Some underrated reasons why the AI safety community should reconsider its embrace of strict liability by Cecil Abungu

The Nonlinear Library

Play Episode Listen Later Apr 9, 2024 20:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some underrated reasons why the AI safety community should reconsider its embrace of strict liability, published by Cecil Abungu on April 9, 2024 on The Effective Altruism Forum. Introduction It is by now a well-known fact that existing AI systems are already causing harms like discrimination, and it's also widely expected that the advanced AI systems which the likes of Meta and OpenAI are building could also cause significant harms in the future. Knowingly or unknowingly, innocent people have to live with the dire impacts of these systems. Today that might be a lack of equal access to certain opportunities or the distortion of democracy, but in future it might escalate to more concerning security threats. In light of this, it should be uncontroversial for anyone to insist that we need to establish fair and practically sensible ways of figuring out who should be held liable for AI harms. The good news is that a number of AI safety experts have been making suggestions. The not-so-good news is that the idea of strict liability for highly capable advanced AI systems still has many devotees. The most common anti-strict liability argument out there is that it discourages innovation. In this piece, we won't discuss that position much because it's already received outsize attention. Instead, we argue that the pro-strict liability argument should be reconsidered for the following trifecta of reasons: (i) In the context of highly capable advanced AI, both strict criminal liability and strict civil liability have fatal gaps, (ii) The argument for strict liability often rests on faulty analogies, and (iii) Given the interests at play, strict liability will struggle to gain traction. Finally, we propose that AI safety-oriented researchers working on liability should instead focus on the most inescapably important task - figuring out how to transform good safety ideas into real legal duties. AI safety researchers have been pushing for strict liability for certain AI harms The few AI safety researchers who've tackled the question of liability in depth seem to have taken a pro-strict liability stance for certain AI harms, especially harms that are a result of highly capable advanced AI. Let's consider some examples. In a statement to the US Senate, the Legal Priorities Project recommended that AI developers and deployers be held strictly liable if their technology is used in attacks on critical infrastructure or a range of high-risk weapons that result in harm. LPP also recommended strict liability for malicious use of exfiltrated systems and open-sourced weights. Consider as well the Future of Life Institute's feedback to the European Commission, where it calls for a strict liability regime for harms that result from high-risk and general purpose AI systems. Finally, consider Gabriel Weil's research on the promise that tort law has for regulating highly capable advanced AI (also summarized in his recent EA Forum piece), where he notes the difficulty of proving negligence in AI harm scenarios and then argues that strict liability can be a sufficient corrective for especially dangerous AI. The pro-strict liability argument In the realm of AI safety, arguments for strict liability generally rest on two broad lines of reasoning.
The first is that historically, strict liability has been applied for other phenomena that are somewhat similar to highly capable advanced AI, which means that it would be appropriate to apply the same regime to highly capable advanced AI. Some common examples of these phenomena include new technologies like trains and motor vehicles, activities which may cause significant harm such as the use of nuclear power and the release of hazardous chemicals into the environment and the so-called 'abnormally dangerous activities' such as blasting with dynamite. The second line of r...

Comme un poisson dans l'eau
#31 Crevettes : c'est pas tout rose ! - Elisa Autric & Léa Guttmann

Comme un poisson dans l'eau

Play Episode Listen Later Apr 3, 2024 60:07


As I have every year since launching the podcast, I'm devoting an entire episode to aquatic animals, to mark the World Day for the End of Fishing and Fish Farming (Journée mondiale pour la fin de la pêche et des élevages aquacoles), which takes place on the last Saturday of March. Once a year is already not enough relative to the number of victims, but it's the bare minimum I've set for myself.  Did you know that the shrimp emoji, which shows a pretty, pink, curled-up shrimp, doesn't actually depict shrimp as they are when alive, but shrimp once cooked and ready to eat? In fact, scoop: most shrimp aren't pink at all!  And another thing you probably didn't know: they are very likely the most numerous animals exploited for human consumption...  With my two guests, Léa Guttmann of Shrimp Welfare Project and Elisa Autric of Rethink Priorities, we pick apart (okay, a shelling pun in very poor taste, I admit) the fate of exploited shrimp, both those that are caught and those that are farmed.  We won't give up on any sentient aquatic animal! For more information on the best ways to help shrimp or to build campaigns on this issue, feel free to contact Elisa Autric: elisa.autric@gmail.com ________________________________ References and sources cited in the interview:  - The World Day for the End of Fishing and Fish Farming (la JMFP) - Shrimp Welfare Project - Rethink Priorities - Charity Entrepreneurship Program - The report "Shrimp: The animals most commonly used and killed for food production", co-written by Daniela R. Waldhorn and Elisa Autric - Fishcount - The Food and Agriculture Organization (FAO), a United Nations agency - The Comme un poisson dans l'eau episode with Tom Bry-Chevalier on effective altruism - The article "Pre-slaughter mortality of farmed shrimp" by Hannah McKay and William McAuliffe - A great webinar with Hannah McKay - The report "Welfare considerations for farmed shrimp" by Hannah McKay, William McAuliffe and Daniela R. Waldhorn for Rethink Priorities - Shrimp Welfare Report - A popular-science article by Léa Guttmann in the journal of the Fondation Droit Animal - The Comme un poisson dans l'eau episode "Laissons les poisson dans l'eau" - The precautionary principle with respect to sentience, formulated by Jonathan Birch Recommendations from Elisa Autric and Léa Guttmann: - The EA Forum post "Strange Love - Developing Empathy With Intention" - The podcast How I Learned To Love Shrimp by Amy Odene and James Ozden ________________________________ SUPPORT: https://linktr.ee/poissonpodcast Comme un poisson dans l'eau is an independent, ad-free podcast: your support is essential for it to keep existing. Thank you in advance! The podcast's Instagram, Twitter, Facebook, Bluesky and Mastodon accounts can also be found in the link tree! ________________________________ CREDITS Comme un poisson dans l'eau is an independent podcast created and hosted by Victor Duran-Le Peuch. Graphic design: Ivan Ocaña Theme: Synthwave Vibe by Meydän Music: Flying High by Fredji

The Nonlinear Library
EA - Reasons for optimism about measuring malevolence to tackle x- and s-risks by Jamie Harris

The Nonlinear Library

Play Episode Listen Later Apr 2, 2024 16:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reasons for optimism about measuring malevolence to tackle x- and s-risks, published by Jamie Harris on April 2, 2024 on The Effective Altruism Forum. Reducing the influence of malevolent actors seems useful for reducing existential risks (x-risks) and risks of astronomical suffering (s-risks). One promising strategy for doing this is to develop manipulation-proof measures of malevolence. I think better measures would be useful because: We could use them with various high-leverage groups, like politicians or AGI lab staff. We could use them flexibly (for information-only purposes) or with hard cutoffs. We could use them in initial selection stages, before promotions, or during reviews. We could spread them more widely via HR companies or personal genomics companies. We could use small improvements in measurements to secure early adopters. I think we can make progress on developing and using them because: It's neglected, so there will be low-hanging fruit There's historical precedent for tests and screening We can test on EA orgs Progress might be profitable The cause area has mainstream potential So let's get started on some concrete research! Context ~4 years ago, David Althaus and Tobias Baumann posted about the impact potential of "Reducing long-term risks from malevolent actors". They argued that: Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history. Malevolent individuals in positions of power could negatively affect humanity's long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. Malevolent humans with access to advanced technology - such as whole brain emulation or other forms of transformative AI - could cause serious existential risks and suffering risks… Further work on reducing malevolence would be valuable from many moral perspectives and constitutes a promising focus area for longtermist EAs. I and many others were impressed with the post. It got lots of upvotes on the EA Forum and 80,000 Hours listed it as an area that they'd be "equally excited to see some of our readers… pursue" as their list of the most pressing world problems. But I haven't seen much progress on the topic since. One of the main categories of interventions that Althaus and Baumann proposed was "The development of manipulation-proof measures of malevolence… [which] could be used to screen for malevolent humans in high-impact settings, such as heads of government or CEOs." Anecdotally, I've encountered scepticism that this would be either tractable or particularly useful, which surprised me. I seem to be more optimistic than anyone I've spoken to about it, so I'm writing up some thoughts explaining my intuitions. My research has historically been of the form: "assuming we think X is good, how do we make X happen?" This post is in a similar vein, except it's more 'initial braindump' than 'research'. It's more focused on steelmanning the case for than coming to a balanced, overall assessment. I think better measures would be useful We could use difficult-to-game measures of malevolence with various high-leverage groups: Political candidates Civil servants and others involved in the policy process Staff at A(G)I labs Staff at organisations inspired by effective altruism. 
Some of these groups might be more tractable to focus on first, e.g. EA orgs. And we could test in less risky environments first, e.g. smaller AI companies before frontier labs, or bureaucratic policy positions before public-facing political roles. The measures could be binding or used flexibly, for information-only purposes. For example, in a hiring process, there could either be some malevolence threshold above which a candidate is rejected without question, or test(s) for malevol...

The Nonlinear Library
EA - Post-mortem on Wytham Abbey by WythamAbbey

The Nonlinear Library

Play Episode Listen Later Apr 1, 2024 4:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Post-mortem on Wytham Abbey, published by WythamAbbey on April 1, 2024 on The Effective Altruism Forum. In response to requests for a post-mortem on the Wytham Abbey project, we[1] have decided to publish this in full. This EA Forum exclusive will include a blow-by-blow account of the decision-making process. TL;DR: We determined that the project was too controversial. The primary source of controversy was the name, especially with regard to how to pronounce "Wytham". First some background. We decided to hold a brainstorming session to determine the best way forward for the project. This was held in Wytham Abbey, of course. We consider this brainstorming session to be a success, and reaped the benefits of having an " immersive environment which was more about exploring new ideas than showing off results was just very good for intellectual progress". When considering the name, we observed that some people pronounce it "with-ham". This is incorrect, however there was an entire breakout room in Wytham Abbey given over to discussing this pronunciation. Most members of the Wytham Abbey team considered it offensive, because we all have broad moral circles and object to things being "with ham". We also considered changing the name to "Wythout-ham", however anything that foregrounded ham was simply unappealing to many people in our team. One person, a certain Hamilton B. Urglar[2] proposed that the caterers might bring in some ham immediately so everyone could try some, just to make sure we were right to be opposed to it. Someone else threatened to put a post on the forum entitled "Sharing Information about Hamilton Urglar". It all got a bit tense, but then the Hamburglar offered to buy vegan burgers for everyone. Nobody was really reassured by this until he offered to provide screenshot evidence that the burgers had, indeed, been bought; provide a 200 page document justifying his actions; and put it on the EA Forum together with pictures of Wytham Abbey. There then followed a breakout session dedicated to the pronunciation "White-ham". One member of the project team proposed changing the spelling of Wytham to "White-ham" to avoid further confusion.[3] Another person thought this was stupid, and said we may as well change the name to "Blackham Abbey". We needed some more time in the immersive environment of Wytham Abbey, but we finally concluded that: " Blackham is a more stupid name than White-ham or Wytham". Someone wrote this sentence on a blackboard. Thanks in no small part to the immersive environment of the glorious abbey, we harmoniously came to the conclusion that "We like this sentence and think it is true". Someone then suggested that it should have been written on a whiteboard instead of a blackboard. Then people started arguing. All hell broke loose. After further arguing, it seems that comparing "Blackham" to "Whiteham" was more controversial than any of us realised. Who knew?! As a result, the EV board decided to oust Wytham Abbey from its position in the portfolio. It did not seem wise to foreground all the controversies at the heart of our decision-making process, so the board simply stated that Wytham Abbey was "not consistently candid in its communications with the board, hindering the board's ability to exercise its responsibilities". 
Unfortunately, there then followed a sustained campaign with the rallying cry "Effective Ventures is nothing without its castles"[4], and half the EV board got sacked, and Wytham Abbey got reinstated. The End.[5] [6] ^ We have carefully avoided specifying who we mean by "we". For more details, see footnote 6. ^ Hamilton B. Urglar is sometimes known as the Hamburglar, and also sometimes known simply as "Ham". Some might argue that this biases him to be more favourable to ham. The Hamburglar argued that he could counter all the...

Effective Altruism Forum Podcast
“Unflattering aspects of Effective Altruism” by NunoSempere

Effective Altruism Forum Podcast

Play Episode Listen Later Mar 25, 2024 1:09


I've been writing a few posts critical of EA over at my blog. They might be of interest to people here: Unflattering aspects of Effective Altruism Alternative Visions of Effective Altruism Auftragstaktik Hurdles of using forecasting as a tool for making sense of AI progress Brief thoughts on CEA's stewardship of the EA Forum Why are we not harder, better, faster, stronger? ...and there are a few smaller pieces on my blog as well. I appreciate comments and perspectives anywhere, but prefer them over at the individual posts at my blog, since I have a large number of disagreements with the EA Forum's approach to moderation, curation, aesthetics or cost-effectiveness. --- First published: March 15th, 2024 Source: https://forum.effectivealtruism.org/posts/coWvsGuJPyiqBdrhC/unflattering-aspects-of-effective-altruism --- Narrated by TYPE III AUDIO.

The Nonlinear Library
EA - Unflattering aspects of Effective Altruism by NunoSempere

The Nonlinear Library

Play Episode Listen Later Mar 15, 2024 0:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unflattering aspects of Effective Altruism, published by NunoSempere on March 15, 2024 on The Effective Altruism Forum. I've been writing a few posts critical of EA over at my blog. They might be of interest to people here: Unflattering aspects of Effective Altruism Alternative Visions of Effective Altruism Auftragstaktik Hurdles of using forecasting as a tool for making sense of AI progress Brief thoughts on CEA's stewardship of the EA Forum Why are we not harder, better, faster, stronger? ...and there are a few smaller pieces on my blog as well. I appreciate comments and perspectives anywhere, but prefer them over at the individual posts, since I disagree with the EA Forum's approach to life. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Hire the CEA Online team for software development & consulting by Will Howard

The Nonlinear Library

Play Episode Listen Later Mar 14, 2024 6:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hire the CEA Online team for software development & consulting, published by Will Howard on March 14, 2024 on The Effective Altruism Forum. TL;DR: We are offering (paid) software consulting and contracting services to impactful projects. If you think we might be a good fit for your project please fill in the form here and we'll get back to you within a couple of days. The CEA Online team has recently taken on a few (3) projects for external organisations, in addition to our usual work which consists of developing the EA Forum[1]. We think these external projects have gone well, so we're trying out offering these services publicly. We are mainly[2] looking to help with projects that we think have a large potential for impact. As we're in the early stages of exploring this we are quite open in the sorts of projects we would consider, but just to give you some concrete ideas: Building a website that is more than a simple landing page. Think of the AMF website, which in spite of its deceptive appearance actually makes a lot of data available in a very transparent way. Or the Rethink Priorities Cross-Cause Cost-Effectiveness Model. Building an internal tool to streamline some common process in your organisation. Building a web platform that requires handling user data. E.g. a site that matches volunteers with relevant tasks. Or, if you can imagine it, some kind of web forum. If you have a project where you're not sure if we'd be a good fit, please do reach out to us anyway. You will be providing a service to us just by telling us about it, as we want to understand what the needs in the community are. We might also be able to point you in the right direction or set you up with another contractor even if we aren't suitable ourselves. In the projects we have completed so far, the thing that has been suggestive of this work being very worthwhile is that in most cases the project would have progressed much more slowly without us, or potentially not happened at all. This was either due to the people involved already having too many demands on their time, or having run into problems with their existing development process. We're excited to do more in the hope that having this available as a reliable service will push people to do software-dependent projects that they wouldn't have done otherwise, or execute them to a higher quality (and faster). What we could do for you We're open to anything from "high-level consulting" to "taking on a project entirely". In the "high-level consulting" case we would do some initial setup and give you advice on how to proceed, so you could then take over the project or continue with other contractors (who we might be able to set you up with). In the "taking on a project entirely" case we would be the contractors and would write all the code. The projects we have done so far have been more on the taking-on-entirely end of the spectrum. We also have an excellent designer, Agnes (@agnestenlund), and the capacity[3] to do product development work, such as conducting user interviews to gather requirements and then designing a solution based on those. This might be especially valuable if you have a lot of demands on your time and want to be able to hand off a project to a safe pair of hands. 
As mentioned above, we are hoping to help with projects that are high-impact, so we'll decide what to work on based on our fit for the project, as well as the financial and impact cases. As such, these are some characteristics that make a project more likely to be a good fit (non-exhaustive): Being motivated by EA principles and/or heavily embedded in the EA ecosystem. Cases where your organisation is at low capacity, such that you have projects that you think would be valuable to do but don't have the time to commit to yourselves. Cases where we can help you navigate...

The Nonlinear Library
EA - This is why people are reluctant to write on the EA Forum by Stan Pinsent

The Nonlinear Library

Play Episode Listen Later Mar 9, 2024 2:49


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: This is why people are reluctant to write on the EA Forum, published by Stan Pinsent on March 9, 2024 on The Effective Altruism Forum. Four days ago I posted a question Why are you reluctant to write on the EA Forum?, with a link to a Google Form. I received 20 responses. This post is in three parts: Summary of reasons people are reluctant to write on the EA Forum Suggestions for making it easier Positive feedback for the EA Forum Replies in full Summary of reasons people are reluctant to write on the EA Forum The form received 20 responses over four days. All replies included a reason for being reluctant or unable to write on the EA Forum. Only a minority of replies included a concrete suggestion for improvement. I have attempted to tally how many times each reason appeared across the 20 responses[2]: Suggestions for making it easier to contribute I give all concrete suggestions for helping people be less reluctant to contribute to the forum, in the chronological order in which they were received: More discourse on increasing participation: "more posts like these which are aimed at trying to get more people contributing" Give everyone equal Karma power: "If the amount of upvotes and downvotes you got didn't influence your voting power (and was made less prominent), we would have less groupthink and (pertaining to your question) I would be reading and writing on the EA-forum often and happily, instead of seldom and begrudgingly." Provide extra incentives for posting: "Perhaps small cash or other incentives given each month for best posts in certain categories, or do competitions, or some such measure? That added boost of incentive and the chance that the hours spent on a post may be reimbursed somehow." "Discussions that are less tied to specific identities and less time-consuming to process - more Polis like discussions that allow participants to maintain anonymity, while also being able to understand the shape of arguments." Lower the stakes for commenting: "I'm not sure if comment section can include "I've read x% of the article before this comment"?" Positive feedback for the EA Forum The question invited criticism of the Forum, but it did nevertheless garner some positive feedback. For an internet forum it's pretty good. But it's still an internet forum. Not many good discussions happen on the internet. Forum team do a great job :) Responses in full All responses can be found here. ^ ^ You can judge for yourself here whether I correctly classified the responses. I considered lumping "too time-consuming" and "lack of time" together, but decided against this because the former seems to imply "bar is very high", while the latter is merely a statement on how busy the respondent's life is. The form asked two questions: Why are you reluctant to write on the EA Forum? What would make it easier? Is there anything else you would like to share? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research by David Kristoffersson

The Nonlinear Library

Play Episode Listen Later Mar 8, 2024 7:49


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research, published by David Kristoffersson on March 8, 2024 on The Effective Altruism Forum. Cross-posted on LessWrong. Executive Summary We're excited to introduce Convergence Analysis - a research non-profit & think-tank with the mission of designing a safe and flourishing future for humanity in a world with transformative AI. In the past year, we've brought together an interdisciplinary team of 10 academics and professionals, spanning expertise in technical AI alignment, ethics, AI governance, hardware, computer science, philosophy, and mathematics. Together, we're launching three initiatives focused on conducting Scenario Research, Governance Recommendations Research, and AI Awareness. Our programs embody three key elements of our Theory of Change and reflect what we see as essential components of reducing AI risk: (1) understanding the problem, (2) describing concretely what people can do, and (3) disseminating information widely and precisely. In some more detail, they do the following: Scenario Research: Explore and define potential AI scenarios - the landscape of relevant pathways that the future of AI development might take. Governance Recommendations Research: Provide concrete, detailed analyses for specific AI governance proposals that lack comprehensive research. AI Awareness: Inform the general public and policymakers by disseminating important research via books, podcasts, and more. In the next three months, you can expect to see the following outputs: Convergence's Theory of Change: A report detailing an outcome-based, high-level strategic plan on how to mitigate existential risk from TAI. Research Agendas for our Scenario Research and Governance Recommendations initiatives. 2024 State of the AI Regulatory Landscape: A review summarizing governmental regulations for AI safety in 2024. Evaluating A US AI Chip Registration Policy: A research paper evaluating the global context, implementation, feasibility, and negative externalities of a potential U.S. AI chip registry. A series of articles on AI scenarios highlighting results from our ongoing research. All Thinks Considered: A podcast series exploring the topics of critical thinking, fostering open dialogue, and interviewing AI thought leaders. Learn more on our new website. History Convergence originally emerged as a research collaboration in existential risk strategy between David Kristoffersson and Justin Shovelain from 2017 to 2021, engaging a diverse group of collaborators. Throughout this period, they worked steadily on building a body of foundational research on reducing existential risk, publishing some findings on the EA Forum and LessWrong, and advising individuals and groups such as Lionheart Ventures. Through 2021 to 2023, we laid the foundation for a research institution and built a larger team. We are now launching Convergence as a strong team of 10 researchers and professionals with a revamped research and impact vision. Timelines to advanced AI have shortened, and our society urgently needs clarity on the paths ahead and on the right courses of action to take. Programs Scenario Research There are large uncertainties about the future of AI and its impacts on society. 
Potential scenarios range from flourishing post-work futures to existential catastrophes such as the total collapse of societal structures. Currently, there's a serious dearth of research to understand these scenarios - their likelihood, causes, and societal outcomes. Scenario planning is an analytical tool used by policymakers, strategists, and academics to explore and prepare for the landscape of possible outcomes in domains defined by uncertainty. Such research typically defines specific parameters that are likely to cause certain scenarios, and id...

The Nonlinear Library
EA - Invest in ACX Grants projects! by Saul Munn

The Nonlinear Library

Play Episode Listen Later Mar 7, 2024 5:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Invest in ACX Grants projects!, published by Saul Munn on March 7, 2024 on The Effective Altruism Forum. TLDR So, you think you're an effective altruist? Okay, show us what you got - invest in charitable projects, then see how you do over the coming year. If you pick winners, you get (charitable) prizes; otherwise, you lose your (charitable) dollars. Also, you get to fund impactful projects. Win-win. Click here to see the projects and to start investing! What's ACX/ACX Grants? Astral Codex Ten (ACX) is a blog written by Scott Alexander on topics like effective altruism, reasoning, science, psychiatry, medicine, ethics, genetics, AI, economics, and politics. ACX Grants is a grants program in which Scott Alexander helps fund charitable and scientific projects - see the 2022 cohort here and his retrospective on ACX Grants 2022 here. What do you mean by "invest in ACX Grants projects"? In ACX Grants 2024, some of the applications were given direct grants and the rest were given the option to participate in an impact market. Impact markets are an alternative to grants or donations as a way to fund charitable projects. A collection of philanthropies announces that they'll be giving out prizes for the completion of successful, effectively altruistic projects that solve important problems the philanthropies care about. Project creators then strike out to build projects that solve those problems. If they need money to get started, investors can buy a "stake" in the project's possible future prize winnings, called an "impact certificate." (You can read more about how impact markets generally work here, and a canonical explanation of impact certificates on the EA Forum here.) Four philanthropic funders have expressed interest in giving prizes to successful projects in this round: ACX Grants 2025 The Long Term Future Fund The EA Infrastructure Fund The Survival and Flourishing Fund So, after a year, the above philanthropies will review the projects in the impact market to see which ones have had the highest impact. Okay, but why would I want to buy impact certificates? Why not just donate directly to the project? Giving direct donations is great! But purchasing impact certificates can also have some advantages over direct donations: Better feedback Direct donation can have pretty bad feedback loops about what sorts of things end up actually being effective/successful. After a year, the philanthropies listed above will review the projects to see which ones are impactful - and award prizes to the ones that they find most impactful. You get to see how much impact per-dollar your investments returned, giving you grounded feedback. Improving your modeling of grant-makers Purchasing impact certificates forces you to put yourself in the eyes of a grant-maker - you can look through a bunch of projects that might be impactful, and, with your donation budget, select the ones you expect to have the most impact. It also pushes you to model the philanthropies with great feedback. What sorts of things do they care about? Why? What are their primary theories of change? How will the project sitting in front of you relevantly improve the world in a way they actually care about? Make that charitable moolah If you invest in projects that end up being really impactful, then you'll get a share of the charitable prize funding that projects win! 
All of this remains as charitable funding, so you'll be able to donate it to whatever cause you think is most impactful. For example, if you invest $100 into a project that wins a prize worth 2x its original valuation, you can then choose to donate $200 to a charity or project of your choice! Who's giving out the prizes at the end? Four philanthropic funders have expressed interest in giving prizes[1] to successful projects that align with their interests: AC...
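The payout arithmetic behind impact certificates is easy to make concrete. Below is a minimal sketch, assuming a simple proportional-payout rule; the function name, the entry valuation, and the settlement details are illustrative assumptions rather than the ACX impact market's actual mechanics.

```python
# Minimal sketch of the impact-certificate payout arithmetic described above.
# The proportional-payout rule and all figures are illustrative assumptions,
# not the ACX impact market's actual settlement mechanics.

def certificate_payout(investment: float, entry_valuation: float, prize: float) -> float:
    """Return the investor's share of a prize, proportional to the stake bought."""
    stake = investment / entry_valuation  # fraction of the project's impact purchased
    return stake * prize                  # charitable dollars returned to the investor

# The worked example from the post: a $100 stake in a project whose prize comes in
# at twice its original valuation returns $200 of charitable funding to regrant.
print(certificate_payout(investment=100, entry_valuation=1_000, prize=2_000))  # 200.0
```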

The Nonlinear Library
EA - Why are you reluctant to write on the EA Forum? by Stan Pinsent

The Nonlinear Library

Play Episode Listen Later Mar 5, 2024 0:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why are you reluctant to write on the EA Forum?, published by Stan Pinsent on March 5, 2024 on The Effective Altruism Forum. It has come to my attention that some people are reluctant to post or comment on the EA Forum, even some people who read the forum regularly and enjoy the quality of discourse here. What stops you posting? What might make it easier? You can give an anonymous answer on this Google Form. I intend to share responses in a follow-up post. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Forum feature updates: add buttons, see your stats, send DMs easier, and more (March '24) by tobytrem

The Nonlinear Library

Play Episode Listen Later Mar 1, 2024 6:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forum feature updates: add buttons, see your stats, send DMs easier, and more (March '24), published by tobytrem on March 1, 2024 on The Effective Altruism Forum. It's been a while since our last feature update in October. Since then, the Forum team has done a lot. This post includes: New tools for authors See all your stats on one page Add buttons to posts Improved tables for posts Improved notifications, messaging and onboarding More comprehensive and interactive notifications Subscribe to a user's comments Improved onboarding for new users: Redesign of Forum messaging Other updates and projects Job recommendations Making it easier to fork our codebase and set up new forums Giving Season A Giving portal A custom voting tool for the Donation Election An interactive banner Forum Wrapped 2023 Miscellaneous You can now hide community quick takes Clearer previews of posts, topics, and sequences A new opportunities tab on the Frontpage New tools for authors See all your stats on one page We've built a page which collects the stats for your posts into one overview. These stats include: how many people have viewed/read your posts, how much karma they've accrued, and the number of comments. You can access analytics for a particular post by clicking 'View detailed stats' from your post stats page. Add buttons to posts If you have a "call to action" in a post or comment, you can now add it as a button. Your button could be a link to an application, a survey, a calendar, or any other link you'd like to stand out. Improved tables for posts We've improved tables to make them much more readable and less prone to awkward word cut-offs such as "Wei ght" Improved notifications, messaging and onboarding More comprehensive and interactive notifications Notifications now have their own page, accessed by clicking the bell icon: On this page, you can directly reply to comments. We have also: Created notifications for Reactions Moved message notifications from the notifications list to an icon in the top bar. Made a notifications summary showing on hover so that you can check your notifications before clicking the bell to clear them (pictured below) We aim to make notifications informative without making them addictive or incentivising behaviour you don't endorse. If notifications bother you, you can: Change your settings to batch your notifications differently (or not be notified at all). Give us feedback. Subscribe to a user's comments You can now subscribe to be notified every time a user comments. We've also clarified subscriptions' functionality, with a new "Get notified" menu. Improved onboarding for new users: We've changed the process a user goes through when signing up for a Forum account. This has already increased the number of users giving information about their role, signing up for the Digest, and subscribing to topics. The new process looks like this: Redesign of Forum messaging The DM page has undergone a total redesign, making it easier to start individual and group messages and navigate your message history. You can now create a new message by clicking the new conversation symbol and selecting a Forum user (you used to have to navigate to their account).
You can also create a group message by adding multiple Forum users in the search box: Other updates and projects Job recommendations You may have noticed that we've recently been exploring ways we could help Forum users hire and be hired. As part of this project, we're experimenting more with targeted job recommendations. We're selecting high-impact jobs and showing each job to a small set of users that we have reason to believe may be interested in it. For example, if the job is limited to a specific country, we use your account location to help determine if it's relevant to you. We'll continue to iterate on thi...

The Nonlinear Library
EA - Farmed animal funding towards Africa is growing but remains highly neglected by AnimalAdvocacyAfrica

The Nonlinear Library

Play Episode Listen Later Feb 21, 2024 12:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Farmed animal funding towards Africa is growing but remains highly neglected, published by AnimalAdvocacyAfrica on February 21, 2024 on The Effective Altruism Forum. This post is a summary of our recent report "Mapping the Charitable Funding Landscape for Animal Welfare in Africa: Insights and Opportunities for Farmed Animal Advocacy". We only included the most relevant parts for EA Forum readers here and invite interested readers to consult the report for more details and insights. TL;DR Funding towards the farmed animal advocacy movement in Africa has grown significantly over the past years, especially from EA-aligned funders. Despite these increases, farmed animal advocacy remains underfunded. We hope that this report can help us and other stakeholders to more rapidly and effectively build the farmed animal advocacy movement in Africa. We aim to use and amplify the growing momentum identified in this report and call on any individual or organisations interested in contributing to this cause to contact us and/or increase their resources and focus dedicated towards farmed animal welfare in Africa. Motivation Industrial animal agriculture is expanding rapidly in Africa, with the continent projected to account for the largest absolute increase in farmed land animal numbers of any continent between 2012 and 2050 ( Kortschak, 2023). Despite its vast scale, the issue is highly neglected by charitable funding. Lewis Bollard ( 2019) estimated that farmed animal advocacy work in Africa received only USD 1 million in 2019, less than 1% of global funding for farmed animal advocacy. Farmed Animal Funders ( 2021) estimated funding to Africa at USD 2.5 million in 2021, a significantly higher but still very low amount. Accordingly, activists and organisations on the ground cite a lack of funding as the main bottleneck for their work ( Tan, 2021). Since 2021, Animal Advocacy Africa (AAA) has actively worked towards strengthening the farmed animal advocacy movement in Africa, with some focus on improving funding. With this report, we aim to understand the funding landscape for farmed animal advocacy in Africa in depth, identifying key actors, patterns, and trends. Notably, we focus on charitable grants and exclude any government funding that might be relevant as well. Our research aims to build transparency and enhance information on what is being done to help animals in Africa, which can help various stakeholders to make better decisions. While we focus on farmed animals, we also provide context on other animal groups. We hope that the findings from our analysis can contribute to funders shifting some of their resources from less neglected and potentially lower-impact projects to more neglected and potentially higher-impact ones. Data basis Based on the funding records of 131 funders that we suspected might have funded African animal causes in the past, we created a database of 2,136 records of grants towards animal projects in Africa. This grant data allowed us to base our analysis on real-world data, which provides an important improvement to previous research, which was typically based on self-reported surveys with funders and/or charities. Findings Overall funding levels We estimate at an 80% confidence level that the funders in scope for this analysis granted a total of USD 25 to 35 million to animal-related causes in Africa in 2020. 
These grants had substantially increased from 2018 to 2020. Funding for animal causes in Africa shows interesting patterns that contrast, to a certain extent, with trends observed in the animal advocacy movement globally ( Animal Charity Evaluators, 2024). Wild animal and conservation efforts receive the most funding. Notably, the projects in this category do not follow the wild animal suffering approach typically discussed in Effective Altruism ...

The Nonlinear Library
EA - My lifelong pledge to give away 10% of my income each year (and where I donated in 2023) by James Özden

The Nonlinear Library

Play Episode Listen Later Feb 13, 2024 17:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My lifelong pledge to give away 10% of my income each year (and where I donated in 2023), published by James Özden on February 13, 2024 on The Effective Altruism Forum. Note: Cross-posted from my blog. There is probably not much new here for long-time members of the EA community, some of whom have been giving 10%+ of their income for over a decade. However, I thought it might be interesting for newer EA Forum readers, and for folks who are more left-wing/progressive than average (like me), to see what arguments are compelling to someone of that worldview. In November 2022, I took the Giving What We Can pledge to give away 10% of my pre-tax income, to the most effective charities and opportunities, for the rest of my life. I'm very proud of taking the pledge, and feel great about finishing my first full year! I wanted to share some thoughts on how it's been for me, as well as some concrete places I donated to. Broadly, I feel like I've been committed to doing the most good ( whatever that means) for several years now, but it took some time for me to get going with my donations. One big factor is that I haven't been earning too much, especially when I was working full-time with Animal Rebellion/Extinction Rebellion, where people used to get paid between £400-1000 per month. Otherwise, I thought it would be a significant financial burden, even when my salary increased, that would make it difficult for me to build a financial safety net. But primarily, it's a reasonably big commitment, so I think taking some time to stew on it can be useful. Despite this, I've been surprised by how quickly the Giving What We Can (GWWC) pledge has become a part of my identity. Now, I'm so happy that I've pledged, and feel amazing that I'm able to support great projects to improve the world (you can tell because I'm already preaching about it - sorry not sorry). Importantly, I don't think donating is the only way for people to improve the world, and not necessarily the most impactful. But, I don't see it as an either/or, but rather a both/and. Simply, I don't think the decision is whether to dedicate your career to highly impactful work OR dedicate your free time (or career) to political activism OR donate some proportion of your income to effective projects. Rather, I think one can both pursue a high-impact career and give a lot, as donating often gives you the ability to have a huge impact with relatively little time investment. Tangibly, I've probably spent between 5-10 hours to donate around £3,000 this year, which I think will have a lot of positive impact with a relatively small time investment on my side (this was helped partially with the use of expert funds and my prior knowledge in a given area, but more on that later). However, I want to speak about some of the key points that convinced me to give 10% of my income for the rest of my life, namely: I am better off than 98% of the world, for no great reason besides that I grew up in a wealthy country, and it is a huge travesty if I don't use some percentage of this luck to help others. Donations can have very meaningful impacts on the issues I care about, often far more than other lifestyle choices I might be already making. I think the world would be a much better place if everyone was committed to giving some of their income/wealth, and there's no reason why it shouldn't start with me. 
(If you just want to see where I donated to in 2023, skip to the bottom). Why I decided to take the pledge Most people reading this are in the top 5% of wealth globally, and we should do something about it As someone who has been fairly engaged in progressive political activism, I often hear lots of comments attributing some key problems in the world, whether it's climate change, inequality or poverty, to the richest 1%. However, I think most peopl...

The Nonlinear Library
EA - Introducing the Animal Advocacy Forum - a space for those involved or interested in Animal Welfare & related topics by David van Beveren

The Nonlinear Library

Play Episode Listen Later Feb 6, 2024 4:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Animal Advocacy Forum - a space for those involved or interested in Animal Welfare & related topics, published by David van Beveren on February 6, 2024 on The Effective Altruism Forum. Summary Farmed Animal Strategic Team (FAST) is thrilled to announce the launch of our Animal Advocacy Forum, a new platform aimed at increasing discussion and enhancing collaboration within the animal advocacy movement. We invite everyone involved or interested in animal welfare, alternative proteins, animal rights, or related topics to participate, share insights about their initiatives, and discover valuable perspectives. Thank you! What is FAST? For more than a decade, FAST has operated as a private Google Group list, connecting over 500+ organizations and 1,400+ individuals dedicated to farmed animal welfare. This network includes professionals from pivotal EA-aligned organizations such as Open Philanthropy, Good Food Institute, The Humane League, Animal Charity Evaluators (ACE) - including a wide range of smaller and grassroots-based groups. Why a forum? In response to feedback from our FAST survey, members expressed a strong interest in deeper discussions and improved collaboration. There was also considerable dissatisfaction with the 'reply-all' feature, which led to unintentional spamming of 1,400 members - as a result, FAST decided to broaden its services to include a forum. While the FAST List continues to serve as a private space within the animal advocacy movement, the FAST Forum is open to the public to foster greater engagement, particularly from those involved in the EA and other closely-aligned movements. What should be posted there? Echoing the EA Forum's Animal Welfare topic's role which provides a space for organizations to announce initiatives, discuss promising new ideas, and constructively critique ongoing work - FAST's platform serves as a dedicated hub for in-depth discussions on animal advocacy and related topics. It aims to enable nuanced debates and collaboration on key issues such as alternative proteins, grassroots strategy, corporate campaigns, legal & policy work, among others. What shouldn't be posted there? Discussions related to ongoing investigations or internal strategy, especially regarding campaigns or initiatives not yet public, should not be shared on the forum to safeguard the confidentiality and security of those efforts. Why not use the EA Forum? While the EA Forum is a valuable resource for animal advocacy dialogue, the FAST forum is designed to foster a more focused and close-knit community. The EA Forum's broad spectrum of topics and distinct cultural norms can be intimidating for some, making it challenging for those specifically focused on animal advocacy to find and engage in targeted conversations. This initiative mirrors other communities such as the AI Alignment Forum, which serve to concentrate expertise and foster discussions in a critically important area. With that in mind, we strongly encourage members to continue sharing key content on the EA Forum for visibility and cross-engagement within the broader EA community.[1] Where do I start? 
Feel free to join us over at the Animal Advocacy Forum and become an active participant in our growing community.[2] To get started, simply register, complete your profile, and start or contribute to discussions that match your interests and expertise. This is also a great opportunity to introduce yourself and share insights about the impactful work you're doing. Thank you! Thank you to the organizations and individuals who have provided invaluable feedback and support for the forum and FAST's rebranding efforts, including Animal Charity Evaluators, Veganuary, ProVeg International, Stray Dog Institute, Animal Think Tank, Freedom Food Alliance, GFI, and the AVA Summit. Also, a big...

The Nonlinear Library
EA - Who wants to be hired? (Feb-May 2024) by tobytrem

The Nonlinear Library

Play Episode Listen Later Jan 31, 2024 4:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who wants to be hired? (Feb-May 2024), published by tobytrem on January 31, 2024 on The Effective Altruism Forum. Share your information in this thread if you are looking for full-time, part-time, or limited project work in EA causes[1]! We'd like to help people in EA find impactful work, so we've set up this thread, and another called Who's hiring? (we did this last in 2022[2]). Consider sharing this thread with friends who aren't on the Forum, but might be interested in getting involved in this kind of work. They will need to make an account to post, but we think it is worth it! If you have any feedback on these threads, please DM me or comment below. Take part in the thread To take part in this thread, add an 'Answer' below. Here's a template: TLDR: [1-line summary of the kind of work you're looking for and anything particularly relevant from your background or interests. ] Skills & background: [Outline your strengths and job experience, as well as your experience with EA if you think that might be relevant. Links to past projects have been particularly valuable for past job seekers] Location/remote: [Current location & whether you're willing to relocate or work remotely] Availability & type of work: [Note whether you're only available during a particular period, whether you're looking for part-time work, etc...] Resume/CV/LinkedIn: ___ Email/contact: [you can also suggest that people DM you on the Forum] Other notes: [Describe your cause area preferences if you have them, expand on the type of role you are looking for, etc... Hiring managers fed back after our last round of threads that they sometimes couldn't tell whether prospective hires would be interested in the roles they were offering.] Questions: [IF YOU HAVE ANY: Consider sharing uncertainties you have, other questions, etc.] Example answer[3] Read some hiring tips here: Yonatan Cale's quick take on using this thread effectively. Don't think, just apply! (usually) How to think about applying to EA jobs Job boards & other resources If you want to explore EA jobs, check out the related Who's hiring? thread, or the resources below: The 80,000 Hours Job Board compiles a huge amount of open roles; there are over 800 jobs listed right now. You can filter to exclude "career development" roles, set up alerts for roles matching your preferred criteria, and browse roles by organisation or "collection." The "Job listing (open)" page is a place to explore positions people have shared or discussed on the EA Forum (see also opportunities to take action). The EA Opportunity Board collects internships, volunteer opportunities, conferences, and more - including part-time and entry-level job opportunities. Other resources include Probably Good's list of impact-focused job boards, the EA job postings and EA volunteering Facebook groups, and these lists of project ideas you might be able to work on independently. (If you have other suggestions for what I should include here, please comment on this post or send me a DM!) ^ I phrase it this way to include explicitly EA organisations, as well as organisations which do not call themselves EA, but work on causes with significant support within EA such as farmed animal welfare or AI risk. ^ you can see those threads here: 1, 2 ^ TLDR: I'm looking for entry-level communications jobs or writing-heavy roles. 
My experience is mostly in writing (of different kinds) and tutoring students. Skills & background: I write a lot and have some undergraduate research experience and familiarity with legal work. I finished my BA in history in May 2023 (see [my thesis]). I spent two summers as a legal intern at [Place], and have been tutoring for a year now. I also speak Spanish. I helped run my university EA group in 2022-2023. You can see some of my public writing for [our student newspaper] an...

The Nonlinear Library
EA - Funding circle aimed at slowing down AI - looking for participants by Greg Colbourn

The Nonlinear Library

Play Episode Listen Later Jan 26, 2024 4:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding circle aimed at slowing down AI - looking for participants, published by Greg Colbourn on January 26, 2024 on The Effective Altruism Forum. Are you an earn-to-giver or (aspiring) philanthropist who has short AGI timelines and/or high p(doom|AGI)? Do you want to discuss donation opportunities with others who share your goal of slowing down / pausing / stopping AI development[1]? If so, I want to hear from you! For some context, I've been extremely concerned about short-term AI x-risk since March 2023 (post-GPT-4), and have, since then, thought that more AI Safety research will not be enough to save us (or AI Governance that isn't focused[2] on slowing down AI or a global moratorium on further capabilities advances). Thus I think that on the margin far more resources need to be going into slowing down AI (there are already many dedicated funds for the wider space of AI Safety). I posted this to an EA investing group in late April: And this AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now - to the EA Forum in early May. My p(doom|AGI) is ~90% as things stand ( Doom is default outcome of AGI). But my p(doom) overall is ~50% by 2030, because I think there's a decent chance we can actually get a Stop[3]. My timelines are ~ 0-5 years: I have donated >$150k[4] to people and projects focused on slowing down AI since (mostly as kind of seed funding - to individuals, and projects so new they don't have official orgs yet[5]), but I want to do a lot more. Having people with me would be great for multiplying impact and also for my motivation! I'm thinking 4-6 people, each committing ~$100k(+) over 2024, would be good. The idea would be to discuss donation opportunities in the "slowing down AI" space during a monthly call (e.g. Google Meet), and have an informal text chat for the group (e.g. Whatsapp or Messenger). Fostering a sense of unity of purpose[6], but nothing too demanding or official. Active, but low friction and low total time commitment. Donations would be made independently rather than from a pooled fund, but we can have some coordination to get "win-wins" based on any shared preferences of what to fund. Meta-charity Funders is a useful model. We could maybe do something like an S-process for coordination, like what Jaan Tallinn's Survival and Flourishing Fund does[7]; it helps avoid "donor chicken" situations. Or we could do something simpler like rank the value of donating successive marginal $10k amounts to each project. Or just stick to more qualitative discussion. This is all still to be determined by the group. Please join me if you can[8], or share with others you think may be interested. Feel free to DM me here or on X, book a call with me, or fill in this form. ^ If you oppose AI for other reasons (e.g. ethics, job loss, copyright), as long as you are looking to fund strategies that aim to show results in the short term (say within a year), then I'd be interested in you joining the circle. ^ I think Jaan Tallinn's new top priorities are great! ^ After 2030, if we have a Stop and are still here, we can keep kicking the can down the road.. ^ I've made a few more donations since that tweet. ^ Public examples include Holly Elmore, giving away copies of Uncontrollable, and AI-Plans.com. ^ Right now I feel quite isolated making donations in this space. 
^ It's a little complicated, but here's a short description: "Everyone individually decides how much value each project creates at various funding levels. We find an allocation of funds that's fair and maximises the funders' expressed preferences (using a number of somewhat dubious but probably not too terrible assumptions). Funders can adjust how much money they want to distribute after seeing everyone's evaluations, including fully pulling out." (paraphr...
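The simpler coordination idea mentioned in the description (ranking the value of donating successive marginal $10k amounts to each project) can be sketched in a few lines. This is a hypothetical illustration only, with made-up project names and value estimates; it is not the S-process, which handles fairness and funder adjustments in a more sophisticated way.

```python
# Hypothetical sketch of the simpler coordination idea mentioned above: rank the
# value of successive marginal $10k tranches across projects, then fund tranches
# in descending order of value until the budget runs out. Project names and
# marginal-value estimates are made up; values are assumed to diminish within
# each project, so the greedy ordering stays consistent.

BUDGET = 100_000
TRANCHE = 10_000

marginal_values = {  # estimated value (arbitrary units) of each successive $10k
    "Project A": [9, 7, 4, 2],
    "Project B": [8, 6, 5, 1],
    "Project C": [5, 3, 2, 1],
}

tranches = sorted(
    ((value, name) for name, values in marginal_values.items() for value in values),
    reverse=True,
)

allocation = {name: 0 for name in marginal_values}
spent = 0
for value, name in tranches:
    if spent + TRANCHE > BUDGET:
        break
    allocation[name] += TRANCHE
    spent += TRANCHE

print(allocation)  # {'Project A': 40000, 'Project B': 30000, 'Project C': 30000}
```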

The Nonlinear Library
EA - Can a war cause human extinction? Once again, not on priors by Vasco Grilo

The Nonlinear Library

Play Episode Listen Later Jan 25, 2024 31:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can a war cause human extinction? Once again, not on priors, published by Vasco Grilo on January 25, 2024 on The Effective Altruism Forum. Summary Stephen Clare's classic EA Forum post How likely is World War III? concludes "the chance of an extinction-level war [this century] is about 1%". I commented that power law extrapolation often results in greatly overestimating tail risk, and that fitting a power law to all the data points instead of the ones in the right tail usually leads to higher risk too. To investigate the above, I looked into historical annual war deaths along the lines of what I did in Can a terrorist attack cause human extinction? Not on priors, where I concluded the probability of a terrorist attack causing human extinction is astronomically low. Historical annual war deaths of combatants suggest the annual probability of a war causing human extinction is astronomically low once again. 6.36*10^-14 according to my preferred estimate, although it is not resilient, and can easily be wrong by many orders of magnitude ( OOMs). One may well update to a much higher extinction risk after accounting for inside view factors (e.g. weapon technology), and indirect effects of war, like increasing the likelihood of civilisational collapse. However, extraordinary evidence would be required to move up sufficiently many orders of magnitude for an AI, bio or nuclear war to have a decent chance of causing human extinction. In the realm of the more anthropogenic AI, bio and nuclear risk, I personally think underweighting the outside view is a major reason leading to overly high risk. I encourage readers to check David Thorstad's series exaggerating the risks, which includes subseries on climate, AI and bio risk. Introduction The 166th EA Forum Digest had Stephen Clare's How likely is World War III? as the classic EA Forum post (as a side note, the rubric is great!). It presents the following conclusions: First, I estimate that the chance of direct Great Power conflict this century is around 45%. Second, I think the chance of a huge war as bad or worse than WWII is on the order of 10%. Third, I think the chance of an extinction-level war is about 1%. This is despite the fact that I put more credence in the hypothesis that war has become less likely in the post-WWII period than I do in the hypothesis that the risk of war has not changed. I view the last of these as a crucial consideration for cause prioritisation, in the sense it directly informs the potential scale of the benefits of mitigating the risk from great power conflict. It results from assuming each war has a 0.06 % (= 2*3*10^-4) chance of causing human extinction. This is explained elsewhere in the post, and in more detail in the curated one How bad could a war get? by Stephen and Rani Martin. In essence, it is 2 times a 0.03 % chance of war deaths of combatants being at least 8 billion: "In Only the Dead, political scientist Bear Braumoeller [I recommend his appearance on The 80,000 Hours Podcast!] uses his estimated parameters to infer the probability of enormous wars. His [ power law] distribution gives a 1 in 200 chance of a given war escalating to be [at least] twice as bad as World War II and a 3 in 10,000 chance of it causing [at least] 8 billion deaths [of combatants] (i.e. human extinction)". 
2 times because the above 0.03 % "may underestimate the chance of an extinction war for at least two reasons. First, world population has been growing over time. If we instead considered the proportion of global population killed per war instead, extreme outcomes may seem more likely. Second, he does not consider civilian deaths. Historically, the ratio of civilian-deaths-to-battle deaths in war has been about 1-to-1 (though there's a lot of variation across wars). So fewer than 8 billion battle deaths would be...
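A minimal sketch of the power-law tail extrapolation at issue here, assuming a simple Pareto survival function with an illustrative minimum war size of 1,000 battle deaths and a tail exponent of 0.5. These parameters are assumptions chosen for illustration, not Braumoeller's or the author's fitted values, though with them the extrapolated probabilities land close to the "1 in 200" and "3 in 10,000" figures quoted above.

```python
# Illustrative sketch of the power-law (Pareto) tail extrapolation discussed above.
# P(X >= x) = (x / x_min) ** -alpha for x >= x_min. The x_min and alpha used here
# are assumptions chosen for illustration, not Braumoeller's or the author's
# fitted parameters, though they land near the quoted figures.

def tail_probability(x: float, x_min: float, alpha: float) -> float:
    """Survival function of a Pareto distribution: P(battle deaths >= x)."""
    if x < x_min:
        return 1.0
    return (x / x_min) ** -alpha

X_MIN = 1_000    # assumed minimum war size (battle deaths)
ALPHA = 0.5      # assumed tail exponent; smaller alpha means a heavier tail

ww2_battle_deaths = 25_000_000   # rough figure, for scale only
extinction_level = 8_000_000_000

print(tail_probability(2 * ww2_battle_deaths, X_MIN, ALPHA))  # ~0.0045, roughly "1 in 200"
print(tail_probability(extinction_level, X_MIN, ALPHA))       # ~0.00035, roughly "3 in 10,000"
```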