In this week's episode of Nonprofit Newsfeed by Whole Whale, George and Nick dive into an engaging discussion packed with insights for nonprofit enthusiasts. AI in Fundraising: The conversation turns to an intriguing experiment reported by TechCrunch, in which Sage Future, backed by Open Philanthropy, tasked AI models with fundraising for charity. These AI agents, with human oversight, raised $270 for Helen Keller International by engaging in tasks like setting up social media accounts and creating promotional content. While the hosts acknowledge AI's role in automating communication, they caution against over-reliance due to potential brand risks, especially on sensitive issues. Environmental Advocacy: The episode touches on the history of leaded gasoline and its eventual global phase-out in 2021, highlighting the critical role of governmental oversight in protecting public health. The hosts use this story to emphasize the importance of maintaining robust environmental regulations. USDA Grant Freeze Impact: A pressing issue discussed is the USDA's grant freeze, which has left nonprofits like Pasa Sustainable Agriculture in financial turmoil. With $3 million in unpaid reimbursements, the organization had to furlough most of its staff, underscoring the dire consequences of such funding disruptions for local communities and farms. Community-Driven Violence Prevention: The Circle of Brotherhood's innovative efforts in Miami's Liberty City are celebrated for their community-based approach to violence prevention. By providing unarmed, de-escalation-focused security services, the organization works alongside local youth centers to foster a safer environment, demonstrating the power of community engagement over traditional security methods.
In my past year as a grantmaker in the global health and wellbeing (GHW) meta space at Open Philanthropy, I've identified some exciting ideas that could fill existing gaps. While these initiatives have significant potential, they require more active development and support to move forward. The ideas I think could have the highest impact are: Government placements/secondments in key GHW areas (e.g. international development), and Expanded (ultra) high-net-worth ([U]HNW) advising. Each of these ideas needs a very specific type of leadership and/or structure. More accessible options I'm excited about — particularly for students or recent graduates — could involve virtual GHW courses or action-focused student groups. I can't commit to supporting any particular project based on these ideas ahead of time, because the likelihood of success would heavily depend on details (including the people leading the project). Still, I thought it would be helpful to [...] --- Outline: (01:19) Introduction; (02:30) Project ideas; (02:33) Fellowships and Placements; (02:37) Placement orgs for governments and think tanks; (03:06) Fellowships/Placements at GHW Organizations; (03:57) More, and different, effective giving organizations; (04:03) More (U)HNW advising; (05:14) Targeting different niche demographics; (05:50) Filling more geographic gaps; (06:08) Infrastructure support for GHW organizations; (06:38) EA-inspired GHW courses; (06:56) BlueDot Impact for GHW; (07:40) Incorporating EA content into university courses; (08:35) Useful GHW events; (08:51) Events bringing together EA and mainstream GHD orgs; (09:57) Career panels or similar; (10:13) More, and different, student groups; (10:18) Action-focused student groups; (11:34) Policy-focused grad student groups; (11:51) Less thought-through ideas; (13:12) Perceived impact and fit. The original text contained 2 footnotes which were omitted from this narration. --- First published: March 18th, 2025 Source: https://forum.effectivealtruism.org/posts/pAE6zfAgceCop6vcE/projects-i-d-like-to-see-in-the-ghw-meta-space --- Narrated by TYPE III AUDIO.
Read the full transcript here. Is it useful to vote against a majority when you might lose political or social capital for doing so? What are the various perspectives on the US/China AI race? How close is the competition? How has AI been used in Ukraine? Should we work towards a global ban of autonomous weapons? And if so, how should we define "autonomous"? Is there any potential for the US and China to cooperate on AI? To what extent do government officials — especially senior policymakers — worry about AI? Which particular worries are on their minds? To what extent is the average person on the street worried about AI? What's going on with the semiconductor industry in Taiwan? How hard is it to get an AI model to "reason"? How could animal training be improved? Do most horses fear humans? How do we project ourselves onto the space around us? Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University's Center for the Governance of AI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne. Follow her on Twitter at @hlntnr. Staff: Spencer Greenberg — Host / Director; Josh Castle — Producer; Ryan Kessler — Audio Engineer; Uri Bram — Factotum; WeAmplify — Transcriptionists. Music: Broke for Free, Josh Woodward, Lee Rosevere, Quiet Music for Tiny Robots, wowamusic, zapsplat.com. Affiliates: Clearer Thinking, GuidedTrack, Mind Ease, Positly, UpLift.
Productivity growth in the developed world has been on a downward trend since the 1960s. Meanwhile, gains in life expectancy have also slowed. And yet the number of dollars and researchers dedicated to R&D grows every year. In today's episode, the FT's Chief Data Reporter, John Burn-Murdoch, asks whether western culture has lost its previous focus on human progress and become too risk-averse, or whether the problem is simply that the low-hanging fruit of scientific research has already been plucked. He does so in conversation with innovation economist Matt Clancy, who is the author of the New Things Under the Sun blog, and a research fellow at Open Philanthropy, a non-profit foundation based in San Francisco that provides research grants. John Burn-Murdoch writes a column each week for the Financial Times. You can find it here. Subscribe on Apple, Spotify, Pocket Casts or wherever you listen. Presented by John Burn-Murdoch. Produced by Edith Rousselot. The editor is Bryant Urstadt. Manuela Saragosa is the executive producer. Audio mix and original music by Breen Turner. The FT's head of audio is Cheryl Brumley. Read a transcript of this episode on FT.com. Hosted on Acast. See acast.com/privacy for more information.
In 2023[1] GiveWell raised $355 million: $100 million from Open Philanthropy and $255 million from other donors. In their post on 10th April 2023, GiveWell forecast the amount they expected to raise in 2023, albeit with wide confidence intervals, and stated that their 10th percentile estimate for total funds raised was $416 million, and their 10th percentile estimate for funds raised outside of Open Philanthropy was $260 million.
                              10th percentile estimate | Median estimate | Amount raised
Total                         $416 million             | $581 million    | $355 million
Excluding Open Philanthropy   $260 million             | $330 million    | $255 million
Regarding Open Philanthropy, the April 2023 post states that Open Philanthropy "tentatively plans to give $250 million in 2023"; however, Open Philanthropy gave a grant of $300 million to cover 2023-2025, to be split however GiveWell saw fit, and GiveWell used $100 million of that grant in 2023. However, for other donors I'm not sure what caused the missed estimate. Credit to 'Arnold' on GiveWell's December 2024 Open Thread for [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: January 19th, 2025 Source: https://forum.effectivealtruism.org/posts/RdbDH4T8bxWwZpc9h/givewell-raised-less-than-its-10th-percentile-forecast-in --- Narrated by TYPE III AUDIO.
Podcast: AI Summer. Episode: Ajeya Cotra on AI safety and the future of humanity. Release date: 2025-01-16. Ajeya Cotra works at Open Philanthropy, a leading funder of efforts to combat existential risks from AI. She has led the foundation's grantmaking on technical research to understand and reduce catastrophic risks from advanced AI. She is co-author of Planned Obsolescence, a newsletter about AI futurism and AI alignment. Although a committed doomer herself, Cotra has worked hard to understand the perspectives of AI safety skeptics. In this episode, we asked her to guide us through the contentious debate over AI safety and—perhaps—explain why people with similar views on other issues frequently reach divergent views on this one. We spoke to Cotra on December 10. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Summary: There's a near consensus that EA needs funding diversification, but with Open Phil accounting for ~90% of EA funding, that's just not possible due to some pretty basic math. Organizations and the community would need to make large tradeoffs, and this simply isn't possible/worth it at this time. Lots of people want funding diversification: It has been two years since the FTX collapse, and one thing everyone seems to agree on is that we need more funding diversification. These takes range from off-hand wishes “it sure would be great if funding in EA were more diversified”, to organizations trying to get a certain percentage of their budgets from non-OP sources/saying they want to diversify their funding base[1][2][3][4][5][6][7][8], to Open Philanthropy/Good Ventures themselves wanting to see more funding diversification[9]. Everyone seems to agree: other people should be giving more money to the EA projects. The Math: Of course, I [...] --- Outline: (00:34) Lots of people want funding diversification; (01:11) The Math; (03:47) Weighted Average; (05:03) Making a lot of money to donate is difficult; (09:18) Solutions; (09:21) 1. Get more funders; (10:35) 2. Spend Less; (12:49) 3. Splitting up Open Philanthropy into Several Organizations; (13:52) 4. More For-Profit EA Work/EA Organizations Charging for Their Work; (16:23) 5. Acceptance; (16:59) My Personal Solution; (17:26) Conclusion; (17:59) Further Readings --- First published: December 27th, 2024 Source: https://forum.effectivealtruism.org/posts/x8JrwokZTNzgCgYts/funding-diversification-for-mid-large-ea-organizations-is --- Narrated by TYPE III AUDIO.
Summary: There's a near consensus that EA needs funding diversification, but with Open Phil accounting for ~90% of EA funding, that's just not possible due to some pretty basic math. Organizations and the community would need to make large tradeoffs, and this simply isn't possible/worth it at this time. Lots of people want funding diversification: It has been two years since the FTX collapse, and one thing everyone seems to agree on is that we need more funding diversification. These takes range from off-hand wishes “it sure would be great if funding in EA were more diversified”, to organizations trying to get a certain percentage of their budgets from non-OP sources/saying they want to diversify their funding base(1,2,3,4,5,6,7,8), to Open Philanthropy/Good Ventures themselves wanting to see more funding diversification(9). Everyone seems to agree: other people should be giving more money to the EA projects. The Math: Of course, I [...] --- Outline: (00:07) Summary; (00:29) Lots of people want funding diversification; (01:10) The Math; (03:46) Weighted Average; (05:02) Making a lot of money to donate is difficult; (09:17) Solutions; (09:21) 1. Get more funders; (10:34) 2. Spend Less; (12:48) 3. Splitting up Open Philanthropy into Several Organizations; (13:51) 4. More For-Profit EA Work/EA Organizations Charging for Their Work; (16:22) 5. Acceptance; (16:58) My Personal Solution; (17:25) Conclusion; (18:01) 1. I was approached at several EAGs, including a few weeks ago in Boston, to donate to certain organizations specifically because they want to get a certain X% (30, 50, etc.) from non-OP sources, but I'm sure I can find organizations who are very public about this; (18:20) 2 --- First published: December 27th, 2024 Source: https://forum.effectivealtruism.org/posts/x8JrwokZTNzgCgYts/funding-diversification-for-mid-large-ea-organizations-is --- Narrated by TYPE III AUDIO.
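To make the "pretty basic math" in the post above concrete, here is a minimal, hedged sketch in Python (not from the original post; the ~90% figure is the author's, while the dollar totals and growth multipliers are assumptions chosen purely for illustration):

```python
# Minimal illustrative sketch (not from the post): how Open Phil's share of total
# EA funding responds to growth in everyone else's giving. The ~90% share is the
# post's figure; the dollar amounts and multipliers below are assumed for illustration.

def op_share(op_funding: float, other_funding: float) -> float:
    """Open Phil's share of total funding."""
    return op_funding / (op_funding + other_funding)

op = 900.0     # hypothetical $900M/year from Open Phil (~90% of a $1B total)
other = 100.0  # hypothetical $100M/year from all other donors combined

for multiplier in (1, 2, 3, 5, 9):
    share = op_share(op, other * multiplier)
    print(f"Non-OP funding x{multiplier}: Open Phil share = {share:.0%}")

# Prints: x1 -> 90%, x2 -> 82%, x3 -> 75%, x5 -> 64%, x9 -> 50%.
# Even tripling all non-OP giving would leave a single funder supplying ~3/4 of the
# total, which is the arithmetic behind the post's claim that meaningful funding
# diversification isn't achievable in the short run.
```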
Tom Kalil is the CEO of Renaissance Philanthropy. He also served in the White House under two presidents (Obama and Clinton), where he helped establish incentive prizes in government through challenge.gov, in addition to dozens of science and tech programs. More recently, Tom served as the Chief Innovation Officer at Schmidt Futures, where he helped launch Convergent Research. Matt Clancy is an economist and a research fellow at Open Philanthropy. He writes 'New Things Under the Sun', a living literature review on academic research about science and innovation. We talked about: What is 'influence without authority'? Should public funders sponsor more innovation prizes? Can policy entrepreneurship be taught formally? Why isn't ultra-wealthy philanthropy much more ambitious? What's the optimistic case for increasing US state capacity? What was it like being principal staffer to Gordon Moore? What is Renaissance Philanthropy? You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!
Adam Marblestone is the CEO of Convergent Research. He is working with a large and growing network of collaborators and advisors to develop a strategic roadmap for future FROs. Outside of CR, he serves on the boards of several non-profits pursuing new methods of funding and organizing scientific research, including Norn Group and New Science, and as an interviewer for the Hertz Foundation. Previously, he was a Schmidt Futures Innovation Fellow, a Fellow with the Federation of American Scientists (FAS), a research scientist at Google DeepMind, Chief Strategy Officer of the brain-computer interface company Kernel, a research scientist at MIT, a PhD student in biophysics with George Church and colleagues at Harvard, and a theoretical physics student at Yale. He has also previously helped to start companies like BioBright, and advised foundations such as Open Philanthropy. Session Summary: In this episode of the Existential Hope Podcast, our guest is Adam Marblestone, CEO of Convergent Research. Adam shares his journey from working on nanotechnology and neuroscience to pioneering a bold new model for scientific work and funding: Focused Research Organizations (FROs). These nonprofit, deep-tech startups are designed to fill critical gaps in science by building the infrastructure needed to accelerate discovery. Tune in to hear how FROs are unlocking innovation, tackling bottlenecks across fields, and inspiring a new approach to advancing humanity's understanding of the world. Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts. Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project. Hosted by Allison Duettmann and Beatrice Erkers. Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram. Explore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
This is a link post. One of Open Philanthropy's goals for this year is to experiment with collaborating with other funders. Today, we're excited to announce our biggest collaboration to date: the Lead Exposure Action Fund (LEAF). Lead exposure in low- and middle-income countries is a devastating but highly neglected issue. The Global Burden of Disease study estimates 1.5 million deaths per year attributable to lead poisoning. Despite this burden, lead poisoning has only received roughly $15 million per year in philanthropic funding until recently. That is less than 1% of the funding that goes towards diseases like tuberculosis or malaria, which are themselves considered neglected. The goal of LEAF is to accelerate progress toward a world free of lead exposure by making grants to support measurement, mitigation, and mainstreaming awareness of the problem. Our partners have already committed $104 million, and we plan for LEAF to allocate that [...] --- Outline: (01:54) Why we chose to work on lead; (04:54) What LEAF hopes to achieve; (05:30) The LEAF team; (06:01) An experiment for Open Philanthropy; (06:49) Grantmaking so far. The original text contained 3 footnotes which were omitted from this narration. --- First published: September 23rd, 2024 Source: https://forum.effectivealtruism.org/posts/z5PvTSa54pdxxw72W/announcing-the-lead-exposure-action-fund --- Narrated by TYPE III AUDIO.
This is a link post. This WaPo piece announces the Partnership for a Lead-Free Future (PLF), a collaboration led by Open Philanthropy, USAID, and UNICEF. It was co-authored by Alexander Berger (Open Phil's CEO) and Samantha Power, head of USAID. Ten years ago, when residents of Flint, Mich., were exposed to toxic levels of lead in their drinking water, 1 in 20 children in the city had elevated blood lead levels that placed them at risk for heart disease, strokes, cognitive deficits and developmental delays — health effects that residents still grapple with to this day. It was only after activists rallied, organized and advocated relentlessly that national attention focused on Flint, and officials committed nearly half a billion dollars to clean up Flint's water. Today, there is a lead poisoning crisis raging on a far greater scale — and hardly anyone is talking about it. [...] The partnership will [...] --- First published: September 23rd, 2024 Source: https://forum.effectivealtruism.org/posts/soeJ4XNnLoyWpiFsK/we-can-protect-millions-of-kids-from-a-global-killer-without --- Narrated by TYPE III AUDIO.
Join Professor Kevin Werbach in his discussion with Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology. In this episode, Werbach and Toner discuss how the public views AI safety and ethics, and both the positive and negative outcomes of advancements in AI. We discuss Toner's lessons from the unsuccessful removal of Sam Altman as the CEO of OpenAI, oversight structures to audit and approve the AI systems that companies deploy, and the role of the government in AI accountability. Finally, Toner explains how businesses can take charge of their responsible AI deployment. Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University's Center for the Governance of AI. From 2021 to 2023, she served on the board of OpenAI, the creator of ChatGPT. Helen Toner's TED Talk: How to Govern AI, Even if it's Hard to Predict. Helen Toner on the OpenAI Coup: “It was about trust and accountability” (Financial Times). Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: September 2024, published by Toby Tremlett on September 19, 2024 on The Effective Altruism Forum. If you would like to see EA Organization Updates as soon as they come out, consider subscribing to this tag. Some of the opportunities and job listings we feature in this update have (very) pressing deadlines (see AI Alignment Teaching Fellow opportunities at BlueDot Impact, September 22, and Institutional Foodservice Fellow at the Good Food Institute, September 18). You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there's also an "org update" tag, where you can find more news and updates that are not part of this consolidated series. These monthly posts originated as the "Updates" section of the monthly EA Newsletter. Organizations submit their own updates, which we edit for clarity. (If you'd like to share your updates and jobs via this series, please apply here.) Opportunities and jobs Opportunities Consider also checking opportunities listed on the EA Opportunity Board and the Opportunities to Take Action tag. ALLFED published a new database containing numerous research projects that prospective volunteers can assist with. Explore the database and apply here. Apply to the upcoming AI Safety Fundamentals: Alignment course by October 6 to learn about the risks from AI and how you can contribute to the field. The Animal Advocacy Careers Introduction to Animal Advocacy Course has been revamped. The course is for those wishing to kickstart a career in animal advocacy. Giv Effektivt (DK) needs ~110 EU citizens to become members before the new year in order to offer tax deductions of around 450.000DKK ($66.000) for 2024-25 donations. Become a member now for 50DKK ($7). An existing donor will give 100DKK for each new member until the organization reaches 300 members. Anima International's Animal Advocacy Training Center released a new online course - Fundraising Essentials. It's a free, self-paced resource with over two hours of video content for people new to the subject. Job listings Consider also exploring jobs listed on the Job listing (open) tag. For even more roles, check the 80,000 Hours Job Board. BlueDot Impact AI Alignment Teaching Fellow (Remote, £4.9K-£9.6K, apply by September 22nd) Centre for Effective Altruism Head of Operations (Remote, £107.4K / $179.9K, apply by October 7th) Cooperative AI Foundation Communications Officer (Remote, £35K-£40K, apply by September 29th) GiveWell Senior Researcher (Remote, $200K-$220.6K) Giving What We Can Global CEO (Remote, $130K+, apply by September 30th) Open Philanthropy Operations Coordinator/Associate (San Francisco, Washington, DC, $99.6K-$122.6K) If you're interested in working at Open Philanthropy but don't see an open role that matches your skillset, express your interest. 
Epoch AI Question Writer, Math Benchmark (Contractor Position) (Remote, $2K monthly + $100-$1K performance-based bonus) Senior Researcher, ML Distributed Systems (Remote, $150K-$180K) The Good Food Institute Managing Director, GFI India (Hybrid (Mumbai, Delhi, Hyderabad, or Bangalore), ₹4.5M, apply by October 2nd) Institutional Foodservice Fellow (Independent Contractor) (Remote in US, $3.6K biweekly, apply by September 18th) Organization updates The organization updates are in alphabetical order (0-A-Z). 80,000 Hours There is one month left to win $5,000 career grants by referring your friends or colleagues to 80,000 Hours' free career advising. Also, the organization released a blog post about the recent updates to their AI-related content, as well as a post about pandemic preparedness in relation to mpox and H5N1. On the 80,000 Hours Podcast, Rob interviewed: Nick Joseph on whether Anthropic's AI safety policy is up to the task...
Jacob Trefethen oversees Open Philanthropy's science and science policy programs. He was a Henry Fellow at Harvard University, and has a B.A. from the University of Cambridge. You can find links and a transcript at www.hearthisidea.com/episodes/trefethen In this episode we talked about the risks and benefits of open source AI models. We talk about: Life-saving health technologies which probably won't exist in 5 years (without a concerted effort) — like a widely available TB vaccine, and bugs which stop malaria spreading; How R&D for neglected diseases works — How much does the world spend on it? How do drugs for neglected diseases go from design to distribution?; No-brainer policy ideas for speeding up global health R&D; Comparing health R&D to public health interventions (like bed nets); Comparing the social returns to frontier R&D ('Progress Studies') to global health R&D; Why is there no GiveWell-equivalent for global health R&D?; Won't AI do all the R&D for us soon? You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fungal diseases: Health burden, neglectedness, and potential interventions, published by Rethink Priorities on September 4, 2024 on The Effective Altruism Forum. Editorial note This report is a "shallow" investigation, as described here, and was commissioned by Open Philanthropy and produced by Rethink Priorities from January to February 2023. We revised the report for publication. Open Philanthropy does not necessarily endorse our conclusions, nor do the organizations represented by those who were interviewed. Our report focuses on exploring fungal diseases as a potential new cause area for Open Philanthropy. We assessed the current and future health burden of fungal diseases, provided an overview of current interventions and the main gaps and barriers to address the burden, and discussed some plausible options for philanthropic spending. We reviewed the scientific and gray literature and spoke with five experts. While revising the report for publication, we learned of a new global burden study ( Denning et al., 2024) whose results show an annual incidence of 6.5 million invasive fungal infections, and 3.8 million total deaths from fungal diseases (2.5 million of which are "directly attributable" to fungal diseases). The study's results align with this report's estimate of annual 1.5 million to 4.6 million deaths (80% confidence) but were not considered in this report. We don't intend this report to be Rethink Priorities' final word on fungal diseases. We have tried to flag major sources of uncertainty in the report and are open to revising our views based on new information or further research. Executive summary While fungal diseases are very common and mostly mild, some forms are life-threatening and predominantly affect low- and middle-income countries (LMICs). The evidence base on the global fungal disease burden is poor, and estimates are mostly based on extrapolations from the few available studies. Yet, all experts we talked to agree that current burden estimates (usually stated as >1.7M deaths/year) likely underestimate the true burden. Overall, we think the annual death burden could be 1.5M - 4.6M (80% CI), which would exceed malaria and HIV/AIDS deaths combined.[1] Moreover, our best guess is that fungal diseases cause 8M - 49M DALYs (80% CI) per year, but this is based on our own back-of-the-envelope calculation of high-uncertainty inputs. Every expert we spoke with expects the burden to increase substantially in the future, though no formal estimates exist. We project that deaths and DALYs could grow to approximately 2-3 times the current burden until 2040, though this is highly uncertain. This will likely be partly due to a rise in antifungal resistance, which is especially problematic as few treatment classes exist and many fungal diseases are highly lethal without treatment. We estimate that only two diseases (chronic pulmonary aspergillosis [CPA] and candidemia/invasive candidiasis [IC/C]) account for ~39%-45% of the total death and DALY burden. Moreover, a single fungal pathogen (Aspergillus fumigatus) accounts for ~50% of the burden. Thus, much of the burden can be reduced by focusing on only a few of the fungal diseases or on a few pathogens. Available estimates suggest the top fungal diseases have highest burdens in Asia and LMICs, and that they most affect immunocompromised individuals. 
Fungal diseases seem very neglected in all areas we considered (research/R&D, advocacy/lobbying, philanthropic spending, and policy interventions) and receive little attention even in comparison to other diseases which predominantly affect LMICs. For example, we estimate the research funding/death ratio for malaria to be roughly 20 times higher than for fungal diseases. Moreover, fewer than 10 countries have national surveillance systems for fungal infections, an...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta Charity Funders: Launching the 3rd round, published by Joey on August 31, 2024 on The Effective Altruism Forum. Meta Charity Funders is a funding circle consisting of ~10 funders that funds meta initiatives and serves as an alternative to Open Philanthropy and EA Funds. We're launching the third round of Meta Charity Funders. Apply for funding by September 30th or join the circle as a donor. We expect all applicants to have read this post's twin post, "Meta Charity Funders: What you need to know when applying", to understand how to write your application. Focus of this round: We expect to fund many initiatives not on this list, but some projects that members of our circle have expressed extra interest in funding this round are: Ultra-high-net-worth-individual advising. However, we want to stress that we believe the skillset to do this well is rare, and these types of applications will be extra scrutinized. Effective Giving/Giving multiplier organizations, for example the ones incubated by CE's Effective Giving Incubation program. Career development programs that increase the number of individuals working in high-impact areas, including GCR reduction, animal welfare, and global health, especially in regions where there are currently fewer opportunities to engage in such programs. Information for this round. Process: The expected process is as follows: Applications open: August 30th. 100 words in the summary; this should give us a quick overview of the project. In the full project description, please include a main summarizing document no longer than 2 pages. This is all we can commit to reading in the first stage. Any extra material will only be read if we choose to proceed with your application. When choosing the "Meta" category, please be as truthful as possible. It's obvious (and reflects negatively on the application) when a project has deliberately been placed in a category in which it does not belong. Applications close: September 29th. Initial application review finishes: October 6th. If your project has been filtered out during the initial application review (which we expect will be the case for 60-80% of applications), we will let you know around the end of October. Interviews, due diligence, deliberations: October 7th - November 24th. If your application has passed the initial application review, we will discuss it during our gatherings, and we might reach out to you to gather more information, for example by conducting an interview. N.B. This is not a commitment to fund you. Decisions made: November 25th. We expect to pay out the grants in the weeks following November 25th. Historical support: You can get a sense of what we have supported in historical rounds by reading our posts on our first and second rounds. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Dr Oliver Kim completed his PhD at Berkeley and was recently appointed at Open Philanthropy. He does awesome research, carefully examining the drivers of structural transformation. We discussed: Why do you think East Asia is the only world region to have converged with the West? How have big data and computational tools changed our understanding of structural transformation? Oliver's website: https://oliverwkim.com/ His substack: https://www.global-developments.org/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Platinum Helps Draw Attention to Japan's Role in Global Health Funding, published by Open Philanthropy on August 27, 2024 on The Effective Altruism Forum. Japan spent more than $19.6 billion on overseas development assistance (ODA) in 2023, making it the third largest single-country donor behind the US and Germany. Open Philanthropy's Global Aid Policy (GAP) team, which is devoted to increasing government aid and guiding it toward more cost-effective approaches, believes there may be opportunities to increase the impact of this aid through targeted advocacy efforts. They estimate that in Western nations like the UK, for every $1,000 spent on ODA, aid advocacy funders spend around $2.60 attempting to support and inform its allocation. Meanwhile, in Japan, advocacy spending is a mere $0.25 for the same amount - more than 10 times less. Accordingly, the GAP program has prioritized work in Japan. The following case study highlights one grantee helping to drive this work forward. ***** One day in March 2023, in the district of Wayanad near India's southern tip, hundreds of villagers lined up for an uncommon service from an unexpected source: a check-up on their lung health, courtesy of Fujifilm. The Japanese company, best known for its cameras, was taking a different kind of picture. Its portable, 3.5 kg battery-powered X-ray machine, designed to deliver hospital-grade diagnostics, enables tuberculosis screenings in regions where medical facilities usually lack the necessary technology. This scene was just one stop on an illuminating trip to India for a group of Japanese journalists and youth activists. From Toyota Tsusho's Sakra World Hospital to Eisai's efforts to combat neglected tropical diseases (NTDs) in Yarada village, each site visit highlighted Japanese businesses and researchers contributing to global health initiatives. Recognizing this opportunity, Open Philanthropy supported Platinum, a Tokyo-based PR firm, in organizing a trip across India aimed at boosting the Japanese public's awareness of urgent global health issues, particularly tuberculosis and neglected tropical diseases (NTDs). Sixteen people attended: six journalists, representing outlets ranging from a long-running daily newspaper to a popular economics broadcast, and 10 youth activists sourced from PoliPoli's Reach Out Project, an Open Philanthropy-funded initiative that incubates charities focused on global health advocacy. Our Senior Program Officer for Global Aid Policy, Norma Altshuler, thought the initiative was timely given recent trends in Japan's ODA spending. Between 2019 and 2022, the share of Japanese ODA allocated to global health doubled (or tripled, including COVID-19 relief). To sustain this momentum, Open Philanthropy is supporting Japanese groups that aim to preserve or grow Japan's commitment to prioritizing global health initiatives. In a post-trip interview with Open Philanthropy, Soichi Murayama, who helped organize the trip, says one challenge of Japan's media landscape "is that Japanese media doesn't cover global health very often." Murayama attributes the dearth of dedicated coverage to limited reader interest, creating a feedback loop where minimal reporting leads to low awareness, which in turn reduces appetite for such stories. 
Ryota Todoroki, a medical student who participated in the trip, echoes this sentiment: "NTDs are often seen as a foreign issue with no relevance to Japan, so changing this perception is a major challenge." The Fujifilm initiative in Wayanad provides an example of how connecting Japanese companies to global health efforts can help illustrate the impact of foreign aid. This approach not only highlights Japan's technological contributions but also links economic interests with humanitarian efforts. To gauge the impact of awareness campaigns, PR pr...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy is hiring people to… help hire more people!, published by maura on August 17, 2024 on The Effective Altruism Forum. Open Philanthropy needs more recruiters. We'd love to see you apply, even if you've never worked in hiring before. "Recruiting" is sometimes used to narrowly refer to headhunting or outreach. At Open Phil, though, "recruiting" includes everything related to the hiring process. Our recruiting team manages the systems and people that take us from applications to offers. We design evaluations, interview candidates, manage stakeholders, etc.[1] We're looking for: An operations mindset. Recruiting is project management, first and foremost; we want people who can reliably juggle lots of balls without dropping them. Interpersonal skills. We want clear communicators with good people judgment. Interest in Open Phil's mission. This is an intentionally broad definition - see below! What you don't need: Prior recruiting experience. We'll teach you! To be well-networked or highly immersed in EA. You should be familiar with the areas Open Phil works in (such as global health and wellbeing and global catastrophic risks), but if you're wondering "Am I EA enough for this?", you almost certainly are. The job application will be posted to OP's website in coming weeks, but isn't there yet as of this post; we're starting with targeted outreach to high-context audiences (you!) before expanding our search to broader channels. If this role isn't for you but might be for someone in your network, please send them our way - we offer a reward if you counterfactually refer someone we end up hiring. 1. ^ The OP recruiting team also does headhunting and outreach, though, and we're open to hiring more folks to help with that work, too! If that sounds exciting to you, please apply to the current recruiter posting and mention an interest in outreach work in the "anything else" field. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding for work that builds capacity to address risks from transformative AI, published by GCR Capacity Building team (Open Phil) on August 14, 2024 on The Effective Altruism Forum. Post authors: Eli Rose, Asya Bergal Posting in our capacities as members of Open Philanthropy's Global Catastrophic Risks Capacity Building team. This program, together with our separate program for programs and events on global catastrophic risk, effective altruism, and other topics, has replaced our 2021 request for proposals for outreach projects. If you have a project which was in-scope for that program but isn't for either of these, you can apply to our team's general application instead. We think it's possible that the coming decades will see "transformative" progress in artificial intelligence, i.e., progress that leads to changes in human civilization at least as large as those brought on by the Agricultural and Industrial Revolutions. It is currently unclear to us whether these changes will go well or poorly, and we think that people today can do meaningful work to increase the likelihood of positive outcomes. To that end, we're interested in funding projects that: Help new talent get into work focused on addressing risks from transformative AI. Including people from academic or professional fields outside computer science or machine learning. Support existing talent in this field (e.g. via events that help build professional networks). Contribute to the discourse about transformative AI and its possible effects, positive and negative. We refer to this category of work as "capacity-building", in the sense of "building society's capacity" to navigate these risks. Types of work we've historically funded include training and mentorship programs, events, groups, and resources (e.g., blog posts, magazines, podcasts, videos), but we are interested in receiving applications for any capacity-building projects aimed at risks from advanced AI. This includes applications from both organizations and individuals, and includes both full-time and part-time projects. Apply for funding here. Applications are open until further notice and will be assessed on a rolling basis. We're interested in funding work to build capacity in a number of different fields which we think may be important for navigating transformative AI, including (but very much not limited to) technical alignment research, model evaluation and forecasting, AI governance and policy, information security, and research on the economics of AI. This program is not primarily intended to fund direct work in these fields, though we expect many grants to have both direct and capacity-building components - see below for more discussion. Categories of work we're interested in Training and mentorship programs These are programs that teach relevant skills or understanding, offer mentorship or professional connections for those new to a field, or provide career opportunities. These could take the form of fellowships, internships, residencies, visitor programs, online or in-person courses, bootcamps, etc. Some examples of training and mentorship programs we've funded in the past: BlueDot's online courses on technical AI safety and AI governance. MATS's in-person research and educational seminar programs in Berkeley, California. ML4Good's in-person AI safety bootcamps in Europe. 
We've previously funded a number of such programs in technical alignment research, and we're excited to fund new programs in this area. But we think other relevant areas may be relatively neglected - for instance, programs focusing on compute governance or on information security for frontier AI models. For illustration, here are some (hypothetical) examples of programs we could be interested in funding: A summer research fellowship for individuals with technical backgr...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding for programs and events on global catastrophic risk, effective altruism, and other topics, published by GCR Capacity Building team (Open Phil) on August 13, 2024 on The Effective Altruism Forum. Post authors: Eli Rose, Asya Bergal Posting in our capacities as members of Open Philanthropy's Global Catastrophic Risks Capacity Building team. Note: This program, together with our separate program for work that builds capacity to address risks from transformative AI, has replaced our 2021 request for proposals for outreach projects. If you have a project which was in-scope for that program but isn't for either of these, you can apply to our team's general application instead. Apply for funding here. Applications are open until further notice and will be assessed on a rolling basis. This is a wide-ranging call for applications, seeking to fund programs and events in a variety of areas of interest to Open Philanthropy - including effective altruism, global catastrophic risks, biosecurity, AI for epistemics, forecasting, and other areas. In general, if the topic of your program or event falls within one of our GCR focus areas, or if it's similar to work we've funded in the past in our GCR focus areas, it may be a good fit for this program. If you're unsure about whether to submit your application, we'd encourage you to err on the side of doing so. By "programs and events" we mean scholarship or fellowship programs, internships, residencies, visitor programs, courses[1], seminars, conferences, workshops, retreats, etc., including both in-person and online activities. We're open to funding programs or events aimed at individuals at any career stage, and with a wide range of potential purposes, including teaching new skills, providing new career opportunities, offering mentorship, or facilitating networking. Examples of programs and events of this type we've funded before include: Condor Camp, a summer program for Brazilian students interested in existential risk work. The Future of Humanity Institute's Research Scholars Program supporting early-career researchers in global catastrophic risk. Effective Altruism Global, a series of conferences for individuals interested in effective altruism. Future Forum, a conference aimed at bringing together members of several communities interested in emerging technology and the future. A workshop on using AI to improve epistemics, organized by academics from NYU, the Forecasting Research Institute, the AI Objectives Institute and Metaculus. AI-focused work We have a separate call up for work that builds societal capacity to address risks from transformative AI. If your program or event is focused on transformative AI and/or risks from transformative AI, we prefer you apply to that call instead. However, which call you apply to is unlikely to make a difference to the outcome of your application. Application information Apply for funding here. The application form asks for information about you, your project/organization (if relevant), and the activities you're requesting funding for. We're interested in funding both individual/one-off programs and events, and organizations that run or support programs and events. We expect to make most funding decisions within 8 weeks of receiving an application (assuming prompt responses to any follow-up questions we may have). 
You can indicate on our form if you'd like a more timely decision, though we may or may not be able to accommodate your request. Applications are open until further notice and will be assessed on a rolling basis. 1. ^ To apply for funding for the development of new university courses, please see our separate Course Development Grants RFP. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Provably Safe AI: Worldview and Projects, published by bgold on August 10, 2024 on LessWrong. In September 2023, Max Tegmark and Steve Omohundro proposed "Provably Safe AI" as a strategy for AI Safety. In May 2024, a larger group delineated the broader concept of "Guaranteed Safe AI" which includes Provably Safe AI and other related strategies. In July 2024, Ben Goldhaber and Steve discussed Provably Safe AI and its future possibilities, as summarized in this document. Background: In June 2024, ex-OpenAI AI Safety Researcher Leopold Aschenbrenner wrote a 165-page document entitled "Situational Awareness, The Decade Ahead" summarizing AI timeline evidence and beliefs which are shared by many frontier AI researchers. He argued that human-level AI is likely by 2027 and will likely lead to superhuman AI in 2028 or 2029. "Transformative AI" was coined by Open Philanthropy to describe AI which can "precipitate a transition comparable to the agricultural or industrial revolution". There appears to be a significant probability that Transformative AI may be created by 2030. If this probability is, say, greater than 10%, then humanity must immediately begin to prepare for it. The social changes and upheaval caused by Transformative AI are likely to be enormous. There will likely be many benefits but also many risks and dangers, perhaps even existential risks for humanity. Today's technological infrastructure is riddled with flaws and security holes. Power grids, cell service, and internet services have all been very vulnerable to accidents and attacks. Terrorists have attacked critical infrastructure as a political statement. Today's cybersecurity and physical security barely keep human attackers at bay. When these groups obtain access to powerful cyberattack AIs, they will likely be able to cause enormous social damage and upheaval. Humanity has known how to write provably correct and secure software since Alan Turing's 1949 paper. Unfortunately, proving program correctness requires mathematical sophistication and it is rare in current software development practice. Fortunately, modern deep learning systems are becoming proficient at proving mathematical theorems and generating provably correct code. When combined with techniques like "autoformalization," this should enable powerful AI to rapidly replace today's flawed and insecure codebase with optimized, secure, and provably correct replacements. Many researchers working in these areas believe that AI theorem-proving at the level of human PhDs is likely about two years away. Similar issues plague hardware correctness and security, and it will be a much larger project to replace today's flawed and insecure hardware. Max and Steve propose formal methods grounded in mathematical physics to produce provably safe physical designs. The same AI techniques which are revolutionizing theorem proving and provable software synthesis are also applicable to provable hardware design. Finally, today's social mechanisms like money, contracts, voting, and the structures of governance, will also need to be updated for the new realities of an AI-driven society. Here too, the underlying rules of social interaction can be formalized, provably effective social protocols can be designed, and secure hardware implementing the new rules synthesized using powerful theorem proving AIs. What's next? 
Given the huge potential risk of uncontrolled powerful AI, many have argued for a pause in Frontier AI development. Unfortunately, that does not appear to be a stable solution. Even if the US paused its AI development, China or other countries could gain an advantage by accelerating their own work. There have been similar calls to limit the power of open source AI models. But, again, any group anywhere in the world can release their powerful AI model weig...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wild Animal Initiative has urgent need for more funding and more donors, published by Cameron Meyer Shorb on August 6, 2024 on The Effective Altruism Forum. Our room for more funding is bigger and more urgent than ever before. Our organizational strategy will be responsive both to the total amount raised and to how many people donate, so smaller donors will have an especially high impact this year. Good Ventures recently decided to phase out funding for several areas (GV blog, EA Forum post), including wild animal welfare. That's a pretty big shock to our movement. We don't know what exactly the impact will be, except that it's complicated. The purpose of this post is to share what we know and how we're thinking about things - primarily to encourage people to donate to Wild Animal Initiative this year, but also for anyone else who might be interested in the state of the wild animal welfare movement more broadly. Summary Track record Our primary goal is to support the growth of a self-sustaining interdisciplinary research community focused on reducing wild animal suffering. Wild animal welfare science is still a small field, but we're really happy with the momentum it's been building. Some highlights of the highlights: We generally get a positive response from researchers (particularly in animal behavior science and ecology), who tend to see wild animal welfare as a natural extension of their interest in conservation (unlike EAs, who tend to see those two as conflicting with each other). Wild animal welfare is increasingly becoming a topic of discussion at scientific conferences, and was recently the subject of the keynote presentation at one. Registration for our first online course filled to capacity (50 people) within a few hours, and just as many people joined the waitlist over the next few days. Room for more funding This is the first year in which our primary question is not how much more we can do, but whether we can avoid major budget cuts over the next few years. We raised less in 2023 than we did in 2022, so we need to make up for that gap. We're also going to lose our biggest donor because Good Ventures is requiring Open Philanthropy to phase out their funding for wild animal welfare. Open Phil was responsible for about half of our overall budget. The funding from their last grant to us will last halfway through 2026, but we need to decide soon how we're going to adapt. To avoid putting ourselves back in the position of relying on a single funder, our upcoming budgeting decisions will depend on not only how much money we raise, but also how diversified our funding is. That means gifts from smaller donors will have an unusually large impact. (The less you normally donate, the more disproportionate your impact will be, but the case still applies to basically everyone who isn't a multi-million-dollar foundation.) Specifically, our goal is to raise $240,000 by the end of the year from donors giving $10k or less. Impact of marginal donations We're evaluating whether we need to reduce our budget to a level we can sustain without Open Philanthropy. The more we raise this year - and the more donors who pitch in to make that happen - the less we'll need to cut. Research grants and staff-associated costs make up the vast majority of our budget, so we'd need to make cuts in one or both of those areas. 
Donations would help us avoid layoffs and keep funding external researchers. What we've accomplished so far Background If you're not familiar with Wild Animal Initiative, we're working to accelerate the growth of wild animal welfare science. We do that through three interconnected programs: We make grants to scientists who take on relevant projects, we conduct our own research on high-priority questions, and we do outreach through conferences and virtual events. Strategy...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Open Philanthropy's AI governance and policy RFP, published by JulianHazell on July 17, 2024 on The Effective Altruism Forum. AI has enormous beneficial potential if it is governed well. However, in line with a growing contingent of AI (and other) experts from academia, industry, government, and civil society, we also think that AI systems could soon (e.g. in the next 15 years) cause catastrophic harm. For example, this could happen if malicious human actors deliberately misuse advanced AI systems, or if we lose control of future powerful systems designed to take autonomous actions.[1] To improve the odds that humanity successfully navigates these risks, we are soliciting short expressions of interest (EOIs) for funding for work across six subject areas, described below. Strong applications might be funded by Good Ventures (Open Philanthropy's partner organization), or by any of >20 (and growing) other philanthropists who have told us they are concerned about these risks and are interested to hear about grant opportunities we recommend.[2] (You can indicate in your application whether we have permission to share your materials with other potential funders.) As this is a new initiative, we are uncertain about the volume of interest we will receive. Our goal is to keep this form open indefinitely; however, we may need to temporarily pause accepting EOIs if we lack the staff capacity to properly evaluate them. We will post any updates or changes to the application process on this page. Anyone is eligible to apply, including those working in academia, nonprofits, industry, or independently.[3] We will evaluate EOIs on a rolling basis. See below for more details. If you have any questions, please email us. If you have any feedback about this page or program, please let us know (anonymously, if you want) via this short feedback form. 1. Eligible proposal subject areas We are primarily seeking EOIs in the following subject areas, but will consider exceptional proposals outside of these areas, as long as they are relevant to mitigating catastrophic risks from AI: Technical AI governance: Developing and vetting technical mechanisms that improve the efficacy or feasibility of AI governance interventions, or answering technical questions that can inform governance decisions. Examples include compute governance, model evaluations, technical safety and security standards for AI developers, cybersecurity for model weights, and privacy-preserving transparency mechanisms. Policy development: Developing and vetting government policy proposals in enough detail that they can be debated and implemented by policymakers. Examples of policies that seem like they might be valuable (but which typically need more development and debate) include some of those mentioned e.g. here, here, and here. Frontier company policy: Developing and vetting policies and practices that frontier AI companies could volunteer or be required to implement to reduce risks, such as model evaluations, model scaling "red lines" and "if-then commitments," incident reporting protocols, and third-party audits. See e.g. here, here, and here. International AI governance: Developing and vetting paths to effective, broad, and multilateral AI governance, and working to improve coordination and cooperation among key state actors. See e.g. here. 
Law: Developing and vetting legal frameworks for AI governance, exploring relevant legal issues such as liability and antitrust, identifying concrete legal tools for implementing high-level AI governance solutions, encouraging sound legal drafting of impactful AI policies, and understanding the legal aspects of various AI policy proposals. See e.g. here. Strategic analysis and threat modeling: Improving society's understanding of the strategic landscape around transformative ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on this $16.7M "AI safety" grant?, published by defun on July 16, 2024 on The Effective Altruism Forum. Open Philanthropy has recommended a total of $16.7M to the Massachusetts Institute of Technology to support research led by Neil Thompson on modeling the trends and impacts of AI and computing: 2020 - MIT - AI Trends and Impacts Research - $550,688; 2022 - MIT - AI Trends and Impacts Research - $13,277,348; 2023 - MIT - AI Trends and Impacts Research - $2,911,324. I've read most of their research, and I don't understand why Open Philanthropy thinks this is a good use of their money. Thompson's Google Scholar here. On Thompson's most cited paper, "The Computational Limits of Deep Learning" (2020), @gwern pointed out some flaws on Reddit. Thompson's latest paper, "A Model for Estimating the Economic Costs of Computer Vision Systems that use Deep Learning" (2024), has many limitations (as acknowledged by the author), and from an x-risks point of view it seems irrelevant. What do you think about Open Philanthropy recommending a total of $16.7M for this work? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on these $1M and $500k AI safety grants?, published by defun on July 12, 2024 on The Effective Altruism Forum. Open Philanthropy had a request for proposals for "benchmarking LLM agents on consequential real-world tasks". At least two of the grants went to professors who are developing agents (advancing capabilities). $1,045,620 grant: From https://www.openphilanthropy.org/grants/princeton-university-software-engineering-llm-benchmark/ Open Philanthropy recommended a grant of $1,045,620 to Princeton University to support a project to develop a benchmark for evaluating the performance of Large Language Model (LLM) agents in software engineering tasks, led by Assistant Professor Karthik Narasimhan. From Karthik Narasimhan's LinkedIn: "My goal is to build intelligent agents that learn to handle the dynamics of the world through experience and existing human knowledge (ex. text). I am specifically interested in developing autonomous systems that can acquire language understanding through interaction with their environment while also utilizing textual knowledge to drive their decision making." $547,452 grant: From https://www.openphilanthropy.org/grants/carnegie-mellon-university-benchmark-for-web-based-tasks/ Open Philanthropy recommended a grant of $547,452 to Carnegie Mellon University to support research led by Professor Graham Neubig to develop a benchmark for the performance of large language models conducting web-based tasks in the work of software engineers, managers, and accountants. Graham Neubig is one of the co-founders of All Hands AI, which is developing OpenDevin. All Hands AI's mission is to build AI tools to help developers build software faster and better, and do it in the open; its flagship project is OpenDevin, an open-source software development agent that can autonomously solve software development tasks end-to-end. Webinar: In the webinar when the RFPs were announced, Max Nadeau said (minute 19:00): "a lot of the time when you construct the benchmark you're going to put some effort into making the capable LLM agent that can actually demonstrate accurately what existing models are capable of, but for the most part we're imagining, for both our RFPs, the majority of the effort is spent on performing the measurement as opposed to like trying to increase performance on it". They were already aware that these grants would fund the development of agents and addressed this concern in the same webinar (minute 21:55). https://www.lesswrong.com/posts/7qGxm2mgafEbtYHBf/survey-on-the-acceleration-risks-of-our-new-rfps-to-study Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seven Philanthropic Wins: The Stories That Inspired Open Phil's Offices, published by Open Philanthropy on July 3, 2024 on The Effective Altruism Forum. Since our early days, we've studied the history of philanthropy to understand what great giving looks like. The lessons we learned made us more ambitious and broadened our view of philanthropy's potential. The rooms in our San Francisco office pay tribute to this legacy. Seven of them are named after philanthropic "wins" - remarkable feats made possible by philanthropic funders. In this post, we'll share the story behind each win. Green Revolution During the second half of the twentieth century, the Green Revolution dramatically increased agricultural production in developing countries like Mexico and India. At a time of rapid population growth, this boost in production reduced hunger, helped to avert famine, and stimulated national economies. The Rockefeller Foundation played a key role by supporting early research by Norman Borlaug and others to enhance agricultural productivity. Applications of this research - developed in collaboration with governments, private companies, and the Ford Foundation - sparked the Green Revolution, which is estimated to have saved a billion people from starvation. Read more about the Rockefeller Foundation's role in the Green Revolution in Political Geography. The Pill In 1960, the FDA approved "the pill", an oral contraceptive that revolutionized women's reproductive health by providing a user-controlled family planning option. This groundbreaking development was largely funded by Katharine McCormick, a women's rights advocate and one of MIT's first female graduates. In the early 1950s, McCormick collaborated with Margaret Sanger, the founder of Planned Parenthood, to finance critical early-stage research that led to the creation of the pill. Today, the birth control pill stands as one of the most common and convenient methods of contraception, empowering generations of women to decide when to start a family. For a comprehensive history of the pill, try Jonathan Eig's The Birth of the Pill. Sesame Street In 1967, the Carnegie Corporation funded a feasibility study on educational TV programming for children, which led to the creation of the Children's Television Workshop and Sesame Street. Sesame Street became one of the most successful television ventures ever, broadcast in more than 150 countries and the winner of more than 200 Emmy awards. Research monitoring the learning progress of Sesame Street viewers has demonstrated significant advances in early literacy. A deeper look into how philanthropy helped to launch Sesame Street is available here. Nunn-Lugar The Nunn-Lugar Act (1991), also known as the Cooperative Threat Reduction Program, was enacted in response to the collapse of the USSR and the dangers posed by dispersed weapons of mass destruction. US Senators Sam Nunn and Richard Lugar led the initiative, focusing on the disarmament and securing of nuclear, chemical, and biological weapons from former Soviet states. In the course of this work, thousands of nuclear weapons were deactivated or destroyed. 
The act's inception and success were largely aided by the strategic philanthropy of the Carnegie Corporation and the MacArthur Foundation, which funded research at Brookings on the "cooperative security" approach to nuclear disarmament and de-escalation. Learn more about the Nunn-Lugar Act and its connection to philanthropy in this paper. Marriage Equality The Supreme Court's landmark ruling in Obergefell v. Hodges granted same-sex couples the right to marry, marking the culmination of decades of advocacy and a sizable cultural shift toward acceptance. Philanthropic funders - including the Gill Foundation and Freedom to Marry, an organization initially funded by the Evelyn and Wa...
If you want to do good, and do not have unlimited funds, how do you choose? Which places, people, and situations are most deserving? Do you invest in economic benefits or lives saved? Open Philanthropy is an organisation that aims to rigorously optimise the impact of every dollar it spends. Emily Oehlsen tells Tim Phillips about its successes so far, and how it still sometimes gets it wrong.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High Impact Engineers is Transitioning to a Volunteer-Led Model, published by Jessica Wen on July 2, 2024 on The Effective Altruism Forum. Summary After over 2 years of operations, High Impact Engineers (HI-Eng) is reverting to a volunteer-led organisational model due to a middling impact outcome and a lack of funding. We wanted to thank all our subscribers, supporters, and contributors for being the driving force behind HI-Eng's achievements, which you can read about in our Impact Report. What is High Impact Engineers? High Impact Engineers (HI-Eng for short, pronounced high-enj) is an organisation dedicated to helping (physical - i.e. non-software) engineers increase their ability to have an outsized positive impact through their work. Why Is HI-Eng Winding Down? In December 2023, we sent out a community survey and solicited case studies and testimonials to evaluate our impact, which we wrote up in our Impact Report. As shown in the report, there is some evidence of behavioural and attitudinal changes in our members towards more impactful career outcomes due to interactions with our programmes, as well as some ongoing career transitions that we supported to some extent, but even after consultations with grantmakers and other community builders, we found it difficult to determine if this amount of impact would meet the bar for ongoing funding. As a result, we decided to (re-)apply for funding from the major EA funds (i.e. EAIF and Open Philanthropy), and they ended up deciding to not fund High Impact Engineers. Since our runway from the previous funding round was so short, we decided against trying to hire someone else to take over running HI-Eng, and the team is moving on to new opportunities. However, we still believe that engineers in EA are a valuable and persistently underserved demographic, and that this latent potential can be realised by providing a hub for engineers in EA to meet other like-minded engineers and find relevant resources. Therefore, we decided to maintain the most valuable and impactful programmes through the help of volunteers. Lessons Learnt There are already many resources available for new community builders (e.g. the EA Groups Resource Centre, this, this, this, and this EA Forum post, and especially this post by Sofia Balderson), so we don't believe that there is much we can add that hasn't already been said. However, here are some lessons we think are robustly good: 1. Having a funding cycle of 6 months is too short. 2. If you're looking to get set up and running quickly, getting a fiscal sponsor is great. We went with the Players Philanthropy Fund, but there are other options (including Rethink Priorities and maybe your national EA group). 3. Speak to other community builders, and ask for their resources! They're often more than happy to give you a copy of their systems, processes and documentation (minus personal data). 4. Pay for monthly subscriptions to software when setting up, even if it's cheaper to get an annual subscription. You might end up switching to a different software further down the line, and it's easier (and cheaper) to cancel a monthly subscription. 5. Email each of your subscriptions' customer service to ask for a non-profit discount (if you have non-profit status). They can save you up to 50% of the ticket price. 
(Jessica will write up her own speculative lessons learnt in a future forum post). What Will HI-Eng Look Like Going Forward? Jessica will continue managing HI-Eng as a volunteer, and is currently implementing the following changes in our programmes: Email newsletter: the final HI-Eng newsletter was sent in May. Future impactful engineering opportunities can be found on the 80,000 Hours job board or the EA Opportunities board. Any other impactful engineering jobs can be submitted to these boards ( submission...
Welcome to The Eric Ries Show. I sat down with Dustin Moskovitz, founder of not one but two iconic companies: Facebook and the collaborative work platform Asana. Needless to say, he's engaged in the most intense form of entrepreneurship there is. A huge part of what he's chosen to do with the hard-earned knowledge it gave him is dedicate himself and Asana to investing in employees' mental health, communication skills, and more. All of this matters to Dustin on a human level, but he also explains why putting people first is the only way to get the kind of results most founders can only dream of. We talked about how to get into that flow state, why preserving culture is crucial, his leadership style and how he decides when to be hands-on versus when to delegate, and how Asana reflects what he's learned about supporting people at all levels. Dustin sums up the work Asana does this way: “Our individual practices are meant to restore coherence for the individual, our team practices are meant to restore coherence for the team, and Asana, the system, is meant to try and do it for the entire organization.” I'm delighted to share our conversation, which also covers: • How he uses AI and views its future • Why he founded a collaboration platform • How he applied the lessons of Facebook to building Asana • Why taking care of your mental health as a founder is crucial for the company as a whole • His thoughts on the evolution of Facebook • The importance of alignment with investors • His philanthropic work • And so much more — Brought to you by: Mercury – The art of simplified finances. Learn more. DigitalOcean – The cloud loved by developers and founders alike. Sign up. Neo4j – The graph database and analytics leader. Learn more. — Where to find Dustin Moskovitz: • LinkedIn: https://www.linkedin.com/in/dmoskov/ • Threads: https://www.threads.net/@moskov • Asana: https://asana.com/leadership#moskovitz Where to find Eric: • Newsletter: https://ericries.carrd.co/ • Podcast: https://ericriesshow.com/ • X: https://twitter.com/ericries • LinkedIn: https://www.linkedin.com/in/eries/ • YouTube: https://www.youtube.com/@theericriesshow — In This Episode We Cover: (00:00) Welcome to the Eric Ries Show (00:31) Meet our guest Dustin Moskovitz (04:02) How Dustin is using AI for creative projects (05:31) Dustin talks about the social media and SaaS era and his Facebook days (06:52) How Facebook has evolved from its original intention (10:27) The founding of Asana (14:35) Building entrepreneurial confidence (19:22) Making – and fixing – design errors at Asana (20:32) The importance of committing to “soft” values. 
(25:27) Short-term profit over people and terrible advice from VCs (28:44) Crypto as a caricature of extractive behavior (30:47) The positive impacts of doing things with purpose (34:24) How Asana is ensuring its purpose and mission are permanently enshrined in the company (41:35) Battling entropy and meeting culture (44:31) Being employee-centric, the flow state, and Asana's strategy (47:51) The organizational equivalent of repressing emotions (52:57) Dustin as a Cassandra (56:51) Dustin talks about his philanthropic work and philosophy: Open Philanthropy, Good Ventures (1:02:05) Dustin's thoughts on AI and its future (1:07:20) Ethics, calculated risk, and thinking long-term — Referenced: Asana: https://asana.com/ Conscious Leadership Group: https://conscious.is/ Ben Horowitz on managing your own psychology: https://a16z.com/whats-the-most-difficult-ceo-skill-managing-your-own-psychology/ The Infinite Game, by Simon Sinek Dr. John Sarno The 15 Commitments of Conscious Leadership Awareness: Conversations with the Masters, by Anthony de Mello Brené Brown: Dare to Lead , The Call to Courage (Netflix trailer) Open Philanthropy Good Ventures GiveWell — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email jordan@penname.co Eric may be an investor in the companies discussed.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice for EA org staff and EA group organisers interacting with political campaigns, published by Catherine Low on June 17, 2024 on The Effective Altruism Forum. Compiled by CEA's Community Health team. 2024 is the biggest year for elections in history(!), and while many of these elections have passed, some important elections are upcoming, including the UK and US elections, providing a potentially large opportunity to have an impact through political change. This post is intended: 1. To make it easier for EA group organisers and organisation staff to adhere to the laws in relevant countries; 2. And more generally, to help the community be able to take high impact actions now and in the future by reducing risks of polarisation of EA and the cause areas we care about. Two main concerns: legal risks, and risks around polarisation and epistemics. Legal risks: Charities and organisations associated with/funded by charities have constraints on what political activities they can do. See "More about legal risks." Note: This post is not legal advice. Our team is employed by US and UK charities (Effective Ventures US and UK). So, we have a little familiarity with the legal situations for groups/organisations that are based in the US or UK (many EA organisations), and groups/organisations that are funded by charities in the US or UK (even more EA groups and organisations). We have very little knowledge about the legal situation relating to other countries. It could be useful for groups/orgs in any country (including US and UK) to get independent legal advice. Risks around polarisation and epistemics: These risks include: EA becoming more associated with specific parties or parts of the political spectrum, in a way that makes EAs less able to collaborate with others; the issues EA works on becoming polarised / associated with a specific party; EA falling into lower standards of reasoning, honesty, etc. through feeling a need to compete in political arenas where good epistemics are not valued as highly; and creating suspicion about whether EAs are primarily motivated by seeking power rather than doing the most good. Of course, the upside of doing political work could be extremely high. So our recommendation isn't for EAs to stop doing political work, but to be very careful to think through risks when choosing your actions. Some related ideas about the risks of polarisation and political advocacy: 1. Climate change policy and politics in the US; 2. Lesson 7: Even among EAs, politics might somewhat degrade our typical epistemics and rigor; 3. To Oppose Polarization, Tug Sideways; 4. Politics on the EA Forum. More about legal risks: If your group/organisation is a charity or is funded by a charity: In many (or maybe all?) places, charities or organisations funded by charities are NOT allowed to engage in political campaigning. E.g. US: U.S. 501(c)(3) public charities are prohibited from "intervening in political campaigns" (more detail). This includes organisations that are funded by US 501(c)(3) charities (including Open Philanthropy's charitable arm, and Effective Ventures (which hosts EA Funds and CEA)).
This includes: financial support for a campaign, including reimbursing costs for people to engage in volunteer activities endorsing or disapproving of a candidate; referring to a candidate's characteristics or qualifications for office - in writing, speaking, mentions on the website, podcasts, etc. (language that could appear partisan, like stating "holding elected officials accountable", could also imply disapproval); taking action to help or hurt the chances of a candidate (this can be problematic even if you or your charity didn't intend to help or hurt the candidate); and staff taking political action that's seen as representing the organisation they work for, e.g. attending rallies or door knocking as ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] An update from Good Ventures, published by Alexander Berger on June 14, 2024 on The Effective Altruism Forum. I wanted to share this update from Good Ventures (Cari and Dustin's philanthropy), which seems relevant to the EA community. Tl;dr: "while we generally plan to continue increasing our grantmaking in our existing focus areas via our partner Open Philanthropy, we have decided to exit a handful of sub-causes (amounting to less than 5% of our annual grantmaking), and we are no longer planning to expand into new causes in the near term by default." A few follow-ups on this from an Open Phil perspective: I want to apologize to directly affected grantees (who've already been notified) for the negative surprise here, and for our part in not better anticipating it. While this represents a real update, we remain deeply aligned with Good Ventures (they're expecting to continue to increase giving via OP over time), and grateful for how many of the diverse funding opportunities we've recommended that they've been willing to tackle. An example of a new potential focus area that OP staff had been interested in exploring that Good Ventures is not planning to fund is research on the potential moral patienthood of digital minds. If any readers are interested in funding opportunities in that space, please reach out. Good Ventures has told us they don't plan to exit any overall focus areas in the near term. But this update is an important reminder that such a high degree of reliance on one funder (especially on the GCR side) represents a structural risk. I think it's important to diversify funding in many of the fields Good Ventures currently funds, and that doing so could make the funding base more stable both directly (by diversifying funding sources) and indirectly (by lowering the time and energy costs to Good Ventures from being such a disproportionately large funder). Another implication of these changes is that going forward, OP will have a higher bar for recommending grants that could draw on limited Good Ventures bandwidth, and so our program staff will face more constraints in terms of what they're able to fund. We always knew we weren't funding every worthy thing out there, but that will be even more true going forward. Accordingly, we expect marginal opportunities for other funders to look stronger going forward. Historically, OP has been focused on finding enough outstanding giving opportunities to hit Good Ventures' spending targets, with a long-term vision that once we had hit those targets, we'd expand our work to support other donors seeking to maximize their impact. We'd already gotten a lot closer to GV's spending targets over the last couple of years, but this update has accelerated our timeline for investing more in partnerships and advising other philanthropists. If you're interested, please consider applying or referring candidates to lead our new partnerships function. And if you happen to be a philanthropist looking for advice on how to invest >$1M/year in new cause areas, please get in touch. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Takeoff speeds presentation at Anthropic, published by Tom Davidson on June 4, 2024 on The AI Alignment Forum. This is a lightly edited transcript of a presentation that I (Tom Davidson) gave about the risks of a fast takeoff at Anthropic in September 2023. See also the video recording, or the slides. None of the content necessarily reflects the views of Anthropic or anyone who works there. Summary: Software progress - improvements in pre-training algorithms, data quality, prompting strategies, tooling, scaffolding, and all other sources of AI progress other than compute - has been a major driver of AI progress in recent years. I guess it's driven about half of total progress in the last 5 years. When we have "AGI" (=AI that could fully automate AI R&D), the pace of software progress might increase dramatically (e.g. by a factor of ten). Bottlenecks might prevent this - e.g. diminishing returns to finding software innovations, retraining new AI models from scratch, or computationally expensive experiments for finding better algorithms. But no bottleneck is decisive, and there's a real possibility that there is a period of dramatically faster capabilities progress despite all of the bottlenecks. This period of accelerated progress might happen just when new extremely dangerous capabilities are emerging and previously-effective alignment techniques stop working. A period of accelerated progress like this could significantly exacerbate risks from misuse, societal disruption, concentration of power, and loss of control. To reduce these risks, labs should monitor for early warning signs of AI accelerating AI progress. In particular they can: track the pace of software progress to see if it's accelerating; run evals of whether AI systems can autonomously complete challenging AI R&D tasks; and measure the productivity gains to employees who use AI systems in their work via surveys and RCTs. Labs should implement protective measures by the time these warning signs occur, including external oversight and info security. Intro Hi everyone, really great to be here. My name's Tom Davidson. I work at Open Philanthropy as a Senior Research Analyst and a lot of my work over the last couple of years has been around AI take-off speeds and the possibility that AI systems themselves could accelerate AI capabilities progress. In this talk I'm going to talk a little bit about that research, and then also about some steps that I think labs could take to reduce the risks caused by AI accelerating AI progress. Ok, so here is the brief plan. I'm going to quickly go through some recent drivers of AI progress, which will set the scene to discuss how much AI progress might accelerate when we get AGI. Then the bulk of the talk will be focused on what risks there might be if AGI does significantly accelerate AI progress - which I think is a real possibility - and how labs can reduce those risks. Software improvements have been a significant fraction of recent AI progress So drivers of progress. It's probably very familiar to many people that the compute used to train the most powerful AI models has increased very quickly over recent years. According to Epoch's accounting, about 4X increase per year. 
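To make the compounding concrete, here is a toy calculation, not Epoch's accounting or the talk's model: the 4x/year training-compute figure is the one cited above, while the software (algorithmic) rate below is a made-up placeholder chosen so the two drivers contribute roughly equally, in the spirit of the "about half of total progress" guess.

```python
# Toy illustration (not Epoch's accounting): hardware scaling and software
# (algorithmic) gains multiply into "effective compute". The 4x/year figure for
# physical training compute is cited above; the software rate is a placeholder.
import math

hardware_growth = 4.0   # x per year, physical training compute (cited in the talk)
software_growth = 4.0   # x per year, algorithmic efficiency (illustrative assumption)
years = 5

effective_per_year = hardware_growth * software_growth      # the gains multiply
total_effective = effective_per_year ** years

# Each driver's share of (log) effective-compute growth
hw_share = math.log(hardware_growth) / math.log(effective_per_year)
sw_share = math.log(software_growth) / math.log(effective_per_year)

print(f"Effective compute after {years} years: ~{total_effective:,.0f}x")
print(f"Hardware share: {hw_share:.0%}, software share: {sw_share:.0%}")
```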
Efficiency improvements in pre-training algorithms are a significant driver of AI progress What I want to highlight from this slide is that algorithmic efficiency improvements - improved algorithms that allow you to train equally capable models with less compute than before - have also played a kind of comparably important role. According to Epoch's accounting, these algorithmic efficiency improvements account for more than half of the gains from compute. That's going to be important later because when we're talking about how much the pace of progress might accelerate, today's fast algorithmic progress...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I bet Greg Colbourn 10 k€ that AI will not kill us all by the end of 2027, published by Vasco Grilo on June 4, 2024 on The Effective Altruism Forum. Agreement 78 % of my donations so far have gone to the Long-Term Future Fund[1] (LTFF), which mainly supports AI safety interventions. However, I have become increasingly sceptical about the value of existential risk mitigation, and currently think the best interventions are in the area of animal welfare[2]. As a result, I realised it made sense for me to arrange a bet with someone very worried about AI in order to increase my donations to animal welfare interventions. Gregory Colbourn (Greg) was the 1st person I thought of. He said: I think AGI [artificial general intelligence] is 0-5 years away and p(doom|AGI) is ~90% I doubt doom in the sense of human extinction is anywhere as likely as suggested by the above. I guess the annual extinction risk over the next 10 years is 10^-7, so I proposed a bet to Greg similar to the end-of-the-world bet between Bryan Caplan and Eliezer Yudkowsky. Meanwhile, I transferred 10 k€ to PauseAI[3], which is supported by Greg, and he agreed to the following. If Greg or any of his heirs are still alive by the end of 2027, they transfer to me or an organisation of my choice 20 k€ times the ratio between the consumer price index for all urban consumers and items in the United States, as reported by the Federal Reserve Economic Data (FRED), in December 2027 and April 2024. I expect inflation in this period, i.e. a ratio higher than 1. Some more details: The transfer must be made in January 2028. I will decide in December 2027 whether the transfer should go to me or an organisation of choice. My current preference is for it to go directly to an organisation, such that 10 % of it is not lost in taxes. If for some reason I am not able to decide (e.g. if I die before 2028), the transfer must be made to my lastly stated organisation of choice, currently The Humane League (THL). As Founders Pledge's Patient Philanthropy Fund, I have my investments in Vanguard FTSE All-World UCITS ETF USD Acc. This is an exchange-traded fund (ETF) tracking global stocks, which have provided annual real returns of 5.0 % since 1900. In addition, Lewis Bollard expects the marginal cost-effectiveness of Open Philanthropy's (OP's) farmed animal welfare grantmaking "will only decrease slightly, if at all, through January 2028"[4], so I suppose I do not have to worry much about donating less over the period of the bet of 3.67 years (= 2028 + 1/12 - (2024 + 5/12)). Consequently, I think my bet is worth it if its benefit-to-cost ratio is higher than 1.20 (= (1 + 0.050)^3.67). It would be 2 (= 20*10^3/(10*10^3)) if the transfer to me or an organisation of my choice was fully made, and Person X fulfils the agreement, so I need 60 % (= 1.20/2) of the transfer to be made and agreement with Person X to be fulfilled. I expect this to be the case based on what I know about Greg and Person X, and information Greg shared, so I went ahead with the bet. Here are my and Greg's informal signatures: Me: Vasco Henrique Amaral Grilo. Greg: Gregory Hamish Colbourn. Impact I expect 90 % of the potential benefits of the bet to be realised. So I believe the bet will lead to additional donations of 8 k€ (= (0.9*20 - 10)*10^3). 
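The bet arithmetic above is compact enough to check directly. Here is a minimal sketch (Python, not from the post itself) that reproduces the quoted figures: the ~3.67-year horizon, the 1.20 benefit-to-cost threshold at 5% annual real returns, the ~60% required fulfilment, and the expected 8 k€ of additional donations.

```python
# Minimal sketch reproducing the bet arithmetic quoted above (for checking only).

payout = 20_000          # EUR owed (before inflation adjustment) in Jan 2028 if no AI doom
stake = 10_000           # EUR transferred to PauseAI now
annual_real_return = 0.05
horizon_years = (2028 + 1/12) - (2024 + 5/12)                # ~3.67 years

# Donating the payout later must beat investing the stake at 5%/year real returns
required_ratio = (1 + annual_real_return) ** horizon_years   # ~1.20

full_payout_ratio = payout / stake                           # 2.0
required_fulfilment = required_ratio / full_payout_ratio     # ~0.60

expected_fulfilment = 0.9
expected_extra_donations = expected_fulfilment * payout - stake   # 8,000 EUR

print(f"horizon: {horizon_years:.2f} y, threshold: {required_ratio:.2f}, "
      f"needed fulfilment: {required_fulfilment:.0%}, "
      f"expected extra donations: {expected_extra_donations:,.0f} EUR")
```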
Saulius estimated corporate campaigns for chicken welfare improve 41 chicken-years per $, and OP thinks "the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius' analysis", which means my donations will affect 8.20 chicken-years per $ (= 41/5). Therefore I expect my bet to improve 65.6 k chicken-years (= 8*10^3*8.20). I also estimate corporate campaigns for chicken welfare have a cost-effectiveness of 14.3 DALY/$[5]. So I expect the benefits of the bet to be equiv...
It is pretty obvious that the Peace Corps. was a unique intelligence-gathering operation, but most people have not heard how it was established and who financed the operation over the decades. There is also a pipeline of talent for NGOs and spook agencies that runs through the Peace Corps., making it hidden in plain sight. The other fake do-gooder organization with questionable partnerships and sketchy financing is Greenpeace and its ties to John Podesta, George Soros, Bill Gates, and Planned Parenthood. It appears that they intend to preserve the planet through depopulation, which would explain their financial support from known eugenics operations and anti-humanity NGOs, such as the Tides Foundation, Open Philanthropy, and the World Wildlife Fund.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An invitation to the Berlin EA co-working space TEAMWORK, published by Johanna Schröder on May 24, 2024 on The Effective Altruism Forum. TL;DR TEAMWORK, a co-working and event space in Berlin run by Effektiv Spenden, is available for use by the Effective Altruism (EA) community. We offer up to 15 desk spaces in a co-working office for EA professionals and a workshop and event space for a broad range of EA events, all free of charge at present (and at least for the rest of 2024). A lot has changed since the space was established in 2021. After a remodeling project in September last year, there has been a notable improvement in the acoustics and soundproofing, leading to a more focused and productive work environment. Apply here if you would like to join our TEAMWORK community. What TEAMWORK offers TEAMWORK is a co-working space focused on EA professionals operated by Effektiv Spenden and located in Berlin. Following a remodeling project in fall 2023, we were able to improve the acoustics and soundproofing significantly, fostering a more conducive atmosphere for focused work. Additionally, we transformed one of our co-working rooms into a workshop space, providing everything necessary for productive collaboration and gave our meeting room a makeover with modern new furniture, ensuring a professional setting for discussions and presentations. Our facilities include: Co-working Offices: One large office with 11 desks and a smaller office with four desks. The smaller office is also bookable for team retreats or team co-working, while the big office can be transformed into an event space for up to 40 people. Workshop Room: "Flamingo Paradise" serves as a workshop room with a big sofa, a large desk, a flip chart, and a pin board. It can also be used as a small event space, complete with a portable projector. When not in use for events, it functions as a chill and social area. Meeting Room: A meeting room for up to four people (max capacity six people). Can also be used for calls. Phone Booths: Four private phone booths. In addition to that also the "Flamingo Paradise" and the Meeting room can be used to take a call. Community Kitchen: A kitchen with free coffee and tea. We have a communal lunch at 1 pm where members can either bring their own meals or go out to eat. Berlin as an EA Hub Berlin is home to a vibrant and growing (professional) EA community, making it one of the biggest EA hubs in continental Europe. It is also home of Effektiv Spenden, Germany's effective giving organization, that is hosting this space. Engaging with this dynamic community provides opportunities for collaboration and networking with like-minded individuals. Additionally, working from Berlin could offer a change of scene maybe enhancing your productivity and inspiration (particularly in spring and summer). Join Our Community Our vision is to have a space where people from the EA Community can not only work to make the world a better place, but can also informally engage with other members of the community during coffee breaks, lunch or at community events. Many of the EA meetups organized by the EA Berlin community take place at TEAMWORK. You can find more information on how to engage with the EA Berlin community here. People in the TEAMWORK community are working on various cause areas. 
Our members represent a range of organizations, including Founders Pledge, Future Matters, Open Philanthropy, and Kooperation Global. We frequently host international visitors from numerous EA-aligned organizations such as Charity Entrepreneurship, the Center for Effective Altruism, the Good Food Institute, Future Cleantech Architects, and the Center for the Governance of AI. Additionally, organizations like EA Germany, the Fish Welfare Initiative, One for the World, and Allfed have utilized our space for team re...
"Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I'm just making this up — but we give people superforecasting tests when they're doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we're making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we're a year ahead of where we would have been if we hadn't done this kind of stuff."Now, suppose in 10 years we're going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we've brought that forward, and that happens at year nine instead of year 10 because of some of these interventions we did, now we start to think that if that's really bad, if these people using this technology causes huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster." —Matt ClancyIn today's episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy's Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress.Links to learn more, highlights, and full transcript.They cover:Whether scientific progress is actually net positive for humanity.Scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors.Why Matt thinks metascience research and targeted funding could improve the scientific process and better incentivise outcomes that are good for humanity.Whether Matt trusts domain experts or superforecasters more when estimating how the future will turn out.Why Matt is sceptical that AGI could really cause explosive economic growth.And much more.Chapters:Is scientific progress net positive for humanity? (00:03:00)The time of biological perils (00:17:50)Modelling the benefits of science (00:25:48)Income and health gains from scientific progress (00:32:49)Discount rates (00:42:14)How big are the returns to science? (00:51:08)Forecasting global catastrophic biological risks from scientific progress (01:05:20)What's the value of scientific progress, given the risks? (01:15:09)Factoring in extinction risk (01:21:56)How science could reduce extinction risk (01:30:18)Are we already too late to delay the time of perils? (01:42:38)Domain experts vs superforecasters (01:46:03)What Open Philanthropy's Innovation Policy programme settled on (01:53:47)Explosive economic growth (02:06:28)Matt's favourite thought experiment (02:34:57)Producer and editor: Keiran HarrisAudio engineering lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
The idea of "decarbonizing" the world is laughable and insane due to the obvious impossibility of the task, but many of the most powerful and wealthy institutes and foundations have invested billions to try to do the impossible. Or at least, try to convince you that they are. The manipulation of data, behaviors, and emotions with regard to the climate change grift is set to be pushed to the public for the next decade through groups such as Climate Central, Open Philanthropy, ClimateWorks Foundation, and the World Resources Institute. Their mission statements might sound altruistic, but depopulation is always just below the surface.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scorable Functions: A Format for Algorithmic Forecasting, published by Ozzie Gooen on May 22, 2024 on The Effective Altruism Forum. Introduction Imagine if a forecasting platform had estimates for things like: 1. "For every year until 2100, what will be the probability of a global catastrophic biological event, given different levels of biosecurity investment and technological advancement?" 2. "What will be the impact of various AI governance policies on the likelihood of developing safe and beneficial artificial general intelligence, and how will this affect key indicators of global well-being over the next century?" 3. "How valuable is every single project funded by Open Philanthropy, according to a person with any set of demographic information, if they would spend 1000 hours reflecting on it?" These complex, multidimensional questions are useful for informing decision-making and resource allocation around effective altruism and existential risk mitigation. However, traditional judgemental forecasting methods often struggle to capture the nuance and conditionality required to address such questions effectively. This is where "scorable functions" come in - a forecasting format that allows forecasters to directly submit entire predictive models rather than just point estimates or simple probability distributions. Scorable functions allow encoding a vast range of relationships and dependencies, from basic linear trends to intricate nonlinear dynamics. Forecasters can precisely specify interactions between variables, the evolution of probabilities over time, and how different scenarios could unfold. At their core, scorable functions are executable models that output probabilistic predictions and can be directly scored via function calls. They encapsulate the forecasting logic, whether it stems from human judgment, data-driven insights, or a hybrid of the two. Scorable functions can span from concise one-liners to elaborate constructs like neural networks. Over the past few years, we at QURI have been investigating how to effectively harness these methods. We believe scorable functions could be a key piece of the forecasting puzzle going forward. From Forecast Bots to Scorable Functions Many people are familiar with the idea of using "bots" to automate forecasts on platforms like Metaculus. Let's consider a simple example to see how scorable functions can extend this concept. Suppose there's a binary question on Metaculus: "Will event X happen in 2024?" Intuitively, the probability should decrease as 2024 progresses, assuming no resolution. A forecaster might start at 90% in January, but want to gradually decrease to 10% by December. One approach is to manually update the forecast each week - a tedious process. A more efficient solution is to write a bot that submits forecasts based on a simple function: (Example using Squiggle, but hopefully it's straightforward enough) This bot can automatically submit daily forecasts via the Metaculus API. However, while more efficient than manual updates, this approach has several drawbacks: 1. The platform must store and process a separate forecast for each day, even though they all derive from a simple function. 2. Viewers can't see the full forecast trajectory, only the discrete submissions. 3. The forecaster's future projections and scenario contingencies are opaque. 
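The Squiggle example referenced above does not survive the text-to-speech narration. As a rough stand-in (Python rather than Squiggle; the January 90% and December 10% endpoints come from the worked example, everything else is assumed), the bot's generating function might look something like this:

```python
# Rough stand-in for the generating function described above (Python, not the
# post's Squiggle). The 90%-to-10% decay over 2024 is taken from the text; the
# linear shape and exact dates are illustrative assumptions.

from datetime import date

def p_event_in_2024(today: date) -> float:
    """Probability that event X happens in 2024, decaying linearly
    from 0.90 on Jan 1 to 0.10 on Dec 31, absent resolution."""
    start, end = date(2024, 1, 1), date(2024, 12, 31)
    frac = (today - start).days / (end - start).days
    frac = min(max(frac, 0.0), 1.0)          # clamp outside the year
    return 0.90 + (0.10 - 0.90) * frac

# A bot would evaluate this daily and submit the result through the platform's
# API; a scorable function submits the function itself instead.
print(round(p_event_in_2024(date(2024, 7, 1)), 3))   # ~0.5 at mid-year
```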
Scorable functions elegantly solve these issues. Instead of a bot submitting individual forecasts, the forecaster simply submits the generating function itself. You can imagine there being a custom input box directly in Metaculus. The function submitted would be the same, though it might be provided as a lambda function or with a standardized function name. The platform can then evaluate this function on-demand to generate up-to-date forecasts. Viewers see the comp...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear security seems like an interesting funding gap, published by Benjamin Todd on May 14, 2024 on The Effective Altruism Forum. Here's the funding gap that gets me the most emotionally worked up: In 2020, the largest philanthropic funder of nuclear security, the MacArthur Foundation, withdrew from the field, reducing total annual funding from $50m to $30m. That means people who've spent decades building experience in the field will no longer be able to find jobs. And $30m a year of philanthropic funding for nuclear security philanthropy is tiny on conventional terms. (In fact, the budget of Oppenheimer was $100m, so a single movie cost more than 3x annual funding to non-profit policy efforts to reduce nuclear war.) And even other neglected EA causes, like factory farming, catastrophic biorisks and AI safety, these days receive hundreds of millions of dollars of philanthropic funding, so at least on this dimension, nuclear security is even more neglected. I agree that a full accounting of neglectedness should consider all resources going towards the cause (not just philanthropic ones), and that 'preventing nuclear war' more broadly receives significant attention from defence departments. However, even considering those resources, it still seems similarly neglected as biorisk. And the amount of philanthropic funding still matters because certain important types of work in the space can only be funded by philanthropists (e.g. lobbying or other policy efforts you don't want to originate within a certain national government). All this is happening exactly as nuclear risk seems to be increasing. There are credible reports that Russia considered the use of nuclear weapons against Ukraine in autumn 2022. China is on track to triple its arsenal. North Korea has at least 30 nuclear weapons. More broadly, we appear to be entering an era of more great power conflict and potentially rapid destabilising technological change, including through advanced AI and biotechnology. The Future Fund was going to fill this gap with ~$10m per year. Longview Philanthropy hired an experienced grantmaker in the field, Carl Robichaud, as well as Matthew Gentzel. The team was all ready to get started. But the collapse of FTX meant that didn't materialise. Moreover, Open Philanthropy decided to raise their funding bar, and focus on AI safety and biosecurity, so it hasn't stepped in to fill it either. Longview's program was left with only around $500k to allocate on Nuclear Weapons Policy in 2023, and has under $1m on hand now. Giving Carl and Matthew more like $3 million (or more) a year seems like an interesting niche that a group of smaller donors could specialise in. This would allow them to pick the low hanging fruit among opportunities abandoned by MacArthur - as well as look for new opportunities, including those that might have been neglected by the field to date. I agree it's unclear how tractable policy efforts are here, and I haven't looked into specific grants, but it still seems better to me to have a flourishing field of nuclear policy than not. I'd suggest talking to Carl about the specific grants they see at the margin (carl@longview.org). I'm also not sure, given my worldview, that this is even more effective than funding AI safety or biosecurity, so I don't think Open Philanthropy is obviously making a mistake by not funding it. 
But I do hope someone in the world can fill this gap. I'd expect it to be most attractive to someone who's more sceptical about AI safety, but agrees the world underrates catastrophic risks (or reduce the chance of all major cities blowing up for common sense reasons). It could also be interesting as something that's getting less philanthropic attention than AI safety, and as something a smaller donor could specialise in and play an important role in. If...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Cool Things Our GHW Grantees Have Done in 2023" - Open Philanthropy, published by Lizka on May 13, 2024 on The Effective Altruism Forum. Open Philanthropy[1] recently shared a blog post with a list of some cool things accomplished in 2023 by grantees of their Global Health and Wellbeing (GHW) programs (including farm animal welfare). The post "aims to highlight just a few updates on what our grantees accomplished in 2023, to showcase their impact and make [OP's] work a little more tangible." I'm link-posting because I found it valuable to read about these projects, several of which I hadn't heard of. And I like that despite its brevity, the post manages to include a lot of relevant information (and links), along with explanations of the key relevant theories of change and opportunity. For people who don't want to click through to the post itself, I'm including an overview of what's included and a selection of excerpts below. Overview The post introduces each program with a little blurb, and then provides 1-2 examples of projects and one of their updates from 2023. Here's the table of contents: 1. Global Public Health Policy 1. Dr. Sachchida Tripathi (air quality sensors) 2. Lead Exposure Elimination Project (LEEP) 2. Global Health R&D 1. Cures Within Reach 2. SAVAC 3. Scientific Research 1. Dr. Caitlin Howell (catheters) 2. Dr. Allan Basbaum (pain research) 4. Land Use Reform 1. Sightline Institute 5. Innovation Policy 1. Institute for Progress 2. Institute for Replication 6. Farm Animal Welfare 1. Open Wing Alliance 2. Aquaculture Stewardship Council 7. Global Aid Policy 1. PoliPoli 8. Effective Altruism (Global Health and Wellbeing) 1. Charity Entrepreneurship 9. How you can support our grantees Examples/excerpts from the post I've chosen some examples (pretty arbitrarily - I'm really excited about many of the other examples, but wanted to limit myself here), and am including quotes from the original post. 1.1 Dr. Sachchida Tripathi (air quality sensors) Sachchida Tripathi is a professor at IIT Kanpur, one of India's leading universities, where he focuses on civil engineering and sustainable energy. Dr. Tripathi used an Open Philanthropy grant to purchase 1,400 low-cost air quality sensors and place them in every block[2] in rural Uttar Pradesh and Bihar. Using low-cost sensors involved procuring and calibrating them (see photo). These sensors now provide much more accurate and reliable data for these rural areas than was previously available to the air quality community. This work has two main routes to impact. First, these sensors make the problem of rural air pollution legible. Because air quality in India is assumed to be a largely urban issue, most ground-based sensors are in urban areas. Second, proving the value of these low-cost sensors and getting operational experience can encourage buy-in from stakeholders (e.g., local governments) who may fund additional sensors or other air quality interventions. Air quality monitoring is a major theme of our South Asian Air Quality grantmaking. We are actively exploring opportunities in new geographic areas, both within and beyond India, without high-quality, ground-based monitoring. Santosh Harish, who leads our grantmaking on environmental health, recently spoke to the 80,000 Hours podcast about this grant as well as air quality in India more generally. 2.2. 
SAVAC (accelerating the development and implementation of strep A vaccines) The Strep A Vaccine Global Consortium (SAVAC) is working to accelerate the development and implementation of safe and effective strep A vaccines. Open Philanthropy is one of very few funders supporting the development of a group A strep (GAS) vaccine (we've funded two projects to test new vaccines). GAS kills over 500,000 people per year, mostly by causing rheumatic heart disease.[3] Wh...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Potential Pitfalls in University EA Community Building, published by jessica mccurdy on May 8, 2024 on The Effective Altruism Forum. TL;DR: This is a written version of a talk given at EAG Bay Area 2023. It claims university EA community building can be incredibly impactful, but there are important pitfalls to avoid, such as being overly zealous, overly open, or overly exclusionary. These pitfalls can turn away talented people and create epistemic issues in the group. By understanding these failure modes, focusing on truth-seeking discussions, and being intentional about group culture, university groups can expose promising students to important ideas and help them flourish. Introduction Community building at universities can be incredibly impactful, but important pitfalls can make this work less effective or even net negative. These pitfalls can turn off the kind of talented people that we want in the EA community, and it's challenging to tell if you're falling into them. This post is based on a talk I gave at EAG Bay Area in early 2023[1]. If you are a new group organizer or interested in becoming one, you might want to check out this advice post. This talk was made specifically for university groups, but I believe many of these pitfalls transfer to other groups. Note, that I didn't edit this post much and may not be able to respond in-depth to comments now. I have been in the EA university group ecosystem for almost 7 years now. While I wish I had more rigorous data and a better idea of the effect sizes, this post is based on anecdotes from years of working with group organizers. Over the past years, I think I went from being extremely encouraging of students doing university community building and selling it as a default option for students, to becoming much more aware of risks and concerns and hence writing this talk. I think I probably over-updated on the risks and concerns, and this led me to be less outwardly enthusiastic about the value of CB over the past year. I think that was a mistake, and I am looking forward to revitalizing the space to a happy medium. But that is a post for another day. Why University Community Building Can Be Impactful Before discussing the pitfalls, I want to emphasize that I do think community building at universities can be quite high leverage. University groups can help talented people go on to have effective careers. Students are at a time in their lives when they're thinking about their priorities and how to make a change in the world. They're making lifelong friendships. They have a flexibility that people at other life stages often lack. There is also some empirical evidence supporting the value of university groups. The longtermist capacity building team at Open Philanthropy ran a mass survey. One of their findings was that a significant portion of people working on projects they're excited about had attributed a lot of value to their university EA groups. Common Pitfalls in University Group Organizing While university groups can be impactful, there are several pitfalls that organizers should be aware of. In this section, I'll introduce some fictional characters that illustrate these failure modes. While the examples are simplified, I believe they capture real dynamics that can arise. 
Pitfall 1: Being Overly Zealous One common pitfall is being overly zealous or salesy when trying to convince others of EA ideas. This can come across as not genuinely engaging with people's arguments or concerns. Consider this example: Skeptical Serena asks, "Can we actually predict the downstream consequences of our actions in the long run? Doesn't that make RCTs not useful?" Zealous Zack[2] responds confidently, "That's a good point but even 20-year studies show this is working. There's a lot of research that has gone into it. So, it really d...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Hive: A rebrand for Impactful Animal Advocacy, published by Hive (formerly Impactful Animal Advocacy) on May 7, 2024 on The Effective Altruism Forum. Last month, we made an April Fools' Day post about rebranding to Shrimpactful Animal Advocacy. It was a two-sided joke, because we are rebranding. As of today, Impactful Animal Advocacy is now Hive. Our new name won't have the word "impactful" in it - but it's still an important part of our mission: to end factory farming through creating impact-focused communities. We are excited to announce that Impactful Animal Advocacy is now Hive, the same organization with a fresh look and feel. After much thought and consideration, we believe Hive better reflects who we are - capturing our collaborative spirit and commitment to generating impact within the farmed animal advocacy movement. Why rebrand? This change is, in part, a response to common feedback that the name Impactful Animal Advocacy is too long (10 syllables!), hard to remember, and difficult to recognize as the acronym IAA. We want to empower animal advocates to do work that's effective and impactful. And if they are, we want them to feel comfortable calling their work impactful animal advocacy without worrying about overstepping into an org's name. Why we chose Hive We want to convey a sense of friendliness and approachability, combined with a rational and results-driven approach. Additionally, we wanted to avoid using an acronym if possible. Name: Hive. A hive is a place full of activity, and in the context of bees, it's a place where many gather. Hive symbolizes collaboration and coordination, while still bearing a subtle animal element. Logo: Hexagon with 3 typing indicator dots. The hexagon is a reference to bee hives. The three typing indicator dots are common on digital platforms, and help explain that we are a largely online community that thrives on communication. Colors: Deep and light orange gradient. We think these warmer colors will help establish the brand as a place for fresh ideas and energy. The fluid nature of the gradient represents the diversity of viewpoints and collaborative nature of our community. How this change affects our community or programs You will see that our name, logo, and website will be changed. Besides that, our programs (Slack, newsletter, etc.) will run as usual. Other changes that you may see, such as Community Calls, are not related to the rebrand and are instead part of our continuous improvements. We thank many people and organizations: Vegan Hacktivists for their initial logo design and brand kit - it helped us get started quickly, and we are thankful this service helps other orgs get off the ground too. Our trusted advisors and mentors have been critical in supporting us through all the organizational growth we experienced in 2023. Thank you to Cameron King, David Meyer and Tracy Spencer, Rockwell Schwarz, Aaron Boddy, Nicoll Peracha, Jay Barrett, Rachel Atcheson, Jacob Eliosoff, Monica Chen, David Nash, Steven Rouk, Wanyi Zeng, David Coman-Hidy, Tania Luna, Alyssa Greene-Crow, Nicole Rawling and Ana Bradley. You have all been so helpful and we appreciate your time.
Our funders and donors, who have helped not just with funding but with valuable advice that enabled us to create and follow a focused and solid strategy, including Open Philanthropy, Lush, Veg Trust, Craigslist Charitable Fund, Humane America Animal Foundation, and all the individual donors who believed in us and supported us so early in our journey. James and Amy Odene from User-Friendly, who helped us come up with a new name and design the Hive logo and website. You have been incredibly helpful and patient in this process. We love the end product! All of our loyal Slack members - for being the impactful and helpful advocates that you are! We are...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates on the EA catastrophic risk landscape, published by Benjamin Todd on May 6, 2024 on The Effective Altruism Forum. Around the end of Feb 2024 I attended the Summit on Existential Risk and EAG: Bay Area (GCRs), during which I did 25+ one-on-ones about the needs and gaps in the EA-adjacent catastrophic risk landscape, and how they've changed. The meetings were mostly with senior managers or researchers in the field who I think are worth listening to (unfortunately I can't share names). Below is how I'd summarise the main themes in what was said. If you have different impressions of the landscape, I'd be keen to hear them. There's been a big increase in the number of people working on AI safety, partly driven by a reallocation of effort (e.g. Rethink Priorities starting an AI policy think tank); and partly driven by new people entering the field after its newfound prominence. Allocation in the landscape seems more efficient than in the past - it's harder to identify especially neglected interventions, causes, money, or skill-sets. That means it's become more important to choose based on your motivations. That said, here are a few ideas for neglected gaps: Within AI risk, it seems plausible the community is somewhat too focused on risks from misalignment rather than misuse or concentration of power. There's currently very little work going into issues that arise even if AI is aligned, including the deployment problem, Will MacAskill's "grand challenges" and Lukas Finnveden's list of project ideas. If you put significant probability on alignment being solved, some of these could have high importance too; though most are at the stage where they can't absorb a large number of people. Within these, digital sentience was the hottest topic, but to me it doesn't obviously seem like the most pressing of these other issues. (Though doing field building for digital sentience is among the more shovel-ready of these ideas.) The concrete entrepreneurial idea that came up the most, and seemed most interesting to me, was founding orgs that use AI to improve epistemics / forecasting / decision-making (I have a draft post on this - comments welcome). Post-FTX, funding has become even more dramatically concentrated under Open Philanthropy, so finding new donors seems like a much bigger priority than in the past. (It seems plausible to me that $1bn in a foundation independent from OP could be worth several times that amount added to OP.) In addition, donors have less money than in the past, while the number of opportunities to fund things in AI safety has increased dramatically, which means marginal funding opportunities seem higher value than in the past (as a concrete example, nuclear security is getting almost no funding). Both points mean efforts to start new foundations, fundraise and earn to give all seem more valuable compared to a couple of years ago. Many people mentioned comms as the biggest issue facing both AI safety and EA. EA has been losing its battle for messaging, and AI safety is in danger of losing its own too (with both a new powerful anti-regulation tech lobby and the more left-wing AI ethics scene branding it as sci-fi, doomer, cultish and in bed with labs). People might be neglecting measures that would help in very short timelines (e.g. 
transformative AI in under 3 years), though that might be because most people are unable to do much in these scenarios. Right now, directly talking about AI safety seems to get more people in the door than talking about EA, so some community building efforts have switched to that. There's been a recent influx of junior people interested in AI safety, so it seems plausible the biggest bottleneck again lies with mentoring & management, rather than recruiting more junior people. Randomly: there seems to have been a trend of former le...
"The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, 'Actually, we can push them further in these ways and these ways, and they still stay alive. And we've modelled out every possibility and we've found that it works.' I think another possibility, which I don't understand as well, is that AI could lock in current moral values. And I think in particular there's a risk that if AI is learning from what we do as humans today, the lesson it's going to learn is that it's OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there's a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue." —Lewis BollardIn today's episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today.Links to learn more, highlights, and full transcript.They cover:The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention.Work to improve farmed animal welfare that Open Philanthropy is excited about funding.The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done.The occasional tension between ending factory farming and curbing climate changeHow AI could transform factory farming for better or worse — and Lewis's fears that the technology will just help us maximise cruelty in the name of profit.How Lewis has updated his opinions or grantmaking as a result of new research on the “moral weights” of different species.Lewis's personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering.How listeners can get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations.And much more.Chapters:Common objections to ending factory farming (00:13:21)Potential solutions (00:30:55)Cage-free reforms (00:34:25)Broiler chicken welfare (00:46:48)Do companies follow through on these commitments? (01:00:21)Fish welfare (01:05:02)Alternatives to animal proteins (01:16:36)Farm animal welfare in Asia (01:26:00)Farm animal welfare in Europe (01:30:45)Animal welfare science (01:42:09)Approaches Lewis is less excited about (01:52:10)Will we end factory farming in our lifetimes? (01:56:36)Effect of AI (01:57:59)Recent big wins for farm animals (02:07:38)How animal advocacy has changed since Lewis first got involved (02:15:57)Response to the Moral Weight Project (02:19:52)How to help (02:28:14)Producer and editor: Keiran HarrisAudio engineering lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
Around the world, there are tens of billions of non-human animals trapped in the barbaric system of industrial animal agriculture. And unfortunately, that number is increasing. It's a severe problem, not just for the suffering animals, but for human health and the health of our planet. Fortunately, there are people dedicating their lives to tackling this issue. And today, we're joined by one of these remarkable individuals. Lewis Bollard is a dedicated animal advocate making meaningful change on a global scale. He is the Farm Animal Welfare Director at Open Philanthropy, which is a philanthropic funder addressing important and often neglected causes. Prior to working there, Lewis was the Policy Advisor and International Liaison to the CEO of The Humane Society of the United States. Join us to hear our illuminating discussion with Lewis as we shine a spotlight on his quest for a more ethical world and his efforts to end factory farming once and for all! “We chose farm animals just because of the sheer numbers. I mean the sheer scale of factory farming globally in terms of tens of billions of animals suffering. And as you say, I mean it is definitely especially neglected because people don't pay attention to them. I think that's partly, I mean, people will say yes because they're not smart, but I'm skeptical that's what's really going on. I think it's just convenient to not care about them, and it would be really inconvenient to take seriously their interests. I mean, as with you, I think that what ultimately matters is whether they can suffer. You know, I think across all these species, I suspect there's a lot more going on. I suspect they're smarter than we give them credit for. But I ultimately think that debate doesn't even really matter to our moral obligations. We just, shouldn't torture them. We shouldn't cause suffering to a being who can suffer.” - Lewis Bollard What we discuss in this episode: - The eye-opening family trip that changed Lewis's life. - Why do people fall off the vegan wagon? - Lewis's thoughts on alternative proteins and plant-based meats. - How the animal rights movement has evolved. - The most abused farm animals in the world. - Positive changes in the animal rights movement and the potential threat AI poses. - How governments around the world are promoting plant-based diets. - The work Lewis is doing to effect positive change. - Lewis's advice to those who want to make a positive impact but don't know where to start. Resources: - Open Philanthropy: Lewis Bollard | Open Philanthropy - https://www.openphilanthropy.org/about/team/lewis-bollard/ - Lewis's Twitter/X: https://twitter.com/Lewis_Bollard
Rebroadcast: this episode was originally released in January 2021. You wake up in a mysterious box, and hear the booming voice of God: “I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it. If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box. To get into heaven, you have to answer this correctly: Which way did the coin land?” You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you're in the big world — if the coin landed tails, way more people should be having an experience just like yours. But then you get up, walk outside, and look at the number on your box. ‘3'. Huh. Now you don't know what to believe. If God made 10 billion boxes, surely it's much more likely that you would have seen a number like 7,346,678,928? In today's interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as ‘anthropic reasoning' could be relevant for figuring out where we should direct our charitable giving. Links to learn more, summary, and full transcript. Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by ‘longtermism' — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future. Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that's both very large relative to what's possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time. But imagine that humanity has two possible futures ahead of it: Either we're going to have a huge future like that, in which trillions of people ultimately exist, or we're going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live. If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed. If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called ‘doomsday argument' alone. If that's true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we're incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead. There are many critics of this theoretical ‘doomsday argument', and it may be the case that it logically doesn't work.
This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants. In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely. They also discuss: which worldviews Open Phil finds most plausible, and how it balances them; which worldviews Ajeya doesn't embrace but almost does; how hard it is to get to other solar systems; the famous ‘simulation argument'; when transformative AI might actually arrive; the biggest challenges involved in working on big research reports; what it's like working at Open Phil; and much more. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
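For readers who want to see the arithmetic behind the boxes puzzle described above, here is a minimal Bayesian sketch in Python. It is not taken from the episode or from Ajeya's research; the framing in terms of the Self-Indication Assumption (SIA) and Self-Sampling Assumption (SSA) is a standard way of carving up anthropic reasoning, and the numbers simply follow the thought experiment as stated.

```python
from fractions import Fraction

HEADS_BOXES = 10              # heads: 10 boxes, one human each
TAILS_BOXES = 10_000_000_000  # tails: 10 billion boxes, one human each
PRIOR = Fraction(1, 2)        # fair coin

# Self-Indication Assumption (SIA): existing at all counts as evidence
# for the world with more observers, in proportion to how many there are.
sia_heads = PRIOR * HEADS_BOXES
sia_tails = PRIOR * TAILS_BOXES

# Then you read your box label. Seeing any one specific label (e.g. 3)
# has probability 1 / (number of boxes) in each world.
post_heads = sia_heads * Fraction(1, HEADS_BOXES)
post_tails = sia_tails * Fraction(1, TAILS_BOXES)
total = post_heads + post_tails
print("SIA: P(heads | I exist, label = 3) =", float(post_heads / total))  # 0.5
print("SIA: P(tails | I exist, label = 3) =", float(post_tails / total))  # 0.5

# Self-Sampling Assumption (SSA): no update for merely existing;
# only the observed label updates the fair-coin prior.
ssa_heads = PRIOR * Fraction(1, HEADS_BOXES)
ssa_tails = PRIOR * Fraction(1, TAILS_BOXES)
ssa_total = ssa_heads + ssa_tails
print("SSA: P(heads | label = 3) =", float(ssa_heads / ssa_total))  # ~0.999999999
```

Under SIA the two effects cancel and you are back at 50/50, matching the "now you don't know what to believe" moment in the setup; under SSA the low box number pushes you strongly toward the small world, which is the structure the doomsday argument applies to humanity's seemingly early place in history.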
Platformer's Casey Newton moderates a conversation at Code 2023 on ethics in artificial intelligence, with Ajeya Cotra, Senior Program Officer at Open Philanthropy, and Helen Toner, Director of Strategy at Georgetown University's Center for Security and Emerging Technology. The panel discusses the risks and rewards of the technology, as well as best practices and safety measures. Recorded on September 27th in Los Angeles.