Podcasts about the Open Philanthropy Project

American grantmaking foundation

  • 24 podcasts
  • 59 episodes
  • 1h average duration
  • Episode frequency: infrequent
  • Latest episode: May 3, 2024

POPULARITY (2017–2024)


Best podcasts about the Open Philanthropy Project

Latest podcast episodes about the Open Philanthropy Project

Progress, Potential, and Possibilities
Dr. Jaime Yassif, Ph.D. - VP, Global Biological Policy and Programs, Nuclear Threat Initiative - Working To Reduce Global Catastrophic Biological Risks

May 3, 2024 · 61:00


Dr. Jaime Yassif serves as Vice President of Global Biological Policy and Programs at the Nuclear Threat Initiative (https://www.nti.org/about/people/jaime-yassif-phd/), where she oversees work to reduce global catastrophic biological risks, strengthen biosecurity and pandemic preparedness, and drive progress in advancing global health security. Prior to this, Dr. Yassif served as a Program Officer at the Open Philanthropy Project, where she led the initiative on Biosecurity and Pandemic Preparedness. In this role, she recommended and managed approximately $40 million in biosecurity grants, which rebuilt the field and supported work in several key areas, including: development of new biosecurity programming at several leading think tanks; cultivation of new talent through biosecurity leadership development programs; initiation of new biosecurity work in China and India; establishment of the Global Health Security Index; development of the Clade X tabletop exercise; and the emergence of a new discussion about global catastrophic biological risks. Previously, Dr. Yassif was a Science and Technology Policy Advisor at the U.S. Department of Defense, where she focused on oversight of the Cooperative Threat Reduction Program and East Asia security issues. During this period, she also worked on the Global Health Security Agenda (GHSA) at the Department of Health and Human Services, where she helped lay the groundwork for the WHO Joint External Evaluations and the GHSA Steering Group. Dr. Yassif's previous experience includes work with Connecting Organizations for Regional Disease Surveillance, Chatham House, NTI, the Federation of American Scientists, and the Tsinghua University Institute for International Studies. Dr. Yassif holds a Ph.D. in Biophysics from UC Berkeley, an MA in Science and Security from the King's College London War Studies Department, where she wrote her thesis on verification of the Biological Weapons Convention, and a BA in Biology from Swarthmore College. Important episode link - The International Biosecurity and Biosafety Initiative for Science (IBBIS): https://ibbis.bio/

The Nonlinear Library
EA - CEA is fundraising, and funding constrained by Ben West

Nov 20, 2023 · 14:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA is fundraising, and funding constrained, published by Ben West on November 20, 2023 on The Effective Altruism Forum. Tl;dr: The Centre for Effective Altruism (CEA) has an expected funding gap of $3.6m in 2024. Some example things we think are worth doing but are unlikely to have funding for by default:
• Funding a Community Building Grant in Boston
• Funding travel grants for EAG(x) attendees
Note that these are illustrative of our current cost-effectiveness bar (as opposed to a binding commitment that the next dollar we receive will go to one of these things). In collaboration with EA Funds, we have produced models where users can plug in their own parameters to determine the relative value of a donation to CEA versus EA Funds.
Intro
The role of an interim executive is weird: whereas permanent CEOs like to come in with a bold new vision (ideally one which blames all the organization's problems on their predecessor), interim CEOs are stuck staying the course. Fortunately for me, I mostly liked the course CEA was on when I came in. The past few years seem to have proven the value of the EA community: my own origin cause area of animal welfare has been substantially transformed (e.g. as recounted by Jakub here), and even as AI safety has entered the global main stage, many of the people doing research, engineering, and other related work have interacted with CEA's projects. Of course, this is not to say that CEA's work is a slam dunk. In collaboration with Caleb and Linch at EA Funds, I have included below some estimates of whether marginal donations to CEA are more impactful than those to EA Funds, and a reasonable confidence interval very comfortably includes the possibility that you should donate elsewhere. We are fortunate to count the Open Philanthropy Project (and in particular Open Phil's GCR Capacity Building program) among the funders who believe we are a good use of funding, but they (reasonably) prefer not to fund all of our budget, leaving us with a substantial number of projects which we believe would produce value if we could fund starting or scaling them. This post outlines where we expect marginal donations to go and the value we expect to come from those donations. You can donate to CEA here. If you are interested in donating and have further questions, feel free to email me (ben.west@centreforeffectivealtruism.org). I will also try to answer questions in the comments.
The basic case for CEA
Community building is sometimes motivated by the following: suppose you spent a year telling everyone you know about EA and getting them excited. Probably you could get at least one person excited. Then this means that you will have doubled your lifetime impact, as both you and this other person will go on to do good things. That's a pretty good ROI for one year of work! This story is overly simplistic, but is roughly my motivation for working on (and donating to) community building: it's a leveraged way to do good in the world.
And it does seem to be the case that many people whose work seems impactful attribute some of their impact to CEA:
• The Open Philanthropy longtermist survey in 2020 identified CEA among the top tier of important influences on people's journey towards work improving the long-term future, with about half of CEA's demonstrated value coming through events (EA Global and EAGx conferences) and half through our other programs.
• The 80,000 Hours user survey in 2022 identified CEA as the EA-related resource which has influenced the most people's career plans (in addition to 80k itself), with 64% citing the EA Forum as influential and 44% citing EAG.
This selection of impact stories illustrates some of the ways we've helped people increase their impact by providing high-quality discussion spaces to consider their ideas, values and options for and about maki...

Clearer Thinking with Spencer Greenberg
Science is learning from start-ups (with Adam Marblestone)

May 15, 2023 · 73:39


Read the full transcript here. What are focused research organizations? Which kinds of research projects lend themselves to the FRO model? Researchers in academia frequently complain about the incentive structures around funding and publishing; so how do FROs change those dynamics? Why must FROs be time-limited, especially if they're successful? Who's in charge in an FRO? How does "field-building" help to improve science? What effects might large language models have on science? Adam Marblestone is the CEO of Convergent Research. He's been launching Focused Research Organizations (FROs) such as E11 Bio and Cultivarium. He also serves on the boards of several non-profits pursuing new methods of funding and organizing scientific research, including Norn Group and New Science. Previously, he was a Schmidt Futures Innovation Fellow, a consultant for the Astera Institute, a Fellow with the Federation of American Scientists (FAS), a research scientist at Google DeepMind, Chief Strategy Officer of the brain-computer interface company Kernel, a research scientist at MIT, a PhD student in biophysics with George Church and colleagues at Harvard, and a theoretical physics student at Yale. He also previously helped to start companies like BioBright and advised foundations such as the Open Philanthropy Project. His work has been recognized with a Technology Review 35 Innovators Under 35 Award (2018), a Fannie and John Hertz Foundation Fellowship (2010), and a Goldwater Scholarship (2008). Learn more about him at adammarblestone.org.

Awesome Vegans with Elysabeth Alfano
Lewis Bollard of the Open Philanthropy Project: Is Animal Welfare Part of ESG?

May 4, 2023 · 46:28


Senior Program Officer for the Open Philanthropy Project, Lewis Bollard, discusses whether animal welfare is growing in recognition as an ESG investing principle. Join us today #live and bring your questions for this episode of The Plantbased Business Hour. Subscribe right now to never miss this podcast! For plant-based media/branding consulting and public speaking, reach out at elysabeth@elysabethalfano.com. For more information, visit ElysabethAlfano.com. Connect with Elysabeth on LinkedIn here: https://www.linkedin.com/in/elysabeth-alfano-8b370b7/ For more PBH, visit ElysabethAlfano.com/Plantbased-Business-Hour

The Plantbased Business Hour
Lewis Bollard of the Open Philanthropy Project: Is Animal Welfare Part of ESG?

May 4, 2023 · 46:28


Senior Program Officer for the Open Philanthropy Project, Lewis Bollard, discusses whether animal welfare is growing in recognition as an ESG investing principle. Join us today #live and bring your questions for this episode of The Plantbased Business Hour. Subscribe right now to never miss this podcast! For plant-based media/branding consulting and public speaking, reach out at elysabeth@elysabethalfano.com. For more information, visit ElysabethAlfano.com. Connect with Elysabeth on LinkedIn here: https://www.linkedin.com/in/elysabeth-alfano-8b370b7/ For more PBH, visit ElysabethAlfano.com/Plantbased-Business-Hour

The Nonlinear Library
EA - Ending Open Philanthropy Project by Dusten Muskovitz

Apr 2, 2023 · 3:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ending Open Philanthropy Project, published by Dusten Muskovitz on April 2, 2023 on The Effective Altruism Forum. Effective immediately, my wife and I will no longer plan funding for EA or EAs. There's enough money with OpenPhil to wind down operations gracefully, paying out all current grants, all grants under consideration that we normally would have made, and any new grants that come in within the next three months that we normally would have said yes to (existing charities receiving Open Philanthropy money are particularly encouraged to apply), and providing six months for everyone currently at the non-profit before we shut down. I want to emphasize that this is not because of anything that Alexander Berger or the rest of the wonderful team at OpenPhil have done. They're great, and I think that they've tried as hard as anyone could to do the best possible work with our money. It's the rest of you. I present three primary motivations. They're all somewhat interrelated, but hopefully by presenting three arguments in succession you will update on each of them sequentially. Certainly I've lost all hope in y'all retaining any of the virtues of the rationalist community, rather than just their vices. I hope that this helps you as a community clean up your act while you try to convince someone else to fund this mess. Maybe Bernauld Arnault. That was a joke. Haha, fat chance. 1. In the words of philosopher Liam Kofi Bright, “Why can't you just be normal?” Two of Redwood's leadership team have or have had relationships to an [Open Philanthropy] grant maker. A Redwood board member is married to a different OP grantmaker. A co-CEO of OP is one of the other three board members of Redwood. Additionally, many OP staff work out of Constellation, the office that Redwood runs. OP pays Redwood for use of the space. Just be normal. Stop having a social community where people live and work and study and sing together, and do social atomization like everybody else. This won't cause any problems. Everyone else is doing it. There is another way! You don't, actually, need to have more partners than the average American adult has friends. Also, just don't have sex. That's not that much to ask for, is it? I've been married for a decade now: I can tell you, it's perfectly possible. 2. I'm tired of all the criticism. I'm tired of it hitting Asana, which I still love and care about. Moving my donations to instead buying superyachts, artwork, and expanding on an actually fun hobby (giant bonsai) is going to substantially reduce how often my family, friends, and employees see me getting attacked in one news outlet or another. 3. Pick a cause and stick with it. Have the courage of your convictions. I don't need to spend my time hearing about sea-rats and prisoners and suffering matrices and matrices that are suffering and discount rates and so many different ways human bodies can go wrong in other countries and immigration and housing for techies and so many more. Y'all were supposed to be optimizers, so this donation splitting between different cause areas should end. Like I said, most of my wealth is going into the new super-yacht my wife and I will be commissioning. Maybe then you could stop arguing quite so much. Get it all out of your systems, figure out what the best charity is, and stick with it. (The average American adult has three or fewer friends.)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Reasoning Transparency by Lizka

Sep 28, 2022 · 28:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reasoning Transparency, published by Lizka on September 28, 2022 on The Effective Altruism Forum. I think "reasoning transparency" (and/or epistemic legibility) is a key value of effective altruism. As far as I can tell, the key piece of writing about it is this Open Philanthropy blog post by Luke Muehlhauser, which I'm cross-posting to the Forum with permission. We also have a topic page for "reasoning transparency" — you can see some related posts there. Published: December 01, 2017 | by Luke Muehlhauser. We at the Open Philanthropy Project value analyses which exhibit strong "reasoning transparency." This document explains what we mean by "reasoning transparency," and provides some tips for how to efficiently write documents that exhibit greater reasoning transparency than is standard in many domains. In short, our top recommendations are to:
• Open with a linked summary of key takeaways. [more]
• Throughout a document, indicate which considerations are most important to your key takeaways. [more]
• Throughout a document, indicate how confident you are in major claims, and what support you have for them. [more]
1 Motivation
When reading an analysis — e.g. a scientific paper, or some other collection of arguments and evidence for some conclusions — we want to know: "How should I update my view in response to this?" In particular, we want to know things like:
• Has the author presented a fair or biased presentation of evidence and arguments on this topic?
• How much expertise does the author have in this area?
• How trustworthy is the author in general? What are their biases and conflicts of interest?
• What was the research process that led to this analysis? What shortcuts were taken?
• What rough level of confidence does the author have in each of their substantive claims?
• What support does the author think they have for each of their substantive claims?
• What does the author think are the most important takeaways, and what could change the author's mind about those takeaways?
• If the analysis includes some data analysis, how were the data collected, which analyses were done, and can I access the data myself?
Many scientific communication norms are aimed at making it easier for a reader to answer questions like these, e.g. norms for 'related literature' sections and 'methods' sections, open data and code, reporting standards, pre-registration, conflict of interest statements, and so on. In other ways, typical scientific communication norms lack some aspects of reasoning transparency that we value. For example, many scientific papers say little about roughly how confident the authors are in different claims throughout the paper, or they might cite a series of papers (or even entire books!) in support of specific claims without giving any page numbers. Below, I (Luke Muehlhauser) offer some tips for how to write analyses that (I suspect, and in my experience) make it easier for the reader to answer the question, "How should I update my views in response to this?"
2 Example of GiveWell charity reviews
I'll use a GiveWell charity review to illustrate a relatively "extreme" model of reasoning transparency, one that is probably more costly than it's worth for most analysts. Later, I'll give some tips for how to improve an analysis' reasoning transparency without paying as high a cost for it as GiveWell does.
Consider GiveWell's review of the Against Malaria Foundation (AMF). This review:
• includes a summary of the most important points of the review, each linked to a longer section that elaborates those points and the evidence for them in some detail.
• provides detailed responses to major questions that bear on the likely cost-effectiveness of marginal donations to AMF, e.g. "Are LLINs targeted at people who do not already have them?", "Do LLINs reach intended destinations?", "Is there roo...

The Nonlinear Library
EA - Three common mistakes when naming an org or project by Kat Woods

Jul 23, 2022 · 4:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Three common mistakes when naming an org or project, published by Kat Woods on July 23, 2022 on The Effective Altruism Forum. If Hermione had named her organization H.E.R.O. (House Elf Release Organization) instead of S.P.E.W., she might have gotten a lot more traction. Similarly, aspiring charity entrepreneurs know that finding a good name for their organization or project can play an important role in their future impact. After starting four EA organizations (with varying degrees of name quality), I am often asked what I think of a charity entrepreneur's name for their new venture. I always have the same three pieces of advice, so I thought I'd put them into a blog post so others can benefit from them as well.
1. People will shorten the name if it's too long. Name accordingly.
Consider how people will shorten your organization's name in everyday conversation. People don't like saying more than two or three syllables at once. In everyday conversation, no one wastes their breath on the lengthy names of 'Eighty-thousand Hours' or 'the Open Philanthropy Project'. They say '80k' or 'Open Phil'. Your name should either have 1-3 syllables in the first place ('GiveWell') or look good when shortened to 1-3 syllables. The full name can have more than three syllables if it has a snappy acronym. It's great if your acronym spells a word or phrase, especially if it evokes the organization's mission (e.g., ACE, CFAR, ALLFED). If your acronym doesn't spell something, avoid Ws - it's very awkward and long to say 'double-you'.
2. Don't artificially lock yourself into a particular strategy with your name.
Your name shouldn't tie you to a specific project, method, goal, or aim. Over time, you will hopefully change your mind about what's the highest-impact thing to do; a vague name preserves your option value. If the Against Malaria Foundation wanted to work on tuberculosis instead, or 80k decided to focus on donations rather than career choice, they'd be stuck. Names like 'Lightcone' and 'Nonlinear' are evocative, but they don't imply that the organizations are working on anything in particular. At Nonlinear we could switch our focus from meta work to direct work tomorrow and the name would still work. Of course, names won't necessarily stop you from pivoting. Oxfam is the shortened form of the Oxford Committee for Famine Relief, and now they do far more than help those facing famine. However, a restrictive name increases the friction of updating based on new evidence or crucial considerations, which is where a massive percentage of your potential future impact comes from. So don't artificially limit yourself simply because of a name.
3. Get loads of feedback on loads of different names.
Generate LOTS of options - potentially hundreds - then choose the best 10 and ask your friends to rate them. Don't just choose one name and ask your friends what they think. First, they can't tell you how the name compares to other possible names - maybe they think it's fine, but they'd much prefer another option you considered. Second, it's socially difficult for your friends to respond 'actually, I hate it,' so it's hard to get honest feedback this way. Even if you name your child Adolf or Hashtag, people will coo 'aww! How cute! How original!' If you send your friends options, it's easier for them to be honest about which they like best.
So there's the 80/20 advice on naming your organization or project:
• Keep it three syllables or less, or know that its shortened form will also be good
• Preserve option value by giving yourself a vague name
• Generate a ton of options and get feedback on the top 5-10 from a bunch of friends
Reminder that if this reaches 25 upvotes, you can listen to this post on your podcast player using the Nonlinear Library. This post was written collaboratively by Kat Woods and Amber Dawn Ace as part of Non...

Aging-US
Announcement: Special Collection of Dr. Steve Horvath Publications in Aging

Mar 31, 2022 · 5:09


Listen to a blog summary of the announcement for a special collection of Steve Horvath research papers published by Aging (Aging-US). ____________________ Epigenetics is the study of changes in gene expression that do not involve alterations to the DNA sequence. These changes can occur as a result of environmental influences, including exposure to chemicals, diet and stress. Epigenetic modifications can be passed down through generations and play an important role in disease development. An exciting area of epigenetics research is its role in the aging process. Studies have shown that epigenetic modifications can affect aging and the onset of age-related diseases. Recently, researchers also discovered that epigenetic modifications may be used to measure biological age and aging rate. Steve Horvath, Ph.D., ScD, is a world-renowned researcher, geneticist, biostatistician, and Professor of Human Genetics and Biostatistics at the University of California, Los Angeles. His research areas of study include aging, cancer, cardiovascular disease, HIV, Huntington's disease, and neurodegenerative diseases. Today, he is well known for his contributions to epigenetics research. In 2013, Dr. Horvath developed the first multi-tissue DNA methylation-based epigenetic biomarker of aging, known as the Horvath aging clock. Dr. Horvath has earned numerous awards for his groundbreaking research, including the Allen Distinguished Investigator award, the Open Philanthropy Project award, and the Schober Award. In 2018, 2019, 2020, and 2021, the Clarivate Web of Science Group named him one of the world's most influential scientific researchers. Full blog - https://aging-us.net/2022/03/special-collection-of-steve-horvath-publications-in-aging/ Steve Horvath Special Collection - https://www.aging-us.com/special-collections-archive/steve-horvath Contact information - Steve Horvath - shorvath@mednet.ucla.edu Keywords - aging, aging research, epigenetics, epigenetic clock, longevity, healthspan, lifespan, lifestyle, research, research papers About Aging-US: Launched in 2009, Aging-US publishes papers of general interest and biological significance in all fields of aging research and age-related diseases, including cancer—and now, with a special focus on COVID-19 vulnerability as an age-dependent syndrome. Topics in Aging-US go beyond traditional gerontology, including, but not limited to, cellular and molecular biology, human age-related diseases, pathology in model organisms, signal transduction pathways (e.g., p53, sirtuins, and PI-3K/AKT/mTOR, among others), and approaches to modulating these signaling pathways. Please visit our website at http://www.Aging-US.com and connect with us: SoundCloud - https://soundcloud.com/Aging-Us Facebook - https://www.facebook.com/AgingUS/ Twitter - https://twitter.com/AgingJrnl Instagram - https://www.instagram.com/agingjrnl/ YouTube - https://www.youtube.com/agingus LinkedIn - https://www.linkedin.com/company/aging/ Pinterest - https://www.pinterest.com/AgingUS/ Aging-US is published by Impact Journals, LLC: http://www.ImpactJournals.com Media Contact: 1-800-922-0957, MEDIA@IMPACTJOURNALS.COM

The Nonlinear Library
LW - Quick thoughts on empathic metaethics by lukeprog from No-Nonsense Metaethics

Dec 25, 2021 · 16:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No-Nonsense Metaethics, Part 5: Quick thoughts on empathic metaethics, published by lukeprog. Years ago, I wrote an unfinished sequence of posts called "No-Nonsense Metaethics." My last post, Pluralistic Moral Reductionism, said I would next explore "empathic metaethics," but I never got around to writing those posts. Recently, I wrote a high-level summary of some initial thoughts on "empathic metaethics" in section 6.1.2 of a report prepared for my employer, the Open Philanthropy Project. With my employer's permission, I've adapted that section for publication here, so that it can serve as the long-overdue concluding post in my sequence on metaethics. In my previous post, I distinguished "austere metaethics" and "empathic metaethics," where austere metaethics confronts moral questions roughly like this: Tell me what you mean by 'right', and I will tell you what is the right thing to do. If by 'right' you mean X, then Y is the right thing to do. If by 'right' you mean P, then Z is the right thing to do. But if you can't tell me what you mean by 'right', then you have failed to ask a coherent question, and no one can answer an incoherent question. Meanwhile, empathic metaethics says instead: You may not know what you mean by 'right.' But let's not stop there. Here, let me come alongside you and help decode the cognitive algorithms that generated your question in the first place, and then we'll be able to answer your question. Then we can tell you what the right thing to do is. Below, I provide a high-level summary of some of my initial thoughts on what one approach to "empathic metaethics" could look like. Given my metaethical approach, when I make a “moral judgment” about something (e.g. about which kinds of beings are moral patients), I don't conceive of myself as perceiving an objective moral truth, or coming to know an objective moral truth via a series of arguments. Nor do I conceive of myself as merely expressing my moral feelings as they stand today. Rather, I conceive of myself as making a conditional forecast about what my values would be if I underwent a certain “idealization” or “extrapolation” procedure (coming to know more true facts, having more time to consider moral arguments, etc.).[1] Thus, in a (hypothetical) "extreme effort" attempt to engage in empathic metaethics (for thinking about my own moral judgments), I would do something like the following: I would try to make the scenario I'm aiming to forecast as concrete as possible, so that my brain is able to treat it as a genuine forecasting challenge, akin to participating in a prediction market or forecasting tournament, rather than as a fantasy about which my brain feels "allowed" to make up whatever story feels nice, or signals my values to others, or achieves something else that isn't forecasting accuracy.[2] In my case, I concretize the extrapolation procedure as one involving a large population of copies of me who learn many true facts, consider many moral arguments, and undergo various other experiences, and then collectively advise me about what I should value and why.[3] However, I would also try to make forecasts I can actually check for accuracy, e.g. about what my moral judgment about various cases will be 2 months in the future.
When making these forecasts, I would try to draw on the best research I've seen concerning how to make accurate estimates and forecasts. For example, I would try to "think like a fox, not like a hedgehog," and I've already done several hours of probability calibration training, and some amount of forecasting training.[4] Clearly, my current moral intuitions would serve as one important source of evidence about what my extrapolated values might be. However, recent findings in moral psychology and related fields lead me to assign more evidential weight to some moral ...

The Nonlinear Library: LessWrong
LW - Quick thoughts on empathic metaethics by lukeprog from No-Nonsense Metaethics

Dec 25, 2021 · 16:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No-Nonsense Metaethics, Part 5: Quick thoughts on empathic metaethics, published by lukeprog. Years ago, I wrote an unfinished sequence of posts called "No-Nonsense Metaethics." My last post, Pluralistic Moral Reductionism, said I would next explore "empathic metaethics," but I never got around to writing those posts. Recently, I wrote a high-level summary of some initial thoughts on "empathic metaethics" in section 6.1.2 of a report prepared for my employer, the Open Philanthropy Project. With my employer's permission, I've adapted that section for publication here, so that it can serve as the long-overdue concluding post in my sequence on metaethics. In my previous post, I distinguished "austere metaethics" and "empathic metaethics," where austere metaethics confronts moral questions roughly like this: Tell me what you mean by 'right', and I will tell you what is the right thing to do. If by 'right' you mean X, then Y is the right thing to do. If by 'right' you mean P, then Z is the right thing to do. But if you can't tell me what you mean by 'right', then you have failed to ask a coherent question, and no one can answer an incoherent question. Meanwhile, empathic metaethics says instead: You may not know what you mean by 'right.' But let's not stop there. Here, let me come alongside you and help decode the cognitive algorithms that generated your question in the first place, and then we'll be able to answer your question. Then we can tell you what the right thing to do is. Below, I provide a high-level summary of some of my initial thoughts on what one approach to "empathic metaethics" could look like. Given my metaethical approach, when I make a “moral judgment” about something (e.g. about which kinds of beings are moral patients), I don't conceive of myself as perceiving an objective moral truth, or coming to know an objective moral truth via a series of arguments. Nor do I conceive of myself as merely expressing my moral feelings as they stand today. Rather, I conceive of myself as making a conditional forecast about what my values would be if I underwent a certain “idealization” or “extrapolation” procedure (coming to know more true facts, having more time to consider moral arguments, etc.).[1] Thus, in a (hypothetical) "extreme effort" attempt to engage in empathic metaethics (for thinking about my own moral judgments), I would do something like the following: I would try to make the scenario I'm aiming to forecast as concrete as possible, so that my brain is able to treat it as a genuine forecasting challenge, akin to participating in a prediction market or forecasting tournament, rather than as a fantasy about which my brain feels "allowed" to make up whatever story feels nice, or signals my values to others, or achieves something else that isn't forecasting accuracy.[2] In my case, I concretize the extrapolation procedure as one involving a large population of copies of me who learn many true facts, consider many moral arguments, and undergo various other experiences, and then collectively advise me about what I should value and why.[3] However, I would also try to make forecasts I can actually check for accuracy, e.g. about what my moral judgment about various cases will be 2 months in the future.
When making these forecasts, I would try to draw on the best research I've seen concerning how to make accurate estimates and forecasts. For example, I would try to "think like a fox, not like a hedgehog," and I've already done several hours of probability calibration training, and some amount of forecasting training.[4] Clearly, my current moral intuitions would serve as one important source of evidence about what my extrapolated values might be. However, recent findings in moral psychology and related fields lead me to assign more evidential weight to some moral ...

The Nonlinear Library: EA Forum Top Posts
The ITN framework, cost-effectiveness, and cause prioritisation by John G. Halstead

Dec 12, 2021 · 24:51


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The ITN framework, cost-effectiveness, and cause prioritisation, published by John G. Halstead on the Effective Altruism Forum. From reading EA material, one might get the impression that the Importance, Tractability and Neglectedness (ITN) framework is the (1) only, or (2) best way to prioritise causes. For example, in EA Concepts' two entries on cause prioritisation, the ITN framework is put forward as the only or leading way to prioritise causes. Will MacAskill's recent TED Talk leaned heavily on the ITN framework as the way to make cause prioritisation decisions. The Open Philanthropy Project explicitly prioritises causes using an informal version of the ITN framework. In this post, I argue that:
• Extant versions of the ITN framework are subject to conceptual problems.
• A new version of the ITN framework, developed here, is preferable to extant versions.
• Non-ITN cost-effectiveness analysis is, when workable, superior to ITN analysis for the purposes of cause prioritisation. This is because: marginal cost-effectiveness is what we ultimately care about; if we can estimate the marginal cost-effectiveness of work on a cause without estimating the total scale of a problem or its neglectedness, then we should do that, in order to save time; and marginal cost-effectiveness analysis does not require the assumption of diminishing marginal returns, which may not characterise all problems.
• ITN analysis may be useful when it is difficult to produce intuitions about the marginal cost-effectiveness of work on a problem. In that case, we can make progress by zooming out and carrying out an ITN analysis.
• In difficult, high-stakes cause prioritisation decisions, we have to get into the weeds and consider in depth the arguments for and against different problems being cost-effective to work on. We cannot bypass this process through simple mechanistic scoring and aggregation of the three ITN factors. For this reason, the EA movement has thus far significantly over-relied on the ITN framework as a way to prioritise causes. For high-stakes cause prioritisation decisions, we should move towards in-depth analysis of marginal cost-effectiveness.
[Update - my footnotes didn't transfer from the Google Doc, so I am adding them now.]
1. Outlining the ITN framework
Importance, tractability and neglectedness are three factors which are widely held to be correlated with cost-effectiveness; if one cause is more important, tractable and neglected than another, then it is likely to be more cost-effective to work on, on the margin. ITN analyses are meant to be useful when it is difficult to estimate directly the cost-effectiveness of work on different causes. Informal and formal versions of the ITN framework tend to define importance and neglectedness in the same way. As we will see below, they differ on how to define tractability.
Importance or scale = the overall badness of a problem, or correspondingly, how good it would be to solve it. So for example, the importance of malaria is given by the total health burden it imposes, which you could measure in terms of a health or welfare metric like DALYs.
Neglectedness = the total amount of resources or attention a problem currently receives. So for example, a good proxy for the neglectedness of malaria is the total amount of money that currently goes towards dealing with the disease.[^1]
Extant informal definitions of tractability
Tractability is harder to define and harder to quantify than importance and neglectedness. In informal versions of the framework, tractability is sometimes defined in terms of cost-effectiveness. However, this does not make that much sense because, as mentioned, the ITN framework is meant to be most useful when it is difficult to estimate the marginal cost-effectiveness of work on a particular cause. There would be no reas...
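For readers who want the framework being critiqued made concrete, here is a minimal sketch in Python (ours, not Halstead's; all numbers are invented) of the common 80,000 Hours-style quantitative version, in which the three ITN factors are ratios whose units cancel to give marginal cost-effectiveness:

```python
# A minimal sketch (not from the post) of the standard quantitative
# version of the ITN framework: marginal cost-effectiveness is the
# product of three ratios, so the units cancel to "good done per
# extra dollar". All numbers below are invented for illustration.

def marginal_cost_effectiveness(importance: float,
                                tractability: float,
                                neglectedness: float) -> float:
    """Good done per extra dollar, as the product of the three factors.

    importance:    good done per % of the problem solved (e.g. DALYs per %)
    tractability:  % of the problem solved per % increase in resources
    neglectedness: % increase in resources per extra dollar
                   (= 100 / current annual spending, under log returns)
    """
    return importance * tractability * neglectedness

# Hypothetical cause: 1,000,000 DALYs at stake, a 1% resource increase
# solves 0.01% of the problem, and $100m/year is currently spent on it.
importance = 1_000_000 / 100       # 10,000 DALYs per % of problem solved
tractability = 0.01                # % solved per % resource increase
neglectedness = 100 / 100_000_000  # % resource increase per extra dollar

print(marginal_cost_effectiveness(importance, tractability, neglectedness))
# -> 0.0001 DALYs per dollar, i.e. roughly $10,000 per DALY averted
```

Note how the last factor builds in diminishing (logarithmic) returns: a 1% resource increase is assumed to buy a fixed share of progress. That is precisely the assumption the post argues may not characterise all problems, and why it favours estimating marginal cost-effectiveness directly when that is workable.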

The Nonlinear Library: EA Forum Top Posts
Update on CEA's EA Grants Program by Nicole_Ross, Centre for Effective Altruism

Dec 12, 2021 · 10:06


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on CEA's EA Grants Program, published by Nicole_Ross, Centre for Effective Altruism on the Effective Altruism Forum. In December, I (Nicole Ross) joined CEA to run the EA Grants program, which gives relatively small grants (usually under $60,000 per grant) to individuals and start-up projects within EA cause areas. Before joining CEA, I worked at the Open Philanthropy Project and GiveWell doing both research and grants operations. When I joined CEA, the EA Grants program had been running since 2017. Upon my initial review, it had a mixed track record. Some grants seemed quite exciting, some seemed promising, others lacked the information I needed to make an impact judgment, and others raised some concerns. Additionally, the program had a history of operational and strategic challenges. I've spent the majority of the last nine months working to improve the overall functioning of the program. I'm now planning the future of EA Grants, and trying to determine whether some version of the program ought to exist moving forward. In this brief update, I'll describe some of the program's past challenges, a few things I've worked on, and some preliminary thoughts about the future of the program. I'll also request feedback on the current EA funding landscape, and what value EA Grants might be able to add if we decide to maintain the program going forward.
Note on early 2019 EA Grants round
Last year, we publicly stated that we "expect the next round after this one to be early next year [2019] but we want to review lessons from this round before committing to a date." When it became clear that we would not hold a round in early 2019, we did not update the previous statement. We regret any confusion we may have caused by failing to provide a clear update on our plans.
Issues with the program
EA Grants began in 2017. From June 2017 to December 2018 (when I joined CEA), grant management was a part-time responsibility of various staff members who also had other roles. As a result, the program did not get as much strategic and evaluative attention as it needed. Additionally, CEA did not appropriately anticipate the operational systems and capacity needed to run a grantmaking operation, and we did not have the full infrastructure and capacity in place to run the program. Because everyone involved recognized the importance of the program, CEA eventually began to take steps to resolve broader issues related to this lack of attention, including establishing the full-time Grants role for which I was hired and hiring an operations contractor to process grants. We believe it was a mistake that we didn't act more quickly to improve the program, and that we weren't more transparent during this process. My first responsibility in my new role was to investigate these issues, with support from staff who had worked on the EA Grants program in the past. I am grateful for the many hours current and former staff have spent helping me get up to speed and build a consolidated picture of the EA Grants program. Below are what I view as the most important historical challenges with the EA Grants program:
1) Lack of consolidated records and communications
We did not maintain well-organized records of individuals applying for grants, grant applications under evaluation, and records of approved or rejected applications.
We sometimes verbally promised grants without full documentation in our system. As a result, it was difficult for us to keep track of outstanding commitments, and of which individuals were waiting to hear back from CEA. This resulted in us spending much longer preparing for our independent audit than would have been ideal.
2) Lack of clarification about the role EA Grants played in the funding ecosystem
While we gave information about the types of projects EA Grants woul...

The Nonlinear Library: EA Forum Top Posts
Information security careers for GCR reduction by ClaireZabel, lukeprog

Dec 12, 2021 · 14:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Information security careers for GCR reduction, published by ClaireZabel, lukeprog on the Effective Altruism Forum. Update 2019-12-14: There is now a Facebook group for discussion of infosec careers in EA (including for GCR reduction); join here. This post was written by Claire Zabel and Luke Muehlhauser, based on their experiences as Open Philanthropy Project staff members working on global catastrophic risk reduction, though this post isn't intended to represent an official position of Open Phil.
Summary
In this post, we summarize why we think information security (preventing unauthorized users, such as hackers, from accessing or altering information) may be an impactful career path for some people who are focused on reducing global catastrophic risks (GCRs). If you'd like to hear about job opportunities in information security and global catastrophic risk, you can fill out this form created by 80,000 Hours, and their staff will get in touch with you if something might be a good fit. In brief, we think:
• Information security (infosec) expertise may be crucial for addressing catastrophic risks related to AI and biosecurity.
• More generally, security expertise may be useful for those attempting to reduce GCRs, because such work sometimes involves engaging with information that could do harm if misused.
• We have thus far found it difficult to hire security professionals who aren't motivated by GCR reduction to work with us and some of our GCR-focused grantees, due to the high demand for security experts and the unconventional nature of our situation and that of some of our grantees.
• More broadly, we expect there to continue to be a deficit of GCR-focused security expertise in AI and biosecurity, and that this deficit will result in several GCR-specific challenges and concerns being under-addressed by default.
• It's more likely than not that within 10 years, there will be dozens of GCR-focused roles in information security, and some organizations are already looking for candidates that fit their needs (and would hire them now, if they found them).
• It's plausible that some people focused on high-impact careers (as many effective altruists are) would be well-suited to helping meet this need by gaining infosec expertise and experience and then moving into work at the relevant organizations.
• If people who try this don't get a direct work job but gain the relevant skills, they could still end up in a highly lucrative career in which their skillset would be in high demand.
We explain below.
Risks from Advanced AI
As AI capabilities improve, leading AI projects will likely be targets of increasingly sophisticated and well-resourced cyberattacks (by states and other actors) which seek to steal AI-related intellectual property.
If these attacks are not mitigated by teams of highly skilled and experienced security professionals, then such attacks seem likely to (1) increase the odds that TAI / AGI is first deployed by malicious or incautious actors (who acquired world-leading AI technology by theft), and also seem likely to (2) exacerbate and destabilize potential AI technology races which could lead to dangerously hasty deployment of TAI / AGI, leaving insufficient time for alignment research, robustness checks, etc.[1] As far as we know, this is a common view among those who have studied questions of TAI / AGI alignment and strategy for several years, though there remains much disagreement about the details, and about the relative magnitudes of different risks. Given this, we think a member of such a security team could do a lot of good, if they are better than their replacement and/or they understand the full nature of the AI safety and security challenge better than their replacement (e.g. because they have spent many years thinking about AI from a GCR-reduction angle). Furthermo...

The Nonlinear Library: EA Forum Top Posts
Ben Garfinkel: How sure are we about this AI stuff?

Dec 11, 2021 · 27:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ben Garfinkel: How sure are we about this AI stuff?, published by Ben Garfinkel, EA Global on the Effective Altruism Forum. It is increasingly clear that artificial intelligence is poised to have a huge impact on the world, potentially of comparable magnitude to the agricultural or industrial revolutions. But what does that actually mean for us today? Should it influence our behavior? In this talk from EA Global 2018: London, Ben Garfinkel makes the case for measured skepticism.
The Talk
Today, work on risks from artificial intelligence constitutes a noteworthy but still fairly small portion of the EA portfolio. Only a small portion of donations made by individuals in the community are targeted at risks from AI. Only about 5% of the grants given out by the Open Philanthropy Project, the leading grant-making organization in the space, target risks from AI. And in surveys of community members, most do not list AI as the area that they think should be most prioritized. At the same time, though, work on AI is prominent in other ways. Leading career advising and community building organizations like 80,000 Hours and CEA often highlight careers in AI governance and safety as especially promising ways to make an impact with your career. Interest in AI is also a clear element of community culture. And lastly, I think there's also a sense of momentum around people's interest in AI. I think especially over the last couple of years, quite a few people have begun to consider career changes into the area, or made quite large changes in their careers. I think this is true more for work around AI than for most other cause areas. So I think all of this together suggests that now is a pretty good time to take stock. It's a good time to look backwards and ask how the community first came to be interested in risks from AI. It's a good time to look forward and ask how large we expect the community's bet on AI to be: how large a portion of the portfolio we expect AI to be five or ten years down the road. It's a good time to ask, are the reasons that we first got interested in AI still valid? And if they're not still valid, are there perhaps other reasons which are either more or less compelling? To give a brief talk roadmap, first I'm going to run through what I see as an intuitively appealing argument for focusing on AI. Then I'm going to say why this argument is a bit less forceful than you might anticipate. Then I'll discuss a few more concrete arguments for focusing on AI and highlight some missing pieces of those arguments. And then I'll close by giving concrete implications for cause prioritization.
The intuitive argument
So first, here's what I see as an intuitive argument for working on AI, and that'd be the sort of "AI is a big deal" argument. There are three concepts underpinning this argument:
1. The future is what matters most, in the sense that, if you could have an impact that carries forward and affects future generations, then this is likely to be more ethically pressing than having an impact that only affects the world today.
2. Technological progress is likely to make the world very different in the future: that just as the world is very different than it was a thousand years ago because of technology, it's likely to be very different again a thousand years from now.
3. If we're looking at technologies that are likely to make especially large changes, then AI stands out as especially promising among them.
So given these three premises, we have the conclusion that working on AI is a really good way to have leverage over the future, and that shaping the development of AI positively is an important thing to pursue. I think that a lot of this argument works. I think there are compelling reasons to try and focus on your impact in the future. I think that it's very likely that the world w...

The Nonlinear Library: EA Forum Top Posts
Getting money out of politics and into charity by UnexpectedValues

Dec 11, 2021 · 4:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting money out of politics and into charity, published by UnexpectedValues on the Effective Altruism Forum. I'm Eric Neyman, a grad student working on mechanism design at Columbia. I am working with Yash Upadhyay, a student at UPenn (and previously Y Combinator Summer '19), to build a platform that would match donations to opposing political campaigns and send the money to charities instead. Here's the basic idea: let's say that in 2024 Kamala Harris (D) will be running against Mike Pence (R) for president. The platform would collect money from donors to both campaigns; let's say for example that Harris donors give us $10 million and Pence donors give us $8 million. We would send matching amounts ($8 million on each side) to charity and donate the remaining amount to the political campaign that raised more ($2 million to Harris). The result is that $16 million more gets sent to charity, while not changing how much money the campaigns have relative to one another. From a donor's perspective, one way to think about this is: if you donate $100 to the platform, then in the worst case, your money will not end up matched and will go to your preferred campaign (as it would have gone if you'd contributed directly). But in the best case, your money will be matched with $100 on the other side, reducing the opposing candidate's cash on hand by $100 and causing an extra $200 to go to charity. As a back-of-the-envelope calculation: $7 billion was spent on the 2016 election cycle, a number that has been rapidly increasing. If just 0.1% of the money spent on the 2016 election had instead gone to effective charitable causes, that would amount to a few thousand lives saved. If you'd like to read more about this idea, see here for a more extensive write-up and here for an analysis of possible incentives issues with the platform, as well as possible fixes. This idea has been tried before: during the 2012 election, Eric Zolt and Jonathan DiBenedetto tried to create a platform like this and called it Repledge; here's a Washington Post profile. Unfortunately, they didn't get past the testing phase. Yash and I talked to the two of them a couple weeks ago to learn what worked and what didn't. They told us that the primary obstacle they ran into wasn't a technical one (web infrastructure etc.) but a legal one: campaign finance law is complicated, plus the political parties won't like you (you're taking their money) and will very likely sue you. Dr. Zolt said that these lawsuits are dangerous despite an FEC ruling saying that Repledge was legal, because there are various ways to interpret the ruling. He gave us a ballpark estimate that creating something like Repledge would cost a quarter of a million dollars. (We are working on getting a more granular estimate for the legal and marketing costs individually, but the largest component would probably be legal.) The purpose of this post is basically to gauge interest and ask for advice. Here are some concrete questions:
• If we successfully built this platform, would you consider using it? If your answer is “it depends”, what does it depend on?
• Do you think building this platform is worth the cost? If so, do you have suggestions for how we might be able to finance this project?
• What grant-awarding organizations might be a good fit for our project?
In particular, would it be reasonable for us to contact the Open Philanthropy Project? One thing I didn't specify in the description above is how exactly the charity donation process will work. Our tentative plan is to offer a list of charities for donors to choose from; whatever fraction of a donor's money gets matched will go to the charity they chose. If you have a suggestion you think is better, we'd love to hear it. But if we end up going with this plan, how should we choose the charities? I thin...
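The matching mechanism described in this episode reduces to a few lines of arithmetic. Here is a minimal sketch in Python (ours, not the authors'), using the hypothetical Harris/Pence figures from the description above:

```python
# A minimal sketch of the donation-matching arithmetic described in the
# post (not the authors' code). Opposing donation pools are matched
# dollar-for-dollar; the matched portion goes to charity, and the
# unmatched remainder goes to the side that raised more.

def match_donations(pool_a: int, pool_b: int) -> tuple[int, int]:
    """Return (total_to_charity, surplus_to_leading_campaign)."""
    matched = min(pool_a, pool_b)   # amount matched on each side
    to_charity = 2 * matched        # both matched portions go to charity
    surplus = abs(pool_a - pool_b)  # unmatched money for the leader
    return to_charity, surplus

# Hypothetical figures from the post: Harris donors give $10m,
# Pence donors give $8m.
to_charity, surplus = match_donations(10_000_000, 8_000_000)
print(to_charity, surplus)  # 16000000 2000000: $16m to charity, $2m to Harris
```

Note that the campaigns' relative cash positions are unchanged: the leading side ends up ahead by exactly the unmatched surplus, just as if every donor had given directly, while the matched money is diverted to charity.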

The Nonlinear Library: EA Forum Top Posts
Economic policy in poor countries by John G. Halstead

Dec 11, 2021 · 3:06


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Economic policy in poor countries, published by John G. Halstead on the Effective Altruism Forum. When funding policy advocacy in the rich world, the Open Philanthropy Project aims to fund only projects that at least meet the '100x bar': the things they fund need to increase incomes for average Americans by $100 for every $1 spent in order to produce as much benefit as giving that $1 to GiveDirectly recipients in Africa. The reason for this is that (1) there is roughly a 100:1 ratio between the consumption of Americans and that of GiveDirectly cash transfer recipients, and (2) the returns of money to welfare are logarithmic. A logarithmic utility function implies that $1 for someone with 100x less consumption is worth 100x as much. Since GiveWell's top charities are 10x better than GiveDirectly, the standard set by GiveWell's top charities is a '1,000x bar'. Since 2015, Open Phil has made roughly 300 grants totalling almost $200 million in their near-termist, human-centric focus areas of criminal justice reform, immigration policy, land use reform, macroeconomic stabilisation policy, and scientific research. In 'GiveWell's Top Charities Are (Increasingly) Hard to Beat', Alex Berger argues that much of Open Phil's US policy work probably passes the 100x bar, but relatively little passes the 1,000x bar. The reason that Open Phil's policy work is able to meet the 100x bar is that it is leveraged. Although trying to change planning law in California has a low chance of success, the economic payoffs are so large that the expected value of these grants is high. So, even though it is a lot harder to increase welfare in the US, the leverage in policy work makes the expected benefits large enough to clear the 100x bar. This raises the question: if all of this is true, wouldn't advocating for improved economic policy in poor countries be much better than GiveWell's top charities? If policy in the US has high expected benefits because it is leveraged, then policy in Kenya must also have high expected benefits because it is leveraged. We should expect many projects improving economic policy in Kenya to produce 100x the welfare benefits of GiveDirectly, and we should expect a handful to produce 1,000x the welfare benefits of GiveDirectly. This is an argument for funding work to improve economic policy in the world's poorest countries. Lant Pritchett has been arguing for this position for at least 7 years without any published response from the EA community. Hauke Hillebrandt and I summarise his arguments here. My former colleagues from Founders Pledge, Stephen Clare and Aidan Goth, discuss the arguments in more depth here. Updated addendum: At present, according to GiveWell, the best way to improve the economic outcomes of very poor people is to deworm them. This is on the basis of one very controversial RCT conducted in 2004. I don't think this is a tenable position. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
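The 100x figure follows directly from the two premises in the description; here is a short derivation (ours, not Halstead's) in LaTeX:

```latex
% With log utility u(c) = \ln c, marginal utility is u'(c) = 1/c.
% If average US consumption is roughly 100 times that of a GiveDirectly
% recipient, c_{\mathrm{US}} \approx 100 \, c_{\mathrm{GD}}, then
\frac{u'(c_{\mathrm{GD}})}{u'(c_{\mathrm{US}})}
  = \frac{c_{\mathrm{US}}}{c_{\mathrm{GD}}}
  \approx 100,
% so a marginal dollar of consumption is worth about 100 times as much
% to the recipient as to the average American: the '100x bar'.
```

Stacking GiveWell's estimated 10x multiplier for its top charities over GiveDirectly on top of this ratio yields the '1,000x bar' mentioned above.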

The Nonlinear Library: EA Forum Top Posts
EA Leaders Forum: Survey on EA priorities (data and analysis) by Aaron Gertler

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 27:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Leaders Forum: Survey on EA priorities (data and analysis), published by Aaron Gertler on the effective altruism forum. Thanks to Alexander Gordon-Brown, Amy Labenz, Ben Todd, Jenna Peters, Joan Gass, Julia Wise, Rob Wiblin, Sky Mayhew, and Will MacAskill for assisting in various parts of this project, from finalizing survey questions to providing feedback on the final post. Clarification on pronouns: “We” refers to the group of people who worked on the survey and helped with the writeup. “I” refers to me; I use it to note some specific decisions I made about presenting the data and my observations from attending the event. This post is the second in a series of posts where we aim to share summaries of the feedback we have received about our own work and about the effective altruism community more generally. The first can be found here. Overview Each year, the EA Leaders Forum, organized by CEA, brings together executives, researchers, and other experienced staffers from a variety of EA-aligned organizations. At the event, they share ideas and discuss the present state (and possible futures) of effective altruism. This year (during a date range centered around ~1 July), invitees were asked to complete a “Priorities for Effective Altruism” survey, compiled by CEA and 80,000 Hours, which covered the following broad topics: The resources and talents most needed by the community How EA's resources should be allocated between different cause areas Bottlenecks on the community's progress and impact Problems the community is facing, and mistakes we could be making now This post is a summary of the survey's findings (N = 33; 56 people received the survey). Here's a list of organizations respondents worked for, with the number of respondents from each organization in parentheses. Respondents included both leadership and other staff (an organization appearing on this list doesn't mean that the org's leader responded). 80,000 Hours (3) Animal Charity Evaluators (1) Center for Applied Rationality (1) Centre for Effective Altruism (3) Centre for the Study of Existential Risk (1) DeepMind (1) Effective Altruism Foundation (2) Effective Giving (1) Future of Humanity Institute (4) Global Priorities Institute (2) Good Food Institute (1) Machine Intelligence Research Institute (1) Open Philanthropy Project (6) Three respondents work at organizations small enough that naming the organizations would be likely to de-anonymize the respondents. Three respondents don't work at an EA-aligned organization, but are large donors and/or advisors to one or more such organizations. What this data does and does not represent This is a snapshot of some views held by a small group of people (albeit people with broad networks and a lot of experience with EA) as of July 2019. We're sharing it as a conversation-starter, and because we felt that some people might be interested in seeing the data. These results shouldn't be taken as an authoritative or consensus view of effective altruism as a whole. They don't represent everyone in EA, or even every leader of an EA organization. If you're interested in seeing data that comes closer to this kind of representativeness, consider the 2018 EA Survey Series, which compiles responses from thousands of people.
Talent Needs What types of talent do you currently think [your organization // EA as a whole] will need more of over the next 5 years? (Pick up to 6) This question was the same as a question asked to Leaders Forum participants in 2018 (see 80,000 Hours' summary of the 2018 Talent Gaps survey for more). Here's a graph showing how the most common responses from 2019 compare to the same categories in the 2018 talent needs survey from 80,000 Hours, for EA as a whole: And for the respondent's organization: The following table contains data on every category ...

The Valmy
Utilitarianism and Its Flavors with Nick Beckstead

The Valmy

Play Episode Listen Later May 17, 2021 90:42


Podcast: Clearer Thinking with Spencer Greenberg. Episode: Utilitarianism and Its Flavors with Nick Beckstead. Release date: 2021-05-16. What is utilitarianism? And what are the different flavors of utilitarianism? What are some alternatives to utilitarianism for people that find it generally plausible but who can't stomach some of its counterintuitive conclusions? For the times when people do use utilitarianism to make moral decisions, when is it appropriate to perform actual calculations (as opposed to making estimations or even just going with one's "gut")? And what is "utility" anyway? Nick Beckstead is a Program Officer for the Open Philanthropy Project, which he joined in 2014. He works on global catastrophic risk reduction. Previously, he led the creation of Open Phil's grantmaking program in scientific research. Prior to that, he was a research fellow at the Future of Humanity Institute at Oxford University. He received a Ph.D. in Philosophy from Rutgers University, where he wrote a dissertation on the importance of shaping the distant future. You can find out more about him on his website.

Clearer Thinking with Spencer Greenberg
Utilitarianism and Its Flavors (with Nick Beckstead)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later May 16, 2021 90:42


What is utilitarianism? And what are the different flavors of utilitarianism? What are some alternatives to utilitarianism for people that find it generally plausible but who can't stomach some of its counterintuitive conclusions? For the times when people do use utilitarianism to make moral decisions, when is it appropriate to perform actual calculations (as opposed to making estimations or even just going with one's "gut")? And what is "utility" anyway? Nick Beckstead is a Program Officer for the Open Philanthropy Project, which he joined in 2014. He works on global catastrophic risk reduction. Previously, he led the creation of Open Phil's grantmaking program in scientific research. Prior to that, he was a research fellow at the Future of Humanity Institute at Oxford University. He received a Ph.D. in Philosophy from Rutgers University, where he wrote a dissertation on the importance of shaping the distant future. You can find out more about him on his website.

Podcast Notes Playlist: Latest Episodes
Dustin Moskovitz – Eliminating Work About Work – [Founder’s Field Guide, EP. 19]

Podcast Notes Playlist: Latest Episodes

Play Episode Listen Later Feb 8, 2021 53:04


Invest Like the Best. Podcast Notes Key Takeaways: Give people clarity about what's most important, the strategy, and the goals they're working towards. Most of "work about work" is really exchanging status information and getting on the same page with your team. The pyramid of clarity, from top to bottom: the mission; strategy; company-wide objectives; business, product, and internal objectives; key results; projects. You get diminishing returns as you go beyond 50 or 60 hours per week; your hours get less productive. The goal is not to maximize your hours but to maximize your output. Meetings are not evil, but they chop up your calendar and can interrupt focus time. Radical inclusiveness is where whoever shows up is totally welcome and embraced and included and encouraged to participate. Read the full notes @ podcastnotes.org. My guest today is Dustin Moskovitz, co-founder and CEO of Asana, a team-centric product management tool used by over 1.3 million users around the world. Dustin started Asana in 2008, 4 years after co-founding Facebook. In this conversation, we dive into Dustin's belief about the diminishing returns of hard work, the shocking amount of productivity lost in doing "work about work", and Dustin's philanthropic investment strategy around leverage and maximizing ROI. I hope you enjoy my wide-ranging conversation with Dustin Moskovitz. For the full show notes, transcript, and links to mentioned content, check out https://www.joincolossus.com/episodes/88012555/moskovitz-eliminating-work-about-work ----- This episode is brought to you by Tegus. Tegus has built the most extensive primary information platform available for investors. With Tegus, you can learn everything you'd want to know about a company in an on-demand digital platform. Investors share their expert calls, allowing others to instantly access more than 10,000 calls on Affirm, Teladoc, Roblox, or almost any company of interest. Visit https://www.tegus.co/patrick to learn more. ----- This episode is brought to you by Vanta. Vanta has built software that makes it easier to both get and maintain your SOC 2 report at a fraction of the normal cost. Founders Field Guide listeners can redeem a $1k off coupon at vanta.com/patrick. ----- Founder's Field Guide is a property of Colossus Inc. For more episodes of Founder's Field Guide, go to https://www.joincolossus.com/episodes. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up at https://www.joincolossus.com/newsletter.
Follow Patrick on Twitter at @patrick_oshag Follow Colossus on Twitter at @JoinColossus   Show Notes [00:03:19] – [First question] – Balancing hard purposeful work and too much work that leads to burn out [00:05:41] – What led to this way of thinking [00:06:54] – Regulating hard work through culture [00:08:25] – False tradeoffs and how Asana represents this [00:09:43] – Origins of Asana [00:13:22] – Organizing the chaos of a project [00:18:09] – Change vs discipline of the mission [00:19:55] – Transferring good ideas from one company to another [00:23:19] – Instilling leverage as a concept in an early company [00:25:21] – New learning curves in building Asana [00:26:52] – Hardest boss battle during his time at Asana [00:28:43] – The role of the work graph [00:31:46] – The proliferation of the work management space and the overall landscape [00:32:56] – The idea of radical inclusiveness [00:36:31] – Best reasons to start a new company [00:37:47] – What will lead to Asana’s continued success [00:38:59] – Lessons building the product [00:41:13] – Work with the Open Philanthropy Project [00:43:44] – Work on pandemics and biosecurity [00:46:11] – Where he sees the future of artificial intelligence [00:50:47] – Kindest thing anyone has done for him

Invest Like the Best with Patrick O'Shaughnessy
Dustin Moskovitz – Eliminating Work About Work – [Founder’s Field Guide, EP. 19]

Invest Like the Best with Patrick O'Shaughnessy

Play Episode Listen Later Feb 4, 2021 53:04


My guest today is Dustin Moskovitz, co-founder and CEO of Asana, a team-centric product management tool used by over 1.3 million users around the world. Dustin started Asana in 2008, 4 years after co-founding Facebook. In this conversation, we dive into Dustin's belief about the diminishing returns of hard work, the shocking amount of productivity lost in doing "work about work", and Dustin's philanthropic investment strategy around leverage and maximizing ROI. I hope you enjoy my wide-ranging conversation with Dustin Moskovitz.    For the full show notes, transcript, and links to mentioned content, check out https://www.joincolossus.com/episodes/88012555/moskovitz-eliminating-work-about-work ----- This episode is brought to you by Tegus. Tegus has built the most extensive primary information platform available for investors. With Tegus, you can learn everything you’d want to know about a company in an on-demand digital platform. Investors share their expert calls, allowing others to instantly access more than 10,000 calls on Affirm, Teladoc, Roblox, or almost any company of interest. Visit https://www.tegus.co/patrick to learn more. ----- This episode is brought to you by Vanta.  Vanta has built software that makes it easier to both get and maintain your SOC 2 report at a fraction of the normal cost. Founders Field Guide listeners can redeem a $1k off coupon at vanta.com/patrick.  ----- Founder's Field Guide is a property of Colossus Inc. For more episodes of Founder's Field Guide, go to https://www.joincolossus.com/episodes.    Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up at https://www.joincolossus.com/newsletter.   Follow Patrick on Twitter at @patrick_oshag Follow Colossus on Twitter at @JoinColossus   Show Notes [00:03:19] – [First question] – Balancing hard purposeful work and too much work that leads to burn out [00:05:41] – What led to this way of thinking [00:06:54] – Regulating hard work through culture [00:08:25] – False tradeoffs and how Asana represents this [00:09:43] – Origins of Asana [00:13:22] – Organizing the chaos of a project [00:18:09] – Change vs discipline of the mission [00:19:55] – Transferring good ideas from one company to another [00:23:19] – Instilling leverage as a concept in an early company [00:25:21] – New learning curves in building Asana [00:26:52] – Hardest boss battle during his time at Asana [00:28:43] – The role of the work graph [00:31:46] – The proliferation of the work management space and the overall landscape [00:32:56] – The idea of radical inclusiveness [00:36:31] – Best reasons to start a new company [00:37:47] – What will lead to Asana’s continued success [00:38:59] – Lessons building the product [00:41:13] – Work with the Open Philanthropy Project [00:43:44] – Work on pandemics and biosecurity [00:46:11] – Where he sees the future of artificial intelligence [00:50:47] – Kindest thing anyone has done for him

Sped up Rationally Speaking
Rationally Speaking #136 - David Roodman on Why Microfinance Won't Cure Global Poverty

Sped up Rationally Speaking

Play Episode Listen Later Jan 3, 2021 42:07


Can we pull the world's poor out of poverty by giving them access to financial services? This episode features a conversation with economist David Roodman, formerly a fellow at the Center for Global Development and senior advisor to the Gates Foundation, currently senior advisor to the Open Philanthropy Project, and the author of Due Diligence: An Impertinent Inquiry into Microfinance. Roodman casts a critical eye on the hype about microfinance as a panacea for global poverty. He and Julia explore why it's hard to design a good study, even a randomized one; three different conceptions of "development"; and why Roodman doesn't think we should give up on microfinance altogether.

Radio Acromática
Cribando la desinformación # 40

Radio Acromática

Play Episode Listen Later Jul 31, 2020 50:37


In this episode, Pedro David Santiago, "El Cribador," talks about: Part I: The New York Times confirms an "error" in code reporting 100% positive cases. Intentional? Taking alert measures to avoid it. Bill Gates "predicted" the debate for the Teatro Polémica. Cancellation of forces = spread of the selective virus. Vaccines with animal RNA to facilitate zoonosis, the transfer of viruses from animals to humans; insensitivity, dehumanization, etc. Genetically modified and patented humans will be created. Accusation of genocide at The Hague against Bolsonaro. The two ways of reducing poverty: solidarity or Malthusianism. Bill Gates admits side effects affecting 80% of people, exceeding permitted limits. The airport privatizations and school closures over the virus make sense given the plans of the highest spheres of power. New (IV) Reich, new Nuremberg. Part II: The PUA case. The inherent immorality and the inherent religious-political corruption in our society that no one wants to see. The idiot: certain characteristics. The Verb, Logos, or the Word is more than literal numbers and letters; it holds the interpretation of the laws of nature. The Book of Life is not the Bible; on the contrary, the Bible, "Biblion," is Babylon. The Book of Life is Nature, and its message is the teaching. Everything is dual at its base. The Christian right. The name of Grace in Hebrew is Ratson = Reason. The energy-information relationship. Evil and ignorance. The Holy Spirit is the sacred knowledge obtained from observing the manifestation of Creation. The falseness of Christianity. Does Christian ethics exist? By what is the Christian justified? Why, then, do they speak of an ethics and morality absent from Justification? The criticisms of Charbonier, Wanda Vázquez, Schatz, etc., confirmed. Catholic Christian hypocrisy against alleged dictators, yet a Franciscan is expelled for presenting different points of view. When Puerto Rico was presented as a showcase against Cuban socialism, it was a mixed system. Socialized: AEE, AAA, Arbona, etc. There are no real values in Christianity. Justification stands against the moral or ethical law. Justification by faith + American exceptionalism = anomie and amorality. The Pauline Christian god is foolish and unjust. The incapacity of the Puerto Rican, fostered by the religious and the politicians themselves. The success of the colonizer. The infanticide of COVID-19 among the children of impoverished nations. The connection: Zengli Shi, Bill Gates, Mitt Romney, Bain Capital, Paul Singer, Open Philanthropy Project. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/radio-acromtica/message Support this podcast: https://anchor.fm/radio-acromtica/support

DealMakers
Rahul Dhanda On Raising $50 Million To Create The Ultimate Coronavirus Test

DealMakers

Play Episode Listen Later Jul 7, 2020 37:15


Rahul Dhanda is the cofounder and CEO of Sherlock Biosciences which is an engineering biology company offering unparalleled breadth and versatility for diagnostic solutions. The company has raised over $50 million from investors such as Baidu Ventures, Northpond Ventures, and Open Philanthropy Project.

ceo coronavirus raising rahul open philanthropy project
BeBrave
Protecting Animal Welfare Even When It's Controversial

BeBrave

Play Episode Listen Later Oct 14, 2019 28:02


In this episode, Allison Pickens (COO, Gainsight) sits down with Lewis Bollard (Animal Welfare Program Manager, Open Philanthropy Project) to discuss why animal welfare should be a central issue for everyone in the fight against global warming and climate change.

EARadio
EAG 2019 SF: Lessons learned in farm animal welfare (Lewis Bollard)

EARadio

Play Episode Listen Later Oct 11, 2019 23:57


The Open Philanthropy Project has recommended over $90 million in grants for farm animal welfare work around the world. What have they learned? In this talk, Lewis Bollard, who heads Open Phil's work on animal welfare, shares lessons that could be useful to anyone working in that area, or on grantmaking and policy work more generally.

The Most Interesting People I Know
11 - Lewis Bollard on Ending Factory Farming

The Most Interesting People I Know

Play Episode Listen Later Aug 11, 2019 83:27


Lewis Bollard leads the Open Philanthropy Project's strategy for farm animal welfare. He directs roughly $30M in grants annually to nonprofits working to reduce the suffering of farmed animals around the world. By virtue of his position, Lewis has deep insight into the state of the farmed animal welfare movement, which we get into in some detail. Unfortunately, there are some audio issues with this episode: MacBook Airs are the bane of my existence. Otherwise, I think this was a great conversation. Lewis is a world-class expert on this topic, and his passion for the cause is clear. We cover: Open Philanthropy's approach to ending factory farming, the scale, tractability, and neglectedness of factory farming, the transition to plant-based meat alternatives, the hierarchy of suffering per calorie, whether you have to be a vegan to be an animal activist, the advocacy campaigns that Open Philanthropy is supporting, America's role in defending factory farming worldwide, whether factory farming is efficient, whether we need to end capitalism to end factory farming, the psychological challenge of seeing the horror of factory farming in everyday life, undercover farm investigations, civil disobedience and violence in fighting for animal rights, the ethics of pursuing corporate campaigns, criticisms of Open Phil's approach to farmed animal welfare, and, of course, how you can get involved. Show notes: Lewis: his Twitter (https://twitter.com/lewis_bollard); his monthly newsletter; his conversation on the 80,000 Hours Podcast; the Effective Altruism Animal Welfare Fund. Other links: an infographic showing the number of animals killed on farms compared to labs and shelters; the amount of animal suffering per calorie for different foods; "Meat and the H Word" (given the amount of suffering involved in the mass killing of animals, how is it not one of the greatest moral atrocities of our time?); the hedonic treadmill; video: "Baby Pig Fresh Pork Sausage Prank"; "At Least 3.4 Million Farm Animals Drowned in the Aftermath of Hurricane Florence"; anarcho-pacifism; Animal Charity Evaluators (ACE). Books: Animal Liberation, Eating Animals, Animal Machines.

80,000 Hours Podcast with Rob Wiblin
#60 - Prof Tetlock on why accurate forecasting matters for everything, and how you can do it better

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jun 28, 2019 131:38


Have you ever been infuriated by a doctor's unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won't tell you the chances you'll win your case? Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can't assess the likelihood of different outcomes we're in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul's Drag Race. Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day. He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better. Along with other psychologists, he identified that many ordinary people are attracted to a 'folk probability' that draws just three distinctions — 'impossible', 'possible' and 'certain' — and which leads to major systemic mistakes. But with the right mindset and training we can become capable of accurately discriminating between differences as fine as 56% as against 57% likely. • Links to learn more, summary and full transcript • The calibration training app • Sign up for the Civ-5 counterfactual forecasting tournament • A review of the evidence on good forecasting practices • Learn more about Effective Altruism Global In the aftermath of Iraq and WMDs the US intelligence community hired him to prevent the same ever happening again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller back in 2014. That was five years ago. In today's interview, Tetlock explains how his research agenda continues to advance, today using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement. We discuss how his work can be applied to your personal life to answer high-stakes questions, like how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by the Open Philanthropy Project and Clearer Thinking that teaches you to distinguish your '70 percents' from your '80 percents'.) We also bring some tough methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs people can take to make improving the reasonableness of decision-making in major institutions that shape the world their profession, as it has been for Tetlock over many decades. We view Tetlock's work as so core to living well that we've brought him back for a second and longer appearance on the show — his first was back in episode 15. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
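For a sense of what that kind of calibration training actually measures, here is a minimal sketch in Python. The bucket width, sample data, and function names are illustrative assumptions; this is not the code behind the Open Philanthropy/Clearer Thinking app mentioned above.

```python
from collections import defaultdict

def brier_score(forecasts):
    """Mean squared error between stated probabilities and binary outcomes.
    Lower is better; always answering 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

def calibration_table(forecasts, width=0.1):
    """Bucket forecasts by stated probability and report the observed
    frequency per bucket. A calibrated forecaster's 70% bucket should
    resolve true roughly 70% of the time."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[round(round(p / width) * width, 2)].append(outcome)
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

# Illustrative forecasting history: (stated probability, outcome) pairs.
history = [(0.7, 1), (0.7, 1), (0.7, 0), (0.8, 1), (0.8, 1), (0.9, 1)]
print(brier_score(history))        # ~0.127
print(calibration_table(history))  # {0.7: ~0.67, 0.8: 1.0, 0.9: 1.0}
```

Separating your '70 percents' from your '80 percents' is exactly the skill such a table makes visible: if both buckets resolve true at the same rate, the distinction carries no information.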

80,000 Hours Podcast with Rob Wiblin
#56 - Persis Eskander on wild animal welfare and what, if anything, to do about it

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Apr 15, 2019 177:57


Elephants in chains at travelling circuses; pregnant pigs trapped in coffin-sized crates at factory farms; deer living in the wild. We should welcome the last as a pleasant break from the horror, right? Maybe, but maybe not. While we tend to have a romanticised view of nature, life in the wild includes a range of extremely negative experiences. Many animals are hunted by predators, and constantly have to remain vigilant about the risk of being killed, and perhaps experiencing the horror of being eaten alive. Resource competition often leads to chronic hunger or starvation. Their diseases and injuries are never treated. In winter animals freeze to death; in droughts they die of heat or thirst. There are fewer than 20 people in the world dedicating their lives to researching these problems. But according to Persis Eskander, researcher at the Open Philanthropy Project, if we sum up the negative experiences of all wild animals, their sheer number could make the scale of the problem larger than most other near-term concerns. Links to learn more, summary and full transcript. Persis urges us to recognise that nature isn’t inherently good or bad, but rather the result of an amoral evolutionary process. For those that can't survive the brutal indifference of their environment, life is often a series of bad experiences, followed by an even worse death. But should we actually intervene? How do we know which animals are sentient? How often do animals feel hunger, cold, fear, happiness, satisfaction, boredom, and intense agony? Are there long-term technologies that could eventually allow us to massively improve wild animal welfare? For most of these big questions, the answer is: we don’t know. And Persis thinks we're far away from knowing enough to start interfering with ecosystems. But that's all the more reason to start looking at these questions. There are some concrete steps we could take today, like improving the way wild-caught fish are slaughtered. Fish might lack the charisma of a lion or the intelligence of a pig, but if they have the capacity to suffer — and evidence suggests that they do — we should be thinking of ways to kill them painlessly rather than allowing them to suffocate to death over hours. In today’s interview we explore wild animal welfare as a new field of research, and discuss: • Do we have a moral duty towards wild animals or not? • How should we measure the number of wild animals? • What are some key activities that generate a lot of suffering or pleasure for wild animals that people might not fully appreciate? • Is there a danger in imagining how we as humans would feel if we were put into their situation? • Should we eliminate parasites and predators? • How important are insects? • How strongly should we focus on just avoiding humans going in and making things worse? • How does this compare to work on farmed animal suffering? • The most compelling arguments for humanity not dedicating resources to wild animal welfare • Is there much of a case for the idea that this work could improve the very long-term future of humanity? Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly discuss: • The importance of figuring out your values • Chemistry, psychology, and other different paths towards working on wild animal welfare • How to break into new fields Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. 
The 80,000 Hours Podcast is produced by Keiran Harris.

EARadio
EAG 2018 SF: Open Philanthropy Project biosecurity update

EARadio

Play Episode Listen Later Feb 26, 2019 25:57


The Open Philanthropy Project pursues charitable interventions across many categories, of which a major one is global catastrophic risk mitigation. Claire Zabel, a program officer for the Open Philanthropy Project, gives an update on their recent work and thinking about biosecurity. This talk was recorded at Effective Altruism Global 2018: San Francisco.

san francisco biosecurity eag open philanthropy project
EARadio
EAG 2018 SF: Biosecurity as an EA cause area

EARadio

Play Episode Listen Later Feb 11, 2019 29:37


In this 2017 talk, the Open Philanthropy Project’s Claire Zabel talks about their work to mitigate Global Catastrophic Biological Risks. She also discusses what effective altruists can do to help, as well as differences between biological risks and risks from advanced AI. To learn more about effective altruism, visit http://www.effectivealtruism.org

ai ea biosecurity eag open philanthropy project
EARadio
EAG 2018 SF: Fireside chat with Holden Karnofsky

EARadio

Play Episode Listen Later Feb 7, 2019 56:36


Since co-founding GiveWell in 2007, Holden Karnofsky has been rigorously evaluating opportunities to do good. In this fireside chat with Will MacAskill, he discusses cause prioritization at the Open Philanthropy Project, his adherence to people-based giving, which work he’s found particularly exciting during the past year, and many other topics of interest.

80,000 Hours Podcast with Rob Wiblin
#10 Classic episode - Dr Nick Beckstead on spending billions of dollars preventing human extinction

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Feb 2, 2019 112:04


Rebroadcast: this episode was originally released in October 2017. What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at the Open Philanthropy Project - people like Dr Nick Beckstead. Following a PhD in philosophy, Nick works to figure out where money can do the most good. He’s been involved in major grants in a wide range of areas, including ending factory farming through technological innovation, safeguarding the world from advances in biotechnology and artificial intelligence, and spreading rational compassion. Links to learn more, episode summary & full transcript These are the world’s highest impact career paths according to our research Why despite global progress, humanity is probably facing its most dangerous time ever This episode is a tour through some of the toughest questions ‘effective altruists’ face when figuring out how to best improve the world, including: * Should we mostly try to help people currently alive, or future generations? Nick studied this question for years in his PhD thesis, On the Overwhelming Importance of Shaping the Far Future. (The first 31 minutes of this episode is a snappier version of my conversation with Toby Ord.) * Is clean meat (aka *in vitro* meat) technologically feasible any time soon, or should we be looking for plant-based alternatives? * What are the greatest risks to human civilisation? * To stop malaria is it more cost-effective to use technology to eliminate mosquitos than to distribute bed nets? * Should people who want to improve the future work for changes that will be very useful in a specific scenario, or just generally try to improve how well humanity makes decisions? * What specific jobs should our listeners take in order for Nick to be able to spend more money in useful ways to improve the world? * Should we expect the future to be better if the economy grows more quickly - or more slowly? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.

80,000 Hours Podcast with Rob Wiblin
#8 Classic episode - Lewis Bollard on how to end factory farming in our lifetimes

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jan 16, 2019 194:53


Rebroadcast: this episode was originally released in September 2017. Every year tens of billions of animals are raised in terrible conditions in factory farms before being killed for human consumption. Over the last two years Lewis Bollard – Project Officer for Farm Animal Welfare at the Open Philanthropy Project – has conducted extensive research into the best ways to eliminate animal suffering in farms as soon as possible. This has resulted in $30 million in grants to farm animal advocacy. Links to learn more, episode summary & full transcript Jobs focussed on ending factory farming Problem profile: factory farming We covered almost every approach being taken, which ones work, and how individuals can best contribute through their careers. We also had time to venture into a wide range of issues that are less often discussed, including: * Why Lewis thinks insect farming would be worse than the status quo, and whether we should look for ‘humane’ insecticides; * How young people can set themselves up to contribute to scientific research into meat alternatives; * How genetic manipulation of chickens has caused them to suffer much more than their ancestors, but could also be used to make them better off; * Why Lewis is skeptical of vegan advocacy; * Why he doubts that much can be done to tackle factory farming through legal advocacy or electoral politics; * Which species of farm animals is best to focus on first; * Whether fish and crustaceans are conscious, and if so what can be done for them; * And many others Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.

EARadio
Update on the Open Philanthropy Project (Holden Karnofsky)

EARadio

Play Episode Listen Later Dec 23, 2018 56:04


Source: GiveWell.

holden karnofsky open philanthropy project
80,000 Hours Podcast with Rob Wiblin
#41 - David Roodman on incarceration, geomagnetic storms, & becoming a world-class researcher

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Aug 28, 2018 138:00


With 698 inmates per 100,000 citizens, the U.S. is by far the leader among large wealthy nations in incarceration. But what effect does imprisonment actually have on crime? According to David Roodman, Senior Advisor to the Open Philanthropy Project, the marginal effect is zero. * 80,000 HOURS IMPACT SURVEY - Let me know how this show has helped you with your career. * ROB'S AUDIOBOOK RECOMMENDATIONS This stunning rebuke to the American criminal justice system comes from the man Holden Karnofsky’s called "the gold standard for in-depth quantitative research", whose other investigations include the risk of geomagnetic storms, whether deworming improves health and test scores, and the development impacts of microfinance. Links to learn more, summary and full transcript. The effects of crime can be split into three categories; before, during, and after. Does having tougher sentences deter people from committing crime? After reviewing studies on gun laws and ‘three strikes’ in California, David concluded that the effect of deterrence is zero. Does imprisoning more people reduce crime by incapacitating potential offenders? Here he says yes, noting that crimes like motor vehicle theft have gone up in a way that seems pretty clearly connected with recent Californian criminal justice reforms (though the effect on violent crime is far lower). Finally, do the after-effects of prison make you more or less likely to commit future crimes? This one is more complicated. Concerned that he was biased towards a comfortable position against incarceration, David did a cost-benefit analysis using both his favored reading of the evidence and the devil's advocate view; that there is deterrence and that the after-effects are beneficial. For the devil’s advocate position David used the highest assessment of the harm caused by crime, which suggests a year of prison prevents about $92,000 in crime. But weighed against a lost year of liberty, valued at $50,000, plus the cost of operating prisons, the numbers came out exactly the same. So even using the least-favorable cost-benefit valuation of the least favorable reading of the evidence -- it just breaks even. The argument for incarceration melts further when you consider the significant crime that occurs within prisons, de-emphasised because of a lack of data and a perceived lack of compassion for inmates. In today’s episode we discuss how to conduct such impactful research, and how to proceed having reached strong conclusions. We also cover: * How do you become a world class researcher? What kinds of character traits are important? * Are academics aware of following perverse incentives? * What’s involved in data replication? How often do papers replicate? * The politics of large orgs vs. small orgs * Geomagnetic storms as a potential cause area * How much does David rely on interviews with experts? * The effects of deworming on child health and test scores * Should we have more ‘data vigilantes’? * What are David’s critiques of effective altruism? * What are the pros and cons of starting your career in the think tank world? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.
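The break-even arithmetic is easy to reconstruct from the figures quoted; note that the annual operating cost per inmate is inferred here from the break-even claim, not stated directly in the episode:

$$\underbrace{\$92{,}000}_{\text{crime averted per prison-year}} \;\approx\; \underbrace{\$50{,}000}_{\text{lost year of liberty}} + \underbrace{C_{\mathrm{op}}}_{\text{annual operating cost}} \quad\Rightarrow\quad C_{\mathrm{op}} \approx \$42{,}000$$

On these devil's-advocate numbers the ledger balances exactly, and counting the crime committed inside prisons, which the episode notes is under-measured, pushes it negative.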

80,000 Hours Podcast with Rob Wiblin
#34 - We use the worst voting system that exists. Here's how Aaron Hamlin is going to fix it.

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jun 1, 2018 138:30


In 1991 Edwin Edwards won the Louisiana gubernatorial election. In 2001, he was found guilty of racketeering and received a 10 year invitation to Federal prison. The strange thing about that election? By 1991 Edwards was already notorious for his corruption. Actually, that’s not it. The truly strange thing is that Edwards was clearly the good guy in the race. How is that possible? His opponent was former Ku Klux Klan Grand Wizard David Duke. How could Louisiana end up having to choose between a criminal and a Nazi sympathiser? It’s not like they lacked other options: the state’s moderate incumbent governor Buddy Roemer ran for re-election. Polling showed that Roemer was massively preferred to both the career criminal and the career bigot, and would easily win a head-to-head election against either. Unfortunately, in Louisiana every candidate from every party competes in the first round, and the top two then go on to a second - a so-called ‘jungle primary’. Vote splitting squeezed out the middle, and meant that Roemer was eliminated in the first round. Louisiana voters were left with only terrible options, in a run-off election mostly remembered for the proliferation of bumper stickers reading “Vote for the Crook. It’s Important.” We could look at this as a cultural problem, exposing widespread enthusiasm for bribery and racism that will take generations to overcome. But according to Aaron Hamlin, Executive Director of The Center for Election Science (CES), there’s a simple way to make sure we never have to elect someone hated by more than half the electorate: change how we vote. He advocates an alternative voting method called approval voting, in which you can vote for as many candidates as you want, not just one. That means that you can always support your honest favorite candidate, even when an election seems like a choice between the lesser of two evils. Full transcript, links to learn more, and summary of key points. If you'd like to meet Aaron he's doing events for CES in San Francisco, DC, Philadelphia, New York and Brooklyn over the next two weeks - RSVP here. While it might not seem sexy, this single change could transform politics. Approval voting is adored by voting researchers, who regard it as the best simple voting system available. Which do they regard as unquestionably the worst? First-past-the-post - precisely the disastrous system used and exported around the world by the US and UK. Aaron has a practical plan to spread approval voting across the US using ballot initiatives - and it just might be our best shot at making politics a bit less unreasonable. The Center for Election Science is a U.S. non-profit which aims to fix broken government by helping the world adopt smarter election systems. They recently received a $600,000 grant from the Open Philanthropy Project to scale up their efforts. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
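For concreteness, tallying an approval-voting election takes only a few lines. The ballot counts below are invented to mirror the Louisiana dynamic described above, not actual 1991 polling:

```python
from collections import Counter

def approval_winner(ballots):
    """Each ballot is the set of candidates the voter approves of;
    the candidate approved on the most ballots wins."""
    tallies = Counter(c for ballot in ballots for c in ballot)
    return tallies.most_common(1)[0][0], tallies

# Hypothetical electorate: most voters approve the moderate
# alongside (or instead of) their first choice.
ballots = (
    [{"Edwards", "Roemer"}] * 34
    + [{"Duke", "Roemer"}] * 32
    + [{"Roemer"}] * 10
    + [{"Edwards"}] * 14
    + [{"Duke"}] * 10
)
winner, tallies = approval_winner(ballots)
print(winner, dict(tallies))  # Roemer wins: 76 approvals vs 48 (Edwards) and 42 (Duke)
```

Under first-past-the-post the same electorate splits 48/42/10 on first choices and eliminates Roemer; with approval ballots the consensus candidate wins without anyone having to abandon their favorite.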

EARadio
EAG 2017 London: The Open Philanthropy Project’s work on AI risk (Helen Toner)

EARadio

Play Episode Listen Later Apr 23, 2018 23:22


Updates on the Open Philanthropy Project’s work to build the field of technical AI safety research and to support initial work on AI strategy and policy. This talk will also include some comments on how interested attendees can get involved in these issues. Source: Effective Altruism Global (video).

risk toner open philanthropy project
80,000 Hours Podcast with Rob Wiblin
#27 - Dr Tom Inglesby on careers and policies that reduce global catastrophic biological risks

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Apr 17, 2018 136:40


How about this for a movie idea: a main character has to prevent a new contagious strain of Ebola spreading around the world. She’s the best of the best. So good in fact, that her work on early detection systems contains the strain at its source. Ten minutes into the movie, we see the results of her work – nothing happens. Life goes on as usual. She continues to be amazingly competent, and nothing continues to go wrong. Fade to black. Roll credits. If your job is to prevent catastrophes, success is when nobody has to pay attention to you. But without regular disasters to remind authorities why they hired you in the first place, they can’t tell if you’re actually achieving anything. And when budgets come under pressure you may find that success condemns you to the chopping block. Dr Tom Inglesby, Director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health, worries this may be about to happen to the scientists working on the ‘Global Health Security Agenda’. In 2014 Ebola showed the world why we have to detect and contain new diseases before they spread, and that when it comes to contagious diseases the nations of the world sink or swim together. Fifty countries decided to work together to make sure all their health systems were up to the challenge. Back then Congress provided 5 years’ funding to help some of the world’s poorest countries build the basic health security infrastructure necessary to control pathogens before they could reach the US. Links to learn more, job opportunities, and full transcript. But with Ebola fading from public memory and no recent tragedies to terrify us, Congress may not renew that funding and the project could fall apart. (Learn more about how you can help: http://www.nti.org/analysis/articles/protect-us-investments-global-health-security/) But there are positive signs as well - the center Inglesby leads recently received a $16 million grant from the Open Philanthropy Project to further their work preventing global catastrophes. It also runs the Emerging Leaders in Biosecurity Fellowship (http://www.centerforhealthsecurity.org/our-work/emergingbioleaders/) to train the next generation of biosecurity experts for the US government. And Inglesby regularly testifies to Congress on the threats we all face and how to address them. In this in-depth interview we try to provide concrete guidance for listeners who want to pursue a career in health security. Some of the topics we cover include: * Should more people in medicine work on security? * What are the top jobs for people who want to improve health security and how do they work towards getting them? * What people can do to protect funding for the Global Health Security Agenda. * Should we be more concerned about natural or human caused pandemics? Which is more neglected? * Should we be allocating more attention and resources to global catastrophic risk scenarios? * Why are senior figures reluctant to prioritize one project or area at the expense of another? * What does Tom think about the idea that in the medium term, human-caused pandemics will pose a far greater risk than natural pandemics, and so we should focus on specific counter-measures? * Are the main risks and solutions understood, and it’s just a matter of implementation? Or is the principal task to identify and understand them? * How is the current US government performing in these areas? * Which agencies are empowered to think about low probability high magnitude events? And more... 
Get this episode by subscribing: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.

80,000 Hours Podcast with Rob Wiblin
#21 - Holden Karnofsky on times philanthropy transformed the world & Open Phil’s plan to do the same

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Feb 27, 2018 155:35


The Green Revolution averted mass famine during the 20th century. The contraceptive pill gave women unprecedented freedom in planning their own lives. Both are widely recognised as scientific breakthroughs that transformed the world. But few know that those breakthroughs only happened when they did because of a philanthropist willing to take a risky bet on a new idea. Today’s guest, Holden Karnofsky, has been looking for philanthropy’s biggest success stories because he’s Executive Director of the Open Philanthropy Project, which gives away over $100 million per year - and he’s hungry for big wins. Full transcript, related links, job opportunities and summary of the interview. In the 1940s, poverty reduction overseas was not a big priority for many. But the Rockefeller Foundation decided to fund agricultural scientists to breed much better crops for the developing world - thereby massively increasing their food production. In the 1950s, society was a long way from demanding effective birth control. Activist Margaret Sanger had the idea for the pill, and endocrinologist Gregory Pincus the research team – but they couldn’t proceed without a $40,000 research check from biologist and women’s rights activist Katherine McCormick. In both cases, it was philanthropists rather than governments that led the way. The reason, according to Holden, is that while governments have enormous resources, they’re constrained by only being able to fund reasonably sure bets. Philanthropists can transform the world by filling the gaps government leaves - but to seize that opportunity they have to hire outstanding researchers, think long-term and be willing to fail most of the time. Holden knows more about this type of giving than almost anyone. As founder of GiveWell and then the Open Philanthropy Project, he has been working feverishly since 2007 to find outstanding giving opportunities. This practical experience has made him one of the most influential figures in the development of the school of thought that has come to be known as effective altruism. We’ve recorded this episode now because the Open Philanthropy Project is hiring (https://www.openphilanthropy.org/get-involved/jobs) for a large number of positions, which we think would allow the right person to have a very large positive influence on the world. They’re looking for a large number of entry-level researchers to train up, 3 specialist researchers into potential risks from advanced artificial intelligence, as well as a Director of Operations, Operations Associate and General Counsel. But the conversation goes well beyond specifics about these jobs. We also discuss: * How did they pick the problems they focus on, and how will they change over time? * What would Holden do differently if he were starting Open Phil again today? * What can we learn from the history of philanthropy? * What makes a good Program Officer. * The importance of not letting hype get ahead of the science in an emerging field. * The importance of honest feedback for philanthropists, and the difficulty getting it. * How do they decide what’s above the bar to fund, and when it’s better to hold onto the money? * How philanthropic funding can most influence politics. * What Holden would say to a new billionaire who wanted to give away most of their wealth. * Why Open Phil is building a research field around the safe development of artificial intelligence * Why they invested in OpenAI. * Academia’s faulty approach to answering practical questions. 
* What potential utopias do people most want, according to opinion polls? Keiran Harris helped produce today’s episode.

Morality is Hard
Episode 6 - Elie Hassenfeld - GiveWell and how to get the most out of a donation

Morality is Hard

Play Episode Listen Later Feb 6, 2018 34:36


Elie Hassenfeld and I spoke about the charity he co-founded with Holden Karnofsky, GiveWell, and how it analyses charities to determine how effective they are at alleviating suffering. We also spoke about Open Philanthropy Project, a sister organisation of GiveWell, which started with the question of "How can we accomplish as much good as possible with our giving?" https://www.givewell.org/ https://www.openphilanthropy.org/

donations givewell holden karnofsky open philanthropy project elie hassenfeld
CNAS Podcasts
Artificial Intelligence’s Transformative Effects on Society

CNAS Podcasts

Play Episode Listen Later Feb 2, 2018 18:14


Join Paul Scharre, Senior Fellow and Director of the Technology and National Security program at CNAS in a discussion with Helen Toner, Senior Research Analyst, Open Philanthropy Project and Jack Clark, Strategy and Communications Director, OpenAI about the most significant benefits and risks that artificial intelligence technology poses to national security.

EARadio
EAG 2017 SF: The Open Philanthropy Project’s work on potential risks from advanced AI (Daniel Dewey)

EARadio

Play Episode Listen Later Nov 3, 2017 33:19


An update on Open Philanthropy Project’s views on AI risk (misalignment risks and strategic risks), general plan (AI alignment and strategy field-building), and the work we’ve done so far. Source: Effective Altruism Global (video).

ai risks dewey open philanthropy project
EARadio
EAG 2017 SF: Fireside chat (Holden Karnofsky with William MacAskill)

EARadio

Play Episode Listen Later Nov 3, 2017 62:33


Fireside chat with Holden Karnofsky, the Executive Director of the Open Philanthropy Project. Source: Effective Altruism Global (video).

EARadio
EAG 2017 SF: EA community building (Nick Beckstead)

EARadio

Play Episode Listen Later Nov 3, 2017 37:10


EA Community Building with Nick Beckstead, Program Officer at Open Philanthropy Project. Source: Effective Altruism Global (video).

Unfunded List Wine Grants
Alexander Berger - Pinot Noir (December 2015)

Unfunded List Wine Grants

Play Episode Listen Later Oct 17, 2017 41:13


Alexander Berger has the shortest bio Dave has ever read in the booth. But the brevity of his bio shouldn’t mislead you – he’s accomplished a lot. An effective altruist, he runs U.S. programs at GiveWell and is a leader within the Open Philanthropy Project. A philosophy major at Berkeley, he is now a philosopher-philanthropist spending his time thinking about how folks can have the most impact with their dollars. Like the Unfunded List, Alex’s organization has their own list of organizations that make the best use of philanthropic dollars. Alexander sat down in the booth and spoke at length with Dave about philanthropy, the arts, the recent rise of criticism in philanthropy and they finished an entire bottle of Pinot.

80,000 Hours Podcast with Rob Wiblin
#10 - Dr Nick Beckstead on how to spend billions of dollars preventing human extinction

80,000 Hours Podcast with Rob Wiblin

Oct 11, 2017 · 111:47


What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at the Open Philanthropy Project - people like Dr Nick Beckstead. Following a PhD in philosophy, Nick works to figure out where money can do the most good. He’s been involved in major grants in a wide range of areas, including ending factory farming through technological innovation, safeguarding the world from advances in biotechnology and artificial intelligence, and spreading rational compassion.

Full transcript, coaching application form, overview of the conversation, and links to resources discussed in the episode.

This episode is a tour through some of the toughest questions ‘effective altruists’ face when figuring out how to best improve the world, including:

* Should we mostly try to help people currently alive, or future generations? Nick studied this question for years in his PhD thesis, On the Overwhelming Importance of Shaping the Far Future. (The first 31 minutes is a snappier version of my conversation with Toby Ord.)
* Is clean meat (aka *in vitro* meat) technologically feasible any time soon, or should we be looking for plant-based alternatives?
* What are the greatest risks to human civilisation?
* To stop malaria, is it more cost-effective to use technology to eliminate mosquitos than to distribute bed nets?
* Should people who want to improve the future work for changes that will be very useful in a specific scenario, or just generally try to improve how well humanity makes decisions?
* What specific jobs should our listeners take in order for Nick to be able to spend more money in useful ways to improve the world?
* Should we expect the future to be better if the economy grows more quickly - or more slowly?

Get free, one-on-one career advice
We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.

80,000 Hours Podcast with Rob Wiblin
#8 - Lewis Bollard on how to end factory farming in our lifetimes

80,000 Hours Podcast with Rob Wiblin

Sep 27, 2017 · 196:54


Every year tens of billions of animals are raised in terrible conditions in factory farms before being killed for human consumption. Over the last two years Lewis Bollard – Program Officer for Farm Animal Welfare at the Open Philanthropy Project – has conducted extensive research into the best ways to eliminate animal suffering in farms as soon as possible. This has resulted in $30 million in grants to farm animal advocacy.

Full transcript, coaching application form, overview of the conversation, and extra resources to learn more.

We covered almost every approach being taken, which ones work, and how individuals can best contribute through their careers. We also had time to venture into a wide range of issues that are less often discussed, including:

* Why Lewis thinks insect farming would be worse than the status quo, and whether we should look for ‘humane’ insecticides;
* How young people can set themselves up to contribute to scientific research into meat alternatives;
* How genetic manipulation of chickens has caused them to suffer much more than their ancestors, but could also be used to make them better off;
* Why Lewis is skeptical of vegan advocacy;
* Why he doubts that much can be done to tackle factory farming through legal advocacy or electoral politics;
* Which species of farm animal is best to focus on first;
* Whether fish and crustaceans are conscious, and if so what can be done for them;
* Many other issues listed below in the overview of the discussion.

Get free, one-on-one career advice
We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.

Overview of the discussion
**2m40s** What originally drew you to dedicate your career to helping animals, and why did Open Philanthropy end up focusing on it?
**5m40s** Do you have any concrete way of assessing the severity of animal suffering?
**7m10s** Do you think the environmental gains are large compared to those that we might hope to get from animal welfare improvement?
**7m55s** What grants have you made at Open Phil? How did you go about deciding which groups to fund and which ones not to fund?
**9m50s** Why does Open Phil focus on chickens and fish? Is this the right call?
More...

80,000 Hours Podcast with Rob Wiblin
#7 - Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn’t all bad

80,000 Hours Podcast with Rob Wiblin

Sep 13, 2017 · 74:16


The scientific revolution in the 16th century was one of the biggest societal shifts in human history, driven by the discovery of new and better methods of figuring out who was right and who was wrong. Julia Galef - a well-known writer and researcher focused on improving human judgment, especially about high-stakes questions - believes that if we could again develop new techniques to predict the future, resolve disagreements and make sound decisions together, it could dramatically improve the world across the board. We brought her in to talk about her ideas.

This interview complements a new detailed review of whether and how to follow Julia’s career path. Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more.

Julia has been host of the Rationally Speaking podcast since 2010, co-founded the Center for Applied Rationality in 2012, and is currently working for the Open Philanthropy Project on an investigation of expert disagreements. In our conversation we ended up speaking about a wide range of topics, including:

* Her research on how people can have productive intellectual disagreements.
* Why she once planned to become an urban designer.
* Why she doubts people are more rational than 200 years ago.
* What makes her a fan of Twitter (while I think it’s dystopian).
* Whether people should write more books.
* Whether it’s a good idea to run a podcast, and how she grew her audience.
* Why saying you don’t believe X often won’t convince people you don’t.
* Why she started a PhD in economics but then stopped.
* Whether she would recommend an unconventional career like her own.
* Whether the incentives in the intelligence community actually support sound thinking.
* Whether big institutions will actually pick up new tools for improving decision-making if they are developed.
* How to start out pursuing a career in which you enhance human judgement and foresight.

Get free, one-on-one career advice to help you improve judgement and decision-making
We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. **If you want to work on any of the problems discussed in this episode, find out if our coaching can help you:** APPLY FOR COACHING

Overview of the conversation
**1m30s** So what projects are you working on at the moment?
**3m50s** How are you working on the problem of expert disagreement?
**6m0s** Is this the same method as the double crux process that was developed at the Center for Applied Rationality?
**10m** Why did the Open Philanthropy Project decide this was a very valuable project to fund?
**13m** Is the double crux process actually that effective?
**14m50s** Is Facebook dangerous?
**17m** What makes for a good life? Can you be mistaken about having a good life?
**19m** Should more people write books?
Read more...

80,000 Hours Podcast with Rob Wiblin
#4 - Howie Lempel on pandemics that kill hundreds of millions and how to stop them

80,000 Hours Podcast with Rob Wiblin

Aug 23, 2017 · 155:23


What disaster is most likely to kill more than 10 million human beings in the next 20 years? Terrorism? Famine? An asteroid? Actually it’s probably a pandemic: a deadly new disease that spreads out of control. We’ve recently seen the risks with Ebola and swine flu, but they pale in comparison to the Spanish flu, which killed 3% of the world’s population from 1918 to 1920. A pandemic of that scale today would kill 200 million.

In this in-depth interview I speak to Howie Lempel, who spent years studying pandemic preparedness for the Open Philanthropy Project. We spend the first 20 minutes covering his work at the foundation, then discuss how bad the pandemic problem is, why it’s probably getting worse, and what can be done about it.

Full transcript, apply for personalised coaching to help you work on pandemic preparedness, see what questions are asked when, and read extra resources to learn more.

In the second half we go through where you personally could study and work to tackle one of the worst threats facing humanity.

Want to help ensure we have no severe pandemics in the 21st century? We want to help. We’ve helped dozens of people formulate their plans, and put them in touch with academic mentors. If you want to work on pandemic preparedness, apply for our free coaching service. APPLY FOR COACHING

2m - What does the Open Philanthropy Project do? What’s it like to work there?
16m27s - What grants did Open Phil make in pandemic preparedness? Did they work out?
22m56s - Why is pandemic preparedness such an important thing to work on?
31m23s - How many people could die in a global pandemic? Is Contagion a realistic movie?
37m05s - Why the risk is getting worse due to scientific discoveries
40m10s - How would dangerous pathogens get released?
45m27s - Would society collapse if a billion people die in a pandemic?
49m25s - The plague, Spanish flu, smallpox, and other historical pandemics
58m30s - How are risks affected by sloppy research security or the existence of factory farming?
1h7m30s - What’s already being done? Why institutions for dealing with pandemics are really insufficient
1h14m30s - What the World Health Organisation should do but can’t
1h21m51s - What charities do about pandemics and why they aren’t able to fix things
1h25m50s - How long would it take to make vaccines?
1h30m40s - What does the US government do to protect Americans? It’s a mess.
1h37m20s - What kind of people work on this problem, and what are they doing?
1h46m30s - Are there things that we ought to be banning, or technologies that we should be trying not to develop because we’re just better off not having them?
1h49m35s - What kind of reforms are needed at the international level?
1h54m40s - Where should people who want to tackle this problem go to work?
1h59m50s - Are there any technologies we need to urgently develop?
2h04m20s - What about trying to stop humans from having contact with wild animals?
2h08m5s - What should people study if they’re young and choosing their major? What should they do a PhD in? Where should they study, and with whom?
More...

Legendary Life | Transform Your Body, Upgrade Your Health & Live Your Best Life
237: Dr. Stephan Guyenet: The New Science Of Fat Loss

Legendary Life | Transform Your Body, Upgrade Your Health & Live Your Best Life

Feb 13, 2017 · 70:41


If you struggle with losing weight or find it difficult to maintain your diet, today’s guest Dr. Stephan Guyenet is here to explain why overeating is actually a natural behavior based on how your brain is wired. Stephan is a neurobiologist, obesity researcher, health writer and author of ‘The Hungry Brain: Outsmarting the Instincts That Make Us Overeat’. He explains why so many people struggle with weight issues. His perspective will enlighten you and allow you to understand yourself better, so you can start implementing lifestyle changes and make better food choices.

Brief Bio: Stephan J. Guyenet, Ph.D.
After earning a BS in biochemistry at the University of Virginia, Stephan pursued a Ph.D. in neuroscience at the University of Washington, then continued doing research as a postdoctoral fellow. He spent a total of 12 years in the neuroscience research world studying neurodegenerative disease and the neuroscience of eating behavior and obesity. His publications in scientific journals have been cited over 1,400 times by his peers. Today, he continues his mission to advance science as a writer, speaker, and science consultant. His book, The Hungry Brain, was released on February 7, 2017. Current consulting clients include the Open Philanthropy Project and the Examine.com Research Digest. He is also the co-designer of a web-based fat loss program called the Ideal Weight Program. Stephan lives in the Seattle area, where he grows much of his own food and brews a mean hard cider.

In this episode, you’ll learn:
* What’s really making us fat? (8:17)
* The brain science behind hunger and satiation (11:07)
* Is modern life hurting your health? (13:25)
* Why do some people gain weight more easily than others? (28:20)
* Fast vs slow weight loss – which is better? (42:21)
* 3 ways to suppress appetite (49:41)
* 7 practical steps to lose weight (54:44)
* How to prevent holiday weight gain (58:47)

Ted Takeaways:
There is nothing wrong with you. You are completely normal. You are wired to eat, and you have a hungry brain that’s getting you to make unconscious decisions about your choices with food. Our responsibility is to manage those choices and our food environment.
1. Manage your food environment. Check out our episode: to learn simple ways to control your food environment and slim down for good.
2. Make sleep a priority. Check out our episode: to learn more about sleep and how you can benefit from this powerful and efficient tool for maintaining your health and hunger hormones.
3. Manage your stress. In our episode: learn simple tips to get stress in check and regain control of your life.
4. Move your body. Add daily movement to your life with small changes in your routine. Check out our episode: about how to fit health and fitness into your busy life.

Resources: Book:
Connect with Stephan:

Thanks for Listening!
Thanks so much for joining us again this week. Have some feedback you’d like to share? Leave a note in the comment section below! If you enjoyed this episode, please share it using the social media buttons you see at the top of the post. If you have any questions (or would like to hear answers to previously submitted voicemail questions!), head on over to . Are you tired of following a fitness routine, eating healthier foods, and not seeing the weight come off the way you hope? Take my  now and find out how to fix that today. Until next time! Ted

Science Soapbox
Gary McDowell: on reforming the STEM training pipeline

Science Soapbox

Nov 10, 2016 · 17:39


Trained as a biologist, Dr. Gary McDowell is the Executive Director of The Future of Research and runs the day-to-day operations of the organization, funded by a grant from the Open Philanthropy Project. In this episode, we chat about gaps in scientific training and how we can reform the system to better serve science and its practitioners. For show notes, visit sciencesoapbox.org/podcast and subscribe on iTunes or Stitcher. And while you're there, leave us a rating or review! Twitter: twitter.com/science_soapbox Facebook: facebook.com/sciencesoapbox

EARadio
EA Global: The Open Philanthropy Project (Holden Karnofsky)

EARadio

Oct 24, 2015 · 24:35


Source: Effective Altruism Global (original video).

Rationally Speaking
Rationally Speaking #136 - David Roodman on Why Microfinance Won't Cure Global Poverty

Rationally Speaking

Jun 15, 2015 · 42:52


Can we pull the world's poor out of poverty by giving them access to financial services? This episode features a conversation with economist David Roodman, formerly a fellow at the Center for Global Development and senior advisor to the Gates Foundation, currently senior advisor to the Open Philanthropy Project, and the author of Due Diligence: An Impertinent Inquiry into Microfinance. Roodman casts a critical eye on the hype about microfinance as a panacea for global poverty. He and Julia explore why it's hard to design a good study, even a randomized one; three different conceptions of "development"; and why Roodman doesn't think we should give up on microfinance altogether.