In this week's episode, we welcome Dr. Shamil Chandaria, a philanthropist, entrepreneur, technologist, and academic with interdisciplinary research interests spanning computational neuroscience, artificial intelligence, and the philosophy and science of human wellbeing. His PhD from the London School of Economics focused on the mathematical modeling of economic systems, and he later earned an MA in philosophy from University College London. Here, he developed an interest in the philosophy of science and explored philosophical issues in biology, neuroscience, and ethics. In 2018, Shamil contributed to founding the Global Priorities Institute at Oxford University—an interdisciplinary research center addressing humanity's most pressing issues. The following year, he co-founded the Centre for Psychedelic Research in the Department of Brain Sciences at Imperial College London, which investigates psychedelic therapies for conditions like treatment-resistant depression. He has also supported research on the neuroscience of meditation at Harvard and the University of California, Berkeley. Shamil is not only a brilliant mind but also a person with a great, open heart, whose smile and enthusiasm are infectious. In our conversation, we discuss the intersections between spirituality, meditative practices, and neuroscience—how the latest brain science connects spiritual experiences with emerging understanding in the field. Shamil introduces the 'free energy' approach—a paradigm developed over the last 15 years in neuroscience, also known as predictive processing or active inference. We look into questions such as: Why does conscious experience exist at all? What is the brain's goal in experiencing? How do neurotransmitters function? What role does the self play? Why do we get to act as we do? How does this model interact with the tantric view of reality? And many more topics. Read Ruben Laukkonen & Shamil Chandaria's latest paper: A beautiful loop: An active inference theory of consciousness. Discover a treasure trove of guided meditations, teachings, and courses at tantrailluminated.org. Hosted on Acast. See acast.com/privacy for more information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary: Against the singularity hypothesis, published by Global Priorities Institute on May 22, 2024 on The Effective Altruism Forum. This is a summary of the GPI Working Paper "Against the singularity hypothesis" by David Thorstad (published in Philosophical Studies). The summary was written by Riley Harris. The singularity is a hypothetical future event in which machines rapidly become significantly smarter than humans. The idea is that we might invent an artificial intelligence (AI) system that can improve itself. After a single round of self-improvement, that system would be better equipped to improve itself than before. This process might repeat many times, and each time the AI system would become more capable and better equipped to improve itself even further. At the end of this (perhaps very rapid) process, the AI system could be much smarter than the average human. Philosophers and computer scientists have thought we should take the possibility of a singularity seriously (Solomonoff 1985, Good 1966, Chalmers 2010, Bostrom 2014, Russell 2019). It is characteristic of the singularity hypothesis that AI will take at most years or months to become many times more intelligent than even the most intelligent human.[1] Such extraordinary claims require extraordinary evidence. In the paper "Against the singularity hypothesis", David Thorstad claims that we do not have enough evidence to justify belief in the singularity hypothesis, and that we should consider it unlikely unless stronger evidence emerges. Reasons to think the singularity is unlikely Thorstad is sceptical that machine intelligence can grow quickly enough to justify the singularity hypothesis. He gives several reasons for this. Low-hanging fruit. Innovative ideas and technological improvements tend to become more difficult over time. For example, consider "Moore's law", which is (roughly) the observation that hardware capacities double every two years. Between 1971 and 2014, Moore's law was maintained only with an astronomical increase in the amount of capital and labour invested in semiconductor research (Bloom et al. 2020). In fact, according to one leading estimate, there was an eighteen-fold drop in productivity over this period. While some features of future AI systems will allow them to increase the rate of progress compared to human scientists and engineers, they are still likely to experience diminishing returns as the easiest discoveries have already been made and only more difficult ideas are left. Bottlenecks. AI progress relies on improvements in search, computation, storage and so on (each of these areas breaks down into many subcomponents). Progress could be slowed down by any of these subcomponents: if any of these are difficult to speed up, then AI progress will be much slower than we would naively expect. The classic metaphor here concerns the speed at which a liquid can exit a bottle, which is rate-limited by the narrow space near the opening. AI systems may run into bottlenecks if any essential components cannot be improved quickly (see Aghion et al., 2019). Constraints. Resource and physical constraints may also limit the rate of progress. To take an analogy, Moore's law gets more difficult to maintain because it is expensive, physically difficult and energy-intensive to cram ever more transistors in the same space.
Here we might expect progress to eventually slow as physical and financial constraints pose ever greater barriers to maintaining it. Sublinear growth. How do improvements in hardware translate to intelligence growth? Thompson and colleagues (2022) find that exponential hardware improvements translate into only linear gains in performance on problems such as Chess, Go, protein folding, weather prediction and the modelling of underground oil reservoirs. Over the past 50 years,...
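The sublinear-growth point can be made concrete with a small numerical sketch. The example below is not from Thorstad's paper or from Thompson and colleagues; it simply assumes, for illustration, that performance on a task scales with the logarithm of available compute, which is one way to produce the pattern described above (exponential hardware growth yielding only linear performance gains).

```python
import math

# Toy assumption (mine, for illustration): performance scales with log(compute).
def performance(compute):
    return math.log2(compute)

# Exponential hardware growth: compute doubles each generation.
for generation in range(6):
    compute = 2 ** generation                # 1x, 2x, 4x, ... baseline compute
    print(generation, compute, performance(compute))

# Each doubling of compute adds a constant +1 to performance: exponentially
# growing inputs buy only linearly growing capability.
```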
Podcast: The Gradient: Perspectives on AI Episode: David Thorstad: Bounded Rationality and the Case Against Longtermism. Release date: 2024-05-02. Episode 122. I spoke with Professor David Thorstad about:* The practical difficulties of doing interdisciplinary work* Why theories of human rationality should account for boundedness, heuristics, and other cognitive limitations* why EA epistemics suck (ok, it's a little more nuanced than that) Professor Thorstad is an Assistant Professor of Philosophy at Vanderbilt University, a Senior Research Affiliate at the Global Priorities Institute at Oxford, and a Research Affiliate at the MINT Lab at Australian National University. One strand of his research asks how cognitively limited agents should decide what to do and believe. A second strand asks how altruists should use limited funds to do good effectively. Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter. Outline:* (00:00) Intro* (01:15) David's interest in rationality* (02:45) David's crisis of confidence, models abstracted from psychology* (05:00) Blending formal models with studies of the mind* (06:25) Interaction between academic communities* (08:24) Recognition of and incentives for interdisciplinary work* (09:40) Movement towards interdisciplinary work* (12:10) The Standard Picture of rationality* (14:11) Why the Standard Picture was attractive* (16:30) Violations of and rebellion against the Standard Picture* (19:32) Mistakes made by critics of the Standard Picture* (22:35) Other competing programs vs Standard Picture* (26:27) Characterizing Bounded Rationality* (27:00) A worry: faculties criticizing themselves* (29:28) Self-improving critique and longtermism* (30:25) Central claims in bounded rationality and controversies* (32:33) Heuristics and formal theorizing* (35:02) Violations of Standard Picture, vindicatory epistemology* (37:03) The Reason Responsive Consequentialist View (RRCV)* (38:30) Objective and subjective pictures* (41:35) Reason responsiveness* (43:37) There are no epistemic norms for inquiry* (44:00) Norms vs reasons* (45:15) Arguments against epistemic nihilism for belief* (47:30) Norms and self-delusion* (49:55) Difficulty of holding beliefs for pragmatic reasons* (50:50) The Gibbardian picture, inquiry as an action* (52:15) Thinking how to act and thinking how to live — the power of inquiry* (53:55) Overthinking and conducting inquiry* (56:30) Is thinking how to inquire as an all-things-considered matter?* (58:00) Arguments for the RRCV* (1:00:40) Deciding on minimal criteria for the view, stereotyping* (1:02:15) Eliminating stereotypes from the theory* (1:04:20) Theory construction in epistemology and moral intuition* (1:08:20) Refusing theories for moral reasons and disciplinary boundaries* (1:10:30) The argument from minimal criteria, evaluating against competing views* (1:13:45) Comparing to other theories* (1:15:00) The explanatory argument* (1:17:53) Parfit and Railton, norms of friendship vs utility* (1:20:00) Should you call out your friend for being a womanizer* (1:22:00) Vindicatory Epistemology* (1:23:05) Panglossianism and meliorative epistemology* (1:24:42) Heuristics and recognition-driven investigation* (1:26:33) Rational inquiry leading to irrational beliefs — metacognitive processing* (1:29:08) Stakes of inquiry and costs of metacognitive processing* (1:30:00) When agents are incoherent, focuses on inquiry* (1:32:05) Indirect normative assessment and its consequences* (1:37:47) Against the Singularity Hypothesis* (1:39:00) Superintelligence and the ontological argument* (1:41:50) Hardware growth and general intelligence growth, AGI definitions* (1:43:55) Difficulties in arguing for hyperbolic growth* (1:46:07) Chalmers and the proportionality argument* (1:47:53) Arguments for/against diminishing growth, research productivity, Moore's Law* (1:50:08) On progress studies* (1:52:40) Improving research productivity and technology growth* (1:54:00) Mistakes in the moral mathematics of existential risk, longtermist epistemics* (1:55:30) Cumulative and per-unit risk* (1:57:37) Back and forth with longtermists, time of perils* (1:59:05) Background risk — risks we can and can't intervene on, total existential risk* (2:00:56) The case for longtermism is inflated* (2:01:40) Epistemic humility and longtermism* (2:03:15) Knowledge production — reliable sources, blog posts vs peer review* (2:04:50) Compounding potential errors in knowledge* (2:06:38) Group deliberation dynamics, academic consensus* (2:08:30) The scope of longtermism* (2:08:30) Money in effective altruism and processes of inquiry* (2:10:15) Swamping longtermist options* (2:12:00) Washing out arguments and justified belief* (2:13:50) The difficulty of long-term forecasting and interventions* (2:15:50) Theory of change in the bounded rationality program* (2:18:45) Outro. Links:* David's homepage and Twitter and blog* Papers mentioned/read* Bounded rationality and inquiry* Why bounded rationality (in epistemology)?* Against the newer evidentialists* The accuracy-coherence tradeoff in cognition* There are no epistemic norms of inquiry* Permissive metaepistemology* Global priorities and effective altruism* What David likes about EA* Against the singularity hypothesis (+ blog posts)* Three mistakes in the moral mathematics of existential risk (+ blog posts)* The scope of longtermism* Epistemics. Get full access to The Gradient at thegradientpub.substack.com/subscribe
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Resist the Fading Qualia Argument (Andreas Mogensen), published by Global Priorities Institute on March 26, 2024 on The Effective Altruism Forum. This paper was published as a GPI working paper in March 2024. Abstract The Fading Qualia Argument is perhaps the strongest argument supporting the view that in order for a system to be conscious, it does not need to be made of anything in particular, so long as its internal parts have the right causal relations to each other and to the system's inputs and outputs. I show how the argument can be resisted given two key assumptions: that consciousness is associated with vagueness at its boundaries and that conscious neural activity has a particular kind of holistic structure. I take this to show that what is arguably our strongest argument supporting the view that consciousness is substrate independent has important weaknesses, as a result of which we should decrease our confidence that consciousness can be realized in systems whose physical composition is very different from our own. Introduction Many believe that in order for a system to be conscious, it does not need to be made of anything in particular, so long as its internal parts have the right causal relations to each other and to the system's inputs and outputs. As a result, many also believe that the right software could in principle allow there to be something it is like to inhabit a digital computer, controlled by an integrated circuit etched in silicon. A recent expert report concludes that if consciousness requires only the right causal relations among a system's inputs, internal states, and outputs, then "conscious AI systems could realistically be built in the near term." (Butlin et al. 2023: 6) If that were to happen, it could be of enormous moral importance, since digital minds could have superhuman capacities for well-being and ill-being (Shulman and Bostrom 2021). But is it really plausible that any system with the right functional organization will be conscious - even if it is made of beer cans and string (Searle 1980) or consists of a large assembly of people with walkie-talkies (Block 1978)? My goal in this paper is to raise doubts about what I take to be our strongest argument supporting the view that consciousness is substrate independent in something like this sense.[1] The argument I have in mind is Chalmers' Fading Qualia Argument (Chalmers 1996: 253-263). I show how it is possible to resist the argument by appeal to two key assumptions: that consciousness is associated with vagueness at its boundaries and that conscious neural activity has a particular kind of holistic structure. Since these assumptions are controversial, I claim only to have exposed important weaknesses in the Fading Qualia Argument. I'll begin in section 2 by explaining what the Fading Qualia Argument is supposed to show and the broader dialectical context it inhabits. In section 3, I give a detailed presentation of the argument. In section 4, I show how the argument can be answered given the right assumptions about vagueness and the structure of conscious neural activity. At this point, I rely on the assumption that vagueness gives rise to truth-value gaps. In section 5, I explain how the argument can be answered even if we reject that assumption.
In section 6, I say more about the particular assumption about the holistic structure of conscious neural activity needed to resist the Fading Qualia Argument in the way I outline. I take the need to rely on this assumption to be the greatest weakness of the proposed response. Read the rest of the paper. [1] See the third paragraph in section 2 for discussion of two ways in which the conclusion supported by this argument is weaker than some may expect a principle of substrate independence to be. Thanks for listening. To help us out...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary: The scope of longtermism, published by Global Priorities Institute on December 18, 2023 on The Effective Altruism Forum. This is a summary of the GPI Working Paper "The scope of longtermism" by David Thorstad. The summary was written by Riley Harris. Recent work argues for longtermism - the position that often our morally best options will be those with the best long-term consequences.[1] Proponents of longtermism sometimes suggest that in most decisions expected long-term benefits outweigh all short-term effects. In 'The scope of longtermism', David Thorstad argues that most of our decisions do not have this character. He identifies three features of our decisions that suggest long-term effects are only relevant in special cases: rapid diminution - our actions may not have persistent effects, washing out - we might not be able to predict persistent effects, and option unawareness - we may struggle to recognise those options that are best in the long term even when we have them. Rapid diminution We cannot know the details of the future. Picture the effects of your actions rippling out in time - at closer times, the possibilities are clearer. As our predictions journey further, the details become obscured. Although the probability of desired effects becomes ever lower, the effects might grow larger. In the long run, we could perhaps improve many billions or trillions of lives. When we weight value by probability, the value of our actions will depend on a race between diminishing probabilities and growing possible impact. If the value increases faster than probabilities fall, the expected value of the action might be vast. Alternatively, if the chance we have such large effects falls dramatically compared to the increase in value, the expected value of improving the future might be quite modest. Thorstad suggests that the latter of these effects dominates, so we should believe we have little chance of making an enormous difference. Consider a huge event that would be likely to change the lives of people in your city - perhaps your city being blown up. Surprisingly, even this might not have large long-run impacts. Studies indicate that just half a century after cities in Japan and Vietnam were bombed, there was no longer any detectable effect on population size, poverty rates and consumption patterns.[2] To be fair, some studies indicate that some events have long-term effects,[3] but Thorstad thinks '...the persistence literature may not provide strong support' to longtermism. Washing out Thorstad's second concern with longtermism relates to our ability to predict the future. If our actions can affect the future in a huge way, these effects could be wonderful or terrible. They will also be very difficult to predict. The possibility that our acts will be enormously beneficial does not make our acts particularly appealing when they might be equally terrible. If our ability to forecast long-term outcomes is limited, the potential positive and negative values would wash out in expectation. Thorstad identifies three reasons to doubt our ability to forecast the long term. First, we have no track record of making predictions at the timescale of centuries or millennia. Our ability to predict only 20-30 years into the future is not great - and things get more difficult when we try to glimpse the further future.
Second, economists, risk analysts and forecasting practitioners doubt our ability to make long-term predictions, and often refuse to make them.[5] Third, we want to forecast how valuable our actions are over the long run. But value is a particularly difficult target - it includes many variables such as the number of people alive, their health, longevity, education and social inclusion. That said, we sometimes have some evidence, and this evidence might point t...
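The "race" described above between diminishing probabilities and growing possible impact can be illustrated with a toy calculation. The parameters and numbers below are my own, chosen only to show the structure of the point; they are not taken from Thorstad's paper.

```python
# Toy sketch: expected value of a long-run effect as a race between a shrinking
# chance that the effect persists and a growing payoff if it does.
def expected_value(persistence_per_century, value_growth_per_century, centuries=200):
    total, prob, value = 0.0, 1.0, 1.0
    for _ in range(centuries):
        total += prob * value
        prob *= persistence_per_century      # probability the effect is still felt
        value *= value_growth_per_century    # value at stake if it is
    return total

# Probabilities fall faster than value grows: the expected value stays modest.
print(expected_value(0.5, 1.2))    # converges to about 2.5
# Value grows faster than probabilities fall: the expected value balloons.
print(expected_value(0.9, 1.5))    # grows enormously with the horizon
```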
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will AI Avoid Exploitation? (Adam Bales), published by Global Priorities Institute on December 14, 2023 on The Effective Altruism Forum. This paper was published as a GPI working paper in December 2023. Introduction Recent decades have seen rapid progress in artificial intelligence (AI). Some people expect that in the coming decades, further progress will lead to the development of AI systems that are at least as cognitively capable as humans (see Zhang et al., 2022). Call such systems artificial general intelligences (AGIs). If we develop AGI then humanity will come to share the Earth with agents that are as cognitively sophisticated as we are. Even in the abstract, this seems like a momentous event: while the analogy is imperfect, the development of AGI would have some similarity to encountering an intelligent alien species who intend to make the Earth their home. Less abstractly, it has been argued that AGI could have profound economic implications, impacting growth, employment and inequality (Korinek & Juelfs, Forthcoming; Trammell & Korinek, 2020). And it has been argued that AGI could bring with it risks, including those arising from human misuse of powerful AI systems (Brundage et al., 2018; Dafoe, 2018) and those arising more directly from the AI systems themselves (Bostrom, 2014; Carlsmith, Forthcoming). Given the potential stakes, it would be desirable to have some sense of what AGIs will be like if we develop them. Knowing this might help us prepare for a world where such systems are present. Unfortunately, it's difficult to speculate with confidence about what hypothetical future AI systems will be like. However, a surprisingly simple argument suggests we can make predictions about the behaviour of AGIs (this argument is inspired by Omohundro, 2007, 2008; Yudkowsky, 2019).[3] According to this argument, we should expect AGIs to behave as if maximising expected utility (EU). In rough terms, the argument claims that unless an agent decides by maximising EU it will be possible to offer them a series of trades that leads to a guaranteed loss of some valued thing (an agent that's susceptible to such trades is said to be exploitable). Sufficiently sophisticated systems are unlikely to be exploitable, as exploitability plausibly interferes with acting competently, and sophisticated systems are likely to act competently. So, the argument concludes, sophisticated systems are likely to be EU maximisers. I'll call this the EU argument. In this paper, I'll discuss this argument in detail. In doing so, I'll have four aims. First, I'll show that the EU argument fails. Second, I'll show that reflecting on this failure is instructive: such reflection points us towards more nuanced and plausible alternative arguments. Third, the nature of these more nuanced arguments will highlight the limitations of our models of AGI, in a way that encourages us to adopt a pluralistic approach. And fourth, reflecting on such models will suggest that at least sometimes what matters is less developing a formal model of an AGI's decision-making procedure and more clarifying what sort of goals, if any, an AGI is likely to develop. So while my discussion will focus on the EU argument, I'll conclude with more general lessons about modelling AGI. Read the rest of the paper Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
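The exploitability idea at the heart of the EU argument can be made vivid with a short simulation. The sketch below is my own construction rather than anything from Bales's paper: it assumes a toy agent with cyclic preferences (A over B, B over C, C over A) that pays a small fee for every trade it regards as an upgrade, and shows how such an agent can be walked in a circle to a guaranteed loss, the classic money pump.

```python
# Toy money pump (illustrative, not from the paper): an agent with cyclic
# preferences pays a small fee for each "upgrade" and can be led in a circle,
# ending with the item it started with but strictly less money.
cyclic_preferences = {("A", "B"), ("B", "C"), ("C", "A")}  # (preferred, over)

def accepts_trade(held, offered):
    return (offered, held) in cyclic_preferences

money, held = 100.0, "A"
for offered in ["C", "B", "A", "C", "B", "A"]:   # walk the preference cycle twice
    if accepts_trade(held, offered):
        held, money = offered, money - 1.0       # pays 1 unit for each trade
print(held, money)   # -> A 94.0: same item as at the start, guaranteed loss
```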
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spiro - New TB charity raising seed funds, published by Habiba Banu on November 17, 2023 on The Effective Altruism Forum. Summary We (Habiba Banu and Roxanne Heston) have launched Spiro, a new TB screening and prevention charity focused on children. Our website is here. We are fundraising $198,000 for our first year. We're currently reaching out to people in the EA network. So far we have between 20% and 50% of our budget promised, and fundraising is currently one of the main things we're focusing on. The major components of our first year budget are co-founder time, country visits, and delivery of a pilot program, which aims to do household-level TB screening and provision of preventative medication. We think that this project has a lot of promise: Tuberculosis has a huge global burden, killing 1.3 million people every year, and is disproportionately neglected and fatal in young children. The evidence for preventative treatment is robust and household programs are promising, yet few high-burden countries have scaled up this intervention. Modeling by Charity Entrepreneurship and by academics indicates that this can be competitive with the best GiveWell-recommended charities. If we don't manage to raise at least half of our target budget by the beginning of December 2023 then we'll switch our intended focus for the next month from program planning to additional fundraising. This will push out our timelines for getting to the useful work. If we don't manage to raise our full target budget by the end of 2023 then we'll scale back our ambitions in the immediate term, until we put additional time into fundraising a few months later. The lower budget will also limit the size of our proof-of-concept effort since we and our government partners will need to scale back work to the available funds. You can donate via Giving What We Can's fund for charities incubated through Charity Entrepreneurship. Please also email habiba.banu@spiro.ngo letting us know how much you have donated so that we can identify the funds and allocate them to Spiro. Who are we? Spiro is co-founded by Habiba Banu and Roxanne Heston. Habiba worked for the last three years at 80,000 Hours and before that as Senior Administrator at the Future of Humanity Institute and the Global Priorities Institute. Her background is working as a consultant at PwC with government and non-profit clients. Rox has worked for the last few years on international AI policy in the U.S. Government and at think tanks. She has worked with and for various EA organizations including the Centre for Effective Altruism, the Future of Humanity Institute, Open Philanthropy and the Lead Exposure Elimination Project. We have received Charity Entrepreneurship support so far: Charity Entrepreneurship's research team did the initial research into this idea and shared their work with us. Habiba went through Charity Entrepreneurship's Incubator Programme earlier this year. Rox started working with Habiba to find an idea together about halfway through the program. Charity Entrepreneurship has provided stipend funding, advice, and operational support (e.g. website design). It will continue to provide mentorship from its leadership team and a fiscal sponsorship arrangement. What are we going to do? Spiro will implement sustainable household screening programs in low- and lower-middle income countries.
Spiro aims to curb infections and save lives of children in regions with high burdens of tuberculosis by identifying, screening, and treating household contacts of people living with TB. We will initially establish a proof of concept in one region, working closely with the government TB program. We will then aim to scale nationally, with funding from the Global Fund, and expand to other countries. Currently, we are planning a visit to Uganda to shadow e...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concepts of existential catastrophe (Hilary Greaves), published by Global Priorities Institute on November 10, 2023 on The Effective Altruism Forum. This paper was originally published as a working paper in September 2023 and is forthcoming in The Monist. Abstract The notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential, and what kind of probabilities should be involved in any appeal to expected value. Introduction and motivations Humanity today arguably faces various very significant existential risks, especially from new and anticipated technologies such as nuclear weapons, synthetic biology and advanced artificial intelligence (Rees 2003, Posner 2004, Bostrom 2014, Häggström 2016, Ord 2020). Furthermore, the scale of the corresponding possible catastrophes is such that anything we could do to reduce their probability by even a tiny amount could plausibly score very highly in terms of expected value (Bostrom 2013, Beckstead 2013, Greaves and MacAskill 2024). If so, then addressing these risks should plausibly be one of our top priorities. An existential risk is a risk of an existential catastrophe. An existential catastrophe is a particular type of possible event. This much is relatively clear. But there is not complete clarity, or uniformity of terminology, over what exactly it is for a given possible event to count as an existential catastrophe. Unclarity is no friend of fruitful discussion. Because of the importance of the topic, it is worth clarifying this as much as we can. The present paper is intended as a contribution to this task. The aim of the paper is to survey the space of plausibly useful definitions, drawing out the key choice points. I will also offer arguments for the superiority of one definition over another where I see such arguments, but such arguments will often be far from conclusive; the main aim here is to clarify the menu of options. I will discuss four broad approaches to defining "existential catastrophe". The first approach (section 2) is to define existential catastrophe in terms of human extinction. A suitable notion of human extinction is indeed one concept that it is useful to work with. But it does not cover all the cases of interest. In thinking through the worst-case outcomes from technologies such as those listed above, analysts of existential risk are at least equally concerned about various other outcomes that do not involve extinction but would be similarly bad. The other three approaches all seek to include these non-extinction types of existential catastrophe. The second and third approaches appeal to loss of value, either ex post value (section 3) or expected value (section 4). There are several subtleties involved in making precise a definition based on expected value; I will suggest (though without watertight argument) that the best approach focuses on the consequences for expected value of "imaging" one's evidential probabilities on the possible event in question.
The fourth approach appeals to a notion of the loss of humanity's potential (section 5). I will suggest (again, without watertight argument) that when the notion of "potential" is optimally understood, this fourth approach is theoretically equivalent to the third. The notion of existential catastrophe has a natural inverse: there could be events that are as good as existential catastrophes are bad. Ord and Cotton-Barratt (2015) suggest coining th...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Philosophical considerations relevant to valuing continued human survival: Conceptual Analysis, Population Axiology, and Decision Theory (Andreas Mogensen), published by Global Priorities Institute on November 1, 2023 on The Effective Altruism Forum. This paper was published as a GPI working paper in September 2023. Introduction Many think that human extinction would be a catastrophic tragedy, and that we ought to do more to reduce extinction risk. There is less agreement on exactly why. If some catastrophe were to kill everyone, that would obviously be horrific. Still, many think the deaths of billions of people don't exhaust what would be so terrible about extinction. After all, we can be confident that billions of people are going to die - many horribly and before their time - if humanity does not go extinct. The key difference seems to be that they will be survived by others. What's the importance of that? Some take the view that the special moral importance of preventing extinction is explained in terms of the value of increasing the number of flourishing lives that will ever be lived, since there could be so many people in the vast future available to us (see Kavka 1978; Sikora 1978; Parfit 1984; Bostrom 2003; Ord 2021: 43-49). Others emphasize the moral importance of conserving existing things of value and hold that humanity itself is an appropriate object of conservative valuing (see Cohen 2012; Frick 2017). Many other views are possible (see esp. Scheffler 2013, 2018). However, not everyone is so sure that human extinction would be regrettable. In the final section of the last book published in his lifetime, Parfit (2011: 920-925) considers what can actually be said about the value of all future history. No doubt, people will continue to suffer and despair. They will also continue to experience love and joy. Will the good be sufficient to outweigh the bad? Will it all be worth it? Parfit's discussion is brief and inconclusive. He leans toward 'Yes,' writing that our "descendants might, I believe, make the future very good." (Parfit 2011: 923) But 'might' falls far short of 'will'. Others are confidently pessimistic. Some take the view that human lives are not worth starting because of the suffering they contain. Benatar (2006) adopts an extreme version of this view, which I discuss in section 3.3. He claims that "it would be better, all things considered, if there were no more people (and indeed no more conscious life)." (Benatar 2006: 146) Scepticism about the disvalue of human extinction is especially likely to arise among those concerned about our effects on non-human animals and the natural world. In his classic paper defending the view that all living things have moral status, Taylor (1981: 209) argues, in passing, that human extinction would "most likely be greeted with a hearty 'Good riddance!' " when viewed from the perspective of the biotic community as a whole. May (2018) argues similarly that because there "is just too much torment wreaked upon too many animals and too certain a prospect that this is going to continue and probably increase," we should take seriously the idea that human extinction would be morally desirable.
Our abysmal treatment of non-human animals may also be thought to bode ill for our potential treatment of other kinds of minds with whom we might conceivably share the future and view primarily as tools: namely, minds that might arise from inorganic computational substrates, given suitable developments in the field of artificial intelligence (Saad and Bradley forthcoming). This paper takes up the question of whether and to what extent the continued existence of humanity is morally desirable. For the sake of brevity, I'll refer to this as the value of the future, leaving the assumption that we conditionalize on human survival impl...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary: High risk, low reward: A challenge to the astronomical value of existential risk mitigation, published by Global Priorities Institute on September 13, 2023 on The Effective Altruism Forum. This is a summary of the GPI Working Paper "High risk, low reward: A challenge to the astronomical value of existential risk mitigation" by David Thorstad. The paper is now forthcoming in the journal Philosophy and Public Affairs. The summary was written by Riley Harris. The value of the future may be vast. Human extinction, which would destroy that potential, would be extremely bad. Some argue that making such a catastrophe just a little less likely would be by far the best use of our limited resources--much more important than, for example, tackling poverty, inequality, global health or racial injustice. In "High risk, low reward: A challenge to the astronomical value of existential risk mitigation", David Thorstad argues against this conclusion. Suppose the risks really are severe: existential risk reduction is important, but not overwhelmingly important. In fact, Thorstad finds that the case for reducing existential risk is stronger when the risk is lower. The simple model The paper begins by describing a model of the expected value of existential risk reduction, originally developed by Ord (2020; ms) and Adamczewski (ms). This model discounts the value of each century by the chance that an extinction event would have already occurred, and gives a value to actions that can reduce the risk of extinction in that century. According to this model, reducing the risk of extinction this century is not overwhelmingly important--in fact, completely eliminating the risk we face this century could at most be as valuable as we expect this century to be. This result--that reducing existential risk is not overwhelmingly valuable--can be explained in an intuitive way. If the risk is high, the future of humanity is likely to be short, so the increases in overall value from halving the risk this century are not enormous. If the risk is low, halving the risk would result in a relatively small absolute reduction of risk, which is also not overwhelmingly valuable. Either way, saving the world will not be our only priority. Modifying the simple model This model is overly simplified. Thorstad modifies the simple model in three different ways to see how robust this result is: by assuming we have enduring effects on the risk, by assuming the risk of extinction is high, and by assuming that each century is more valuable than the previous. None of these modifications are strong enough to uphold the idea that existential risk reduction is by far the best use of our resources. A much more powerful assumption is needed (one that combines all of these weaker assumptions). Thorstad argues that there is limited evidence for this stronger assumption. Enduring effects If we could permanently eliminate all threats to humanity, the model says this would be more valuable than anything else we could do--no matter how small the risk or how dismal each century is (as long as each is still of positive value). However, it seems very unlikely that any action we could take today could reduce the risk to an extremely low level for millions of years--let alone permanently eliminate all risk.
Higher risk On the simple model, halving the risk from 20% to 10% is exactly as valuable as halving it from 2% to 1%. Existential risk mitigation is no more valuable when the risks are higher. Indeed, the fact that higher existential risk implies a higher discounting of the future indicates a surprising result: the case for existential risk mitigation is strongest when the risk is low. Suppose that each century is more valuable than the last and therefore that most of the value of the world is in the future. Then high existential...
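The claims above about the simple model can be checked with a toy implementation. The sketch below is my own rendering of the model as this summary describes it; it assumes, for illustration, a constant per-century extinction risk and a value of one unit for each century humanity survives, and the function and parameter names are mine rather than the paper's.

```python
# Toy version of the "simple model" described above: each century's value is
# discounted by the probability that extinction has already occurred.
def future_value(risk_this_century, risk_later, value_per_century=1.0, centuries=10_000):
    survival, total = 1.0, 0.0
    for century in range(centuries):
        r = risk_this_century if century == 0 else risk_later
        survival *= (1.0 - r)                  # chance extinction has not yet occurred
        total += survival * value_per_century
    return total

def gain_from_halving(risk):
    return future_value(risk / 2, risk) - future_value(risk, risk)

print(gain_from_halving(0.20))   # ~0.5 centuries of value (20% -> 10%)
print(gain_from_halving(0.02))   # ~0.5 as well (2% -> 1%): the same gain
print(future_value(0.0, 0.20) - future_value(0.20, 0.20))   # ~1.0: eliminating this
# century's risk entirely is worth at most one century's worth of value.
```

On these toy numbers the gain from halving the risk is half a century's worth of value whether the risk starts at 20% or 2%, matching the result the summary reports.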
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: 80,000 Hours Career Advising Team, published by Abby Hoskin on September 8, 2023 on The Effective Altruism Forum. We're the 80,000 Hours Career Advising team, and you should ask us anything! We are advisors at 80,000 Hours! Ask us anything about careers and career advising! (If you're interested in us personally or other things about 80k we might also answer those.) We'll answer questions on September 13th (British time), so please post questions before then. Unfortunately, we might not be able to get to all the questions, depending on how much time we end up having and what else is asked. So be sure to upvote the questions you most want answered :) Logistics/practical instructions: Please post your questions as comments on this post. The earlier you share your questions, the easier it will be for us to get to them. We'll probably answer questions on September 13th. Questions posted after that aren't likely to get answers. Some context: You have 80,000 hours in your career. This makes it your best opportunity to have a positive impact on the world. If you're fortunate enough to be able to use your career for good, but aren't sure how, our website helps you: Get new ideas for fulfilling careers that do good Compare your options Make a plan you feel confident in You can also check out our free career guide. We are excited to recently launch the second edition! It's based on 10 years of research alongside academics at Oxford. We're a nonprofit, and everything we provide, including our one-on-one career advising, is free. Curious about what happens on a 1:1 career advising call? Check out our EA Forum post on what happens during calls here. If you're ready to use your career to have a greater positive impact on the world, apply for career advising with us! Who are we? I (Abigail Hoskin) have a PhD in psychology and neuroscience and can talk about paths into and out of academia. I can also discuss balancing having an impact while parenting (multiple!) kids. I will be taking the lead on answering questions in this AMA, but other advisors might chime in, especially on questions in their specific areas of expertise. Huon Porteous has a background in philosophy and experience in management consulting. He has run a huge number of useful "cheap tests" to test out his aptitudes for different careers and is always running self experiments to optimise his workflow. Matt Reardon is a lawyer who can talk in depth about paths to value in law, government, and policy, especially in the US. He also works on product improvements and marketing for our team. Sudhanshu Kasewa was a machine learning engineer at a start-up and has experience doing ML research in academia. He's also worked in human resources and consulting. Anemone Franz is a medical doctor who worked for a biotech startup on pandemic preparedness. She particularly enjoys discussing careers in biosecurity, biotech, or global health. We are led by Michelle Hutchinson! Michelle co-founded the Global Priorities Institute, ran Giving What We Can, was a fund manager for EA Funds, and is a top contributor to our cute animals Slack channel. Michelle is the director of the 1on1 team and does not take calls, but she'll be chiming in on the AMA. We have a special guest, Benjamin Hilton, who will be on deck to answer questions about our website's written content. 
Ben is a researcher at 80,000 Hours, who has written many of our recent articles, including on AI technical career paths, in addition to helping write our updated career guide. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Princeton course on longtermism, published by Calvin Baker on September 2, 2023 on The Effective Altruism Forum. This semester (Fall 2023), Prof Adam Elga and I will be co-instructing Longtermism, Existential Risk, and the Future of Humanity, an upper div undergraduate philosophy seminar at Princeton. (Yes, I did shamelessly steal half of our title from The Precipice.) We are grateful for support from an Open Phil course development grant and share the reading list here for all who may be interested. Part 1: Setting the stage Week 1: Introduction to longtermism and existential risk Core Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury. Read introduction, chapter 1, and chapter 2 (pp. 49-56 optional); chapters 4-5 optional but highly recommended. Optional Roser (2022) "The Future is Vast: Longtermism's perspective on humanity's past, present, and future" Our World in Data Karnofsky (2021) 'This can't go on' Cold Takes (blog) Kurzgesagt (2022) "The Last Human - A Glimpse into the Far Future" Week 2: Introduction to decision theory Core Weisberg, J. (2021). Odds & Ends. Read chapters 8, 11, and 14. Ord, T., Hillerbrand, R., & Sandberg, A. (2010). "Probing the improbable: Methodological challenges for risks with low probabilities and high stakes." Journal of Risk Research, 13(2), 191-205. Read sections 1-2. Optional Weisberg, J. (2021). Odds & Ends chapters 5-7 (these may be helpful background for understanding chapter 8, if you don't have much background in probability). Titelbaum, M. G. (2020) Fundamentals of Bayesian Epistemology chapters 3-4 Week 3: Introduction to population ethics Core Parfit, Derek. 1984. Reasons and Persons. Oxford: Oxford University Press. Read sections 4.16.120-23, 125, and 127 (pp. 355-64; 366-71, and 377-79). Parfit, Derek. 1986. "Overpopulation and the Quality of Life." In Applied Ethics, ed. P. Singer, 145-164. Oxford: Oxford University Press. Read sections 1-3. Optional Remainders of Part IV of Reasons and Persons and "Overpopulation and the Quality of Life" Greaves (2017) "Population Axiology" Philosophy Compass McMahan (2022) "Creating People and Saving People" section 1, first page of section 4, and section 8 Temkin (2012) Rethinking the Good 12.2 pp. 416-17 and section 12.3 (esp. pp. 422-27) Harman (2004) "Can We Harm and Benefit in Creating?" Roberts (2019) "The Nonidentity Problem" SEP Frick (2022) "Context-Dependent Betterness and the Mere Addition Paradox" Mogensen (2019) "Staking our future: deontic long-termism and the non-identity problem" sections 4-5 Week 4: Longtermism: for and against Core Greaves, Hilary and William MacAskill. 2021. "The Case for Strong Longtermism." Global Priorities Institute Working Paper No.5-2021. Read sections 1-6 and 9. Curran, Emma J. 2023. "Longtermism and the Complaints of Future People". Forthcoming in Essays on Longtermism, ed. H. Greaves, J. Barrett, and D. Thorstad. Oxford: OUP. Read section 1. Optional Thorstad (2023) "High risk, low reward: A challenge to the astronomical value of existential risk mitigation." Focus on sections 1-3. Curran, E. J. (2022). "Longtermism, Aggregation, and Catastrophic Risk" (GPI Working Paper 18-2022). Global Priorities Institute. 
- Beckstead (2013) "On the Overwhelming Importance of Shaping the Far Future" Chapter 3
- "Toby Ord on why the long-term future of humanity matters more than anything else, and what we should do about it" 80,000 Hours podcast
- Frick (2015) "Contractualism and Social Risk" sections 7-8

Part 2: Philosophical problems

Week 5: Fanaticism
Core
- Bostrom, N. (2009). "Pascal's mugging." Analysis, 69 (3): 443-445.
- Russell, J. S. "On two arguments for fanaticism." Noûs, forthcoming. Read sections 1, 2.1, and 2.2.
- Temkin, L. S. (2022). "How Expected Utility Theory Can Drive Us Off the Rails." In L. S. ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The weight of suffering (Andreas Mogensen), published by Global Priorities Institute on August 17, 2023 on The Effective Altruism Forum. This paper was originally published as a working paper in May 2022 and is forthcoming in The Journal of Philosophy. Abstract How should we weigh suffering against happiness? This paper highlights the existence of an argument from intuitively plausible axiological principles to the striking conclusion that in comparing different populations, there exists some depth of suffering that cannot be compensated for by any measure of well-being. In addition to a number of structural principles, the argument relies on two key premises. The first is the contrary of the so-called Reverse Repugnant Conclusion. The second is a principle according to which the addition of any population of lives with positive welfare levels makes the outcome worse if accompanied by sufficiently many lives that are not worth living. I consider whether we should accept the conclusion of the argument and what we may end up committed to if we do not, illustrating the implications of the conclusions for the question of whether suffering in aggregate outweighs happiness among human and non-human animals, now and in future. Introduction There is both great happiness and great suffering in this world. Which has the upper hand? Does the good experienced by human and non-human animals in aggregate counterbalance all the harms they suffer, so that the world is morally good on balance? Or is the moral weight of suffering greater? To answer this question, we need to know how to weigh happiness against suffering from the moral point of view. In this paper, I present an argument from intuitively plausible axiological principles to the conclusion that in comparing different populations, there exists some depth of lifetime suffering that cannot be counterbalanced by any amount of well-being experienced by others. Following Ord (2013), I call this view lexical threshold negative utilitarianism (LTNU). I don't claim that we should accept LTNU. My aim is to explore different ways of responding to the argument. As we'll see, the positions at which we may arrive in rejecting its premises can be nearly as interesting and as striking as the conclusion. In section 2, I define LTNU more rigorously and set out the argument. It relies on a number of structural principles governing the betterness relation on populations, together with two key premises. The first is the contrary of what Carlson (1998) and Mulgan (2002) call the Reverse Repugnant Conclusion (RRC). The second says, roughly, that the addition of lives with positive welfare levels makes the outcome worse if accompanied by sufficiently many lives that are not worth living. In section 3, I consider whether we should be willing to accept the argument's conclusion, especially given that LTNU has been thought to entail the desirability of human extinction or the extinction of all sentient life (Crisp 2021). In section 4, I discuss our options for rejecting the argument's structural principles. I argue that our options for avoiding the disturbing implications of LTNU discussed in section 3 are limited if we are restricted to rejecting one or more of these principles. In section 5, I consider the possibility of rejecting the first of the key non-structural premises. 
I focus on the possibility of rejecting the contrary of RRC without accepting RRC. This, I claim, is also not promising, considered as a way of avoiding the disturbing implications of LTNU discussed in section 3. I will have nothing original to say about RRC per se, except that the overarching argument of this paper may be taken as a reason to accept it. In section 6, I consider the possibility of rejecting the last remaining premise. Specifically, I consider the possibility t...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Hinge of History Hypothesis: Reply to MacAskill (Andreas Mogensen), published by Global Priorities Institute on August 8, 2023 on The Effective Altruism Forum. This paper was originally published as a working paper in August 2022 and is forthcoming in Analysis. Abstract Some believe that the current era is uniquely important with respect to how well the rest of human history goes. Following Parfit, call this the Hinge of History Hypothesis. Recently, MacAskill has argued that our era is actually very unlikely to be especially influential in the way asserted by the Hinge of History Hypothesis. I respond to MacAskill, pointing to important unresolved ambiguities in his proposed definition of what it means for a time to be influential and criticizing the two arguments used to cast doubt on the claim that the current era is a uniquely important moment in human history. Introduction Some believe that the current era is a uniquely important moment in human history. We are living, they claim, at a time of unprecedented risk, heralded by the advent of nuclear weapons and other world-shaping technologies. Only by responding wisely to the anthropogenic risks we now face can we survive into the future and fulfil our potential as a species (Sagan 1994; Parfit 2011, Bostrom 2014, Ord 2020). Following Parfit (2011), call the hypothesis that we live at such a uniquely important time the Hinge of History Hypothesis (3H). Recently, MacAskill (2022) has argued that 3H is "quite unlikely to be true." (332) He interprets 3H as the claim that "[w]e are among the very most influential people ever, out of a truly astronomical number of people who will ever live" (339) and defines a period of time as influential in proportion to "how much expected good one can do with the direct expenditure (rather than investment) of a unit of resources at [that] time" (335), where 'investment' may refer "to both financial investment, and to using one's time to grow the number of people who are also impartial altruists." (335 n.13) MacAskill thus relates the truth or falsity of 3H to the practical question of the optimal time at which to expend resources to achieve morally good outcomes, considered impartially. MacAskill presents two arguments against 3H. The first is an argument that the prior probability that we are living at the most influential time in history should be very low, because we should reason as if we represent a random sample from observers in our reference class. The second is an inductive argument that we should expect future people to have more influence over human history because the overall trend throughout human history is for later generations to be more influential. In my view, neither of these arguments should convince us. As I argue in section 2, MacAskill's priors argument relies on formulating 3H in a way that does not conform to how this hypothesis is traditionally understood. Moreover, I will argue in section 3 that MacAskill's definition of what it means for a time to be influential leaves too many unresolved ambiguities for his inductive argument to work. Read the rest of the paper Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Social Beneficence (Jacob Barrett), published by Global Priorities Institute on July 20, 2023 on The Effective Altruism Forum. (This working paper was published in September 2022.) Abstract A background assumption in much contemporary political philosophy is that justice is the first virtue of social institutions, taking priority over other values such as beneficence. This assumption is typically treated as a methodological starting point, rather than as following from any particular moral or political theory. In this paper, I challenge this assumption. To frame my discussion, I argue, first, that justice doesn't in principle override beneficence, and second, that justice doesn't typically outweigh beneficence, since, in institutional contexts, the stakes of beneficence are often extremely high. While there are various ways one might resist this argument, none challenge the core methodological point that political philosophy should abandon its preoccupation with justice and begin to pay considerably more attention to social beneficence - that is, to beneficence understood as a virtue of social institutions. Along the way, I also highlight areas where focusing on social beneficence would lead political philosophers in new and fruitful directions, and where normative ethicists focused on personal beneficence might scale up their thinking to the institutional case. I. Justice is the first virtue of social institutions, as truth is of systems of thought. A theory however elegant and economical must be rejected or revised if it is untrue; likewise laws and institutions no matter how efficient and well-arranged must be reformed or abolished if they are unjust. The only thing that permits us to acquiesce in an erroneous theory is the lack of a better one; analogously, an injustice is tolerable only when it is necessary to avoid an even greater injustice. Being first virtues of human activities, truth and justice are uncompromising. These propositions seem to express our intuitive conviction of the primacy of justice. No doubt they are expressed too strongly. John Rawls, A Theory of Justice, 4 A background assumption in much contemporary political philosophy is that justice takes priority over beneficence. When evaluating social and political institutions, or thinking through questions of institutional design or reform, we should focus primarily on justice. This assumption is often associated with various further ideas, such as that justice but not beneficence is enforceable, that justice but not beneficence concerns rights, or that justice involves perfect duties but beneficence only imperfect ones. It is also typically assumed that justice is institutional, while beneficence is personal. There is much talk of social justice, and some talk of justice as a personal virtue, but, for the most part, we talk only of personal beneficence - not social beneficence. This phenomenon extends beyond the academy. A similar concern with justice permeates our political discourse. Justice operates as a conversation stopper. If the status quo is unjust, this is taken as an almost conclusive argument against the status quo; if some policy promotes justice, this is taken as an almost conclusive argument in favor of the policy. In both political philosophy and everyday political discourse, we do, of course, recognize exceptions to this rule. 
In the face of a serious disaster, we may need to override justice - we shouldn't really let justice be done though the heavens fall. But these exceptions are generally assumed to be rare - the heavens are only seldom falling. For the most part, then, contemporary political philosophy and discourse follows John Rawls's statement in the above epigraph. It operates with an "intuitive conviction of the primacy of justice," albeit, one that is sometimes "expre...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Three mistakes in the moral mathematics of existential risk (David Thorstad), published by Global Priorities Institute on July 4, 2023 on The Effective Altruism Forum. Abstract Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation: focusing on cumulative risk rather than period risk; ignoring background risk; and neglecting population dynamics. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk: the importance of treating existential risk as an intergenerational coordination problem; a surprising dialectical flip in the relevance of background risk levels to the case for existential risk mitigation; renewed importance of population dynamics, including the dynamics of digital minds; and a novel form of the cluelessness challenge to longtermism. Introduction Suppose you are an altruist. You want to do as much good as possible with the resources available to you. What might you do? One option is to address pressing short-term challenges. For example, GiveWell (2021) estimates that $5,000 spent on bed nets could save a life from malaria today. Recently, a number of longtermists (Greaves and MacAskill 2021; MacAskill 2022b) have argued that you could do much more good by acting to mitigate existential risks: risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013, p. 15). For example, you might work to regulate chemical and biological weapons, or to reduce the threat of nuclear conflict (Bostrom and Ćirković 2011; MacAskill 2022b; Ord 2020). Many authors argue that efforts to mitigate existential risk have enormous value. For example, Nick Bostrom (2013) argues that even on the most conservative assumptions, reducing existential risk by just one-millionth of one percentage point would be as valuable as saving a hundred million lives today. Similarly, Hilary Greaves and Will MacAskill (2021) estimate that early efforts to detect potentially lethal asteroid impacts in the 1980s and 1990s had an expected cost of just fourteen cents per life saved. (A short numerical check of Bostrom's figure follows this entry.) If this is right, then perhaps an altruist should focus on existential risk mitigation over short-term improvements. There are many ways to push back here. Perhaps we might defend population-ethical assumptions such as neutrality (Narveson 1973; Frick 2017) that cut against the importance of creating happy people. Alternatively, perhaps we might introduce decision-theoretic assumptions such as risk aversion (Pettigrew 2022), ambiguity aversion (Buchak forthcoming) or anti-fanaticism (Monton 2019; Smith 2014) that tell against risky, ambiguous and low-probability gambles to prevent existential catastrophe. We might challenge assumptions about aggregation (Curran 2022; Heikkinen 2022), personal prerogatives (Unruh forthcoming), and rights used to build a deontic case for existential risk mitigation. 
We might discount the well-being of future people (Lloyd 2021; Mogensen 2022), or hold that pressing current duties, such as reparative duties (Cordelli 2016), take precedence over duties to promote far-future welfare. These strategies set themselves a difficult task if they accept the longtermist's framing on which existential risk mitigation is not simply better, but orders of magnitude better than competing short-termist interventions. Is it really so obvious ...
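Bostrom's figure quoted in the summary above follows from a simple expected-value calculation. The sketch below assumes, purely for illustration, a conservative estimate on the order of 10^16 expected future lives (the kind of lower bound Bostrom appeals to); it reproduces the moral mathematics Thorstad is scrutinising, not his corrected estimates.

```python
# Illustrative expected-value arithmetic behind the Bostrom (2013) claim above.
# Assumption for illustration only: ~1e16 expected future lives (a conservative bound).
future_lives = 1e16

# "One millionth of one percentage point" of existential risk reduction.
risk_reduction = 1e-6 * 0.01

expected_lives_saved = future_lives * risk_reduction
print(f"Expected lives saved: {expected_lives_saved:.0e}")  # 1e+08, i.e. a hundred million
```

The paper's point is that once period risk, background risk, and population dynamics are handled correctly, figures produced this way shrink substantially.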
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities' Worldview Investigation Team: Introductions and Next Steps, published by Bob Fischer on June 21, 2023 on The Effective Altruism Forum. Some months ago, Rethink Priorities announced its interdisciplinary Worldview Investigation Team (WIT). Now, we're pleased to introduce the team's members: Bob Fischer is a Senior Research Manager at Rethink Priorities, an Associate Professor of Philosophy at Texas State University, and the Director of the Society for the Study of Ethics & Animals. Before leading WIT, he ran RP's Moral Weight Project. Laura Duffy is an Executive Research Coordinator for Co-CEO Marcus Davis and works on the Worldview Investigations Project. She is a graduate of the University of Chicago, where she earned a Bachelor of Science in Statistics and co-facilitated UChicago Effective Altruism's Introductory Fellowship. Arvo Muñoz Morán is a Quantitative Researcher working on the Worldview Investigations Team at Rethink Priorities and a research assistant at Oxford's Global Priorities Institute. Before that, he was a Research Analyst at the Forethought Foundation for Global Priorities Research and earned an MPhil in Economics from Oxford. His background is in mathematics and philosophy. Hayley Clatterbuck is a Philosophy Researcher at Rethink Priorities and an Associate Professor of Philosophy at the University of Wisconsin-Madison. She has published on topics in probability, evolutionary biology, and animal minds. Derek Shiller is a Philosophy Researcher at Rethink Priorities. He has a PhD in philosophy and has written on topics in metaethics, consciousness, and the philosophy of probability. Before joining Rethink Priorities, Derek worked as the lead web developer for The Humane League. David Bernard is a Quantitative Researcher at Rethink Priorities. He will soon complete his PhD in economics at the Paris School of Economics, where his research focuses on forecasting and causal inference in the short and long-run. He was a Fulbright Scholar at UC Berkeley and a Global Priorities fellow at the Global Priorities Institute. Over the next few months, the team will be working on cause prioritization—a topic that raises hard normative, metanormative, decision-theoretic, and empirical issues. We aren't going to resolve them anytime soon. So, we need to decide how to navigate a sea of open questions. In part, this involves making our assumptions explicit, producing the best models we can, and then conducting sensitivity analyses to determine both how robust our models are to uncertainty and where the value of information lies. Accordingly, WIT's goal is to make several contributions to the broader conversation about global priorities. Among the planned contributions, you can expect: A cross-cause cost-effectiveness model. This tool will allow users to compare interventions like corporate animal welfare campaigns with work on AI safety, the Against Malaria Foundation with attempts to reduce the risk of nuclear war, biosecurity projects with community building, and so on. We've been working on a draft of this model in recent months and we recently hired two programmers—Chase Carter and Agustín Covarrubias—to accelerate its public release. While this tool won't resolve all disputes about resource allocation, we hope it will help the community reason more transparently about these issues. 
Surveys of key stakeholders about the inputs to the model. Many people have thought long and hard about how much x-risk certain interventions can reduce, the relative importance of improving human and animal welfare, and the cost of saving lives in developing countries. We want to capture and distill those insights. A series of reports on the cruxes. The model has three key cruxes: animals' “moral weights,” the expected value of the future, and your preference for ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Podcast Interview with David Thorstad on Existential Risk, The Time of Perils, and Billionaire Philanthropy, published by Nick Anyos on June 4, 2023 on The Effective Altruism Forum. I have released a new episode of my podcast, EA Critiques, where I interview David Thorstad. David is a researcher at the Global Priorities Institute and also writes about EA on his blog, Reflective Altruism. In the interview we discuss three of his blog post series:
- Existential risk pessimism and the time of perils: Based on his academic paper of the same name, David argues that there is a surprising tension between the idea that there is a high probability of extinction (existential risk pessimism) and the idea that the expected value of the future, conditional on no existential catastrophe this century, is astronomically large.
- Exaggerating the risks: David argues that the probability of an existential catastrophe from any source is much lower than many EAs believe. At the time of recording, the series only covered risks from climate change, but future posts will make the same case for nuclear war, pandemics, and AI.
- Billionaire philanthropy: Finally, we talk about the potential issues with billionaires using philanthropy to have an outsized influence, and how both democratic societies and the EA movement should respond.
As always, I would love feedback on this episode or the podcast in general, as well as guest suggestions. You can write a comment here, send me a message, or use this anonymous feedback form. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Sam Harris speaks with Shamil Chandaria about how the brain constructs a vision of the self and the world. They discuss the brain from first principles; Bayesian inference; hierarchical predictive processing; the construction of vision; psychedelics and neuroplasticity; beliefs and prior probabilities; the interaction between psychedelics and meditation; the risks and benefits of psychedelics; Sam’s recent experience with MDMA; non-duality; love, gratitude, and bliss; the self model; the Buddhist concept of emptiness; human flourishing; effective altruism; and other topics. Dr. Shamil Chandaria is a philanthropist, serial entrepreneur, technologist, and academic with multidisciplinary research interests spanning computational neuroscience, machine learning and artificial intelligence, and the philosophy and science of human wellbeing. His PhD from the London School of Economics was in mathematical modeling of economic systems using stochastic differential equations and optimal control theory. Later he completed an MA in philosophy with distinction from University College London, where he developed an interest in philosophy of science and philosophical issues in biology, neuroscience, and ethics. In 2018, Dr. Chandaria helped to endow the Global Priorities Institute at Oxford University, an interdisciplinary research institute focusing on the most important issues facing humanity. In 2019 he was a founder of the Centre for Psychedelic Research in the department of Brain Sciences at Imperial College London, a neuroscience research institute investigating psychedelic therapies for a number of conditions including treatment resistant depression. He has also funded research on the neuroscience of meditation at Harvard, and at the University of California in Berkeley. Twitter: @shamilch YouTube: @ShamilChandaria Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
In the popular imagination, the AI alignment debate is between those who say everything is hopeless, and others who tell us there is nothing to worry about. Leopold Aschenbrenner graduated valedictorian from Columbia in 2021 when he was 19 years old. He is currently a research affiliate at the Global Priorities Institute at Oxford, and previously helped run Future Fund, which works on philanthropy in AI and biosecurity. He contends that, contrary to popular perceptions, there aren't that many people working on the alignment issue. Not only that, but he argues that the problem is actually solvable. In this podcast, he discusses what he believes some of the most promising paths forward are. Even if there is only a small probability that AI is dangerous, a small chance of existential risk is something to take seriously. AI is not all potential downsides. Near the end, the discussion turns to the possibility that it may supercharge a new era of economic growth. Aschenbrenner and Hanania discuss fundamental questions of how well GDP numbers still capture what we want to measure, the possibility that regulation strangles AI to death, and whether the changes we see in the coming decades will be on the same scale as the internet or more important. Listen in podcast form here, or watch on YouTube.

Links:
* Leopold Aschenbrenner, "Nobody's on the Ball on AGI Alignment."
* Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt, "Discovering Latent Knowledge in Language Models Without Supervision."
* Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov, "Locating and Editing Factual Associations in GPT."
* Leopold's Tweets:
  * Using GPT-4 to interpret GPT-2.
  * What a model says is not necessarily what it's "thinking" internally.

Get full access to Center for the Study of Partisanship and Ideology at www.cspicenter.com/subscribe
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Blog update: Reflective altruism, published by David Thorstad on May 14, 2023 on The Effective Altruism Forum. About me I'm a research fellow in philosophy at the Global Priorities Institute. Starting in the Fall, I'll be Assistant Professor of Philosophy at Vanderbilt University. (All views are my own, except the worst. Those are to be blamed on my cat.). There are many things I like about effective altruism. I've started a blog to discuss some views and practices in effective altruism that I don't like, in order to drive positive change both within and outside of the movement. About this blog The blog features long-form discussions, structured into thematic series of posts, informed by academic research. Currently, the blog features six thematic series, described below. One distinctive feature of my approach is that I share a number of philosophical views with many effective altruists. I accept or am sympathetic to all of the following: consequentialism; totalism; fanaticism; expected value maximization; and the importance of using science, reasons and evidence to solve global problems. Nevertheless, I am skeptical of many views held by effective altruists including longtermism and the view that humanity currently faces high levels of existential risk. We also have a number of methodological disagreements. I've come to understand that this is a somewhat distinctive approach within the academic literature, as well as in the broader landscape. I think that is a shame. I want to say what can be said for this approach, and what can be learned from it. I try to do that on my blog. About this document The blog is currently five months old. Several readers have asked me to post an update about my blog on the EA Forum. I think that is a good idea: I try to be transparent about what I am up to, and I value feedback from my readers. Below, I say a bit about existing content on the blog; plans for new content; and some lessons learned during the past five months. Existing series Series 1: Academic papers The purpose of this blog is to use academic research to drive positive change within and outside of the effective altruism movement. This series draws insights from academic papers related to effective altruism. Sub-series A: Existential risk pessimism and the time of perils This series is based on my paper “Existential risk pessimism and the time of perils”. The paper develops a tension between two claims: Existential Risk Pessimism (levels of existential risk are very high) and the Astronomical Value Thesis (efforts to reduce existential risk have astronomical value). It explores the Time of Perils hypothesis as a way out of the tension. Status: Completed. Parts 1-6 present the main argument of the paper. Part 7 discusses an application to calculating the cost-effectiveness of biosecurity. Part 8 draws implications. Part 9 responds to objections. Sub-series B: The good it promises This series is based on a volume of essays entitled The good it promises, the harm it does: Critical essays on effective altruism. The volume brings together a diverse collection of scholars, activists and practitioners to critically reflect on effective altruism. In this series, I draw lessons from papers contained in the volume. Status: In progress. Part 1 introduces the series and discusses the foreword to the book by Amia Srinivasan. 
Part 2 looks at Simone de Lima's discussion of colonialism and animal advocacy in Brazil. Part 3 looks at Carol J Adams' care ethical approach. Series 2: Academics review WWOTF Will MacAskill's book What we owe the future is one of the most influential recent books about effective altruism. A number of prominent academics have written insightful reviews of the book. In this series, I draw lessons from some of my favorite academic reviews of What we owe the future....
You can watch this talk with the video on the GPI YouTube channel. This presentation was given at the 10th Oxford Workshop on Global Priorities Research, June 2022. You can find the full transcript here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can watch this talk with the video on the GPI YouTube channel. This presentation was given at the 10th Oxford Workshop on Global Priorities Research, June 2022. Full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can watch this talk with the video on the GPI YouTube channel. This presentation was given at the 10th Oxford Workshop on Global Priorities Research, June 2022. You can find the full transcript here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can view this talk with the video on the GPI YouTube channel. Public Lecture: The Journey of Humanity - Oded Galor, 10 June 2022. In The Journey of Humanity, Oded Galor offers a revelatory explanation of how humanity became, only very recently, the unique species to have escaped a life of subsistence poverty, enjoying previously unthinkable wealth and longevity. He reveals why this process has been so unequal around the world, resulting in the great disparities between nations that exist today. He shows why so many of our efforts to improve lives have failed and how they might succeed. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ You can find more information about Oded Galor here: https://www.odedgalor.com/ The Journey of Humanity: https://www.penguin.co.uk/books/44495... Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can view this talk with the video on the GPI YouTube channel. Parfit Memorial Lecture 2022, hosted by the Global Priorities Institute, 16 June 2022. The Parfit Memorial Lecture is an annual distinguished lecture series established by the Global Priorities Institute in memory of Professor Derek Parfit. The aim is to encourage research among academic philosophers on topics related to global priorities research - using evidence and reason to figure out the most effective ways to improve the world. This year, we were delighted to have Jeffrey Sanford Russell deliver the Parfit Memorial Lecture. The Parfit Memorial Lecture is organised in conjunction with the Atkinson Memorial Lecture. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can watch this talk with the video on the GPI YouTube channel. This presentation was given at the 10th Oxford Workshop on Global Priorities Research, June 2022. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can watch this talk with the video on the GPI YouTube channel. Presentation given March 2021. The full transcript is available here: https://globalprioritiesinstitute.org... Read the working paper: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can view this talk with the video on the GPI YouTube channel. Parfit Memorial Lecture 2021, hosted by the Global Priorities Institute, 14 June 2021. The Parfit Memorial Lecture is an annual distinguished lecture series established by the Global Priorities Institute in memory of Professor Derek Parfit. The aim is to encourage research among academic philosophers on topics related to global priorities research - using evidence and reason to figure out the most effective ways to improve the world. This year, we were delighted to have Professor Orri Stefansson deliver the Parfit Memorial Lecture. The Parfit Memorial Lecture is organised in conjunction with the Atkinson Memorial Lecture. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Parfit Memorial Lecture 2021: https://globalprioritiesinstitute.org... Atkinson Memorial Lecture 2021: https://globalprioritiesinstitute.org... Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
Presentation given December 2020. The full transcript is available here: https://globalprioritiesinstitute.org... Read the working paper: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can watch this talk with the video on the GPI YouTube channel. Presented as part of the Global Priorities Seminar series, 12 June 2020. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current plans as the incoming director of the Global Priorities Institute, published by Eva on April 26, 2023 on The Effective Altruism Forum. Cross-posted from my blog. I am taking leave from the University of Toronto to serve as the Director of the Global Priorities Institute (GPI) at the University of Oxford. I can't express enough gratitude to the University of Toronto for enabling this. (I'll be back in the fall to fulfill my teaching obligations, though - keep inviting me to seminars and such!) GPI is an interdisciplinary research institute focusing on academic research that informs decision-makers on how to do good more effectively. In its first few years, under the leadership of its founding director, Hilary Greaves, GPI created and grew a community of academics in philosophy and economics interested in global priorities research. I am excited to build from this strong foundation and, in particular, to further develop the economics side. There are several areas I would like to focus on while at GPI. The below items reflect my current views, however, I expect these views to be refined over time. These items are not intended to be an exhaustive list, but they are things I would like GPI to do more of on the margin. 1) Research on decision-making under uncertainty There is a lot of uncertainty in estimates of the effects of various actions. My views here are coloured by my past work. In the early 2010s, I tried to compile estimates of the effects of popular development interventions such as insecticide-treated bed nets for malaria, deworming drugs, and unconditional cash transfers. My initial thought was that by synthesizing the evidence, I'd be able to say something more conclusive about "the best" intervention for a given outcome. Unfortunately, I found that results varied, a lot (you can read more about it in my JEEA paper). If it's really hard to predict effects in global development, which is a very well-studied area, it would seem even harder to know what to do in other areas with less evidence. Yet, decisions still have to be made. One of the core areas GPI has focused on in the past is decision-making under uncertainty, and I expect that to continue to be a priority research area. Some work on robustness might also fall under this category. 2) Increasing empirical research GPI is an interdisciplinary institute combining philosophy and economics. To date, the economics side has largely focused on theoretical issues. But I think it's important for there to be careful, rigorous empirical work at GPI. I think there are relevant hypotheses that can be tested that pertain to global priorities research. Many economists interested in global priorities research come from applied fields like development economics, and there's a talented pool of people who can do empirical work on, e.g., encouraging better uptake of evidence or forecasting. There's simply a lot to be done here, and I look forward to working with colleagues like Julian Jamison (on leave from Exeter), Benjamin Tereick, and Mattie Toma (visiting from Warwick Business School), among many others. 3) Expanding GPI's network in economics There is an existing program at GPI for senior research affiliates based at other institutions. However, I think a lot more can be done with this, especially on the economics side. 
I'm still exploring the right structures, but suffice it to say, if you are an academic economist interested in global priorities research, please do get in touch. I am envisioning a network of loosely affiliated individuals in core fields of interest who would be sent notifications about research and funding opportunities. There may also be the occasional workshop or conference invitation. 4) Exploring expanding to other fields and topics There are a number of topics that appear relevant to gl...
You can view this talk with the video on the GPI YouTube channel. Presentation given at the Global Priorities Institute, December 2019. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can view this talk with the video on the GPI YouTube channel. Presentation given at the Global Priorities Institute, September 2019. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can view this talk with the video on the GPI YouTube channel. Originally presented at the Global Priorities Institute, June 2019. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can view this talk with the video on the GPI YouTube channel. Originally presented at the Global Priorities Institute, June 2019. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can view this talk with the video on the GPI YouTube channel. Presented at the Global Priorities Institute (Oxford University), June 2019. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
You can view this talk with the video on the GPI YouTube channel. The Atkinson Memorial Lecture is an annual distinguished lecture series established in 2018 in memory of Professor Sir Tony Atkinson, jointly by the Global Priorities Institute and the Department of Economics. The aim is to encourage research among academic economists on topics related to global prioritisation - using evidence and reason to figure out the most effective ways to improve the world. This year, we were delighted to have Professor Marc Fleurbaey deliver the Atkinson Memorial Lecture. The Atkinson Memorial Lecture is organised in conjunction with the Parfit Memorial Lecture. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Are we living at the hinge of history? (William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum. This is a summary of the GPI Working Paper “Are we living at the hinge of history?” by William MacAskill (also published in the 2022 edited volume “Ethics and Existence: The Legacy of Derek Parfit”). The summary was written by Riley Harris. Longtermist altruists – who care about how much impact they have, but not about when that impact occurs – have a strong reason to invest resources before using them directly. Invested resources could grow much larger and be used to do much more good in the future. For example, a $1 investment that grows 5% per year would become $17,000 in 200 years. However, some people argue that we are living in an unusual time, during which our best opportunities to improve the world are much better than they ever will be in the future. If so, perhaps we should spend our resources as soon as possible. In “Are we living at the hinge of history?”, William MacAskill investigates whether actions taken at the current time are likely to be much more influential than actions taken at other times in the future. (‘Influential' here refers specifically to how much good we expect to do via direct monetary expenditure – the consideration most relevant to our altruistic decision to spend now or later.) After making this ‘hinge of history' claim more precise, MacAskill gives two main arguments against the claim: the base rate and inductive arguments. He then discusses some reasons why our time might be unusual, but ultimately concludes that he does not think that the ‘hinge of history' claim holds true. The base rate argument When we think about the entire future of humanity, we expect there to be a lot of people, and so we should initially be very sceptical that anyone alive today will be amongst the most influential human beings. Indeed, if humanity doesn't go extinct in the near future, there could be a vast number of future people – settling near just 0.1% of stars in the Milky Way with the same population as Earth would mean there were 10^24 (a trillion trillion) people to come. Suppose that, before inspecting further evidence, we believe that we are about as likely as anyone else to be particularly influential. Then, our initial belief that anyone alive today is amongst the million most influential people would be 1 in 10^18 (1 in a million trillion). From such a sceptical starting point, we would need extremely strong evidence to become convinced that we are presently in the most influential era. Even if there were only 10^14 (one hundred trillion) people to come, then in order to move from this extremely sceptical position (1 in 10^8) to a more moderate position (1 in 10), we would need evidence about 3 million times as strong as a randomised control trial with a p-value of 0.05. MacAskill thinks that, although we do have some evidence that indicates we may be at the most influential time, this evidence is not nearly strong enough. (A short sketch of this arithmetic follows this entry.) The inductive argument There is another strong reason to think our time is not the most influential, MacAskill argues: Premise 1: Influentialness has been increasing over time. Premise 2: We should expect this trend to continue. Conclusion: We should expect the influentialness of people in the future to be greater than our own influentialness. 
Premise 1 can be best illustrated with an example: a well-educated and wealthy altruist living in Europe in 1600 would not have been in a position to know about the best opportunities to shape the long-run future. In particular, most of the existential risks they faced (e.g. an asteroid collision or supervolcano) were not known, nor would they have been in a good position to do anything about them even if they were known. Even if they had th...
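The back-of-the-envelope numbers in the base rate argument above can be reproduced directly. The following Python snippet is a minimal sketch of that arithmetic under the summary's stylised assumptions (a $1 investment compounding at 5% per year for 200 years, a uniform prior over who counts as one of the million most influential people, and future populations of 10^24 or 10^14); it is an illustration of the reasoning, not MacAskill's own model.

```python
# Sketch of the arithmetic in the "hinge of history" summary above.

# Patient investment: $1 compounding at 5% per year for 200 years.
future_value = 1.0 * 1.05 ** 200
print(f"$1 at 5%/year for 200 years: ${future_value:,.0f}")  # roughly $17,000

# Uniform prior that someone alive today is among the million most
# influential people, for two stylised future population sizes.
most_influential = 1e6
for future_people in (1e24, 1e14):
    prior = most_influential / future_people
    print(f"{future_people:.0e} future people -> prior of 1 in {1 / prior:.0e}")

# With 1e24 people the prior is 1 in 1e18; with 1e14 it is 1 in 1e8,
# which is why very strong evidence would be needed to reach even 1 in 10.
```

Under these stylised assumptions, the prior that anyone alive today is among the most influential people ever is vanishingly small, which is why MacAskill argues that extraordinarily strong evidence would be required to overturn it.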
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Longtermist institutional reform (Tyler M. John and William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum. This is a summary of the GPI working paper "Longtermist institutional reform" by Tyler M. John and William MacAskill (published in the 2021 edited volume “the long view”). The summary was written by Riley Harris. Political decisions can have lasting effects on the lives and wellbeing of future generations. Yet political institutions tend to make short-term decisions with only the current generation – or even just the current election cycle – in mind. In “longtermist institutional reform”, Tyler M. John and William MacAskill identify the causes of short-termism in government and give four recommendations for how institutions could be improved. These are the creation of in-government research institutes, a futures assembly, posterity impact statements and – more radically – an ‘upper house' representing future generations. Causes of short-termism John and MacAskill discuss three main causes of short-termism. Firstly, politicians may not care about the long term. This may be because they discount the value of future generations, or simply because it is easy to ignore the effects of policies that are not experienced here and now. Secondly, even if politicians are motivated by concern for future generations, it may be difficult to know the long-term effects of different policies. Finally, even motivated and knowledgeable actors might face structural barriers to implementing long-term focussed policies – for instance, these policies might sometimes appear worse in the short-term and reduce a candidate's chances of re-election. Suggested reforms In-government research institutes The first suggested reform is the creation of in-government research institutes that could independently analyse long-term trends, estimate expected long-term impacts of policy and identify matters of long-term importance. These institutes could help fight short-termism by identifying the likely future impacts of policies, making these impacts vivid, and documenting how our leaders are affecting the future. They should also be designed to resist the political incentives that drive short-termism elsewhere. For instance, they could be functionally independent from the government, hire without input from politicians, and be flexible enough to prioritise the most important issues for the future. To ensure their advice is not ignored, the government should be required to read and respond to their recommendations. Futures assembly The futures assembly would be a permanent citizens' assembly which seeks to represent the interests of future generations and give dedicated policy time to issues of importance for the long-term. Several examples already exist where similar citizens' assemblies have helped create consensus on matters of great uncertainty and controversy, enabling timely government action. In-government research institutes excel at producing high quality information, but lack legitimacy. In contrast, a citizens' assembly like this one could be composed of randomly selected citizens that are statistically representative of the general population. John and MacAskill believe this representativeness brings political force –politicians who ignore the assembly put their reputations at risk. 
We can design futures assemblies to avoid the incentive structures that result in short-termism – such as election cycles, party interests and campaign financing. Members should be empowered to call upon experts, and their terms should be long enough to build expertise but short enough to avoid problems like interest group capture – perhaps two years. They should also be empowered to set their own agenda and publicly disseminate their resul...
Thank you Michelle and to everyone listening and watching! - Timestamps - 00:00 - Start and Intro of Michelle 01:10 - What 80,000 Hours Does 02:39 - One-to-one Coaching By 80,000 Hours 05:46 - Application Criteria: Who Can Apply 07:59 - How 80,000 Hours Curates Its Job Board 11:38 - Michelle's Journey To The Effective Altruism Movement 18:47 - Current and Future Work of Michelle 20:58 - Career Advice By Michelle 26:50 - Global Priorities 31:55 - Existential Risks Of Today 36:27 - The Benefits of Specializing and Being A Jack Of All Trades 40:20 - Biggest Misconception About Career Advising 40:57 - Why You Shouldn't Be Afraid To Ask For Help 44:18 - Tips On Applying For Grants 47:36 - One Thing Michelle Learned From Her Entire Career 49:30 - Outro Relevant Links: - https://80000hours.org/ Who is Michelle Hutchinson? Michelle is the current director of the One-on-one Programme of 80,000 Hours. She holds a PhD in Philosophy from the University of Oxford, where her thesis was on global priorities research. While completing that, she did the operational set-up of the Centre for Effective Altruism and then became Executive Director of Giving What We Can. She came to 80,000 Hours fresh from setting up the Global Priorities Institute at Oxford. As I want to run this podcast ad-free, the best way to support me is through Patreon: https://www.patreon.com/martinskadal If you live in Norway, you can consider becoming a support member in the two organizations I run. It costs NOK 50 a year. The more members we have, the more influence we have and the more funding we get as well. Right now we have around 500 members of World Saving Hustle (WSH) and 300 members of Altruism for Youth (AY). • Become a support member of WSH: https://forms.gle/ogwYPF1c62a59TsRA • Become a support member of AY: https://forms.gle/LSa4P1gyyyUmDsuP7 If you want to become a volunteer for World Saving Hustle or Altruism for Youth, send me an email and I'll forward it to our team. It might take some time before you get an answer, as we're currently run by volunteers, but you'll get one eventually! If you have any feedback, questions, or suggestions for topics or guests, let me know in the comment section. If you want to get in touch, the best way is through email: martin@worldsavinghustle.com Thanks to everyone in World Saving Hustle backing up this project and thanks to my creative partner Candace for editing this podcast! Thanks everyone and have an amazing day as always!! • instagram https://www.instagram.com/skadal/ • linkedin https://www.linkedin.com/in/martinska... • facebook https://www.facebook.com/martinsskadal/ • twitter https://twitter.com/martinskadal • Norwegian YT https://www.youtube.com/@martinskadal353 • Patreon https://www.patreon.com/martinskadal
Shamil Chandaria is an expert in artificial intelligence and computational neuroscience as well as an entrepreneur, philanthropist, and meditator, and is also a good friend. He was a founder of the Centre for Psychedelic Research at Imperial College London, the world's first psychedelic research centre. He provides funding for the Global Priorities Institute at Oxford University as well as for research on the neuroscience of meditation at Harvard University and UC Berkeley. In 2022 he was awarded an OBE in the UK for services to Science and Technology, Finance and Philanthropy. Today we talk about one of the topics he is most passionate about: the connections between awakening and the contemporary model of the brain as a prediction machine. The Bayesian Brain and Meditation talk:
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Some doubts about effective altruism, published by David Thorstad on December 20, 2022 on The Effective Altruism Forum. I'm a research fellow in philosophy at the Global Priorities Institute. There are many things I like about effective altruism. I've started a blog to discuss some views and practices in effective altruism that I don't like, in order to drive positive change both within and outside of the movement. About me I'm a research fellow in philosophy at the Global Priorities Institute, and a Junior Research Fellow at Kellogg College. Before coming to Oxford, I did a PhD in philosophy at Harvard under the incomparable Ned Hall, and BA in philosophy and mathematics at Haverford College. I held down a few jobs along the way, including a stint teaching high-school mathematics in Lawrence, Massachusetts and a summer gig as a librarian for the North Carolina National Guard. I'm quite fond of dogs. Who should read this blog? The aim of the blog is to feature (1) long-form, serial discussions of views and practices in and around effective altruism, (2) driven by academic research, and from a perspective that (3) shares a number of important views and methods with many effective altruists. This blog might be for you if: You would like to know why someone who shares many background views with effective altruists could nonetheless be worried about some existing views and practices. You are interested in learning more about the implications of academic research for views and practices in effective altruism. You think that empirically-grounded philosophical reflection is a good way to gain knowledge about the world. You have a moderate amount of time to devote to reading and discussion (20-30mins/post). You don't mind reading series of overlapping posts. This blog might not be for you if: You would like to know why someone who has little in common with effective altruists might be worried about the movement. You aren't keen on philosophy, even when empirically grounded. You have a short amount of time to devote to reading. You like standalone posts and hate series. Blog series The blog is primarily organized around series of posts, rather than individual posts. I've kicked off the blog with four series. Academic papers: This series summarizes cutting-edge academic research relevant to questions in and around the effective altruism movement. Existential risk pessimism and the time of perils: Part 1 introduces a tension between Existential Risk Pessimism (risk is high) and the Astronomical Value Thesis (it's very important to drive down risk). Part 2 looks at some failed solutions to the tension. Part 3 looks at a better solution: the Time of Perils Hypothesis. Part 4 looks at one argument for the Time of Perils Hypothesis, which appeals to space settlement. Part 5 looks at a second argument for the Time of Perils Hypothesis, which appeals to the concept of an existential risk Kuznets curve. Parts 6-8 (coming soon) round out the paper and draw implications. Academics review What we owe the future: This series looks at book reviews of MacAskill's What we owe the future by leading academics to draw out insights from those reviews. Part 1 looks at Kieran Setiya's review, focusing on population ethics. Part 2 (coming soon) looks at Richard Chappell's review. Part 3 (coming soon) looks at Regina Rini's review. 
Exaggerating the risks: I think that current levels of existential risk are substantially lower than many leading EAs take them to be. In this series, I say why I think that. Part 1 introduces the series. Part 2 looks at Ord's discussion of climate risk in The Precipice. Part 3 takes a first look at the Halstead report on climate risk. Parts 4-6 (coming soon) wrap up the discussion of climate risk and draw lessons. Billionaire philanthropy: What is the role of b...
Host Michael Taft talks with philanthropist, serial entrepreneur, technologist, and meditator Dr. Shamil Chandaria about predictive processing as it relates to meditation, our phenomenal self model, recognizing our own fabrication hierarchy, replacing our top level priors as a way to understand nondual enlightenment, and much more. Dr Shamil Chandaria OBE is a philanthropist, serial entrepreneur, technologist, and academic with multi-disciplinary research interests spanning computational neuroscience, machine learning and artificial intelligence, and the philosophy and science of human well-being. His PhD, from the London School of Economics, was in mathematical modelling of economic systems using stochastic differential equations and optimal control theory. Later he completed an MA in Philosophy, with Distinction, from University College London, where he developed an interest in philosophy of science and philosophical issues in biology, neuroscience, and ethics. In 2018 Dr. Chandaria helped to endow the Global Priorities Institute at Oxford University, an interdisciplinary research institute focusing on the most important issues facing humanity. In 2019 he was a founder of the Centre for Psychedelic Research, in the Department of Brain Sciences at Imperial College London, a neuroscience research institute investigating psychedelic therapies for a number of conditions including treatment-resistant depression. He is also funding research on the neuroscience of meditation at Harvard University and the University of California, Berkeley. In 2022 Dr Chandaria was awarded a British OBE for services to Science and Technology, Finance and Philanthropy. He is also a long-term meditation practitioner. The Bayesian Brain and Meditation Lecture, by Shamil Chandaria. You can support the creation of future episodes of this podcast by contributing through Patreon. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably good projects for the AI safety ecosystem, published by Ryan Kidd on December 5, 2022 on LessWrong. At EAGxBerkeley 2022, I was asked several times what new projects might benefit the AI safety and longtermist research ecosystem. I think that several existing useful-according-to-me projects (e.g., SERI MATS, REMIX, CAIS, etc.) could urgently absorb strong management and operations talent, but I think the following projects would also probably be useful to the AI safety/longtermist project. Criticisms are welcome. Projects I might be excited to see, in no particular order: A London-based MATS clone to build the AI safety research ecosystem there, leverage mentors in and around London (e.g., DeepMind, CLR, David Krueger, Aligned AI, Conjecture, etc.), and allow regional specialization. This project should probably only happen once MATS has ironed out the bugs in its beta versions and grown too large for one location (possibly by Winter 2023). Please contact the MATS team before starting something like this to ensure good coordination and to learn from our mistakes. Rolling admissions alternatives to MATS' cohort-based structure for mentors and scholars with different needs (e.g., to support alignment researchers who suddenly want to train/use research talent at irregular intervals but don't have the operational support to do this optimally). A combined research mentorship and seminar program that aims to do for AI governance research what MATS is trying to do for technical AI alignment research. A dedicated bi-yearly workshop for AI safety university group leaders that teaches them how to recognize talent, foster useful undergraduate research projects, and build a good talent development pipeline or “user journey” (including a model of alignment macrostrategy and where university groups fit in). An organization that does for the Open Philanthropy worldview investigations team what GCP did to supplement CEA's workshops and 80,000 Hours' career advising calls. Further programs like ARENA that aim to develop ML safety engineering talent at scale by leveraging good ML tutors and proven curricula like CAIS' Intro to ML Safety, Redwood Research's MLAB, and Jacob Hilton's DL curriculum for large language model alignment. More contests like ELK with well-operationalized research problems (i.e., clearly explain what builder/breaker steps look like), clear metrics of success, and a well-considered target audience (who is being incentivized to apply and why?) and user journey (where do prize winners go next?). Possible contest seeds: Evan Hubinger's SERI MATS deceptive AI challenge problem; Vivek Hebbar's and Nate Soares' SERI MATS diamond maximizer selection problem; Alex Turner's and Quintin Pope's SERI MATS training stories selection problem. More "plug-and-play" curriculums for AI safety university groups, like AGI Safety Fundamentals, Alignment 201, Intro to ML Safety. A well-considered "precipism" university course template that critically analyzes Toby Ord's “The Precipice,” Holden Karnofsky's “The Most Important Century,” Will MacAskill's “What We Owe The Future,” some Open Philanthropy worldview investigations reports, some Global Priorities Institute ethics papers, etc.
Hackathons in which people with strong ML knowledge (not ML novices) write good-faith critiques of AI alignment papers and worldviews (e.g., what Jacob Steinhardt's “ML Systems Will Have Weird Failure Modes” does for Hubinger et al.'s “Risks From Learned Optimization”). A New York-based alignment hub that aims to provide talent search and logistical support for NYU Professor Sam Bowman's planned AI safety research group. More organizations like CAIS that aim to recruit established ML talent into alignment research with clear benchmarks, targeted hackathons/contests with prizes, and offers ...
Some researchers see a substantial risk that humanity will perish within this century. Would it be good for the rest of creation if humans disappeared? How long-term should we be in our decision-making? In mid-November we became eight billion people on Earth, and during our relatively short time on this planet we have plundered, ravaged, and destroyed. Some researchers judge that there is a considerable risk of humanity's downfall already within this century. So explains the Oxford philosopher Hilary Greaves, who is now a researcher at the Institute for Futures Studies in Stockholm and was previously director of the Global Priorities Institute in Oxford. And if we take a million-year perspective, do the wars, climate change, and tragedies of our time become mere ripples on the surface of life's great ocean, as the philosopher Nick Bostrom has put it? In this episode we meet, in addition to Hilary Greaves, Signe Savén, philosopher at Lund University, and Karim Jebari, philosopher at the Institute for Futures Studies. Producer Thomas Lunderquist, research Paulina Witte, presenter Lars Mogensen
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: The Epistemic Challenge to Longtermism (Christian Tarsney), published by Global Priorities Institute on October 11, 2022 on The Effective Altruism Forum. Note: The Global Priorities Institute (GPI) has started to create summaries of some working papers by GPI researchers with the aim to make our research more accessible to people outside of academic philosophy (e.g. interested people in the effective altruism community). We welcome any feedback on the usefulness of these summaries. Summary: The Epistemic Challenge to Longtermism This is a summary of the GPI Working Paper "The epistemic challenge to longtermism" by Christian Tarsney. The summary was written by Elliott Thornley. According to longtermism, what we should do mainly depends on how our actions might affect the long-term future. This claim faces a challenge: the course of the long-term future is difficult to predict, and the effects of our actions on the long-term future might be so unpredictable as to make longtermism false. In “The epistemic challenge to longtermism”, Christian Tarsney evaluates one version of this epistemic challenge and comes to a mixed conclusion. On some plausible worldviews, longtermism stands up to the epistemic challenge. On others, longtermism's status depends on whether we should take certain high-stakes, long-shot gambles. Tarsney begins by assuming expectational utilitarianism: roughly, the view that we should assign precise probabilities to all decision-relevant possibilities, value possible futures in line with their total welfare, and maximise expected value. This assumption sets aside ethical challenges to longtermism and focuses the discussion on the epistemic challenge. Persistent-difference strategies Tarsney outlines one broad class of strategies for improving the long-term future: persistent-difference strategies. These strategies aim to put the world into some valuable state S when it would otherwise have been in some less valuable state ¬S, in the hope that this difference will persist for a long time. Epistemic persistence skepticism is the view that identifying interventions likely to make a persistent difference is prohibitively difficult — so difficult that the actions with the greatest expected value do most of their good in the near-term. It is this version of the epistemic challenge that Tarsney focuses on in this paper. To assess the truth of epistemic persistence skepticism, Tarsney compares the expected value of a neartermist benchmark intervention N to the expected value of a longtermist intervention L. In his example, N is spending $1 million on public health programmes in the developing world, leading to 10,000 extra quality-adjusted life years in expectation. L is spending $1 million on pandemic-prevention research, with the aim of preventing an existential catastrophe and thereby making a persistent difference. Exogenous nullifying events Persistent-difference strategies are threatened by what Tarsney calls exogenous nullifying events (ENEs), which come in two types. Negative ENEs are far-future events that put the world into the less valuable state ¬S. In the context of the longtermist intervention L, in which the valuable target state S is the existence of an intelligent civilization in the accessible universe, negative ENEs are existential catastrophes that might befall such a civilization. 
Examples include self-destructive wars, lethal pathogens, and vacuum decay. Positive ENEs, on the other hand, are far-future events that put the world into the more valuable state S. In the context of L, these are events that give rise to an intelligent civilization in the accessible universe where none existed previously. This might happen via evolution, or via the arrival of a civilization from outside the accessible universe. What unites negative...
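The comparison at the heart of the paper is a simple expected-value calculation. A minimal sketch in Python may help: the probability of averting catastrophe, the value per year of surviving civilisation, and the annual ENE rate used below are illustrative assumptions of mine, not Tarsney's estimates. The longtermist intervention's expected value is roughly the chance it averts catastrophe, times the value per year of the target state, times the expected number of years before an ENE nullifies the difference.

# Illustrative sketch only: toy numbers, not Tarsney's estimates.
# Compares the neartermist benchmark N with a longtermist intervention L
# whose value depends on how long a persistent difference survives
# exogenous nullifying events (ENEs).

def expected_value_neartermist(qalys_per_million=10_000):
    """Expected QALYs from the benchmark intervention N (figure taken from the summary)."""
    return qalys_per_million

def expected_value_longtermist(p_avert, value_per_year, ene_rate):
    """Toy expected value of L.
    p_avert        -- assumed probability the $1 million averts an existential catastrophe
    p_avert, value_per_year and ene_rate are all hypothetical inputs.
    value_per_year -- assumed QALYs per year of surviving civilisation
    ene_rate       -- assumed annual probability of an ENE; expected persistence is about 1/ene_rate years
    """
    expected_duration = 1.0 / ene_rate
    return p_avert * value_per_year * expected_duration

n = expected_value_neartermist()
ev_rare_enes = expected_value_longtermist(p_avert=1e-8, value_per_year=1e10, ene_rate=1e-4)  # rare ENEs
ev_frequent_enes = expected_value_longtermist(p_avert=1e-8, value_per_year=1e10, ene_rate=0.1)  # frequent ENEs
print(f"N: {n:,.0f} QALYs; L with rare ENEs: {ev_rare_enes:,.0f}; L with frequent ENEs: {ev_frequent_enes:,.0f}")

With rare ENEs the longtermist intervention dominates the benchmark; with frequent ENEs it does not. That sensitivity to the ENE rate is the lever that epistemic persistence skepticism pulls on.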
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High-Impact Psychology (HIPsy): Piloting a Global Network, published by Inga on September 29, 2022 on The Effective Altruism Forum. Engaged with psychology or mental health? This is for you. Impartial compassion. Rationality. Wellbeing. For a movement built on these values, EA likely underutilizes psychology professionals. Together with our supporters from the Global Priorities Institute, the Center for Effective Altruism, and the Happier Lives Institute, HIPsy aims to help people engaged with psychology or mental health maximize their impact. Vision We will follow in the footsteps of the EA Consulting Network, High-Impact Medicine, and High-Impact Athletes. Accordingly, the goal of HIPsy shall be to increase the likelihood of high-impact decisions, make collaboration and information processes more effective, and reduce the risk of value drift for people engaged in psychology or mental health. Relevant resources shall be available, easy to access, and easy to use: up-to-date high-quality information, career and work advice, networking and collaboration opportunities. Psychological know-how shall be effectively acquired, shared, and used for EA. Psychology expertise is particularly needed in the fields of: mental health and well-being, both within EA and globally, community building, and outreach, management, HR, and operations, priorities research, and effectiveness research, x-risk-reduction and AI safety, e.g. awareness-building, and persuasion. Summary The goal for the next few months is to find out which of the many potential actions to prioritize, and how to address them most effectively. We will check what materials, events, and services are in demand, and pilot some of them. You want to help? Let us know here. If you'd like to collaborate, fund us, or if you have any of the following skills, we want to hear from you: online content creation, running mentorship programs, hosting events, web-dev, community-building, running surveys, research, and cost-effectiveness analyses. Opportunities Engaging with > 50 members of the EA community and their materials revealed three major opportunities: 1. EA has skill bottlenecks that psychological professionals can help with Industry: management, entrepreneurship, operations, HR Science: at universities and EA mental health/wellbeing orgs (lack of seniors) Therapy: EA-informed psychotherapists and coaches for members of the EA community 2. It's hard for psychology professionals to enter EA even if they are likely good matches Existing materials and advice are difficult to find, scattered, contradictory, out of date, and selective (e.g. reading list, Effective Thesis), which likely discourages prospects Effectiveness and altruism: People largely enter psychology because they want to help others, and most universities have very high entrance requirements and thus select intelligent and ambitious individuals Mental health as a cause area faces large funding constraints that make it hard for people engaged in this field to enter, contribute and stick around 3. There is unfulfilled potential for synergistic action and systematic exchange. The impact of psychology-related EA orgs could be boosted by informing members of the community of techniques, knowledge, and best practices they can use to be more impactful, e.g.
status quo of psychology research fostering collaboration and joint efforts between organizations that face similar challenges, e.g. mental health orgs that need to learn how to develop scalable and sustainable psychological interventions cooperating with leading psychology experts and EA-aligned individuals outside of EA that can help EA stay up to date with current industry, market, and research standards. Conclusion. It can be easier to access up-to-date high-quality information, advice, and networking opportunities. We imagine...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ETGP 2022: materials, feedback, and lessons for 2023, published by trammell on September 22, 2022 on The Effective Altruism Forum. From August 20 to September 2, I ran a summer course in Oxford titled “Topics in Economic Theory and Global Prioritization”, or ETGP. It aimed to provide a rigorous introduction to a selection of topics in economic theory that appear especially relevant to the project of doing the most good. It was designed primarily for economics graduate students considering careers in global priorities research. The purpose of this post is to share the course materials as presented this year, the feedback, and a summary of the lessons learned. I hope it helps potential attendees get a better sense of whether they would like to attend next year, and potential organizers of similar programs get a better sense of whether and how to go about it. I've erred on the side of thoroughness regarding the feedback and lessons learned. A brief summary is that the course was rated very highly, and that I think this suggests people should consider organizing more structured courses, instead of sticking to the more common “EA formula” of reading groups and summer research fellowships. The course was sponsored by Forethought and made possible by operations support from the Global Priorities Institute. If you would like to be notified when applications open for next year, email me at philip.trammell@economics.ox.ac.uk. Course materials and program schedule The lecture slides and exercises, as presented this year (with corrections), can be found here. Feel free to use them for any purpose. The program was scheduled as follows: Saturday, August 20 - Sunday, August 21: two 1.5h lectures per day on philosophical and mathematical background material, respectively. Monday, August 22 - Friday, August 26: two 1.5h lectures per day on various EA-relevant macroeconomic theory topics. Saturday, August 27: (optional) punting in the afternoon. Monday, August 29 - Friday, September 2: two 1.5h lectures per day on various EA-relevant microeconomic theory topics. Attendees were given the option to stay for a (totally unstructured) third week to discuss research ideas with each other, schedule meetings with others in Oxford, and so on. The lectures, except for those on the first day, came with exercises. Every lecture-day except the first two (August 20-21) opened with a 1-hour session in which I went over previous lecture-day's exercises. They were not graded. Lectures and lunches were held at Trajan House, where GPI and Forethought are based. Breakfasts, dinners, and housing were held at Worcester College, Oxford, except for an opening dinner and a closing dinner, which were held at pubs. Applicant and attendee characteristics There were 179 applicants. 46 were accepted (26%), and 34 attended at least in part. Educational backgrounds of the attendees: 1 was an assistant professor of economics (3%) 16 were enrolled in, or about to begin, doctoral programs in economics (47%) 3 were enrolled in or about to begin master's programs in economics, or recent graduates of master's programs not doing either of the above (9%) 6 were doing pre-doctoral research / research assistance in economics (18%) 6 were undergraduates studying economics, or recent graduates not doing any of the above (18%) 2 had never pursued an economics degree (6%). 
(One had a graduate degree in a related field, and the other was pursuing one) Genders of the attendees: 29 were male (85%) 5 were female (15%) 45 of the applicants (25%) and 9 of the admits (20%) were female. I noticed the relative scarcity of female applicants when reviewing the applications, and I did my best to ensure that they were not rejected unfairly. Feedback The feedback survey received 24 responses (71%). Aggregate results are as follows: Overall eva...
Are ambition and altruism compatible? How ambitious should we be if we want to do as much good in the world as possible? How should we handle expected values when the probabilities become very small and/or the values of the outcomes become very large? What's a reasonable probability of success for most entrepreneurs to aim for? Are there non-consequentialist justifications for longtermism?Habiba Islam is an advisor at 80,000 Hours where she talks to people one-on-one, helping them to pursue high impact careers. She previously served as the Senior Administrator for the Future of Humanity Institute and the Global Priorities Institute at Oxford. Before that she qualified as a barrister and worked in management consulting at PwC specialising in operations for public and third sector clients. Follow her on Twitter at @FreshMangoLassi or learn more about her work at 80,000 Hours at 80000hours.org.
Sign up for Intelligence Squared Premium here: https://iq2premium.supercast.com/ for ad-free listening, bonus content, early access and much more. See below for details. Will MacAskill is the philosopher thinking a million years into the future who is also having a bit of a moment in the present. As Associate Professor in Philosophy and Research Fellow at the Global Priorities Institute at the University of Oxford, he is co-founder of the effective altruism movement, which uses evidence and reason as the driver to help maximise how we can better resource the world. MacAskill's writing has found fans ranging from Elon Musk to Stephen Fry and his new book is What We Owe the Future: A Million-Year View. Our host on the show is Max Roser, Director of the Oxford Martin Programme on Global Development and founder and editor of Our World in Data. … We are incredibly grateful for your support. To become an Intelligence Squared Premium subscriber, follow the link: https://iq2premium.supercast.com/ Here's a reminder of the benefits you'll receive as a subscriber: Ad-free listening, because we know some of you would prefer to listen without interruption One early episode per week Two bonus episodes per month A 25% discount on IQ2+, our exciting streaming service, where you can watch and take part in events live at home and enjoy watching past events on demand and without ads A 15% discount and priority access to live, in-person events in London, so you won't miss out on tickets Our premium monthly newsletter Intelligence Squared Merch Learn more about your ad choices. Visit megaphone.fm/adchoices
Ryan talks to professor and writer Will MacAskill about his book What We Owe The Future, how to create effective change in the world, the importance of gaining a better perspective on the world, and more.Will MacAskill is an Associate Professor in Philosophy and Research Fellow at the Global Priorities Institute, University of Oxford. His research focuses on the fundamentals of effective altruism - the use of evidence and reason to help others as much as possible with our time and money - with a particular concentration on how to act given moral uncertainty. He is the author of the upcoming book What We Owe The Future, available for purchase on August 12. Will also wrote Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference and co-authored Moral Uncertainty.✉️ Sign up for the Daily Stoic email: https://dailystoic.com/dailyemail
Our existential risk – the probability that we could wipe ourselves out due to AI, bio-engineering, nuclear war, climate change, etc. in the next 100 years – currently sits at 1 in 6. Let that sink in! Would you get on a plane if there was a 17% chance it would crash? Would you do everything you could to prevent a calamity if you were presented with those odds? My chat today covers a wild idea that could – and should – better our chances of existing as a species…and lead to a human flourishing I struggle to even imagine. Longtermism argues that prioritising the long-term future of humanity has exponential ethical and existential boons. Flipside, if we don't choose the longtermist route, the repercussions are, well, devastating. Will MacAskill is one of the world's leading moral philosophers and I travel to Oxford UK, where he runs the Centre for Effective Altruism, the Global Priorities Institute and the Forethought Foundation, to talk through these massive moral issues. Will also explains that right now is the most important time in humanity's history. Our generation singularly has the power and responsibility to determine two diametrically different paths for humanity. This excites me; I hope it does you, too. Learn more about Will MacAskill's work Purchase his new book What We Owe the Future: A Million-Year View If you need to know a bit more about me… head to my "about" page. Subscribe to my Substack newsletter for more such conversations. Get your copy of my book, This One Wild and Precious Life. Let's connect on Instagram! It's where I interact the most. Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.
Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
It's always a little humbling to think about what effects your words and actions might have on other people, not only right now but potentially well into the future. Now take that humble feeling and promote it to all of humanity, and arbitrarily far in time. How do our actions as a society affect all the potential generations to come? William MacAskill is best known as a founder of the Effective Altruism movement, and is now the author of What We Owe the Future. In this new book he makes the case for longtermism: the idea that we should put substantial effort into positively influencing the long-term future. We talk about the pros and cons of that view, including the underlying philosophical presuppositions. Mindscape listeners can get 50% off What We Owe the Future, thanks to a partnership between the Forethought Foundation and Bookshop.org. Just click here and use code MINDSCAPE50 at checkout. Support Mindscape on Patreon. William (Will) MacAskill received his D.Phil. in philosophy from the University of Oxford. He is currently an associate professor of philosophy at Oxford, as well as a research fellow at the Global Priorities Institute, director of the Forethought Foundation for Global Priorities Research, President of the Centre for Effective Altruism, and co-founder of 80,000 Hours and Giving What We Can. Web site, PhilPeople profile, Google Scholar publications, Wikipedia, Twitter. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
To gain access to ALL full-length episodes, you'll need to subscribe. If you’re already subscribed and on the private RSS feed, the podcast logo should appear RED. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Sam Harris speaks with William MacAskill about his new book, What We Owe the Future. They discuss the philosophy of effective altruism (EA), longtermism, existential risk, criticism of EA, problems with expected-value reasoning, doing good vs feeling good, why it's hard to care about future people, how the future gives meaning to the present, why this moment in history is unusual, the pace of economic and technological growth, bad political incentives, value lock-in, the well-being of conscious creatures as the foundation of ethics, the risk of unaligned AI, how bad we are at predicting technological change, and other topics. William MacAskill is an Associate Professor in Philosophy and Research Fellow at the Global Priorities Institute, University of Oxford. He is one of the primary voices in a philanthropic movement known as “effective altruism” and the co-founder of three non-profits based on effective altruist principles: Giving What We Can, 80,000 Hours, and the Centre for Effective Altruism. He is also the Director of the Forethought Foundation for Global Priorities Research and the author of Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference. Website: williammacaskill.com Twitter: @willmacaskill Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Habiba Islam is a member of 80,000 Hours' advising team. She previously served as the Senior Administrator for the Future of Humanity Institute and the Global Priorities Institute at Oxford. Before that, she qualified as a barrister and worked in management consulting, specialising in operations for public and third sector clients.This talk was first published by the Stanford Existential Risks Initiative. Click here to view it with the video.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies, published by MichaelPlant on June 24, 2022 on The Effective Altruism Forum. This is a transcript of a talk I gave at the Moral Foundations of Progress Studies workshop at the University of Texas in March 2022. Or rather, it's a re-recorded and edited version of the talk that was subsequently produced for a Global Priorities Institute reading group on ‘progress' and then updated in light of many helpful comments from that seminar. The original slide deck can be viewed here. 1. Introduction As I understand it, Progress Studies is a nascent intellectual field which starts by asking the question, "Since we seem to have gotten a lot of progress over the last couple of hundred years, where did this come from, and what can we do to get more of it?" (Vox, 2021). Progress Studies has been popularised by academics such as Tyler Cowen and Steven Pinker. However, the Easterlin Paradox presents a real challenge to the claim that if we want more progress, we just need to improve the long-run growth rate – a view that Cowen argues for in his book Stubborn Attachments. This is a possible version of Progress Studies and the one I'm responding to. So what is the Easterlin Paradox? Quoting Easterlin and O'Connor (2022), the Easterlin Paradox states: At a point in time, happiness varies directly with income both among and within nations, but over time the long-term growth rates of happiness and income are not significantly related. There is a common view that economic growth is going to make our lives better, but the Easterlin Paradox challenges this. What's paradoxical is that at a given point in time, richer people are more satisfied than poorer people and richer countries are more satisfied than poorer countries, but over the course of time, countries which grow faster don't seem to get happier faster. In other words, if I get richer, that will be good for me, but if we all get richer, that won't do anything for us collectively. While subjective wellbeing (self-reported happiness and life satisfaction) has gone up in previous decades, the challenge of the Easterlin Paradox is that countries which grow faster do not seem to be getting happier faster; growth per se seems unrelated to average subjective wellbeing. If the paradox holds, the result would be striking and significant. It would suggest that, if we want to increase average wellbeing, we must not rely on growth, but go back to the drawing board and see what really works. There's been quite a bit of debate over the nature and existence of the Paradox. The topic first emerged in 1974 when Richard Easterlin published a paper called Does Economic Growth Improve the Human Lot? It's been particularly challenged by Stevenson and Wolfers (2008), who claim the paradox is an illusion and growth is making us happier. However, after looking into this myself, I actually think that Easterlin has the better half of the debate and the paradox does pose a real challenge to the idea that economic growth alone will make us happier. My main purpose here is to explain what the Easterlin Paradox is and why – despite doubts – we need to take it seriously. My second purpose is to show that we can work out how to improve subjective wellbeing in society and make some tentative suggestions about this.
However, this project is only starting to be taken seriously and there is lots more work to be done. 2. Evidence for the Paradox So where does the Easterlin Paradox data come from? It's based on survey questions such as: Taking all things together, how would you say things are these days? Would you say you're very happy, pretty happy, or not too happy?[1] All things considered, how satisfied are you with your life as a whole nowadays, from one, diss...
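The paradox is, at bottom, a contrast between two regressions: a positive cross-sectional slope of happiness on income, and a roughly flat time-series slope of changes in happiness on long-run growth. A minimal sketch with synthetic data may make the structure of the claim concrete; the country count, coefficients and noise levels below are made up for illustration and are not the actual survey series.

import numpy as np

rng = np.random.default_rng(0)
n_countries = 50

# Cross-section at one point in time: richer countries report higher life satisfaction.
log_income = rng.normal(9.0, 1.0, n_countries)                 # log GDP per capita (synthetic)
happiness = 0.5 * log_income + rng.normal(0, 0.3, n_countries)

# Long-run changes: by construction, changes in happiness are unrelated to growth.
growth_rate = rng.normal(0.02, 0.015, n_countries)             # average annual growth (synthetic)
happiness_change = rng.normal(0, 0.2, n_countries)

def ols_slope(x, y):
    """OLS slope of y on x, with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print("cross-sectional slope (happiness on log income):", round(ols_slope(log_income, happiness), 3))
print("time-series slope (change in happiness on growth):", round(ols_slope(growth_rate, happiness_change), 3))

Both facts can hold at once in data like this, which is why point-in-time comparisons alone cannot settle whether growth raises average wellbeing over time.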
William MacAskill is an Associate Professor in Philosophy at Oxford University and a senior research fellow at the Global Priorities Institute. He is also the director of the Forethought Foundation for Global Priorities Research and co-founder and President of the Centre for Effective Altruism. He is also the author of Doing Good Better and Moral Uncertainty, and has an upcoming book on longtermism called What We Owe The Future.This talk was taken from EA Global: London 2021. Click here to watch the talk with the video.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Job: EA Office Manager, published by Jonathan Michel on May 2, 2022 on The Effective Altruism Forum. Summary I think being an Office Manager at an EA office can be a really impactful job, and I would like to share my experience and give some advice. Part of the reason I'm writing this is that I'm the new Head of Property at the CEA Operations Team, but my previous job was being the office manager, and I'm hiring someone to replace me. If you're interested, please apply here or get in touch with me if you have questions. How much you should trust me? I'm biased, but I'm trying to be honest and just share my experience. Why am I writing this? There has already been a bit of past discussion of ops on the forum. The reason I want to add to this pile of writing about operations is that I want to advocate/give context on the specific role of an EA Office Manager — one that oftentimes people underestimate in its importance. There has been some talk regarding EA Hubs and where we should start new ones. I think that EA Hubs can be very impactful and that a great office manager is a crucial component. An outstanding EA office which makes people more productive and happy needs an outstanding office manager. My Background Overall: I had a lot of experience with EA (I ran a local group for over four years, was well-read, and organised a bunch of events), and did a lot of volunteering through which I built operations skills. I did a lot of volunteer work for the German EA community such as organizing fellowships, talks and a retreat. I co-founded two NGOs (one focused on COVID relief and the other on cellular agriculture) I did a bunch of volunteering for GFI, ProVeg, and an internship for www.effektiv-spenden.org Day in the life Last year, I started as the Office Manager of the Oxford EA office, Trajan House. Trajan House currently accommodates the Centre for Effective Altruism, the Future of Humanity Institute, the Global Priorities Institute, the Forethought Foundation, the Centre for the Governance of AI, the Global Challenges Project, Our World in Data, and a number of people working at other EA organisations (such as Rethink Priorities, HLI, LEEP, and OpenPhil). At the moment, around 80 EA professionals work at Trajan House, and this number is growing. If you want to get a better sense of Trajan House (including some photos) you can see the office guide here. Until very recently, the office team consisted of me (as the office manager), and employees of Oxford University working in the reception area and in facilities management. One month ago two office assistants joined my team, so we now have two additional FTEs helping to run the office. As described above, I'm now transitioning out of the office manager role, but the below outlines my week in the position. How I spent my time: 40% - Expanding, changing and optimising the office set-up (including thinking about how we can further expand and improve the services we provide) 20% - Developing the culture and community aspects of the office (e.g. 
by planning events) 20% - Processing direct requests like “Can I get a MacBook charger, please?”, “Do we have spare copies of The Precipice?” 10% - managing the Office Assistants and liaising with the Facilities Management team 5% - Processing requests of individuals or organisations for office space For a more tangible sense of what I did, here are some specific things I did in the last couple of months: Changing the acoustics (the “soundscape”) of our cafeteria (getting different quotes done, thinking about the interior design of the space, liaising with the contractors implementing it) Finding a new caterer (researching different companies, work-trialling them, negotiating a contract in cooperation with our lawyers, having regular check-ins to ensure quality and improve their s...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should you do an economics PhD (or master's)?, published by david reinstein on April 19, 2022 on The Effective Altruism Forum. David Reinstein: all opinions are mine unless noted. Extensive input from Phil Trammell and an Anonymous Contributor (quoted extensively, henceforth “AC”). Thanks to Pete Wildeford and David Moss for feedback. I intend to continue to update and improve this post in situ (or linking out a ‘permanently updated' version). Overview and some takeaways Should I do an economics PhD (or master's)'? What do I need to learn to work at an EA org? How can I level up on this stuff and prove value? These were the most frequent questions I got at the 2022 EAGx Boston conference. I mainly discussed this with undergraduate students, but also with people at career pivot points.[1] My overall view, epistemic basis/confidence, key points Main ‘pros': Much of EA is based in economics, and economics speaks to most of the important cause areas and debates in EA, as well as to the important empirical questions. Conditional on going for a PhD, I believe economics will be one of the stronger choices for the sort of people reading this post. A PhD in economics, and much of the associated training (over ~2 years of coursework and ~3-5 years of ‘writing') helps you towards a range of career paths with potential for strong impact (and a comfortable life) both within and outside EA organizations. Being a PhD student in the right place and time (and mental state) can be very stimulating, productive, creative, and connection-building.[2] You are typically given a lot of freedom in the research phase, as long as your work meets the general approval and framework of your advisor(s) and what the gatekeepers think is important, credible and ‘is economics'. Typically, you don't have to pay for a PhD, you will get money to support yourself, and PhD stipends are often OK. Important considerations: Economics is broad (in its methods and focus-area paths). Often differences in approaches among economists (pure theory, applied econometrics, macro, etc.) are greater than the difference between some economists and some (e.g.) political scientists or psychologists. Important impact paths that an economics PhD may help with include: Applied work ‘informed by expertise and credibility', Deep work formally/mathematically addressing fundamental questions of global priorities and social welfare, Theoretical, computational, and empirical work considering markets and/or the global economy, informing (e.g.) animal welfare policies or the development of technology, Empirical work assessing the impact of interventions, or considering assessing human behavior, choices, attitudes and preferences. There are a range of relevant career paths Academia and academically-leaning think-tanks; doing EA-relevant research as well as potentially transforming academia and the scholarly debate ‘from the inside', Working in governments or NGOs (many require/prefer PhDs), Working at EA-aligned organizations like Rethink Priorities, Global Priorities Institute, Open Philanthropy, maybe MIRI . note a lot of differences across these, For-profit and entrepreneurial options; possibly impactful for out-of-the-bun thinkers/doers. 
Main ‘cons and caveats': For many/most paths you could learn most/all of the relevant skills and approaches, and background without getting a PhD,[3] In some key areas economics might not be as strong or relevant as other fields (statistics and data science for robust empirical work and predictions, decision science and cognitive science for AI alignment work), The economics PhD program makes you jump some time-consuming hoops that are likely not going to be relevant to your applied career path,[4] People around you will not mainly be value-aligned; beware value drift towards academic prestige an...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Workshop for underrepresented groups in GPR - Intellectual and cultural diversity in GPR, published by Global Priorities Institute on March 29, 2022 on The Effective Altruism Forum. The Global Priorities Institute (GPI) has recently opened applications for the four-day, fully funded Open Student Workshop in Global Priorities Research in Oxford at the end of June. We invite graduate students, junior researchers, and advanced undergraduates from groups underrepresented in GPR or academia to apply. This post gives some background on why we are organising this workshop as well as some further details on the event. Background Global priorities research (GPR) is a relatively homogenous field. Intellectual and cultural diversity within GPR is important for several reasons. First, GPR aims to ask political, moral and empirical questions. Answering these questions well requires incorporating a range of perspectives in order to reduce blindspots. Second, a successful effective altruism community should be welcoming to diverse groups (with different religions, cultures and value systems). This requires working with the input and participation of a diverse range of members. We are excited to help people to enter and contribute to GPR. We are impressed by the projects and efforts of Magnify Mentoring (formerly Wanbam) and hope to build on their efforts to give everyone interested in effective altruism the same opportunity to reach their potential and contribute to the collective effort of making the biggest difference. The workshop In the June Workshop (for which applications are open until 20 April), participants will attend an interdisciplinary workshop on global priorities research organised by the Global Priorities Institute; receive personalised 1:1 career and research coaching from GPI staff members and other members of the global priorities research community; attend presentations and discussions on global priorities research; receive career planning advice and experience in academic research; and receive networking and social opportunities throughout. Feel free to share this event as much as possible, and you can message the organisers, Charlotte and David, with any questions (Charlotte.Siegmann@economics.ox.ac.uk / David.Thorstad@philosophy.ox.ac.uk). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Add randomness to your EA conferences, published by Vaidehi Agarwalla on March 4, 2022 on The Effective Altruism Forum.
TL;DR: Build in time for random spontaneous interactions with people at EA conferences! There is a lot of great advice on how to strategically network during EA Globals - and I think it's great to encourage people to be very intentional and goal-oriented with what they get out of the conference. However, I think that deliberate randomness is also quite valuable (although higher variance). The chances that I'll randomly meet cool and interesting people at an EA Global are really, really high. What's more, my deliberate attempts to connect with people will heavily bias me towards:
- People I already know
- People who generally share my interests or worldview
- People who fill out their conference profiles or are otherwise 'legible'
- People who already know (of) me
- "Shop talk" 1-1 interactions (aside from workshops, most interactive EA events like professional or affiliation meetups tend to do 1-1 speed networking sessions)
I find unstructured interactions without any expectations fun and enjoyable, and less intense than back-to-back 1-1s. I try to budget about 20-30% of each EA conference (sometimes more) for random interactions.
Randomness in practice
I find joining a group of people talking to be an inherently awkward experience. Luckily there are lots of ways EA conferences provide assisted serendipity!
Physical Events
- Attending general speed networking events
- Not scheduling 1-1s for 1-2 hours during lunch or dinner. You could join a table at random that has empty spots. You could also sit at an empty table and wait for other braver souls to join you!
- I've also noticed that conversations tend to happen near water fountains, washrooms & coat rooms (I think this is some combination of common area + lower social expectations + small enough group of people)
- Cause / Career / Affinity group networking sessions are semi-random
- Some people find it easy to join conversations that start up in hallways etc.
- Don't schedule things immediately after talks and speak with the other attendees
Virtual Events
In general, it's harder to have random interactions at virtual events. However, it could still be worth trying! Other than attending networking events, one strategy could be to randomly choose people on the networking app, say hi, and let them know that you're trying to meet counterfactually new people. I expect this approach will be much higher variance / more time intensive than the physical counterpart but could be interesting!
Examples
(These are mostly personal examples because I didn't spend too long asking others - if anyone else has some examples, please comment or DM me and I'll add them here!)
- I became very good friends with someone I met at a dinner group at EA Global, who later became a co-collaborator on a project for Effective Environmentalism
- David ended up co-writing a paper with Anders Sandberg based on a discussion after a Q&A session at a conference at the Global Priorities Institute.
- I had a very interesting conversation with an operations person at AMF about their job at my first EAG
- I have felt more comfortable reaching out to people I've met at least once, so this has expanded the number of people I can ask for help or introduce to others
- I felt like I belonged to the EA community because the people I met had similar perspectives to me (see here)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on GPI's activities and plans for 2022, published by Global Priorities Institute on January 28, 2022 on The Effective Altruism Forum.
This post gives a summary of the Global Priorities Institute's activities from late 2020 to late 2021 and our priorities and plans for 2022.
Introduction and summary
GPI is a young research institute, formally established in early 2018. By the end of 2018, we had hired our first postdocs, and settled on an initial research focus of “longtermism”: roughly, the idea that the most cost-effective opportunities to do good are matters of influencing the course of the very far future, on timescales of thousands or even millions of years, rather than focussing on how things go (say) within our own lifetimes. We have continuously grown our team since then, and now have 15 full-time researchers on staff. The academic year 2020-21 was an exciting one for GPI. One highlight has been welcoming and integrating our first two full-time postdoctoral researchers in economics, enabling us to begin building up the economics arm of GPI in earnest. As part of a strategy of investigating possible diversifications of our research agenda within philosophy, we also added a specialist in political philosophy to our philosophy team, and conducted some exciting preliminary exploration of mission-aligned research directions related to “politics and institutions”. And despite the challenges of the COVID-19 pandemic, we were able to develop increasingly close collaborations with various external researchers on topics central to GPI's mission and continued to build a broad base of earlier-stage contacts via an active program of online workshops. Over the coming year, in addition to building on existing lines of progress, our key priorities include building a network of researchers across a wider range of academic disciplines (including psychology, law and history) who share an interest in GPI's central research themes, and developing a major project on long-run forecasting.
Major activities
Research
Research output
This year, GPI researchers have collectively produced a total of seven new working papers on topics central to GPI's mission, placed seven such papers newly under review at top academic journals, and had two such papers accepted for publication. Details on our new working papers are in the Appendix, and all GPI working papers can be found on our website.
Research exploration
Since GPI takes very seriously the imperative to produce research that is optimised to further its distinctive mission, we spend a significant proportion of our research time on exploration that is aimed at mapping out and uncovering ideas for projects that meet this description, alongside the “exploitation” work that turns existing ideas into finished research products. A major area of exploration this past year was “Institutions”. We ran a year-long pilot project to assess whether GPI should expand its research foci in directions related to politics and institutions. We identified several more specific research topics that the pilot group members were excited about.
These include (1) increasing returns and thresholds in the context of pursuing social/political change, and to what extent these phenomena undermine “neglectedness” as a heuristic for expected impact; (2) whether impartial altruists should favour more or less centralisation, consolidation and uniformity of institutions; and (3) how to represent the interests of future generations (and other non-voters) within democratic political institutions. Further investigation of these three topics is underway: Jacob Barrett and Loren Fryxell are co-authoring a paper on (1), and plan to convene additional working groups to undertake deeper dives into (2) and (3). We also ran additional exploratory working groups on “the v...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A bunch of new GPI papers, published by Pablo on The Effective Altruism Forum.
Earlier today I posted a link to Andreas Mogensen's paper on "Maximal Cluelessness". But later I realized that this was just one among several important papers published yesterday on the Global Priorities Institute website. Rather than posting separate links to each, I'm linking to all of them below (abstract included when available).
Cotton-Barratt & Greaves, A bargaining-theoretic approach to moral uncertainty
This paper explores a new approach to the problem of decision under relevant moral uncertainty. We treat the case of an agent making decisions in the face of moral uncertainty on the model of bargaining theory, as if the decision-making process were one of bargaining among different internal parts of the agent, with different parts committed to different moral theories. The resulting approach contrasts interestingly with the extant “maximise expected choiceworthiness” and “my favourite theory” approaches, in several key respects. In particular, it seems somewhat less prone than the MEC approach to ‘fanaticism': allowing decisions to be dictated by a theory in which the agent has extremely low credence, if the relative stakes are high enough. Overall, however, we tentatively conclude that the MEC approach is superior to a bargaining-theoretic approach.
Greaves & MacAskill, The case for strong longtermism
We believe that this neglect of the very long-term future is a grave moral error. An alternative perspective is given by a burgeoning view called longtermism, on which we should be particularly concerned with ensuring that the long-run future goes well. In this article we accept this view but go further, arguing that impacts on the long run are the most important feature of our actions. More precisely, we argue for two claims.
Axiological strong longtermism (AL): In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.
Deontic strong longtermism (DL): In a wide class of decision situations, the option one ought, ex ante, to choose is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.
MacAskill & Mogensen, The paralysis argument
Given plausible assumptions about the long-run impact of our everyday actions, we show that standard non-consequentialist constraints on doing harm entail that we should try to do as little as possible in our lives. We call this the Paralysis Argument. After laying out the argument, we consider and respond to a number of objections. We then suggest what we believe is the most promising response: to accept, in practice, a highly demanding morality of beneficence with a long-term focus.
Mogensen, Meaning, medicine and merit
Given the inevitability of scarcity, should public institutions ration healthcare resources so as to prioritize those who contribute more to society? Intuitively, we may feel that this would be somehow inegalitarian. I argue that the egalitarian objection to prioritizing treatment on the basis of patients' usefulness to others is best thought of as semiotic: i.e. as having to do with what this practice would mean, convey, or express about a person's standing.
I explore the implications of this conclusion when taken in conjunction with the observation that semiotic objections are generally flimsy, failing to identify anything wrong with a practice as such and having limited capacity to generalize beyond particular contexts. Mogensen, ‘The only ethical argument for positive
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some AI Governance Research Ideas, published by MarkusAnderljung, Alexis Carlier on the AI Alignment Forum.
Compiled by Markus Anderljung and Alexis Carlier.
Junior researchers are often wondering what they should work on. To potentially help, we asked people at the Centre for the Governance of AI for research ideas related to longtermist AI governance. The compiled ideas are developed to varying degrees, including not just questions, but also some concrete research approaches, arguments, and thoughts on why the questions matter. They differ in scope: while some could be explored over a few months, others could be a productive use of a PhD or several years of research. We do not make strong claims about these questions, e.g. that they are the absolute top priority at current margins. Each idea only represents the views of the person who wrote it. The ideas aren't necessarily original. Where we think someone is already working on, or has already thought about, the topic, we've tried to point to them in the text and to reach out to them before publishing this post. If you are interested in pursuing any of these projects, please let us know by filling out this form. We may be able to help you find mentorship, advice, or collaborators. You can also fill out the form if you're intending to work on the project independently, so that we can help avoid duplication of effort. If you have feedback on the ideas, feel free to email researchideas@governance.ai. You can find the ideas here. Our colleagues at the FHI AI Safety team put together a corresponding post with AI safety research project suggestions here.
Other Sources
Other sources of AI governance research projects include:
- AI Governance: A Research Agenda, Allan Dafoe
- Research questions that could have a big social impact, organised by discipline, 80,000 Hours
- The section on AI in Legal Priorities Research: A Research Agenda, Legal Priorities Project
- Some parts of A research agenda for the Global Priorities Institute, Global Priorities Institute
- AI Impacts' list of Promising Research Projects
- Phil Trammell and Anton Korinek's Economic Growth under Transformative AI
- Luke Muehlhauser's 2014 How to study superintelligence strategy
- You can also look for mentions of possible extensions in papers you find compelling
A list of the ideas in the document:
- The Impact of US Nuclear Strategists in the early Cold War
- Transformative AI and the Challenge of Inequality
- Human-Machine Failing
- Will there be a California Effect for AI?
- Nuclear Safety in China
- History of existential risk concerns around nanotechnology
- Broader impact statements: Learning lessons from their introduction and evolution
- Structuring access to AI capabilities: lessons from synthetic biology
- Bubbles, Winters, and AI
- Lessons from Self-Governance Mechanisms in AI
- How does government intervention and corporate self-governance relate?
- Summary and analysis of “common memes” about AI, in different communities
- A Review of Strategic-Trade Theory
- Mind reading technology
- Compute Governance ideas
- Compute Funds
- Compute Providers as a Node of AI Governance
- China's access to cutting edge chips
- Compute Provider Actor Analysis
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Leaders Forum: Survey on EA priorities (data and analysis), published by Aaron Gertler on The Effective Altruism Forum.
Thanks to Alexander Gordon-Brown, Amy Labenz, Ben Todd, Jenna Peters, Joan Gass, Julia Wise, Rob Wiblin, Sky Mayhew, and Will MacAskill for assisting in various parts of this project, from finalizing survey questions to providing feedback on the final post.
Clarification on pronouns: “We” refers to the group of people who worked on the survey and helped with the writeup. “I” refers to me; I use it to note some specific decisions I made about presenting the data and my observations from attending the event.
This post is the second in a series of posts where we aim to share summaries of the feedback we have received about our own work and about the effective altruism community more generally. The first can be found here.
Overview
Each year, the EA Leaders Forum, organized by CEA, brings together executives, researchers, and other experienced staffers from a variety of EA-aligned organizations. At the event, they share ideas and discuss the present state (and possible futures) of effective altruism. This year (during a date range centered around ~1 July), invitees were asked to complete a “Priorities for Effective Altruism” survey, compiled by CEA and 80,000 Hours, which covered the following broad topics:
- The resources and talents most needed by the community
- How EA's resources should be allocated between different cause areas
- Bottlenecks on the community's progress and impact
- Problems the community is facing, and mistakes we could be making now
This post is a summary of the survey's findings (N = 33; 56 people received the survey). Here's a list of organizations respondents worked for, with the number of respondents from each organization in parentheses. Respondents included both leadership and other staff (an organization appearing on this list doesn't mean that the org's leader responded).
- 80,000 Hours (3)
- Animal Charity Evaluators (1)
- Center for Applied Rationality (1)
- Centre for Effective Altruism (3)
- Centre for the Study of Existential Risk (1)
- DeepMind (1)
- Effective Altruism Foundation (2)
- Effective Giving (1)
- Future of Humanity Institute (4)
- Global Priorities Institute (2)
- Good Food Institute (1)
- Machine Intelligence Research Institute (1)
- Open Philanthropy Project (6)
Three respondents work at organizations small enough that naming the organizations would be likely to de-anonymize the respondents. Three respondents don't work at an EA-aligned organization, but are large donors and/or advisors to one or more such organizations.
What this data does and does not represent
This is a snapshot of some views held by a small group of people (albeit people with broad networks and a lot of experience with EA) as of July 2019. We're sharing it as a conversation-starter, and because we felt that some people might be interested in seeing the data. These results shouldn't be taken as an authoritative or consensus view of effective altruism as a whole. They don't represent everyone in EA, or even every leader of an EA organization. If you're interested in seeing data that comes closer to this kind of representativeness, consider the 2018 EA Survey Series, which compiles responses from thousands of people.
Talent Needs
What types of talent do you currently think [your organization // EA as a whole] will need more of over the next 5 years? (Pick up to 6)
This question was the same as a question asked of Leaders Forum participants in 2018 (see 80,000 Hours' summary of the 2018 Talent Gaps survey for more). Here's a graph showing how the most common responses from 2019 compare to the same categories in the 2018 talent needs survey from 80,000 Hours, for EA as a whole: And for the respondent's organization: The following table contains data on every category ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA, published by Michelle_Hutchinson on the AI Alignment Forum.
I found Will's and Buck's AMAs really interesting. I'm hoping others follow suit, so I thought I'd do one too.
What I work on: I'm head of advising (what we used to call ‘coaching') for 80,000 Hours. That means I chat to people who are in the process of making impact-focused career decisions and help them with those decisions. I also hire people to the team, and manage them - currently we have one other adviser, and we have another joining us next year. Alongside my usual calls, I answer career-related questions in other formats, for example on the 80,000 Hours podcast (the episode will come out next year).
My background: I joined 80,000 Hours from the Global Priorities Institute, which I set up with Hilary Greaves. Before that I ran Giving What We Can and did the operational set-up of the Centre for Effective Altruism. I have a philosophy PhD on prioritising in global health. I wrote about how I initially got involved with effective altruism here.
I'll be answering in a personal capacity so I won't comment much on 80,000 Hours' overall strategy except as it relates to the advising team. I'm very happy to answer questions related to career decisions, and to work I've done in the past. Right now I'm on maternity leave with my first baby, so how fast I respond will depend on how he behaves himself. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you value future people, why do you consider near term effects?, published by Alex HT on LessWrong.
[Nothing here is original, I've just combined some standard EA arguments all in one place]
Introduction
I'm confused about why EAs who place non-negligible value on future people justify the effectiveness of interventions by the direct effects of those interventions. By direct effects I mean the kinds of effects that are investigated by GiveWell, Animal Charity Evaluators, and Charity Entrepreneurship. I mean this in contrast to focusing on the effects of an intervention on the long-term future as investigated by places like Open Phil, the Global Priorities Institute, and the Future of Humanity Institute. This post lays out my current understanding of the problem so that I can find out the bits I'm missing or not understanding properly. I think I'm probably wrong about something because plenty of smart, considerate people disagree with me. Also, to clarify, there are people I admire who choose to work on or donate to near-term causes.
Section one states the problem of cluelessness (for a richer treatment read this: Cluelessness, Hilary Greaves) and explains why we can't ignore the long-term effects of interventions. Section two points at some implications of this for people focussed on traditionally near-term causes like mental health, animal welfare, and global poverty. I think these causes all seem pressing. I think that they are long-term problems (i.e. poverty or factory farms now are just as bad as poverty or factory farms in 1000 years) and that it makes sense to prioritise the interventions that have the best long-term effects on these causes. Section three tries to come up with objections to my view, and respond to them.
1. Cluelessness and Long-term Effects
Simple cluelessness
All actions we take have huge effects on the future. One way of seeing this is by considering identity-altering actions. Imagine that I pass my friend on the street and I stop to chat. She and I will now be on a different trajectory than we would have been otherwise. We will interact with different people, at a different time, in a different place, or in a different way than if we hadn't paused. This will eventually change the circumstances of a conception event such that a different person will now be born because we paused to speak on the street. Now, when the person who is conceived takes actions, I will be causally responsible for those actions and their effects. I am also causally responsible for all the effects flowing from those effects. This is an example of simple cluelessness, which I don't think is problematic. In the above example, I have no reason to believe that the many consequences that would follow from pausing would be better than the many consequences that follow from not pausing. I have evidential symmetry between the two following claims:
- Pausing to chat would have catastrophic effects for humanity
- Not pausing to chat would have catastrophic effects for humanity
And similarly, I have evidential symmetry between the two following claims:
- Pausing to chat would have miraculous effects for humanity
- Not pausing to chat would have miraculous effects for humanity
(I'm assuming there's nothing particularly special about this chat - e.g. we're not chatting about starting a nuclear war or influencing AI policy.)
And for all resulting states of the world between catastrophe and miracle, I have evidential symmetry between act-consequence pairs. By evidential symmetry between two actions, I mean that, though massive value or disvalue could come from a given action, these effects could equally easily, and in precisely analogous ways, result from the relevant alternative actions. In the previous scenario, I assume that each of the possible people that will be born are as like...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evidence, cluelessness, and the long term - Hilary Greaves, published by velutvulpes, juliakarbing on the AI Alignment Forum. Hilary Greaves is a professor of philosophy at the University of Oxford and the Director of the Global Priorities Institute. This talk was delivered at the Effective Altruism Student Summit in October 2020. This transcript has been lightly edited for clarity. Introduction My talk has three parts. In part one, I'll talk about three of the basic canons of effective altruism, as I think most people understand them. Effectiveness, cost-effectiveness, and the value of evidence. In part two, I'll talk about the limits of evidence. It's really important to pay attention to evidence, if you want to know what works. But a problem we face is that evidence can only go so far. In particular, I argue in the second part of my talk that most of the stuff that we ought to care about is necessarily stuff that we basically have no evidence for. This generates the problem that I call 'cluelessness'. And in the third part of my talk, I'll discuss how we might respond to this fact. I don't know the answer and this is something that I struggle with a lot myself, but what I will do in the third part of the talk is I'll lay out five possible responses and I'll at least tell you what I think about each of those possible responses. Part one: effectiveness, cost-effectiveness, and the importance of evidence. Effectiveness So firstly, then, effectiveness. It's a familiar point in discussions of effective altruism and elsewhere that even most well-intentioned interventions don't in fact work at all, or in some cases, they even do more harm than good, on net. One example (which may be familiar to many of you already) is that of Playpumps. Playpumps were supposed to be a novel way of improving access to clean water across rural Africa. The idea is that instead of the village women laboriously pumping the water by hand themselves, you harness the energy and enthusiasm of youth to get children to play on a roundabout; and the turning of the roundabout is what pumps the water. This perhaps seemed like a great idea at the time, and millions of dollars were spent rolling out thousands of these pumps across Africa. But we now know that, well intentioned though it was, this intervention does more harm than good. The Playpumps are inferior to the original hand pumps that they replaced. For another example, one might be concerned to increase school attendance in poor rural areas. To do that, one starts thinking about: "Well, what might be the reasons children aren't going to school in those areas?" And there are lots of things you might think about: maybe because they're so poor they're staying home to work for the family instead, in which case perhaps sponsoring a child so they don't have to do that would help. Maybe they can't afford the school uniform. Maybe they're teenage girls and they're too embarrassed to go to school if they've got their period because they don't have access to adequate sanitary products. There could be lots of things. But let's seize on that last one, which seems like a plausible thing. Maybe their period is what's keeping many teenage girls away from school. If so, then one might very well think distributing free sanitary products would be a cost-effective way of increasing school attendance. 
But at least in one study, this too turns out to have zero net effect on the intended outcome. It has zero net effect on child years spent in school. That's maybe surprising, but that's what the evidence seems to be telling us. So many well-intentioned interventions turn out not to work. Cost-effectiveness Secondly, though, comes cost-effectiveness: even amongst the interventions that do work, there's an enormous variation in how well they work. If you have a fixed s...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Writing about my job: Economics Professor, published by Kevin Kuruc on the AI Alignment Forum.
I am following the advice of Aaron Gertler and writing a post about my job. 80,000 Hours has independent career path pages dedicated to getting an economics PhD and doing academic research, but the specifics of my personal experience may be of interest. Plus, it was fun to recount!
Summary of Current Role
Tenure-track professor of economics (since 2019) at a large state school in the US (University of Oklahoma; Boomer Sooner!). It's not MIT, but I have very bright and active colleagues. Some of you may even know of Joan Hamory of deworming fame. I research macroeconomic topics, primarily questions that are, at least loosely, within Global Priorities Research (GPR). This focus has led to frequent engagement with the Global Priorities Institute as well as some folks at Open Philanthropy (though the latter has been very limited and informal so far).
My Background and Path to Applying
Went to a not-very-prestigious, but large, research university (Temple University; Go Owls!). In undergrad I couldn't get enough of my math and economics courses and was (probably) the best economics student at my university while I was there. This allowed me a lot of access to faculty. Neither parent went to college, so I was lucky that a professor pushed the idea of a PhD. That was not on my radar (nor did I understand it). I also enjoyed researching my honors thesis, learning how to write code, and the development economics internship I had in Cape Town. These made me confident a PhD was a good future move. My only useful extracurricular was a job tutoring math at the university learning center (this honed my only marketable skill - math - and I now recommend it to my students with mathematical aptitude). I then went directly from undergrad to a graduate school ranked ~25 in the US (University of Texas at Austin; Hook 'Em!). I considered taking a job as a research assistant at a Federal Reserve Bank to improve my grad placement. Ultimately I decided the 2-year life-cost was not worth it. Despite the popularity of that route, I very much continue to think I made a good decision in my case. At Texas I worked in the macroeconomics group, but also had a development economics co-advisor. I sat a bit awkwardly between fields. In my 3rd year of graduate school I became very interested in more (not-yet-longtermist) EA ideas. I figured a job at an international organization would be a good path to impact and ended up landing an internship at the IMF. This internship probably only helped a bit towards my current academic life, but I learned a lot, enjoyed it, and can imagine scenarios where this did help land me in an international organization.
Getting my Current Job
There is plenty of advice on navigating the PhD economics job market, so I won't recount my general strategy here. If you're an undergrad instead looking for PhD application advice, check out the GPI mentoring program! Personally, I would have been happy at an academic job or a policy-making organization (preferably something like the IMF or World Bank). I ended up with offers from (i) my current academic institution and (ii) the Reserve Bank of India in their research department.
The stark difference in these offers fairly represents the tightrope I was trying to walk between (i) showing I could do academic-style research, (ii) working on applied policy questions, and (iii) starting to get interested in GPR-style topics. I honestly feel like I didn't blend these very well; yet somehow I managed to land a job I was happy with. I'd be willing to talk with anyone entering the economics job market in the near future about my thoughts on this challenge. Also, I was on the hiring committee at my university this last year, so I now have a clearer understandin...
Phil Trammell is an Oxford PhD student in economics and a research associate at the Global Priorities Institute. Phil is one of the smartest people I know when it comes to the intersection of the long-term future and economic growth. Funnily enough, Phil was my roommate a few years ago in Oxford, and last time I called him he casually said that he had written an extensive report on the econ of AI. A few weeks ago, I decided that I would read that report (which actually is a literature review), and that I would translate everything that I learned along the way into diagrams, so you too can learn what's inside that paper. The video covers everything from macroeconomics 101 to self-improving AI in about 30-ish diagrams.
paper: https://globalprioritiesinstitute.org/wp-content/uploads/Philip-Trammell-and-Anton-Korinek_economic-growth-under-transformative-ai.pdf
video: https://youtu.be/2GCNmmDrRsk
slides: https://www.canva.com/design/DAErBy0hqfQ/sVy6XJmgtJ_cYrGS87_uhw/view
Outline:
- 00:00 Podcast intro
- 01:19 Phil's intro
- 08:58 What's GDP
- 13:42 Decreasing growth
- 15:40 Permanent growth increase
- 19:02 Singularity of type I
- 22:58 Singularity of type II
- 23:24 Production function
- 24:10 The Economy as a two-tubes factory
- 25:09 Marginal Products of labor/capital
- 27:48 Labor/capital-augmenting technology
- 29:13 Technological progress since Ford
- 38:18 Factor payments
- 41:30 Elasticity of substitution
- 48:34 Production function with substitution
- 53:18 Perfect substitutability
- 54:00 Perfect complements
- 55:44 Exogenous growth
- 59:56 How to get long-run growth
- 01:05:40 Endogenous growth
- 01:10:40 The research feedback parameter
- 01:17:35 AI as an imperfect substitute for human labor
- 01:25:25 A simple model for perfect substitution
- 01:33:09 AI as a perfect substitute
- 01:36:07 Substitutability in robotics production
- 01:40:43 OpenAI automating coding
- 01:44:38 Growth impacts via impacts on savings
- 01:46:44 AI in task-based models of goods production
- 01:53:26 AI in technology production
- 02:03:55 Limits of the econ model
- 02:09:00 Conclusion
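For listeners who want a concrete handle on the production-function segments of the outline (production function, elasticity of substitution, perfect substitutability, perfect complements), here is a minimal Python sketch of the standard textbook CES (constant elasticity of substitution) form and its limiting cases. The parameter values are illustrative assumptions for this sketch, not numbers taken from Trammell and Korinek's paper or from the video.

# Standard textbook CES production function (illustrative sketch, not from the paper):
#   Y = A * (alpha * K**rho + (1 - alpha) * L**rho) ** (1 / rho)
# The elasticity of substitution between capital K and labor L is sigma = 1 / (1 - rho):
# rho -> 1 gives perfect substitutes, rho -> 0 the Cobb-Douglas case,
# and rho -> -infinity perfect complements (Leontief, Y close to min(K, L)).

def ces_output(K, L, alpha=0.5, rho=0.5, A=1.0):
    """Output produced from capital K and labor L under a CES technology."""
    return A * (alpha * K ** rho + (1 - alpha) * L ** rho) ** (1 / rho)

K, L = 4.0, 1.0
print(ces_output(K, L, rho=0.999))   # ~2.5: nearly perfect substitutes, 0.5*K + 0.5*L
print(ces_output(K, L, rho=1e-6))    # ~2.0: nearly Cobb-Douglas, sqrt(K * L)
print(ces_output(K, L, rho=-200.0))  # ~1.0: nearly perfect complements, min(K, L)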
Eva discusses the challenges to choosing the most cost-effective causes that are due to uncertainty or lack of knowledge. After describing the problem, Eva presents some possible ways forward. Eva Vivalt is an Assistant Professor in the Department of Economics at the University of Toronto. Dr. Vivalt's main research interests are in investigating stumbling blocks to evidence-based policy decisions, including methodological issues, how evidence is interpreted, and the use of forecasting. Dr. Vivalt is also a PI on Y Combinator Research's basic income RCT and has other interests in labor economics, development, and global priorities research. Dr. Vivalt is the Founder of AidGrade, a research institute that generates and synthesizes evidence in international development, and Co-Founder of the Social Science Prediction Platform, a platform to coordinate the collection of forecasts of research results. Dr. Vivalt holds a Ph.D. in Economics and an M.A. in Mathematics from the University of California, Berkeley, and previously worked with the Development Economics Research Group at the World Bank. Prior to the Ph.D., Dr. Vivalt completed an M.Phil. in Development Studies at the University of Oxford on a Commonwealth Scholarship. Dr. Vivalt has visited the Department of Economics at Yale University and Stanford University and was previously a Senior Lecturer (Australian for Assistant Professor) at the Australian National University. Dr. Vivalt has also visited, and is a Research Collaborator at, the Global Priorities Institute at the University of Oxford. This talk was taken from EA Global Asia and Pacific 2020. Click here to watch the talk with the PowerPoint presentation.
Rossa gives a high-level introduction to global priorities research (GPR). He discusses GPI's research plans, and which other organisations are doing GPR. He also offers some thoughts about what students could do to find out more about GPR. Rossa O'Keeffe-O'Donovan is a Postdoctoral Prize Research Fellow in Economics at Nuffield College and the Assistant Director of the Global Priorities Institute at the University of Oxford. He completed his PhD in Economics at the University of Pennsylvania in May 2017. Before Penn, Rossa completed the M.Sc. in Economics for Development at the University of Oxford. His main research interests are in empirical microeconomics:
- Development economics
- Networks and peer effects
- Public goods
- Structural estimation
I had an excellent chat with Leopold Aschenbrenner. Leopold is a grant winner from Tyler Cowen's Emergent Ventures. He went to Columbia University, aged 15, and graduated in 2021 as valedictorian. (Contents below ↓ ↓ ). He is a researcher at the Global Priorities Institute, thinking about long-termism. He has drafted a provocative paper encompassing ideas of long-termism, existential risk and growth. For some of our conversation we were joined by phantom Tyler Cowen imagining what he might think. We discussed Leopold's critique of German culture and whether he'd swap German infrastructure for the American entrepreneurial spirit; whether being a valedictorian is efficient; whether going to university at 15 is underrated; and life at Columbia University. What you can learn from speed-solving Rubik's cubes, and whether, if Leopold had to make the choice today, he'd still be vegetarian. Thinking about existential risk, Leopold considers whether nuclear or biological warfare risk is a bigger threat than climate change, how growth matters, and whether the rate of growth matters as much depending on how long you think humanity survives. Considering possible underrated existential risks, Leopold sketches out several concerns over the falling global birth rate, how sticky that might be, and whether policy would be effective. We consider what is worth seeing in Germany, how good or not GDP is as a measure, and what we should do with our lives. Leopold has wide-ranging thoughts and, in thinking and working on fat-tail existential ruin risks, is working on saving the human world. Fascinating thoughts.
Transcript here with links, and a video version here. Ben Yeoh's microgrants here.
1:35 How to think about a future career (80,000 Hours)
4:10 Is going to university at 15 years old underrated?
6:22 In favour of college and liberal arts vs Thiel fellowships
9:14 Is being a valedictorian efficient (H/T Tyler Cowen)
13:01 Leopold on externalities and how to sort smart people
15:08 Learnings from Columbia. The importance of work ethic.
19:50 Leopold learning from Adam Tooze and German history
22:16 Leopold critiques German culture on standing out.
23:08 Observations on decline of German universities
25:22 Leopold concerns on the German leadership class
30:25 German infrastructure and if it feels poor
34:13 Critique of too much Netflix
35:27 What to learn from speed cubing Rubik's cubes and weird communities
38:04 Leopold's story of Emergent Ventures and what he found valuable
40:08 Embracing weirdness and disagreeableness
42:20 Leopold considering whether US entrepreneurial culture worth swapping for German infrastructure
44:44 Leopold on social ills of alcohol
44:59 Examining Leopold's ideas of existential risk and growth
48:49 Different views depending on time frame: 700 years or millions of years
52:18 Leopold's view on importance of growth and risk of dark ages
57:07 Climate as a real risk but not a top existential risk
1:01:02 Nuclear weapons as an underrated existential risk
1:01:45 View on emergent AI risk
1:03:20 Falling fertility as an underrated risk
1:15:35 Mormon and eternal family
1:17:29 Underrated/overrated with phantom Tyler Cowen
1:36:10 What EA gets right/wrong, EA as religion?
1:44:56 Advice: Being independent, creative and writing blogs
The barista gives you your coffee and change, and you walk away from the busy line. But you suddenly realise she gave you $1 less than she should have. Do you brush your way past the people now waiting, or just accept this as a dollar you're never getting back? According to philosophy Professor Hilary Greaves - Director of Oxford University's Global Priorities Institute - this simple decision will completely change the long-term future by altering the identities of almost all future generations. This conversation from 2018 blends philosophy with an exploration of the mission and research agenda of the Global Priorities Institute: to develop the effective altruism movement within academia.
Full transcript, related links, and summary of this interview
This episode first broadcast on the regular 80,000 Hours Podcast feed on October 23, 2018. Some related episodes include:
• #16 – Dr Hutchinson on global priorities research & shaping the ideas of intellectuals
• #42 – Amanda Askell on moral empathy, the value of information & the ethics of infinity
• #67 – Dave Chalmers on the nature and ethics of consciousness
• #68 – Will MacAskill on the paralysis argument, whether we're at the hinge of history, & his new priorities
• #72 – Toby Ord on the precipice and humanity's potential futures
• #86 – Hilary Greaves on Pascal's mugging, strong longtermism, and whether existing can be good for us
Series produced by Keiran Harris.
In this episode of the podcast, Sam Harris speaks with William MacAskill about how to do the most good in the world. They discuss the “effective altruism” movement, choosing causes to support, the apparent tension between wealth and altruism, how best to think about generosity over the course of one’s lifetime, and other topics. William MacAskill is an Associate Professor in Philosophy and Research Fellow at the Global Priorities Institute, University of Oxford. He is one of the primary voices in a philanthropic movement known as “effective altruism” and the co-founder of three non-profits based on effective altruist principles: Giving What We Can, 80,000 Hours, and the Centre for Effective Altruism. He is also the Director of the Forethought Foundation for Global Priorities Research and the author of Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference. Website: williammacaskill.com Twitter: @willmacaskill
Podcast: 80,000 Hours Podcast with Rob Wiblin (LS 52 · TOP 0.5%)
Episode: #46 - Prof Hilary Greaves on moral cluelessness & tackling crucial questions in academia
Release date: 2018-10-23
The barista gives you your coffee and change, and you walk away from the busy line. But you suddenly realise she gave you $1 less than she should have. Do you brush your way past the people now waiting, or just accept this as a dollar you're never getting back? According to philosophy Professor Hilary Greaves - Director of Oxford University's Global Priorities Institute, which is hiring - this simple decision will completely change the long-term future by altering the identities of almost all future generations. How? Because by rushing back to the counter, you slightly change the timing of everything else people in line do during that day - including changing the timing of the interactions they have with everyone else. Eventually these causal links will reach someone who was going to conceive a child. By causing a child to be conceived a few fractions of a second earlier or later, you change the sperm that fertilizes their egg, resulting in a totally different person. So asking for that $1 has now made the difference between all the things that this actual child will do in their life, and all the things that the merely possible child - who didn't exist because of what you did - would have done if you decided not to worry about it. As that child's actions ripple out to everyone else who conceives down the generations, ultimately the entire human population will become different, all for the sake of your dollar. Will your choice cause a future Hitler to be born, or not to be born? Probably both! Links to learn more, summary and full transcript. Some find this concerning. The actual long term effects of your decisions are so unpredictable, it looks like you're totally clueless about what's going to lead to the best outcomes. It might lead to decision paralysis - you won't be able to take any action at all. Prof Greaves doesn't share this concern for most real life decisions. If there's no reasonable way to assign probabilities to far-future outcomes, then the possibility that you might make things better in completely unpredictable ways is more or less canceled out by the equally likely opposite possibility. But, if instead we're talking about a decision that involves highly-structured, systematic reasons for thinking there might be a general tendency of your action to make things better or worse -- for example if we increase economic growth -- Prof Greaves says that we don't get to just ignore the unforeseeable effects. When there are complex arguments on both sides, it's unclear what probabilities you should assign to this or that claim. Yet, given its importance, whether you should take the action in question actually does depend on figuring out these numbers. So, what do we do? Today's episode blends philosophy with an exploration of the mission and research agenda of the Global Priorities Institute: to develop the effective altruism movement within academia. We cover:
* How controversial is the multiverse interpretation of quantum physics?
* Given moral uncertainty, how should population ethics affect our real life decisions?
* How should we think about archetypal decision theory problems?
* What are the consequences of cluelessness for those who based their donation advice on GiveWell-style recommendations?
* How could reducing extinction risk be a good cause for risk-averse people?
Get this episode by subscribing: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
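A minimal sketch of the cancellation point made in the episode description above, under the assumption that the credences attached to the unforeseeable long-run outcomes are exactly the same whichever action you take: those terms then contribute identically to both expected values and drop out of the comparison, leaving only the foreseeable effects. The Python snippet below uses purely illustrative numbers, not figures from the episode.

# Toy sketch of 'simple cluelessness': symmetric credences over unforeseeable
# outcomes cancel out of an expected-value comparison (illustrative numbers only).

def expected_value(outcomes):
    """outcomes: list of (credence, value) pairs; only the terms being compared are listed."""
    return sum(credence * value for credence, value in outcomes)

# Foreseeable effects differ between the two actions.
ask_for_dollar = [(1.0, 1.0)]  # you get your dollar back
let_it_go = [(1.0, 0.0)]       # you don't

# Unforeseeable long-run effects: enormous stakes, but identical credences
# under either action (evidential symmetry).
unforeseeable = [(1e-9, 1e12), (1e-9, -1e12)]

ev_ask = expected_value(ask_for_dollar + unforeseeable)
ev_let_go = expected_value(let_it_go + unforeseeable)

# The unforeseeable terms add the same amount to both sides, so the
# comparison is settled by the foreseeable difference alone.
print(ev_ask - ev_let_go)  # prints 1.0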