Podcast appearances and mentions of William MacAskill

  • 103 podcasts
  • 190 episodes
  • 42m average duration
  • 1 new episode monthly
  • Latest episode: Mar 31, 2025

POPULARITY

2017–2024


Best podcasts about William MacAskill

Latest podcast episodes about William MacAskill

Effective Altruism Forum Podcast
“Anthropic is not being consistently candid about their connection to EA” by burner2

Mar 31, 2025 · 6:41


In a recent Wired article about Anthropic, there's a section where Anthropic's president, Daniela Amodei, and early employee Amanda Askell seem to suggest there's little connection between Anthropic and the EA movement: Ask Daniela about it and she says, "I'm not the expert on effective altruism. I don't identify with that terminology. My impression is that it's a bit of an outdated term." Yet her husband, Holden Karnofsky, cofounded one of EA's most conspicuous philanthropy wings, is outspoken about AI safety, and, in January 2025, joined Anthropic. Many others also remain engaged with EA. As early employee Amanda Askell puts it, "I definitely have met people here who are effective altruists, but it's not a theme of the organization or anything." (Her ex-husband, William MacAskill, is an originator of the movement.) This led multiple people on Twitter to call out how bizarre this is: In my [...]

First published: March 30th, 2025
Source: https://forum.effectivealtruism.org/posts/53Gc35vDLK2u5nBxP/anthropic-is-not-being-consistently-candid-about-their
Narrated by TYPE III AUDIO.

Effective Altruism Forum Podcast
“Forethought: A new AI macrostrategy group” by Amrit Sidhu-Brar

Mar 12, 2025 · 7:35


Forethought[1] is a new AI macrostrategy research group cofounded by Max Dalton, Will MacAskill, Tom Davidson, and Amrit Sidhu-Brar. We are trying to figure out how to navigate the (potentially rapid) transition to a world with superintelligent AI systems. We aim to tackle the most important questions we can find, unrestricted by the current Overton window. More details on our website.

Why we exist: We think that AGI might come soon (say, modal timelines to mostly-automated AI R&D in the next 2-8 years), and might significantly accelerate technological progress, leading to many different challenges. We don't yet have a good understanding of what this change might look like or how to navigate it. Society is not prepared. Moreover, we want the world to not just avoid catastrophe: we want to reach a really great future. We think about what this might be like (incorporating [...]

Outline: (00:34) Why we exist; (01:57) Research; (02:00) Research agendas; (03:13) Recent work; (03:34) Approach; (03:37) Comparison to other efforts; (04:14) Principles; (05:35) What you can do; (05:39) Engage with our research; (06:08) Apply to work with us; (06:25) Funding

The original text contained 1 footnote, which was omitted from this narration.

First published: March 11th, 2025
Source: https://forum.effectivealtruism.org/posts/6JnTAifyqz245Kv7S/forethought-a-new-ai-macrostrategy-group
Narrated by TYPE III AUDIO.

Philosophy for our times
Longtermism SPECIAL: The next stage of effective altruism

Feb 28, 2025 · 33:04


Should we sacrifice the present for a better future? Join the team at the IAI for three articles about effective altruism, longtermism, and the complex evolution of moral thought. Written by William MacAskill, James W. Lenman, and Ben Chugg, these three articles pick apart the ethical movement started by Peter Singer, analysing its strengths and weaknesses for both individuals and societies. William MacAskill is a Scottish philosopher and author, best known for writing 2022's "What We Owe the Future." James W. Lenman is Professor of Philosophy at the University of Sheffield, as well as the former president of the British Society for Ethical Theory. Ben Chugg is a PhD student in the machine learning department at Carnegie Mellon University. He also co-hosts the Increments podcast.

To witness such debates live, buy tickets for our upcoming festival: https://howthelightgetsin.org/festivals/
And visit our website for many more articles, videos, and podcasts like this one: https://iai.tv/
You can find everything we referenced here: https://linktr.ee/philosophyforourtimes
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Overpopulation Podcast
Émile P. Torres | Highway to Hell: The Dystopian Fantasies of Tech Billionaires

Apr 30, 2024 · 66:49


In this episode, we chat with philosopher and historian Dr. Émile P. Torres about the dystopian fantasies of ecologically-blind tech billionaires – transhumanists, longtermists, and effective altruists – of defying nature, transcending humanity, and colonizing the universe. Highlights of our conversation include:

  • how transhumanism is built on the idea of creating God-like AI to reengineer humanity to achieve immortality, sustain capitalist growth, and colonize space;
  • how effective altruism's utilitarian approach to philanthropy is not only blind to systemic change, social inequalities, and moral integrity, but also perpetuates a neoliberal ideology that further contributes to inequality and exploitation, e.g. Sam Bankman-Fried's 'earn to give' fraud;
  • the intersection of capitalism, longtermism, and the proliferation of global catastrophic risks in the age of AI, and the ethical implications of unchecked technological progress and environmental destruction;
  • the stark differences between indigenous long-term thinking and the impoverished, hubristic longtermist philosophy put forward by William MacAskill and Nick Bostrom, which prioritizes the existence of trillions of future disembodied people living in computer simulations over the suffering of present-day people and nonhumans;
  • their dangerous rhetoric around 'depopulation panic', based on an underestimation of environmental destruction and on anthropocentrism, and how some people, like Malcolm and Simone Collins, are repopulating humanity with their own seemingly superior genetic material.

See the episode website for show notes, links, and transcript: https://www.populationbalance.org/podcast/emile-torres

ABOUT US: The Overpopulation Podcast features enlightening conversations between Population Balance executive director Nandita Bajaj, researcher Alan Ware, and expert guests. We cover a broad variety of topics that explore the impacts of our expanding human footprint on human rights, animal protection, and environmental restoration, as well as individual and collective solutions. Learn more here: https://www.populationbalance.org/
Copyright 2024 Population Balance

The Nonlinear Library
EA - Personal reflections on FTX by William MacAskill

Apr 18, 2024 · 1:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Personal reflections on FTX, published by William MacAskill on April 18, 2024, on The Effective Altruism Forum. The two podcasts where I discuss FTX are now out: Making Sense with Sam Harris, and Clearer Thinking with Spencer Greenberg. The Sam Harris podcast is aimed more at a general audience; the Spencer Greenberg podcast is aimed more at people already familiar with EA. (I've also done another podcast with Chris Anderson from TED that will come out next month, but FTX is a fairly small part of that conversation.) In this post, I'll gather together some things I talk about across these podcasts - this includes updates and lessons, and responses to some questions that have been raised on the Forum recently. I'd recommend listening to the podcasts first, but these comments can be read on their own, too. I cover a variety of different topics, so I'll cover each topic in separate comments underneath this post. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Effective Altruism Forum Podcast
“Personal reflections on FTX” by William_MacAskill

Apr 18, 2024 · 1:12


The two podcasts where I discuss FTX are now out: Making Sense with Sam Harris, and Clearer Thinking with Spencer Greenberg. The Sam Harris podcast is aimed more at a general audience; the Spencer Greenberg podcast is aimed more at people already familiar with EA. (I've also done another podcast with Chris Anderson from TED that will come out next month, but FTX is a fairly small part of that conversation.) In this post, I'll gather together some things I talk about across these podcasts — this includes updates and lessons, and responses to some questions that have been raised on the Forum recently. I'd recommend listening to the podcasts first, but these comments can be read on their own, too. I cover a variety of different topics, so I'll cover each topic in separate comments underneath this post.

First published: April 18th, 2024
Source: https://forum.effectivealtruism.org/posts/A2vBJGEbKDpuKveHk/personal-reflections-on-ftx
Narrated by TYPE III AUDIO.

The Nonlinear Library
EA - Spencer Greenberg and William MacAskill: What should the EA movement learn from the SBF/FTX scandal? by AnonymousTurtle

Apr 16, 2024 · 1:19


This is: Spencer Greenberg and William MacAskill: What should the EA movement learn from the SBF/FTX scandal?, published by AnonymousTurtle on April 16, 2024, on The Effective Altruism Forum. What are the facts around Sam Bankman-Fried and FTX about which all parties agree? What was the nature of Will's relationship with SBF? What things, in retrospect, should've been red flags about Sam or FTX? Was Sam's personality problematic? Did he ever really believe in EA principles? Does he lack empathy? Or was he on the autism spectrum? Was he naive in his application of utilitarianism? Did EA intentionally install SBF as a spokesperson, or did he put himself in that position of his own accord? What lessons should EA leaders learn from this? What steps should be taken to prevent it from happening again? What should EA leadership look like moving forward? What are some of the dangers around AI that are not related to alignment? Should AI become the central (or even the sole) focus of the EA movement? (The Clearer Thinking podcast is aimed more at people in or related to EA, whereas Sam Harris's wasn't.)

Clearer Thinking with Spencer Greenberg
What should the Effective Altruism movement learn from the SBF / FTX scandal? (with Will MacAskill)

Apr 15, 2024 · 121:52


What are the facts around Sam Bankman-Fried and FTX about which all parties agree? What was the nature of Will's relationship with SBF? What things, in retrospect, should've been red flags about Sam or FTX? Was Sam's personality problematic? Did he ever really believe in EA principles? Does he lack empathy? Or was he on the autism spectrum? Was he naive in his application of utilitarianism? Did EA intentionally install SBF as a spokesperson, or did he put himself in that position of his own accord? What lessons should EA leaders learn from this? What steps should be taken to prevent it from happening again? What should EA leadership look like moving forward? What are some of the dangers around AI that are not related to alignment? Should AI become the central (or even the sole) focus of the EA movement?

William MacAskill is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. He also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and 80,000 Hours, which together have moved over $300 million to effective charities. He's the author of What We Owe The Future, Doing Good Better, and Moral Uncertainty.

Further reading:
  • Episode 133: The FTX catastrophe (with Byrne Hobart, Vipul Naik, Maomao Hu, Marcus Abramovich, and Ozzie Gooen) – our previous podcast episode about what happened in the FTX disaster
  • "Who is Sam Bankman-Fried (SBF) really, and how could he have done what he did? – three theories and a lot of evidence" – Spencer's essay about SBF's personality
  • Why They Do It: Inside the Mind of the White-Collar Criminal by Eugene Soltes

Staff: Spencer Greenberg (host/director), Josh Castle (producer), Ryan Kessler (audio engineer), Uri Bram (factotum), WeAmplify (transcriptionists), Alexandria D. (research and special projects assistant)
Music: Broke for Free, Josh Woodward, Lee Rosevere, Quiet Music for Tiny Robots, wowamusic, zapsplat.com
Affiliates: Clearer Thinking, GuidedTrack, Mind Ease, Positly, UpLift

Making Sense with Sam Harris
#361 — Sam Bankman-Fried & Effective Altruism

Apr 1, 2024 · 85:25


Sam Harris speaks with William MacAskill about the implosion of FTX and the effect that it has had on the Effective Altruism movement. They discuss the logic of “earning to give,” the mind of SBF, his philanthropy, the character of the EA community, potential problems with focusing on long-term outcomes, AI risk, the effects of the FTX collapse on Will personally, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.   Learning how to train your mind is the single greatest investment you can make in life. That's why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life's most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

Making Sense with Sam Harris - Subscriber Content
#361 - Sam Bankman-Fried & Effective Altruism

Apr 1, 2024 · 85:25


Share this episode: https://www.samharris.org/podcasts/making-sense-episodes/361-sam-bankman-fried-effective-altruism Sam Harris speaks with William MacAskill about the implosion of FTX and the effect that it has had on the Effective Altruism movement. They discuss the logic of “earning to give,” the mind of SBF, his philanthropy, the character of the EA community, potential problems with focusing on long-term outcomes, AI risk, the effects of the FTX collapse on Will personally, and other topics. William MacAskill is an associate professor of moral philosophy at Oxford University, and author of Doing Good Better, Moral Uncertainty, and What We Owe The Future. He cofounded the nonprofits 80,000 Hours, Centre for Effective Altruism, and Giving What We Can, and helped to launch the effective altruism movement, which encourages people to use their time and money to support the projects that are most effectively making the world a better place. Website: www.williammacaskill.com Twitter: @willmacaskill

The Nonlinear Library
EA - Recent and upcoming media related to EA by 2ndRichter

Mar 28, 2024 · 1:12


This is: Recent and upcoming media related to EA, published by 2ndRichter on March 28, 2024, on The Effective Altruism Forum. I'm Emma from the Communications team at the Centre for Effective Altruism (CEA). I want to flag a few media items related to EA that have come out recently or will be coming out soon, given they'll touch on topics - like FTX - that I expect will be of interest to Forum readers. The CEO of CEA, @Zachary Robinson, wrote an op-ed that came out today addressing Sam Bankman-Fried and the continuing value of EA. (Read here.) @William_MacAskill will appear on two podcasts and will discuss FTX: Clearer Thinking with Spencer Greenberg and the Making Sense podcast with Sam Harris. The podcast episode with Sam Harris will likely be released next week and is aimed at a general audience. The podcast episode with Spencer Greenberg will likely be released in two weeks and is aimed at people more familiar with the EA movement. I'll add links for these episodes once they become available and plan to update this post as needed.

KERA's Think
The decision to have kids feels more complicated than ever

Feb 9, 2024 · 48:25


Birth rates in the U.S. are on the decline – so why is that? Host Krys Boyd talks about why millennials are having fewer children than previous generations with Washington Post columnist Andrew Van Dam; population projections with Bryan Walsh, editor of Vox's Future Perfect; and we'll hear from philosophy professor William MacAskill on why the welfare of future generations should matter to everyone.

London Futurists
What is your p(doom)? with Darren McKee

Jan 18, 2024 · 42:17


In this episode, our subject is Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. That's a new book on a vitally important subject. The book's front cover carries this endorsement from Professor Max Tegmark of MIT: "A captivating, balanced and remarkably up-to-date book on the most important issue of our time." There's also high praise from William MacAskill, Professor of Philosophy at the University of Oxford: "The most accessible and engaging introduction to the risks of AI that I've read." Calum and David had lots of questions ready to put to the book's author, Darren McKee, who joined the recording from Ottawa in Canada. Topics covered included Darren's estimates for when artificial superintelligence is 50% likely to exist, and his p(doom), that is, the likelihood that superintelligence will prove catastrophic for humanity. There are also Darren's recommendations on the principles and actions needed to reduce that likelihood.

Selected follow-ups:
  • Darren McKee's website
  • The book Uncontrollable
  • Darren's podcast The Reality Check
  • The Lazarus Heist on BBC Sounds
  • The Chair's Summary of the AI Safety Summit at Bletchley Park
  • The Statement on AI Risk by the Center for AI Safety

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

The Nonlinear Library
EA - ウィリアム・マッカスキル「効果的利他主義の定義」 by EA Japan

Jan 10, 2024 · 68:13


This is: ウィリアム・マッカスキル「効果的利他主義の定義」 (William MacAskill, "The Definition of Effective Altruism"), published by EA Japan on January 10, 2024, on The Effective Altruism Forum. This is a Japanese translation of William MacAskill, 'The Definition of Effective Altruism', available at MacAskill's website. Translated by 清水颯 (Hayate Shimizu; link to his Researchmap).

Today the world faces many problems. More than 750 million people live on less than $1.90 per day (at purchasing power parity)[1]. Around six million children die each year from easily preventable causes such as malaria, diarrhea, and pneumonia[2]. Climate change is set to wreak environmental havoc and inflict trillions of dollars of economic damage[3]. A third of women worldwide have suffered sexual or physical violence[4]. More than 3,000 nuclear warheads are on high alert around the world, ready for use at short notice[5]. Bacteria are becoming resistant to antibiotics[6]. Partisanship is increasing, and democracy may be in decline[7].

Given how many problems the world faces, and how serious those problems are, surely we have a responsibility to do something about them. But what? There are countless problems we could work on, and countless ways of working on each of them. Moreover, our resources are limited, so neither as individuals nor as a globe can we solve all of these problems at once. We must therefore decide how to allocate the resources we have. But on what basis should we make such decisions?

As a result, the effective altruism community has contributed to major achievements in reducing global catastrophic risk, in farm animal welfare, and in global health. In 2016 alone, the community protected 6.5 million children from malaria by providing long-lasting insecticide-treated bednets, spared 360 million hens from life in cages, and gave significant momentum and support to the development of technical AI safety as a mainstream area of machine learning research[13].

The movement has also had a substantial influence on academic discussion. Books on the topic include Peter Singer's The Most Good You Can Do and my own Doing Good Better[14], and academic articles endorsing or criticizing effective altruism have appeared in Philosophy and Public Affairs, Utilitas, the Journal of Applied Philosophy, Ethical Theory and Moral Practice, and other publications[15]. A volume of Essays in Philosophy was devoted to the topic, and the Boston Review has published essays on effective altruism by academics[16].

To have a meaningful academic discussion of effective altruism, however, we need to agree on what we are talking about. In this chapter I aim to help with that: I present the Centre for Effective Altruism's definition, explain why the Centre chose it, and offer a precise philosophical interpretation of it. I believe that this understanding of effective altruism, which is widely endorsed within the effective altruism community, is far removed from the understanding held by much of the general public and by many of the movement's critics. After explaining why I favor this definition, I take the opportunity to correct some widespread misconceptions about effective altruism.

Before we begin, it is important to note that in defining "effective altruism" we are not trying to describe some fundamental aspect of morality. In empirical fields of study we can distinguish science from engineering. Science tries to discover general truths about the world we live in. Engineering uses scientific understanding to design and build structures and systems that are useful to society. A similar distinction can be drawn in moral philosophy. Typically, moral philosophy aims to discover general truths about the nature of morality; this corresponds to normative science. But there is also a part of moral philosophy that corresponds to engineering: for example, creating new moral concepts that, if widely adopted by society, would make the world better.

Defining "effective altruism" is an engineering problem of this kind, not a matter of describing a basic aspect of morality. From this perspective, I propose two main requirements that a definition should satisfy. The first is that it should fit the actual practice of those who are currently said to be engaged in effective altruism, and the understanding of effective altruism held by the community's leaders. The second is that the concept should be as publicly valuable as possible: broad enough, for example, to be endorsed by and useful to a wide range of moral views, yet restricted enough that those who use the concept do more to improve the world than they otherwise would have. This is, of course, a balancing act.

1. Previous definitions of effective altruism

The term "effective altruism" was coined on December 3, 2011, through a democratic process among seventeen people involved in founding the Centre for Effective Altruism[17]. But no official definition of the term was introduced. Over the years, effective altruism has been defined in various ways by various people. For example:

(1) For us, "effective altruism" means trying to do as much good as possible with each dollar and each hour that we have[18].
(2) Effective altruism is about asking "How can I make the biggest difference I can?" and using evidence and careful reasoning to find an answer[19].
(3) Effective altruism is based on a very simple idea: we should do the most good we can [...]. Living a minimally acceptable ethical life involves using a substantial part of one's spare resources to make the world a better place. Living a fully ethical life involves doing the most good one can[20].
(4) Effective altruism is a research field that uses high-quality evidence and careful reasoning to work out how to help others as much as possible. It is also a community of people who take those answers seriously and devote their efforts to the most promising solutions to the world's most pressing problems[21].
(5) Effective altruism is a philosophy and social movement that uses evidence and reason to determine the most effective ways to benefit others[22].

These definitions have several points in common[23]. All of them invoke the idea of maximization, and all concern the achievement of some value, whether that is promoting wellbeing or simply achieving good in general. But there are also differences. Definitions (1) and (3) speak of "doing good," whereas definitions (4) and (5) speak of "helping others" or "benefiting others." Unlike the others, (3) presents effective altruism as a normative claim rather than as a non-normative project such as an activity, a research field, or a movement. Definitions (2), (4), and (5) invoke the idea of using evidence and careful reasoning, while definitions (1) and (3) do not.

The Centre for Effective Altruism's definition takes a stand on each of these points, defining effective altruism as follows: effective altruism is about using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis[24]. I took the lead in drafting this definition, with input from many advisers in the effective altruism community and with considerable help from Julia Wise and Rob Bensinger. The definition, together with a set of guiding values that accompany it, has been formally endorsed by the great majority of leaders in the effective altruism community[25].

There is no "official" definition of effective altruism, but the Centre's definition comes closer to one than any other. Because this statement of effective altruism is aimed at a general rather than a philosophical readership, however, some precision has been sacrificed for accessibility. So here I will give a more precise formulation and then explain the definition in detail. My definition is as follows...

Büchermarkt - Deutschlandfunk
William MacAskill: "Was wir der Zukunft schulden"

Jan 9, 2024 · 5:13


Bertsch, Matthias | www.deutschlandfunk.de, Büchermarkt

Büchermarkt - Deutschlandfunk
Büchermarkt 09.01.2024: Paula Schweers, William MacAskill Rhinozeros Magazin

Jan 9, 2024 · 19:33


Albath, Maike | www.deutschlandfunk.de, Büchermarkt

Andruck - Deutschlandfunk
William MacAskill: "Was wir der Zukunft schulden"

Jan 8, 2024 · 7:17


Bertsch, Matthias | www.deutschlandfunk.de, Andruck - Das Magazin für Politische Literatur

The New Statesman Podcast
The philosopher and the crypto king: Sam Bankman-Fried and the effective altruism delusion | Audio Long Read

Sep 23, 2023 · 36:12


At the time of writing, the crypto billionaire Sam Bankman-Fried is due to stand trial on 3 October 2023. He stands accused of fraud and money-laundering on an epic scale through his currency exchange FTX. Did he gamble with other people's money in a bid to do the maximum good? In this week's long read, the New Statesman's associate editor Sophie McBain examines the relationship between Bankman-Fried and the Oxford-based effective altruism (EA) movement. The billionaire was a close associate and supporter of William MacAskill, the Scottish moral philosopher who many consider EA's leader. It was MacAskill who had persuaded him – and many other young graduates – to earn more, in order to give more. But how much money was enough – and what should they spend it on? Was EA just “a dumb game we woke Westerners play”, as Bankman-Fried told one journalist? In conversations with EA members past and present, McBain hears how the movement was altered by its enormous wealth. As the trial of its biggest sponsor approaches, will effective altruism survive – or be swallowed by its more cynical Silicon Valley devotees? Written and read by Sophie McBain. This article originally appeared in the 22-28 September 2023 edition of the New Statesman; you can read the text version here. If you enjoyed listening to this episode, you might also like Big Tech and the quest for eternal youth, by Jenny Kleeman. Hosted on Acast. See acast.com/privacy for more information.

Audio Long Reads, from the New Statesman
The philosopher and the crypto king: Sam Bankman-Fried and the effective altruism delusion

Sep 23, 2023 · 36:12


At the time of writing, the crypto billionaire Sam Bankman-Fried is due to stand trial on 3 October 2023. He stands accused of fraud and money-laundering on an epic scale through his currency exchange FTX. Did he gamble with other people's money in a bid to do the maximum good? In this week's long read, the New Statesman's associate editor Sophie McBain examines the relationship between Bankman-Fried and the Oxford-based effective altruism (EA) movement. The billionaire was a close associate and supporter of William MacAskill, the Scottish moral philosopher who many consider EA's leader. It was MacAskill who had persuaded him – and many other young graduates – to earn more, in order to give more. But how much money was enough – and what should they spend it on? Was EA just “a dumb game we woke Westerners play”, as Bankman-Fried told one journalist? In conversations with EA members past and present, McBain hears how the movement was altered by its enormous wealth. As the trial of its biggest sponsor approaches, will effective altruism survive – or be swallowed by its more cynical Silicon Valley devotees? Written and read by Sophie McBain. This article originally appeared in the 22-28 September 2023 edition of the New Statesman; you can read the text version here. If you enjoyed listening to this episode, you might also like Big Tech and the quest for eternal youth, by Jenny Kleeman. Hosted on Acast. See acast.com/privacy for more information.

What's Left of Philosophy
73 | Effective Altruism is Terrible w/ John Duncan

Sep 20, 2023 · 60:55


In this episode, we are joined by researcher and video essayist John Duncan (@Johntheduncan) to talk about the Effective Altruism movement and why it is so comprehensively awful. Granted, it's got some pretty solid marketing: who could be against altruism, especially if it's effective? But consider: from its individualism to its focus on cost-effectiveness and rates of return, from its idealist historiography to its refusal to cop to its obvious utilitarianism, from its naive empiricism to its wild-eyed obsession with preventing the Singularity, it's really just the spontaneous ideology of 21st century capitalism cosplaying as ethics. Look, if your moral project involves you working in finance or for DARPA, sees new sweatshops in the global south as a good thing, and is beloved by tech bro billionaires, you've made a wrong turn somewhere. It's deeply embarrassing, and accordingly we drag it for filth.

leftofphilosophy.com | @leftofphil
https://www.youtube.com/@JohntheDuncan

References:
  • William MacAskill, "The Definition of Effective Altruism", in Effective Altruism: Philosophical Issues, eds. Hilary Greaves and Theron Pummer (New York: Oxford University Press, 2019).
  • William MacAskill, What We Owe the Future (New York: Hachette, 2022).
  • Adams et al., The Good It Promises, The Harm It Does: Critical Essays on Effective Altruism (New York: Oxford University Press, 2023).

Music: Vintage Memories by Schematist | schematist.bandcamp.com

The Nonlinear Library
EA - The possibility of an indefinite AI pause by Matthew Barnett

Sep 19, 2023 · 24:32


This is: The possibility of an indefinite AI pause, published by Matthew Barnett on September 19, 2023, on The Effective Altruism Forum. This post is part of AI Pause Debate Week. Please see this sequence for other posts in the debate.

tl;dr: An indefinite AI pause is a somewhat plausible outcome, and could be made more likely if EAs actively push for a generic pause. I think an indefinite pause proposal is substantially worse than a brief pause proposal, and would probably be net negative. I recommend that alternative policies with greater effectiveness and fewer downsides be considered instead.

Broadly speaking, there seem to be two types of moratoriums on technologies: (1) moratoriums that are quickly lifted, and (2) moratoriums that are later codified into law as indefinite bans. In the first category, we find the voluntary 1974 moratorium on recombinant DNA research, the 2014 moratorium on gain-of-function research, and the FDA's partial 2013 moratorium on genetic screening. In the second category, we find the 1958 moratorium on conducting nuclear tests above the ground (later codified in the 1963 Partial Nuclear Test Ban Treaty), and the various moratoriums worldwide on human cloning and germline editing of human genomes. In these cases, it is unclear whether the bans will ever be lifted - unless at some point it becomes infeasible to enforce them. Overall I'm quite uncertain about the costs and benefits of a brief AI pause. The foreseeable costs of a brief pause, such as the potential for a compute overhang, have been discussed at length by others, and I will not focus my attention on them here. I recommend reading this essay to find a perspective on brief pauses that I'm sympathetic to.
However, I think it's also important to consider whether, conditional on us getting an AI pause at all, we're actually going to get a pause that quickly ends. I currently think there is a considerable chance that society will impose an indefinite de facto ban on AI development, and this scenario seems worth analyzing in closer detail. Note: in this essay, I am only considering the merits of a potential lengthy moratorium on AI, and I freely admit that there are many meaningful axes on which regulatory policy can vary other than "more" or "less". Many forms of AI regulation may be desirable even if we think a long pause is not a good policy. Nevertheless, it still seems worth discussing the long pause as a concrete proposal of its own.

The possibility of an indefinite pause

Since an "indefinite pause" is vague, let me be more concrete. I currently think there is between a 10% and 50% chance that our society will impose legal restrictions on the development of advanced AI systems that:
  • prevent the proliferation of advanced AI for more than 10 years beyond the counterfactual under laissez-faire, and
  • have no fixed, predictable expiration date (without necessarily lasting forever).

Eliezer Yudkowsky, perhaps the most influential person in the AI risk community, has already demanded an "indefinite and worldwide" moratorium on large training runs. This sentiment isn't exactly new. Some effective altruists, such as Toby Ord, have argued that humanity should engage in a "long reflection" before embarking on ambitious and irreversible technological projects, including AGI. William MacAskill suggested that this pause should perhaps last "a million years". Two decades ago, Nick Bostrom considered the ethics of delaying new technologies in a utilitarian framework and concluded a delay of "over 10 million years" may be justified if it reduces existential risk by a single percentage point. I suspect there are approximately three ways that such a pause could come about.
The first possibility is that governments could explicitly write such a pause into law, fearing the development of AI in a broad sense,...

The Nonlinear Library
EA - New Princeton course on longtermism by Calvin Baker

The Nonlinear Library

Play Episode Listen Later Sep 2, 2023 16:21


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Princeton course on longtermism, published by Calvin Baker on September 2, 2023 on The Effective Altruism Forum. This semester (Fall 2023), Prof Adam Elga and I will be co-instructing Longtermism, Existential Risk, and the Future of Humanity, an upper div undergraduate philosophy seminar at Princeton. (Yes, I did shamelessly steal half of our title from The Precipice.) We are grateful for support from an Open Phil course development grant and share the reading list here for all who may be interested.

Part 1: Setting the stage

Week 1: Introduction to longtermism and existential risk
Core:
- Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury. Read introduction, chapter 1, and chapter 2 (pp. 49-56 optional); chapters 4-5 optional but highly recommended.
Optional:
- Roser (2022) "The Future is Vast: Longtermism's perspective on humanity's past, present, and future" Our World in Data
- Karnofsky (2021) 'This can't go on' Cold Takes (blog)
- Kurzgesagt (2022) "The Last Human - A Glimpse into the Far Future"

Week 2: Introduction to decision theory
Core:
- Weisberg, J. (2021). Odds & Ends. Read chapters 8, 11, and 14.
- Ord, T., Hillerbrand, R., & Sandberg, A. (2010). "Probing the improbable: Methodological challenges for risks with low probabilities and high stakes." Journal of Risk Research, 13(2), 191-205. Read sections 1-2.
Optional:
- Weisberg, J. (2021). Odds & Ends chapters 5-7 (these may be helpful background for understanding chapter 8, if you don't have much background in probability).
- Titelbaum, M. G. (2020) Fundamentals of Bayesian Epistemology chapters 3-4

Week 3: Introduction to population ethics
Core:
- Parfit, Derek. 1984. Reasons and Persons. Oxford: Oxford University Press. Read sections 4.16.120-23, 125, and 127 (pp. 355-64; 366-71, and 377-79).
- Parfit, Derek. 1986. "Overpopulation and the Quality of Life." In Applied Ethics, ed. P. Singer, 145-164. Oxford: Oxford University Press. Read sections 1-3.
Optional:
- Remainders of Part IV of Reasons and Persons and "Overpopulation and the Quality of Life"
- Greaves (2017) "Population Axiology" Philosophy Compass
- McMahan (2022) "Creating People and Saving People" section 1, first page of section 4, and section 8
- Temkin (2012) Rethinking the Good 12.2 pp. 416-17 and section 12.3 (esp. pp. 422-27)
- Harman (2004) "Can We Harm and Benefit in Creating?"
- Roberts (2019) "The Nonidentity Problem" SEP
- Frick (2022) "Context-Dependent Betterness and the Mere Addition Paradox"
- Mogensen (2019) "Staking our future: deontic long-termism and the non-identity problem" sections 4-5

Week 4: Longtermism: for and against
Core:
- Greaves, Hilary and William MacAskill. 2021. "The Case for Strong Longtermism." Global Priorities Institute Working Paper No. 5-2021. Read sections 1-6 and 9.
- Curran, Emma J. 2023. "Longtermism and the Complaints of Future People." Forthcoming in Essays on Longtermism, ed. H. Greaves, J. Barrett, and D. Thorstad. Oxford: OUP. Read section 1.
Optional:
- Thorstad (2023) "High risk, low reward: A challenge to the astronomical value of existential risk mitigation." Focus on sections 1-3.
- Curran, E. J. (2022). "Longtermism, Aggregation, and Catastrophic Risk" (GPI Working Paper 18-2022). Global Priorities Institute.
- Beckstead (2013) "On the Overwhelming Importance of Shaping the Far Future" Chapter 3
- "Toby Ord on why the long-term future of humanity matters more than anything else, and what we should do about it" 80,000 Hours podcast
- Frick (2015) "Contractualism and Social Risk" sections 7-8

Part 2: Philosophical problems

Week 5: Fanaticism
Core:
- Bostrom, N. (2009). "Pascal's mugging." Analysis, 69 (3): 443-445.
- Russell, J. S. "On two arguments for fanaticism." Noûs, forthcoming. Read sections 1, 2.1, and 2.2.
- Temkin, L. S. (2022). "How Expected Utility Theory Can Drive Us Off the Rails." In L.
S. ...

The National Security Podcast
Mapping the future: how strategic foresight can supercharge policymaking

The National Security Podcast

Play Episode Listen Later Aug 24, 2023 54:03


What major trends will shape the next two decades? How can futures analysis be used to manage risk and harness opportunities? And how can governments better integrate futures thinking into public administration? In this episode of the National Security Podcast, Dr Joseph Voros, Odette Meli and Dr Ryan Young join Dayle Stanley to discuss the intricacies and applications of futures analysis. Dr Joseph Voros is a physicist and futurist with over 25 years of experience in futures analysis. Odette Meli has more than 25 years of professional experience at the Australian Federal Police, where she established and led the Strategic Insights Centre. Dr Ryan Young is the Director, Research & Methods at the NSC Futures Hub. Dayle Stanley is the Director, Strategy and Engagement at the NSC Futures Hub.

Show notes:
- ANU National Security College academic programs
- FuturePod
- Future Shock by Alvin Toffler
- What We Owe the Future by William MacAskill
- ANZPAA Futures and Strategic Foresight Toolkit
- Futures Hub at the ANU National Security College
- Joseph Voros' Voroscope blog
- UK: Government Office on Foresight
- Canada: Policy Horizons
- Singapore: Centre for Strategic Futures
- US: National Intelligence Council Publications
- New Zealand: Futures thinking

To connect with the Futures Hub about their work or possible employment opportunities, email the team at futureshub.nsc@anu.edu.au. Hosted on Acast. See acast.com/privacy for more information.

The Ski Podcast
183: Val Thorens in summer, Aussie lift queues & POW's 'Send It' campaign

The Ski Podcast

Play Episode Listen Later Aug 21, 2023 49:58


Val Thorens in summer, how Australian ski resorts are dealing with lift queues and all about the latest campaign from Protect Our Winters. Iain was joined by Jen Tsang from ThatsLaPlagne.com, and Lindsey Dixon from Protect Our Winters.

Intersport Ski Hire Discount Code: Save money on your ski hire by using the code ‘SKIPODCAST' at intersportrent.com, or simply take this link for your discount to be automatically applied at the checkout.

SHOW NOTES
- Lindsey was last on the show in Episode 168, talking about taking the train to Val d'Isere (1:00)
- Lindsey skied at Skieasy in Chiswick (3:00)
- Moving Mountains offers the same option in Sussex (4:30)
- Listen to Iain's report on skiing in Australia in Episode 182 (6:00)
- Jen and her family took part in the VT Summit Games (8:45)
- It snowed while Jen was in Les 3 Vallees (9:30) https://twitter.com/3Vallees_france/status/1688461637164929024
- Find out more about the new sports centre Le Board (11:00)
- The soft play at Le Board sounds great fun (14:45)
- Chez Pepe Nicholas is a wonderful mountain restaurant in Les Menuires (17:00)
- Jen was in Les Gets for the Tour de France (21:00)
- It's going to be WARM in the Alps this week (26:45) https://twitter.com/skipedia/status/1692068589220626849
- The 'Send It For Climate' campaign from Protect Our Winters is due to launch very soon (28:00)
- Postcards and postboxes will be available from partners such as Ellis Brigham and Patagonia (31:00)
- It was the ninth warmest July in Australia (33:15)
- Perisher has been getting some tough criticism on social media, especially TikTok (33:45)
- Thredbo have brought in a cap on lift pass sales (35:00)
- Australia has ‘Lodges' that work very much like our chalets (39:00)
- Listen to Iain's interview with Helen Coffey, the non-flying travel editor of The Independent (42:00)
- Many offsetting projects have been discredited (42:30)
- Iain donated to an organisation called ‘Cool Earth' (43:00)
- William MacAskill is the author of a book called ‘Doing Good Better' and one of the founders of ‘Effective Altruism' (43:30)

Feedback
I enjoy all feedback about the show. I like to know what you think and your ideas for features, so please contact me on social @theskipodcast or by email theskipodcast@gmail.com.
- Doug: "Listened to #182 on a run and loved it."
- Simon Burgess: “I really enjoyed the Thredbo/Perisher episode. It brought back some really nice memories.”
- Bo Spanding: “Truly enjoying the podcast, but I'm wondering why you're never talking about Scandinavian ski resorts. There are a lot of good areas in those countries (except for Denmark of course).” We featured Svalbard in Episode 94 and Norway in Episode 58, plus our special interview with cross country skiers Andrew Musgrave and Andrew Young, who are based in Norway (47:00)

If you like the podcast, there are a couple of things you can do to help:
1) Review us on Apple Podcasts
2) Buy me a coffee at BuymeaCoffee.com/theskipodcast

You can follow me @skipedia and the podcast @theskipodcast

Kvartal
Narrated: The question that silences the billionaires' philosopher

Kvartal

Play Episode Listen Later Jun 28, 2023 13:26


The philosopher William MacAskill, currently in the news in Sweden, was early to warn about AI and pandemics, and has managed to raise enormous sums for the effective altruism movement. But when Lapo Lappin asks why he collaborated with a crypto swindler, his voice breaks. Hosted on Acast. See acast.com/privacy for more information.

The Nonlinear Library
EA - Decision-making and decentralisation in EA by William MacAskill

The Nonlinear Library

Play Episode Listen Later Jun 26, 2023 49:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Decision-making and decentralisation in EA, published by William MacAskill on June 26, 2023 on The Effective Altruism Forum. This post is a slightly belated contribution to the Strategy Fortnight. It represents my personal takes only; I'm not speaking on behalf of any organisation I'm involved with. For some context on how I'm now thinking about talking in public, I've made a shortform post here. Thanks to the many people who provided comments on a draft of this post.

Intro and Overview

How does decision-making in EA work? How should it work? In particular: to what extent is decision-making in EA centralised, and to what extent should it be centralised? These are the questions I'm going to address in this post. In what follows, I'll use “EA” to refer to the actual set of people, practices and institutions in the EA movement, rather than EA as an idea.

My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have. It's hard to know whether the right response to this is to become more centralised or less. In this post, I'm mainly hoping just to start a discussion of this issue, as it's one that impacts a wide number of decisions in EA. At a high level, though, I currently think that the balance of considerations tends to push in favour of decentralisation relative to where we are now. But centralisation isn't a single spectrum, and we can break it down into sub-components. I'll talk about this in more depth later in the post, but here are some ways in which I think EA should become more decentralised:

Perception: At the very least, wider perception should reflect reality on how (de)centralised EA is. That means:
- Core organisations and people should communicate clearly (and repeatedly) about their roles and what they do and do not take ownership for. (I agree with Joey Savoie's post, which he wrote independently of this one.)
- We should, insofar as we can, cultivate a diversity of EA-associated public figures.
- [Maybe] The EA Forum could be renamed. (Note that many decisions relating to CEA will wait until it has a new executive director.)
- [Maybe] CEA could be renamed. (This is suggested by Kaleem here.)

Funding: It's hard to fix, but it would be great to have a greater diversity of funding sources. That means:
- Recruiting more large donors.
- Some significant donor or donors start a regranters program.
- More people pursue earning to give, or donate more (though I expect this “diversity of funding” consideration to have already been baked-in to most people's decision-making on this). Luke Freeman has a moving essay about the continued need for funding here.

Decision-making:
- Some projects that are currently housed within EV could spin out and become their own legal entities. The various different projects within EV have each been thinking through whether it makes sense for them to spin out. I expect around half of the projects will ultimately spin out over the coming year or two, which seems positive from my perspective.
- [Maybe] CEA could partly dissolve into sub-projects.

Culture: We could try to go further to emphasise that there are many conclusions that one could come to on the grounds of EA values and principles, and celebrate cases where people pursue heterodox paths (as long as their actions are clearly non-harmful).

Here are some ways in which I think EA could, ideally, become more centralised (though these ideas crucially depend on someone taking them on and making them happen):

Information flow: Someone could create a guide to what EA is, in practice: all the different projects, and the roles they fill, and how they relate to one another. Someone c...

Effective Altruism Forum Podcast
“Decision-making and decentralisation in EA” by William_MacAskill

Effective Altruism Forum Podcast

Play Episode Listen Later Jun 26, 2023


This post is a slightly belated contribution to the Strategy Fortnight. It represents my personal takes only; I'm not speaking on behalf of any organisation I'm involved with. For some context on how I'm now thinking about talking in public, I've made a shortform post here [link]. Thanks to the many people who provided comments on a draft of this post.

Intro and Overview

How does decision-making in EA work? How should it work? In particular: to what extent is decision-making in EA centralised, and to what extent should it be centralised? These are the questions I'm going to address in this post. In what follows, I'll use “EA” to refer to the actual set of people, practices and institutions in the EA movement, rather than EA as an idea.

My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have.

It's hard to know whether the right response to this is to become more centralised or less. In this post, I'm mainly hoping just to start a discussion of this issue, as it's one that impacts a wide number of decisions in EA. [1] At a high level, though, I currently think that the balance of considerations tends to push in favour of decentralisation relative to where we are now. But centralisation isn't a single spectrum, and we can break it down into sub-components.

I'll talk about this in more depth later in the post, but here are some ways in which I think EA should become more decentralised:

Perception: At the very least, wider perception should reflect reality on how (de)centralised EA is. That means:
- Core organisations and people should communicate clearly (and repeatedly) about their roles and what they do and do not take ownership for. (I agree with Joey Savoie's post, which he wrote independently of this one.)
- We should, insofar as we can, cultivate a diversity of EA-associated public figures.
- [Maybe] The EA Forum could be renamed. (Note that many decisions relating to CEA will wait until it has a new executive director.)
- [Maybe] CEA could be renamed. (This is suggested by Kaleem here.)

Funding: It's hard to fix, but it would be great to have a greater diversity of funding sources. That means:
- Recruiting more large donors.
- Some significant donor or donors start a regranters program.
- More people pursue earning to give, or donate more (though I expect this “diversity of funding” consideration to have already been baked-in to most people's decision-making on this). Luke Freeman has a moving essay about the continued need for funding here.

Decision-making:
- Some projects that are currently housed within EV could spin out and become their own legal entities. The various different projects within EV have each been thinking through whether it makes sense for them to spin out. I expect around half of the projects will ultimately spin out over the coming year or two, which seems positive from my perspective.
- [Maybe] CEA could partly dissolve into sub-projects.

Culture: We could [...]

--- First published: June 26th, 2023 Source: https://forum.effectivealtruism.org/posts/DdSszj5NXk45MhQoq/decision-making-and-decentralisation-in-ea --- Narrated by TYPE III AUDIO. Share feedback on this narration.

Mind the Shift
107. What We Owe the Future – William MacAskill

Mind the Shift

Play Episode Listen Later Jun 19, 2023 50:38


The human species has been around for some 300,000 years. A typical mammal lasts for a million years. We are not typical.  ”You might think we are in the middle of history. But given the grand sweep, we are the ancients, we are at the very beginning of time. We live in the distant past compared to everything that will ever happen”, says William MacAskill, associate professor in philosophy at Oxford university. MacAskill is the initiator of the Effective Altruism movement, which is about optimizing the good you can do for this world. In his latest book, What We Owe the Future, he discusses how we should think and act to plan for an extremely long human future. The book is basically optimistic. MacAskill thinks we have immense opportunities to improve the world significantly. But it dwells on the potential risks and threats that we must deal with. MacAskill highlights four categories of risks: Extinction (everyone dying), collapse (so much destroyed that civilization doesn't recover), lock-in (a long future but governed by bad values) and stagnation (which may lead to one of the former). As for the risk of extinction, he concludes that newer risks that are less under control tend to be the largest, such as pandemics caused by man-made pathogens and catastrophes set off by artificial intelligence. Known risks like nuclear war and direct hits by asteroids have a potential to wipe out humankind, but since we are more aware of them we have some understanding of how to mitigate them or at least prepare for them. Climate change tops the global agenda today, but although it is a problem we need to address, it is not an existential threat. Artificial intelligence could lead to intense concentration of power and control. But AI could also have huge benefits. It can speed up science, and it can automate away all monotonous work and give us more time with family and friends and for creativity. 
”The scale of the upside is as big as our imagination can take us.” Humans have invented dangerous technology before and not used it to its full detrimental capacity. ”It is a striking thing about the world how much destruction could be wreaked if people wanted to. That is actually a source of concern, because AI systems might not have those human safeguards.” One prerequisite to achieve a better future is to actively change our values. There has been tremendous moral progress over the last couple of centuries, but we need to expand our sphere of moral concern, according to MacAskill. ”We care about family and friends and perhaps the nation, but I think we should care as much about everyone, and much more than we do about non-human animals. A hundred billion land animals are killed every year for food, and the vast majority of them are kept in horrific suffering.” William MacAskill thinks some aspects of the course of history are inevitable, such as population growth and technological advancement, but when it comes to moral changes he is not sure. ”We shouldn't be complacent. Moral collapse can happen again.” William thinks we are at a crucial juncture in time. ”The stakes are much higher than before, the level of prosperity or doom that we could face.” William and I have a discussion about the possibility that alien civilizations are monitoring us or have visited Earth. William is not convinced that the recent Pentagon disclosures actually prove alien presence, but he is open to it, and he has some thoughts on what a close encounter would entail. We also talk briefly about the possibility of a lost human civilization and the cause of the extinction of the megafauna during the Younger Dryas. We have some differing views on that. My final question is a biggie: Could humankind's next big leap be an inward leap, a rise in consciousness? ”It is a possibility. Maybe the best thing is not to spread out and become ever bigger but instead have a life of spirituality.”

Crazy Town
How Longtermism Became the Most Dangerous Philosophy You've Never Heard of

Crazy Town

Play Episode Listen Later May 17, 2023 63:25 Transcription Available


Meet William MacAskill, the puerile professor who helps crypto capitalists justify sociopathy today for a universe of transhuman colonization tomorrow. Please share this episode with your friends and start a conversation.

Warning: This podcast occasionally uses spicy language.

For an entertaining deep dive into the theme of season five (Phalse Prophets), read the definitive peer-reviewed taxonomic analysis from our very own Jason Bradford, PhD.

Sources/Links/Notes:
- Andrew Anthony, "William MacAskill: 'There are 80 trillion people yet to come. They need us to start protecting them'," The Guardian, August 21, 2022.
- Guiding Principles of the Centre for Effective Altruism
- Peter Singer, "Famine, Affluence and Morality," givingwhatwecan.org.
- Sarah Pessin, "Political Spiral Logics," sarahpessin.com.
- Eliezer Yudkowsky, "Pausing AI Developments Isn't Enough. We Need to Shut it All Down," Time, March 29, 2023.
- Emile Torres explains the acronym TESCREAL in a Twitter thread.
- Benjamin Todd and William MacAskill, "Is it ever OK to take a harmful job in order to do more good? An in-depth analysis," 80,000 Hours, March 26, 2023.
- William MacAskill, "The Case for Longtermism," The New York Times, August 5, 2022.
- Emile P. Torres, "Understanding 'longtermism': Why this suddenly influential philosophy is so toxic," Salon, August 20, 2022.
- Nick Bostrom, "Existential Risks," Journal of Evolution and Technology (2002).
- Nick Bostrom, "Astronomical Waste: The Opportunity Cost of Delayed Technological Development," Utilitas (2003).
- Emile P. Torres, "How Elon Musk sees the future: His bizarre sci-fi visions should concern us all," Salon, July 17, 2022.

Support the show

The Nonlinear Library
EA - More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios by Vasco Grilo

The Nonlinear Library

Play Episode Listen Later May 1, 2023 25:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios, published by Vasco Grilo on April 29, 2023 on The Effective Altruism Forum. Disclaimer: this is not a project from Alliance to Feed the Earth in Disasters (ALLFED).

Summary

Global warming increases the risk from climate change. This “has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss”. However, I think global warming also decreases the risk from food shocks caused by abrupt sunlight reduction scenarios (ASRSs), which can be a nuclear winter, volcanic winter, or impact winter. In essence, this is because low temperature is a major driver for the decrease in crop yields that can lead to widespread starvation (see Xia 2022, and this post from Luisa Rodriguez). Factoring in both of the above, my best guess is that additional emissions of greenhouse gases (GHGs) are beneficial up to an optimal median global warming in 2100 relative to 1880 of 3.3 ºC, after which the increase in the risk from climate change outweighs the reduction in that from ASRSs. This suggests delaying decarbonisation is good at the margin if one trusts (on top of my assumptions!):
- Metaculus' community median prediction of 2.41 ºC.
- Climate Action Tracker's projections of 2.6 to 2.9 ºC for current policies and action.

Nevertheless, I am not confident the above conclusion is resilient. My sensitivity analysis indicates the optimal median global warming can range from 0.1 to 4.3 ºC. So the takeaway for me is that we do not really know whether additional GHG emissions are good/bad. In any case, it looks like the effect of global warming on the risk from ASRSs is a crucial consideration, and therefore it must be investigated, especially because it is very neglected. Another potentially crucial consideration is that an energy system which relies more on renewables, and less on fossil fuels, is less resilient to ASRSs.

Robustly good actions would be:
- Improving civilisation resilience.
- Prioritising the risk from nuclear war over that from climate change (at the margin).
- Keeping options open by:
  - Not massively decreasing/increasing GHG emissions.
  - Researching cost-effective ways to decrease/increase GHG emissions.
  - Learning more about the risks posed by ASRSs and climate change.

Introduction

In the sense that matters most for effective altruism, climate change refers to large-scale shifts in weather patterns that result from emissions of greenhouse gases such as carbon dioxide and methane, largely from fossil fuel consumption. Climate change has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss. In What We Owe the Future (WWOF), William MacAskill argues “decarbonisation [decreasing GHG emissions] is a proof of concept for longtermism”, describing it as a “win-win-win-win-win”. In addition to (supposedly) improving the longterm future:
- “Moving to clean energy has enormous benefits in terms of present-day human health. Burning fossil fuels pollutes the air with small particles that cause lung cancer, heart disease, and respiratory infections”.
- “By making energy cheaper [in the long run], clean energy innovation improves living standards in poorer countries”.
- “By helping keep fossil fuels in the ground, it guards against the risk of unrecovered collapse”.
- “By furthering technological progress, it reduces the risk of longterm stagnation”.

I agree decarbonisation will eventually be beneficial, but I am not sure decreasing GHG emissions is good at the margin now. As I said in my hot takes on counterproductive altruism: Mitigating global warming dec...

In Pursuit of Development
Making every dollar count — Ryan Briggs

In Pursuit of Development

Play Episode Listen Later Apr 26, 2023 53:15


Effective altruism has been in the news of late. Sam Bankman-Fried, the CEO of the FTX cryptocurrency exchange, which collapsed in 2022, was for many years a leading voice for and financial sponsor of the effective altruist movement. He and others have argued for ‘longtermism': the idea that positively influencing the distant future is a key moral priority of our time. As effective altruism and longtermism have become increasingly influential, these ideas have also been subject to greater scrutiny. Ryan Briggs is an associate professor in the Guelph Institute of Development Studies and Department of Political Science at the University of Guelph. He has worked extensively on foreign aid, African politics, and effective altruism. Twitter: @ryancbriggs

Resources:
- Ryan's research on foreign aid and African politics
- Rethinking Foreign Aid and Legitimacy: Views from Aid Recipients in Kenya (Lindsay R. Dolan)
- The Life You Can Save (conversation with Peter Singer, in season 3 of In Pursuit of Development)

Key highlights:
- Introduction - 00:43
- The current status of the effective altruism movement - 03:08
- Strengthening effective altruism with a capability approach - 15:07
- The political effects of foreign aid - 21:37
- Targeting the poorest in World Bank projects - 39:43
- How effective altruism can shape aid policies - 48:32

Host: Professor Dan Banik, University of Oslo, Twitter: @danbanik @GlobalDevPod

Apple | Google | Spotify | YouTube

Subscribe: https://globaldevpod.substack.com/

Philosophy Bites
William MacAskill on Longtermism

Philosophy Bites

Play Episode Listen Later Mar 9, 2023 22:23


In this episode of the Philosophy Bites podcast David Edmonds interviews Will MacAskill on the controversial idea that we ought to give the interests of future people substantial weight when deciding what we ought to do now.   

At the Coalface
Kaddu Sebunya - Conserving Africa's environment: the key to climate change

At the Coalface

Play Episode Play 35 sec Highlight Listen Later Mar 1, 2023 68:20


In this episode, I speak with Kaddu Sebunya. Kaddu is passionate about nature conservation. In his role as CEO of the African Wildlife Foundation, he rallies African elites to lead the fight against the destruction of valuable habitats and wildlife. He believes that conservation by Africans for Africans is at the heart of addressing the continent's challenges around economic development and equality; it's the right place to start. I'm delighted to be having this conversation with Kaddu; he has such an important message that he shares with an infectious energy that I hope will inspire you too!

The book that Kaddu mentions is What We Owe the Future by William MacAskill.

Recorded on 16 January 2023.

Instagram: @at.the.coalface
Connect with Kaddu on LinkedIn at linkedin.com/in/kaddu-kiwe-sebunya-384b4658 and on Twitter @AWFCEO.

Please subscribe to At the Coalface wherever you get your podcasts to receive a new episode every two weeks: Apple Podcasts | Spotify | Google Podcasts

Help us produce more episodes by becoming a supporter. Your subscription will go towards our hosting and production costs. Supporters get the opportunity to join behind the scenes during upcoming recordings. Thank you.

Support the show

entreprequeers podcast
Episode Fifty Seven: Bad Ideas Only

entreprequeers podcast

Play Episode Listen Later Feb 1, 2023 37:49


This week, Kaylene & Anna revisit one of their favorite pastimes. Listen in as they discuss the ins and outs of Carnival season and taking action on new ideas, and harken back to season one's Bywater Business Plans episode as they brainstorm a few half-baked business ideas.

Episode Fifty Seven Show Notes
Tarot Card of the Week: Ace of Swords from The Muse Tarot
Bro Book Review: What We Owe The Future by William MacAskill

Making Sense with Sam Harris
Making Sense of Foundations of Morality | Episode 3 of The Essential Sam Harris

Making Sense with Sam Harris

Play Episode Listen Later Jan 5, 2023 44:37


In this episode, we try to trace morality to its elusive foundations. Throughout the compilation we take a look at Sam's “Moral Landscape” and his effort to defend an objective path towards moral evaluation. We begin with the moral philosopher Peter Singer who outlines his famous “shallow pond” analogy and the framework of utilitarianism. We then hear from the moral psychologist Paul Bloom who makes the case against empathy and points out how it is more often a “bug” in our moral software than a “feature.” Later, William MacAskill describes the way a utilitarian philosophy informs his engagement with the Effective Altruism movement. The moral psychologist Jonathan Haidt then puts pressure on Sam's emphasis on rationality and objective pathways towards morality by injecting a healthy dose of psychological skepticism into the conversation. After, we hear a fascinating exchange with the historian Dan Carlin where he and Sam tangle on the fraught issues of cultural relativism. We end by exploring the intersection of technological innovation and moral progress with the entrepreneur Uma Valeti, whom Sam seeks out when he encounters his own collision with a personal moral failure.   About the Series Filmmaker Jay Shapiro has produced The Essential Sam Harris, a new series of audio documentaries exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you'll find this series fascinating.

Using the Whole Whale Podcast
FTX Collapse & Effective Altruism (news)


Nov 17, 2022 • 22:06


What The FTX Collapse Does & Does Not Mean For Crypto Philanthropy & Effective Altruism
Crypto-exchange FTX, one of the largest such exchanges, collapsed last week, leaving the cryptocurrency world in disbelief as stakeholders try to piece together what happened and what comes next. The company's founder Sam Bankman-Fried (known by the moniker SBF) was a visible proponent of and donor to the effective altruism movement, as well as someone who built a personal brand as a prominent crypto-philanthropist. As noted by The New York Times, SBF was perhaps one of the most visible supporters of Effective Altruism, a community underpinned by a utilitarian approach to giving where donors focus on giving only to the most impact-efficient charitable causes. Co-created by Oxford philosopher William MacAskill, the Effective Altruism movement faces serious reputational trust issues as supporters worry it was a cover for the reckless FTX founder. It was also revealed by The New York Times that the two largest FTX Foundation grants went to nonprofits where MacAskill was on the board or that directly supported the work of Effective Altruism. Bankman-Fried, who has also spoken frequently of his crypto giving, may have abused the crypto-philanthropy space to shield himself from questioning, but nonprofits should still understand that 38% of millennials own crypto and represent a major (and growing) potential source of donation revenue.
(Editor's Note: The above link is a blog post written by Whole Whale CEO George Weiner, the publisher of this newsletter. The Giving Block is a proud partner and client of Whole Whale.)
Read more ➝
Former executives of nonprofit indicted in alleged $10.7 million fraud scheme | KLBK
California expected to partner with nonprofit Civica Rx to produce its own low-cost insulin, sources say | NBC News
New York City nonprofits stepping up to help asylum seekers find jobs | CBS New York

In the Bubble with Andy Slavitt
A Sneak Peek at our Future (with William MacAskill)


Nov 2, 2022 • 41:29


We are living in a time of incredible technological advances that pose both opportunities and risks to the human species. Andy speaks with futurist William MacAskill about some of the ways humanity could end, from nuclear war to artificial intelligence, and how to take steps now to prevent our own extinction. He explains his approach to living with a long-term mindset and the ways in which the future could be a thousand times greater than it is today.
Keep up with Andy on Twitter @ASlavitt. Follow William MacAskill on Twitter @willmacaskill.
Joining Lemonada Premium is a great way to support our show and get bonus content. Subscribe today at bit.ly/lemonadapremium.
Support the show by checking out our sponsors! CVS Health helps people navigate the healthcare system and their personal healthcare by improving access, lowering costs and being a trusted partner for every meaningful moment of health. At CVS Health, healthier happens together. Learn more at cvshealth.com. Click this link for a list of current sponsors and discount codes for this show and all Lemonada shows: https://lemonadamedia.com/sponsors/
Check out these resources from today's episode:
Order William's book, “What We Owe the Future”: https://whatweowethefuture.com/
Learn about William's organization, Giving What We Can: https://www.givingwhatwecan.org/
Learn about 80,000 Hours, an organization that helps students and graduates find careers that tackle the world's most pressing problems: https://80000hours.org/
Find vaccines, masks, testing, treatments, and other resources in your community: https://www.covid.gov/
Order Andy's book, “Preventable: The Inside Story of How Leadership Failures, Politics, and Selfishness Doomed the U.S. Coronavirus Response”: https://us.macmillan.com/books/9781250770165
Stay up to date with us on Twitter, Facebook, and Instagram at @LemonadaMedia.
For additional resources, information, and a transcript of the episode, visit lemonadamedia.com/show/inthebubble.

Tech Won't Save Us
Don't Fall for the Longtermism Sales Pitch w/ Émile Torres


Oct 20, 2022 • 63:44


Paris Marx is joined by Émile Torres to discuss the ongoing effort to sell effective altruism and longtermism to the public, and why they're philosophies that won't solve the real problems we face.
Émile Torres is a PhD candidate at Leibniz University Hannover and the author of the forthcoming book Human Extinction: A History of the Science and Ethics of Annihilation. Follow Émile on Twitter at @xriskology.
Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, support the show on Patreon, and sign up for the weekly newsletter. The podcast is produced by Eric Wickham and part of the Harbinger Media Network.
Also mentioned in this episode:
Émile recently wrote about the ongoing effort to sell longtermism and effective altruism to the public.
Peter Singer wrote an article published in 1972 arguing that rich people need to give to charity, which went on to influence effective altruists.
NYT recently opined on whether it's ethical for lawyers to defend climate villains.
Nathan Robinson recently criticized effective altruism for Current Affairs.
Support the show

The Next Big Idea
LONGTERMISM: Why You Should Care About Future People


Oct 13, 2022 • 77:28


If the human race lasts as long as a typical mammalian species and our population continues at its current size, then there are 80 trillion people yet to come. Oxford philosophy professor William MacAskill says it's up to us to protect them. In his bold new book, "What We Owe the Future," MacAskill makes a case for longtermism. He believes that how long we survive as a species may depend on the actions we take now. --- To hear the Book Bite for "What We Owe the Future," download the Next Big Idea app at nextbigideaclub.com/app

The Daily Show With Trevor Noah: Ears Edition
Russia Coerces Ukrainians Into Voting To Join Russian Federation | William MacAskill


Sep 28, 2022 • 33:43


Russia coerces Ukrainians into voting in favor of joining the Russian Federation, Ronny Chieng teaches a class on K-pop, and William MacAskill discusses his book "What We Owe the Future."

The Weeds
Who decides how we'll save the future?


Sep 13, 2022 • 66:09


How do we make life better for future generations? Who gets to make those decisions? These are tough questions, and today's guest, philosopher William MacAskill (@willmacaskill), tries to help us answer them.
References:
What We Owe the Future by William MacAskill
Effective altruism's most controversial idea
How effective altruism went from a niche movement to a billion-dollar force
Effective altruism's longtermist goals for the future don't hurt people in the present
Hosts: Bryan Walsh (@bryanrwalsh) and Sigal Samuel (@sigalsamuel)
Credits: Sofi LaLonde, producer and engineer; Libby Nelson, editorial adviser; A.M. Hall, deputy editorial director of talk podcasts
Want to support The Weeds? Please consider making a donation to Vox: bit.ly/givepodcasts

EconTalk
Will MacAskill on Longtermism and What We Owe the Future


Sep 5, 2022 • 76:22


Philosopher William MacAskill of the University of Oxford and a founder of the effective altruism movement talks about his book What We Owe the Future with EconTalk host Russ Roberts. MacAskill advocates "longtermism," giving great attention to the billions of people who will live on into the future long after we are gone. Topics discussed include the importance of moral entrepreneurs, why it's moral to have children, and the importance of trying to steer the future for better outcomes.

10% Happier with Dan Harris
491: A New Way to Think About Your Money | William MacAskill


Aug 29, 2022 • 64:13


Most of us worry about money sometimes, but what if we changed the way we thought about our relationship to finances? Today's guest, William MacAskill, offers a framework in which to do just that. He calls it effective altruism. One of the core arguments of effective altruism is that we all ought to consider giving away a significant chunk of our income because we know, to a mathematical near certainty, that several thousand dollars could save a life.
Today we're going to talk about the whys and wherefores of effective altruism. This includes how to get started on a very manageable and doable level (which does not require you to give away most of your income), and the benefits this practice has on both the world and your own psyche.
MacAskill is an associate professor of philosophy at Oxford University and one of the founders of the effective altruism movement. He has a new book out called What We Owe the Future, where he makes a case for longtermism, a term used to describe developing the mental habit of thinking about the welfare of future generations.
In this episode we talk about:
Effective altruism
Whether humans are really wired to consider future generations
Practical tips for thinking and acting on longtermism
His argument for having children
And his somewhat surprising take on how good our future could be if we play our cards right
Podcast listeners can get 50% off What We Owe the Future using the code WWOTF50 at Bookshop.org.
Full Shownotes: https://www.tenpercent.com/podcast-episode/william-macaskill-491

KERA's Think
A philosopher on why we should care about future generations


Aug 19, 2022 • 34:31


We might consider how our actions will affect the lives of our children and grandchildren. But what about the dozens of generations that hopefully come next? William MacAskill is associate professor of philosophy at the University of Oxford and co-founder of the Centre for Effective Altruism. He joins host Krys Boyd to discuss why we must make long-term thinking a priority if we truly care about the descendants we'll never meet. His book is called “What We Owe the Future.”

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
207 | William MacAskill on Maximizing Good in the Present and Future


Aug 15, 2022 • 102:23


It's always a little humbling to think about what effects your words and actions might have on other people, not only right now but potentially well into the future. Now take that humble feeling and promote it to all of humanity, and arbitrarily far in time. How do our actions as a society affect all the potential generations to come? William MacAskill is best known as a founder of the Effective Altruism movement, and is now the author of What We Owe the Future. In this new book he makes the case for longtermism: the idea that we should put substantial effort into positively influencing the long-term future. We talk about the pros and cons of that view, including the underlying philosophical presuppositions.
Mindscape listeners can get 50% off What We Owe the Future, thanks to a partnership between the Forethought Foundation and Bookshop.org. Just click here and use code MINDSCAPE50 at checkout.
Support Mindscape on Patreon.
William (Will) MacAskill received his D.Phil. in philosophy from the University of Oxford. He is currently an associate professor of philosophy at Oxford, as well as a research fellow at the Global Priorities Institute, director of the Forethought Foundation for Global Priorities Research, President of the Centre for Effective Altruism, and co-founder of 80,000 Hours and Giving What We Can.
Web site
PhilPeople profile
Google Scholar publications
Wikipedia
Twitter

Making Sense with Sam Harris
#292 — How Much Does the Future Matter?


Aug 14, 2022 • 120:21


In this episode of the podcast, Sam Harris speaks with William MacAskill about his new book, What We Owe the Future. They discuss the philosophy of effective altruism (EA), longtermism, existential risk, criticism of EA, problems with expected-value reasoning, doing good vs feeling good, why it's hard to care about future people, how the future gives meaning to the present, why this moment in history is unusual, the pace of economic and technological growth, bad political incentives, value lock-in, the well-being of conscious creatures as the foundation of ethics, the risk of unaligned AI, how bad we are at predicting technological change, and other topics. SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.  

Conversations with Tyler
William MacAskill on Effective Altruism, Moral Progress, and Cultural Innovation


Aug 10, 2022 • 50:44


When Tyler is reviewing grants for Emergent Ventures, he is struck by how the ideas of effective altruism have so clearly influenced many of the smartest applicants, particularly the younger ones. And William MacAskill, whom Tyler considers one of the world's most influential philosophers, is a leading light of the community. William joined Tyler to discuss why the movement has gained so much traction and more, including his favorite inefficient charity, what form of utilitarianism should apply to the care of animals, the limits of expected value, whether effective altruists should be anti-abortion, whether he'd side with aliens over humans, whether he should give up having kids, why donating to a university isn't so bad, whether we are living in “hingey” times, why buildering is overrated, the sociology of the effective altruism movement, why cultural innovation matters, and whether starting a new university might be next on his slate.
Read a full transcript enhanced with helpful links, or watch the full video.
Recorded July 7th, 2022
Other ways to connect:
Follow us on Twitter and Instagram
Follow Tyler on Twitter
Follow Will on Twitter
Email us: cowenconvos@mercatus.gmu.edu
Subscribe at our newsletter page to have the latest Conversations with Tyler news sent straight to your inbox.

The Ezra Klein Show
Three Sentences That Could Change the World — and Your Life


Aug 9, 2022 • 68:45


Today's show is built around three simple sentences: “Future people count. There could be a lot of them. And we can make their lives better.” Those sentences form the foundation of an ethical framework known as “longtermism.” They might sound obvious, but to take them seriously is a truly radical endeavor — one with the power to change the world and even your life.
That second sentence is where things start to get wild. It's possible that there could be tens of trillions of future people, that future people could outnumber current people by a ratio of something like a million to one. And if that's the case, then suddenly most of the things we spend most of our time arguing about shrink in importance compared with the things that will affect humanity's long-term future.
William MacAskill is a professor of philosophy at Oxford University, the director of the Forethought Foundation for Global Priorities Research and the author of the forthcoming book, “What We Owe the Future,” which is the best distillation of the longtermist worldview I've read. So this is a conversation about what it means to take the moral weight of the future seriously and the way that everything — from our political priorities to career choices to definitions of heroism — changes when you do.
We also cover the host of questions that longtermism raises: How should we weigh the concerns of future generations against those of living people? What are we doing today that future generations will view in the same way we look back on moral atrocities like slavery? Who are the “moral weirdos” of our time we should be paying more attention to? What are the areas we should focus on, the policies we should push, the careers we should choose if we want to guarantee a better future for our posterity? And much more.
Mentioned:
"Is A.I. the Problem? Or Are We?" by The Ezra Klein Show
"How to Do The Most Good" by The Ezra Klein Show
"This Conversation With Richard Powers Is a Gift" by The Ezra Klein Show
Book Recommendations:
“Moral Capital” by Christopher Leslie Brown
“The Precipice” by Toby Ord
“The Scout Mindset” by Julia Galef
Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.
You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.
“The Ezra Klein Show” is produced by Annie Galvin and Rogé Karma; fact-checking by Michelle Harris, Mary Marge Locker and Kate Sinclair; original music by Isaac Jones; mixing by Sonia Herrero and Isaac Jones; audience strategy by Shannon Busta. Special thanks to Kristin Lin and Kristina Samulewski.

The Tim Ferriss Show
#612: Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change


Aug 2, 2022 • 104:35


Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change | Brought to you by LinkedIn Jobs recruitment platform with 800M+ users, Vuori comfortable and durable performance apparel, and Theragun percussive muscle therapy devices. More on all three below.
William MacAskill (@willmacaskill) is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours, which together have moved over $200 million to effective charities. You can find my 2015 conversation with Will at tim.blog/will.
His new book is What We Owe the Future. It is blurbed by several guests of the podcast, including Sam Harris, who wrote, “No living philosopher has had a greater impact upon my ethics than Will MacAskill. . . . This is an altogether thrilling and necessary book.” Please enjoy!
This episode is brought to you by Vuori clothing! Vuori is a new and fresh perspective on performance apparel, perfect if you are sick and tired of traditional, old workout gear. Everything is designed for maximum comfort and versatility so that you look and feel as good in everyday life as you do working out. Get yourself some of the most comfortable and versatile clothing on the planet at VuoriClothing.com/Tim. Not only will you receive 20% off your first purchase, but you'll also enjoy free shipping on any US orders over $75 and free returns.
This episode is also brought to you by Theragun! Theragun is my go-to solution for recovery and restoration. It's a famous, handheld percussive therapy device that releases your deepest muscle tension. I own two Theraguns, and my girlfriend and I use them every day after workouts and before bed. The all-new Gen 4 Theragun is easy to use and has a proprietary brushless motor that's surprisingly quiet—about as quiet as an electric toothbrush. Go to Therabody.com/Tim right now and get your Gen 4 Theragun today, starting at only $199.
This episode is also brought to you by LinkedIn Jobs. Whether you are looking to hire now for a critical role or thinking about needs that you may have in the future, LinkedIn Jobs can help. LinkedIn screens candidates for the hard and soft skills you're looking for and puts your job in front of candidates looking for job opportunities that match what you have to offer. Using LinkedIn's active community of more than 800 million professionals worldwide, LinkedIn Jobs can help you find and hire the right person faster. When your business is ready to make that next hire, find the right person with LinkedIn Jobs. And now, you can post a job for free. Just visit LinkedIn.com/Tim.
For show notes and past guests on The Tim Ferriss Show, please visit tim.blog/podcast.
For deals from sponsors of The Tim Ferriss Show, please visit tim.blog/podcast-sponsors.
Sign up for Tim's email newsletter (5-Bullet Friday) at tim.blog/friday.
For transcripts of episodes, go to tim.blog/transcripts.
Discover Tim's books: tim.blog/books.
Follow Tim:
Twitter: twitter.com/tferriss
Instagram: instagram.com/timferriss
YouTube: youtube.com/timferriss
Facebook: facebook.com/timferriss
LinkedIn: linkedin.com/in/timferriss
Past guests on The Tim Ferriss Show include Jerry Seinfeld, Hugh Jackman, Dr. Jane Goodall, LeBron James, Kevin Hart, Doris Kearns Goodwin, Jamie Foxx, Matthew McConaughey, Esther Perel, Elizabeth Gilbert, Terry Crews, Sia, Yuval Noah Harari, Malcolm Gladwell, Madeleine Albright, Cheryl Strayed, Jim Collins, Mary Karr, Maria Popova, Sam Harris, Michael Phelps, Bob Iger, Edward Norton, Arnold Schwarzenegger, Neil Strauss, Ken Burns, Maria Sharapova, Marc Andreessen, Neil Gaiman, Neil de Grasse Tyson, Jocko Willink, Daniel Ek, Kelly Slater, Dr. Peter Attia, Seth Godin, Howard Marks, Dr. Brené Brown, Eric Schmidt, Michael Lewis, Joe Gebbia, Michael Pollan, Dr. Jordan Peterson, Vince Vaughn, Brian Koppelman, Ramit Sethi, Dax Shepard, Tony Robbins, Jim Dethmer, Dan Harris, Ray Dalio, Naval Ravikant, Vitalik Buterin, Elizabeth Lesser, Amanda Palmer, Katie Haun, Sir Richard Branson, Chuck Palahniuk, Arianna Huffington, Reid Hoffman, Bill Burr, Whitney Cummings, Rick Rubin, Dr. Vivek Murthy, Darren Aronofsky, and many more.

The Book Pile
Doing Good Better by William MacAskill


Aug 1, 2022 • 24:45


Want to do as much good in the world as possible, but you don't have a comedy podcast? Today's book is about how to use data to make the biggest positive impact on the world. Plus, Dave wants you to know his net worth in Jet Skis and Kellen goes hard after the pogo sticking crowd.
TheBookPilePodcast@gmail.com
Kellen Erskine has appeared on Conan, Comedy Central, Jimmy Kimmel Live!, NBC's America's Got Talent, and the Amazon Original Series Inside Jokes. He has garnered over 50 million views with his clips on Dry Bar Comedy. In 2018 he was selected to perform on the “New Faces” showcase at the Just For Laughs Comedy Festival in Montreal, Quebec. Kellen was named one of TBS's Top Ten Comics to Watch in 2017. He currently tours the country: www.KellenErskine.com
David Vance's videos have garnered over 1 billion views. He has written viral ads for companies like Squatty Potty, Chatbooks, and Lumē, and sketches for the comedy show Studio C. His work has received two Webby Awards, and appeared on Conan. He currently works as a writer on the sitcom Freelancers.