FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges. Among our objectives is to inspire discussion and the sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.
Philip Reiner joins us to talk about nuclear command, control, and communications systems. Learn more about Philip's work: https://securityandtechnology.org/ Timestamps: [00:00:00] Introduction [00:00:50] Nuclear command, control, and communications [00:03:52] Old technology in nuclear systems [00:12:18] Incentives for nuclear states [00:15:04] Selectively enhancing security [00:17:34] Unilateral de-escalation [00:18:04] Nuclear communications [00:24:08] The CATALINK System [00:31:25] AI in nuclear command, control, and communications [00:40:27] Russia's war in Ukraine
Daniela and Dario Amodei join us to discuss Anthropic: a new AI safety and research company that's working to build reliable, interpretable, and steerable AI systems. Topics discussed in this episode include: -Anthropic's mission and research strategy -Recent research and papers by Anthropic -Anthropic's structure as a "public benefit corporation" -Career opportunities You can find the page for the podcast here: https://futureoflife.org/2022/03/04/daniela-and-dario-amodei-on-anthropic/ Watch the video version of this episode here: https://www.youtube.com/watch?v=uAA6PZkek4A Careers at Anthropic: https://www.anthropic.com/#careers Anthropic's Transformer Circuits research: https://transformer-circuits.pub/ Follow Anthropic on Twitter: https://twitter.com/AnthropicAI microCOVID Project: https://www.microcovid.org/ Follow Lucas on Twitter: https://twitter.com/lucasfmperry Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:44 What was the intention behind forming Anthropic? 6:28 Do the founders of Anthropic share a similar view on AI? 7:55 What is Anthropic's focused research bet? 11:10 Does AI existential safety fit into Anthropic's work and thinking? 14:14 Examples of AI models today that have properties relevant to future AI existential safety 16:12 Why work on large scale models? 20:02 What does it mean for a model to lie? 22:44 Safety concerns around the open-endedness of large models 29:01 How does safety work fit into race dynamics to more and more powerful AI? 36:16 Anthropic's mission and how it fits into AI alignment 38:40 Why explore large models for AI safety and scaling to more intelligent systems? 43:24 Is Anthropic's research strategy a form of prosaic alignment? 46:22 Anthropic's recent research and papers 49:52 How difficult is it to interpret current AI models? 52:40 Anthropic's research on alignment and societal impact 55:35 Why did you decide to release tools and videos alongside your interpretability research? 1:01:04 What is it like working with your sibling? 1:05:33 Inspiration around creating Anthropic 1:12:40 Is there an upward bound on capability gains from scaling current models? 1:18:00 Why is it unlikely that continuously increasing the number of parameters on models will lead to AGI? 1:21:10 Bootstrapping models 1:22:26 How does Anthropic see itself as positioned in the AI safety space? 1:25:35 What does being a public benefit corporation mean for Anthropic? 1:30:55 Anthropic's perspective on windfall profits from powerful AI systems 1:34:07 Issues with current AI systems and their relationship with long-term safety concerns 1:39:30 Anthropic's plan to communicate its work to technical researchers and policy makers 1:41:28 AI evaluations and monitoring 1:42:50 AI governance 1:45:12 Careers at Anthropic 1:48:30 What it's like working at Anthropic 1:52:48 Why hire people of a wide variety of technical backgrounds? 1:54:33 What's a future you're excited about or hopeful for? 1:59:42 Where to find and follow Anthropic This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Anthony Aguirre and Anna Yelizarova join us to discuss FLI's new Worldbuilding Contest. Topics discussed in this episode include: -Motivations behind the contest -The importance of worldbuilding -The rules of the contest -What a submission consists of -Due date and prizes Learn more about the contest here: https://worldbuild.ai/ Join the discord: https://discord.com/invite/njZyTJpwMz You can find the page for the podcast here: https://futureoflife.org/2022/02/08/anthony-aguirre-and-anna-yelizarova-on-flis-worldbuilding-contest/ Watch the video version of this episode here: https://www.youtube.com/watch?v=WZBXSiyienI Follow Lucas on Twitter here: twitter.com/lucasfmperry Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:30 What is "worldbuilding" and FLI's Worldbuilding Contest? 6:32 Why do worldbuilding for 2045? 7:22 Why is it important to practice worldbuilding? 13:50 What are the rules of the contest? 19:53 What does a submission consist of? 22:16 Due dates and prizes? 25:58 Final thoughts and how the contest contributes to creating beneficial futures This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book Reality+: Virtual Worlds and the Problems of Philosophy. Topics discussed in this episode include: -Virtual reality as genuine reality -Why VR is compatible with the good life -Why we can never know whether we're in a simulation -Consciousness in virtual realities -The ethics of simulated beings You can find the page for the podcast here: https://futureoflife.org/2022/01/26/david-chalmers-on-reality-virtual-worlds-and-the-problems-of-philosophy/ Watch the video version of this episode here: https://www.youtube.com/watch?v=hePEg_h90KI Check out David's book and website here: http://consc.net/ Follow Lucas on Twitter here: https://twitter.com/lucasfmperry Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:43 How this book fits into David's philosophical journey 9:40 David's favorite part(s) of the book 12:04 What is the thesis of the book? 14:00 The core areas of philosophy and how they fit into Reality+ 16:48 Techno-philosophy 19:38 What is "virtual reality?" 21:06 Why is virtual reality "genuine reality?" 25:27 What is the dust theory and what does it have to do with the simulation hypothesis? 29:59 How does the dust theory fit in with arguing for virtual reality as genuine reality? 34:45 Exploring criteria for what it means for something to be real 42:38 What is the common sense view of what is real? 46:19 Is your book intended to address common sense intuitions about virtual reality? 48:51 Nozick's experience machine and how questions of value fit in 54:20 Technological implementations of virtual reality 58:40 How does consciousness fit into all of this? 1:00:18 Substrate independence and if classical computers can be conscious 1:02:35 How do problems of identity fit into virtual reality? 1:04:54 How would David upload himself? 1:08:00 How does the mind-body problem fit into Reality+? 1:11:40 Is consciousness the foundation of value? 1:14:23 Does your moral theory affect whether you can live a good life in a virtual reality? 1:17:20 What does a good life in virtual reality look like? 1:19:08 David's favorite VR experiences 1:20:42 What is the moral status of simulated people? 1:22:38 Will there be unconscious simulated people with moral patiency? 1:24:41 Why we can never know we're not in a simulation 1:27:56 David's credences for whether we live in a simulation 1:30:29 Digital physics and what it says about the simulation hypothesis 1:35:21 Imperfect realism and how David sees the world after writing Reality+ 1:37:51 David's thoughts on God 1:39:42 Moral realism or anti-realism? 1:40:55 Where to follow David and find Reality+ This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Rohin Shah, Research Scientist on DeepMind's technical AGI safety team, joins us to discuss: AI value alignment; how an AI Researcher might decide whether to work on AI Safety; and why we don't know that AI systems won't lead to existential risk. Topics discussed in this episode include: - Inner Alignment versus Outer Alignment - Foundation Models - Structural AI Risks - Unipolar versus Multipolar Scenarios - The Most Important Thing That Impacts the Future of Life You can find the page for the podcast here: https://futureoflife.org/2021/11/01/rohin-shah-on-the-state-of-agi-safety-research-in-2021 Watch the video version of this episode here: https://youtu.be/_5xkh-Rh6Ec Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 00:02:22 What is AI alignment? 00:06:00 How has your perspective of this problem changed over the past year? 00:06:28 Inner Alignment 00:13:00 Ways that AI could actually lead to human extinction 00:18:53 Inner Alignment and mesa-optimizers 00:20:15 Outer Alignment 00:23:12 The core problem of AI alignment 00:24:54 Learning Systems versus Planning Systems 00:28:10 AI and Existential Risk 00:32:05 The probability of AI existential risk 00:51:31 Core problems in AI alignment 00:54:46 How has AI alignment, as a field of research, changed in the last year? 00:54:02 Large scale language models 00:54:50 Foundation Models 00:59:58 Why don't we know that AI systems won't totally kill us all? 01:09:05 How much of the alignment and safety problems in AI will be solved by industry? 01:14:44 Do you think about what beneficial futures look like? 01:19:31 Moral Anti-Realism and AI 01:27:25 Unipolar versus Multipolar Scenarios 01:35:33 What is the safety team at DeepMind up to? 01:35:41 What is the most important thing that impacts the future of life? This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Future of Life Institute President Max Tegmark and our grants team, Andrea Berman and Daniel Filan, join us to announce a $25M multi-year AI Existential Safety Grants Program. Topics discussed in this episode include: - The reason Future of Life Institute is offering AI Existential Safety Grants - How receiving a grant changed Max's career early on - Details on the fellowships and future grant priorities Check out our grants programs here: https://grants.futureoflife.org/ Join our AI Existential Safety Community: https://futureoflife.org/team/ai-exis... Watch the video version of this episode here: https://www.youtube.com/watch?v=VbFNcbJjidU Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:08 What inspired you to start this grants program? 4:16 Where would you rate AI technology in terms of its potential impact and power? 6:16 What kind of impact would you like the new FLI grants program to have on the development and outcomes of artificial intelligence? 8:25 How does your personal experience with grants inform this grants process at the Future of Life Institute? 13:41 Do you have any inspiring futures that speak to your heart that you'd be interested in sharing? 15:59 Do you have any final words for anyone who might be listening who's considering applying to this grants program but isn't quite sure? 17:29 Could you tell us a little bit more about what the grants program is? 18:29 What are the details of the fellowships? 19:56 Is there a total amount that is on offer between these two programs? 21:20 What are FLI's other grants-related priorities? This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Dr. Filippa Lentzos, Senior Lecturer in Science and International Security at King's College London, joins us to discuss the most pressing issues in biosecurity, big data in biology and life sciences, and governance in biological risk. Topics discussed in this episode include: - The most pressing issue in biosecurity - Stories from when biosafety labs failed to contain dangerous pathogens - The lethality of pathogens being worked on at biolaboratories - Lessons from COVID-19 You can find the page for the podcast here: https://futureoflife.org/2021/10/01/filippa-lentzos-on-emerging-threats-in-biosecurity/ Watch the video version of this episode here: https://www.youtube.com/watch?v=I6M34oQ4v4w Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:35 What are the least understood aspects of biological risk? 8:32 Which groups are interested in biotechnologies that could be used for harm? 16:30 Why countries may pursue the development of dangerous pathogens 18:45 Dr. Lentzos' strands of research 25:41 Stories from when biosafety labs failed to contain dangerous pathogens 28:34 The most pressing issue in biosecurity 31:06 What is gain of function research? What are the risks? 34:57 Examples of gain of function research 36:14 What are the benefits of gain of function research? 37:54 The lethality of pathogens being worked on at biolaboratories 40:25 Benefits and risks of big data in biology and the life sciences 45:03 Creating a bioweather map or using big data for biodefense 48:35 Lessons from COVID-19 53:46 How does governance fit into biological risk? 55:59 Key takeaways from Dr. Lentzos This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Susan Solomon, internationally recognized atmospheric chemist, and Stephen Andersen, leader of the Montreal Protocol, join us to tell the story of the ozone hole and their roles in helping to bring us back from the brink of disaster. Topics discussed in this episode include: -The industrial and commercial uses of chlorofluorocarbons (CFCs) -How we discovered the atmospheric effects of CFCs -The Montreal Protocol and its significance -Dr. Solomon's, Dr. Farman's, and Dr. Andersen's crucial roles in helping to solve the ozone hole crisis -Lessons we can take away for climate change and other global catastrophic risks You can find the page for this podcast here: https://futureoflife.org/2021/09/16/susan-solomon-and-stephen-andersen-on-saving-the-ozone-layer/ Check out the video version of the episode here: https://www.youtube.com/watch?v=7hwh-uDo-6A&ab_channel=FutureofLifeInstitute Check out the story of the ozone hole crisis here: https://undsci.berkeley.edu/article/0_0_0/ozone_depletion_01 Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 3:13 What are CFCs and what was their role in society? 7:09 James Lovelock discovering an abundance of CFCs in the lower atmosphere 12:43 F. Sherwood Rowland's and Mario Molina's research on the atmospheric science of CFCs 19:52 How a single chlorine atom from a CFC molecule can destroy a large amount of ozone 23:12 Moving from models of ozone depletion to empirical evidence of the ozone-depleting mechanism 24:41 Joseph Farman and discovering the ozone hole 30:36 Susan Solomon's discovery of the surfaces of high-altitude Antarctic clouds being crucial for ozone depletion 47:22 The Montreal Protocol 1:00:00 Who were the key stakeholders in the Montreal Protocol? 1:03:46 Stephen Andersen's efforts to phase out CFCs as the co-chair of the Montreal Protocol Technology and Economic Assessment Panel 1:13:28 The Montreal Protocol helping to prevent 11 billion metric tons of CO2 emissions per year 1:18:30 Susan and Stephen's key takeaways from their experience with the ozone hole crisis 1:24:24 What world did we avoid through our efforts to save the ozone layer? 1:28:37 The lessons Stephen and Susan take away from their experience working to phase out CFCs from industry 1:34:30 Is action on climate change practical? 1:40:34 Does the Paris Agreement have something like the Montreal Protocol Technology and Economic Assessment Panel? 1:43:23 Final words from Susan and Stephen This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
James Manyika, Chairman and Director of the McKinsey Global Institute, joins us to discuss the rapidly evolving landscape of the modern global economy and the role of technology in it. Topics discussed in this episode include: -The modern social contract -Reskilling, wage stagnation, and inequality -Technology-induced unemployment -The structure of the global economy -The geographic concentration of economic growth You can find the page for this podcast here: https://futureoflife.org/2021/09/06/james-manyika-on-global-economic-and-technological-trends/ Check out the video version of the episode here: https://youtu.be/zLXmFiwT0-M Check out the McKinsey Global Institute here: https://www.mckinsey.com/mgi/overview Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:14 What are the most important problems in the world today? 4:30 The issue of inequality 8:17 How the structure of the global economy is changing 10:21 How does the role of incentives fit into global issues? 13:00 How the social contract has evolved in the 21st century 18:20 A billion people lifted out of poverty 19:04 What drives economic growth? 29:28 How does AI automation affect the virtuous and vicious versions of productivity growth? 38:06 Automation and reflecting on jobs lost, jobs gained, and jobs changed 43:15 AGI and automation 48:00 How do we address the issue of technology-induced unemployment? 58:05 Developing countries and economies 1:01:29 The central forces in the global economy 1:07:36 The global economic center of gravity 1:09:42 Understanding the core impacts of AI 1:12:32 How do global catastrophic and existential risks fit into the modern global economy? 1:17:52 The economics of climate change and AI risk 1:20:50 Will we use AI technology like we've used fossil fuel technology? 1:24:34 The risks of AI contributing to inequality and bias 1:31:45 How do we integrate developing countries' voices in the development and deployment of AI systems? 1:33:42 James' core takeaway 1:37:19 Where to follow and learn more about James' work This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Michael Klare, Five College Professor of Peace & World Security Studies, joins us to discuss the Pentagon's view of climate change, why it's distinctive, and how this all ultimately relates to the risks of great powers conflict and state collapse. Topics discussed in this episode include: -How the US military views and takes action on climate change -Examples of existing climate related difficulties and what they tell us about the future -Threat multiplication from climate change -The risks of climate change catalyzed nuclear war and major conflict -The melting of the Arctic and the geopolitical situation which arises from that -Messaging on climate change You can find the page for this podcast here: https://futureoflife.org/2021/07/30/michael-klare-on-the-pentagons-view-of-climate-change-and-the-risks-of-state-collapse/ Check out the video version of the episode here: https://www.youtube.com/watch?v=bn57jxEoW24 Check out Michael's website here: http://michaelklare.com/ Apply for the Podcast Producer position here: futureoflife.org/job-postings/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:28 How does the Pentagon view climate change and why are they interested in it? 5:30 What are the Pentagon's main priorities besides climate change? 8:31 What are the objectives of career officers at the Pentagon and how do they see climate change? 10:32 The relationship between Pentagon career officers and the Trump administration on climate change 15:47 How is the Pentagon's view of climate change unique and important? 19:54 How climate change exacerbates existing difficulties and the issue of threat multiplication 24:25 How will climate change increase the tensions between the nuclear weapons states of India, Pakistan, and China? 26:32 What happened to Tacloban City and how is it relevant? 32:27 Why does the US military provide global humanitarian assistance? 34:39 How has climate change impacted the conditions in Nigeria and how does this inform the Pentagon's perspective? 39:40 What is the ladder of escalation for climate change related issues? 46:54 What is "all hell breaking loose?" 48:26 What is the geopolitical situation arising from the melting of the Arctic? 52:48 Why does the Bering Strait matter for the Arctic? 54:23 The Arctic as a main source of conflict for the great powers in the coming years 58:01 Are there ongoing proposals for resolving territorial disputes in the Arctic? 1:01:40 Nuclear weapons risk and climate change 1:03:32 How does the Pentagon intend to address climate change? 1:06:20 Hardening US military bases and going green 1:11:50 How climate change will affect critical infrastructure 1:15:47 How do lethal autonomous weapons fit into the risks of escalation in a world stressed by climate change? 1:19:42 How does this all affect existential risk? 1:24:39 Are there timelines for when climate change induced stresses will occur? 1:27:03 Does tying existential risks to national security issues benefit awareness around existential risk? 1:30:18 Does relating climate change to migration issues help with climate messaging? 1:31:08 A summary of the Pentagon's interest, view, and action on climate change 1:33:00 Final words from Michael 1:34:33 Where to find more of Michael's work This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. 
Contributions like yours make these conversations possible.
Avi Loeb, Professor of Science at Harvard University, joins us to discuss unidentified aerial phenomena and a recent US Government report assessing their existence and threat. Topics discussed in this episode include: -Evidence counting for the natural, human, and extraterrestrial origins of UAPs -The culture of science and how it deals with UAP reports -How humanity should respond if we discover UAPs are alien in origin -A project for collecting high quality data on UAPs You can find the page for this podcast here: https://futureoflife.org/2021/07/09/avi-loeb-on-ufos-and-if-theyre-alien-in-origin/ Apply for the Podcast Producer position here: futureoflife.org/job-postings/ Check out the video version of the episode here: https://www.youtube.com/watch?v=AyNlLaFTeFI&ab_channel=FutureofLifeInstitute Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:41 Why is the US Government report on UAPs significant? 7:08 Multiple different sensors detecting the same phenomena 11:50 Are UAPs a US technology? 13:20 Incentives to deploy powerful technology 15:48 What are the flight and capability characteristics of UAPs? 17:53 The similarities between 'Oumuamua and UAP reports 20:11 Are UAPs some form of spoofing technology? 22:48 What is the most convincing natural or conventional explanation of UAPs? 25:09 UAPs as potentially containing artificial intelligence 28:15 Can you give a credence to UAPs being alien in origin? 29:32 Why aren't UAPs far more technologically advanced? 32:15 How should humanity respond if UAPs are found to be alien in origin? 35:15 A plan to get better data on UAPs 38:56 Final thoughts from Avi 39:40 Getting in contact with Avi to support his project This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Avi Loeb, Professor of Science at Harvard University, joins us to discuss a recent interstellar visitor, if we've already encountered alien technology, and whether we're ultimately alone in the cosmos. Topics discussed in this episode include: -Whether 'Oumuamua is alien or natural in origin -The culture of science and how it affects fruitful inquiry -Looking for signs of alien life throughout the solar system and beyond -Alien artefacts and galactic treaties -How humanity should handle a potential first contact with extraterrestrials -The relationship between what is true and what is good You can find the page for this podcast here: https://futureoflife.org/2021/07/09/avi-loeb-on-oumuamua-aliens-space-archeology-great-filters-and-superstructures/ Apply for the Podcast Producer position here: https://futureoflife.org/job-postings/ Check out the video version of the episode here: https://www.youtube.com/watch?v=qcxJ8QZQkwE&ab_channel=FutureofLifeInstitute See our second interview with Avi here: https://soundcloud.com/futureoflife/avi-loeb-on-ufos-and-if-theyre-alien-in-origin Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 3:28 What is 'Oumuamua's wager? 11:29 The properties of 'Oumuamua and how they lend credence to the theory of it being artificial in origin 17:23 Theories of 'Oumuamua being natural in origin 21:42 Why was the smooth acceleration of 'Oumuamua significant? 23:35 What are comets and asteroids? 28:30 What we know about Oort clouds and how 'Oumuamua relates to what we expect of Oort clouds 33:40 Could there be exotic objects in Oort clouds that would account for 'Oumuamua 38:08 What is your credence that 'Oumuamua is alien in origin? 44:50 Bayesian reasoning and 'Oumuamua 46:34 How do UFO reports and sightings affect your perspective of 'Oumuamua? 54:35 Might alien artefacts be more common than we expect? 58:48 The Drake equation 1:01:50 Where are the most likely great filters? 1:11:22 Difficulties in scientific culture and how they affect fruitful inquiry 1:27:03 The cosmic endowment, traveling to galactic clusters, and galactic treaties 1:31:34 Why don't we find evidence of alien superstructures? 1:36:36 Looking for the bio and techno signatures of alien life 1:40:27 Do alien civilizations converge on beneficence? 1:43:05 Is there a necessary relationship between what is true and good? 1:47:02 Is morality evidence based knowledge? 1:48:18 Axiomatic based knowledge and testing moral systems 1:54:08 International governance and making contact with alien life 1:55:59 The need for an elite scientific body to advise on global catastrophic and existential risk 1:59:57 What are the most fundamental questions? This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Nicolas Berggruen, investor and philanthropist, joins us to explore the dynamics of power, wisdom, technology and ideas in the 21st century. Topics discussed in this episode include: -What wisdom consists of -The role of ideas in society and civilization -The increasing concentration of power and wealth -The technological displacement of human labor -Democracy, universal basic income, and universal basic capital -Living an examined life You can find the page for this podcast here: https://futureoflife.org/2021/05/31/nicolas-berggruen-on-the-dynamics-of-power-wisdom-technology-and-ideas-in-the-age-of-ai/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:45 The race between the power of our technology and the wisdom with which we manage it 5:19 What is wisdom? 8:30 The power of ideas 11:06 Humanity’s investment in wisdom vs the power of our technology 15:39 Why does our wisdom lag behind our power? 20:51 Technology evolving into an agent 24:28 How ideas play a role in the value alignment of technology 30:14 Wisdom for building beneficial AI and mitigating the race to power 34:37 Does Mark Zuckerberg have control of Facebook? 36:39 Safeguarding the human mind and maintaining control of AI 42:26 The importance of the examined life in the 21st century 45:56 An example of the examined life 48:54 Important ideas for the 21st century 52:46 The concentration of power and wealth, and a proposal for universal basic capital 1:03:07 Negative and positive futures 1:06:30 Final thoughts from Nicolas This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Bart Selman, Professor of Computer Science at Cornell University, joins us to discuss a wide range of AI issues, from autonomous weapons and AI consciousness to international governance and the possibilities of superintelligence. Topics discussed in this episode include: -Negative and positive outcomes from AI in the short, medium, and long-terms -The perils and promises of AGI and superintelligence -AI alignment and AI existential risk -Lethal autonomous weapons -AI governance and racing to powerful AI systems -AI consciousness You can find the page for this podcast here: https://futureoflife.org/2021/05/20/bart-selman-on-the-promises-and-perils-of-artificial-intelligence/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:35 Futures that Bart is excited about 4:08 Positive futures in the short, medium, and long-terms 7:23 AGI timelines 8:11 Bart’s research on “planning” through the game of Sokoban 13:10 If we don’t go extinct, is the creation of AGI and superintelligence inevitable? 15:28 What’s exciting about futures with AGI and superintelligence? 17:10 How long does it take for superintelligence to arise after AGI? 21:08 Would a superintelligence have something intelligent to say about income inequality? 23:24 Are there true or false answers to moral questions? 25:30 Can AGI and superintelligence assist with moral and philosophical issues? 28:07 Do you think superintelligences converge on ethics? 29:32 Are you most excited about the short or long-term benefits of AI? 34:30 Is existential risk from AI a legitimate threat? 35:22 Is the AI alignment problem legitimate? 43:29 What are futures that you fear? 46:24 Do social media algorithms represent an instance of the alignment problem? 51:46 The importance of educating the public on AI 55:00 Income inequality, cyber security, and negative futures 1:00:06 Lethal autonomous weapons 1:01:50 Negative futures in the long-term 1:03:26 How have your views of AI alignment evolved? 1:06:53 Bart’s plans and intentions for the Association for the Advancement of Artificial Intelligence 1:13:45 Policy recommendations for existing AIs and the AI ecosystem 1:15:35 Solving the parts of AI alignment that won’t be solved by industry incentives 1:18:17 Narratives of an international race to powerful AI systems 1:20:42 How does an international race to AI affect the chances of successful AI alignment? 1:23:20 Is AI a zero sum game? 1:28:51 Lethal autonomous weapons governance 1:31:38 Does the governance of autonomous weapons affect outcomes from AGI? 1:33:00 AI consciousness 1:39:37 Alignment is important and the benefits of AI can be great This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century. Topics discussed in this episode include: -Intelligence and coordination -Existential risk from AI, synthetic biology, and unknown unknowns -AI adoption as a delegation process -Jaan's investments and philanthropic efforts -International coordination and incentive structures -The short-term and long-term AI safety communities You can find the page for this podcast here: https://futureoflife.org/2021/04/20/jaan-tallinn-on-avoiding-civilizational-pitfalls-and-surviving-the-21st-century/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:29 How can humanity improve? 3:10 The importance of intelligence and coordination 8:30 The bottlenecks of input and output bandwidth as well as processing speed between AIs and humans 15:20 Making the creation of AI feel dangerous and how the nuclear power industry killed itself by downplaying risks 17:15 How Jaan evaluates and thinks about existential risk 18:30 Nuclear weapons as the first existential risk we faced 20:47 The likelihood of unknown unknown existential risks 25:04 Why Jaan doesn't see nuclear war as an existential risk 27:54 Climate change 29:00 Existential risk from synthetic biology 31:29 Learning from mistakes, lacking foresight, and the importance of generational knowledge 36:23 AI adoption as a delegation process 42:52 Attractors in the design space of AI 44:24 The regulation of AI 45:31 Jaan's investments and philanthropy in AI 55:18 International coordination issues from AI adoption as a delegation process 57:29 AI today and the negative impacts of recommender algorithms 1:02:43 Collective, institutional, and interpersonal coordination 1:05:23 The benefits and risks of longevity research 1:08:29 The long-term and short-term AI safety communities and their relationship with one another 1:12:35 Jaan's current philanthropic efforts 1:16:28 Software as a philanthropic target 1:19:03 How do we move towards beneficial futures with AI? 1:22:30 An idea Jaan finds meaningful 1:23:33 Final thoughts from Jaan 1:25:27 Where to find Jaan This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Joscha Bach, Cognitive Scientist and AI researcher, as well as Anthony Aguirre, UCSC Professor of Physics, join us to explore the world through the lens of computation and the difficulties we face on the way to beneficial futures. Topics discussed in this episode include: -Understanding the universe through digital physics -How human consciousness operates and is structured -The path to aligned AGI and bottlenecks to beneficial futures -Incentive structures and collective coordination You can find the page for this podcast here: https://futureoflife.org/2021/03/31/joscha-bach-and-anthony-aguirre-on-digital-physics-and-moving-towards-beneficial-futures/ You can find FLI's three new policy focused job postings here: futureoflife.org/job-postings/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 3:17 What is truth and knowledge? 11:39 What is subjectivity and objectivity? 14:32 What is the universe ultimately? 19:22 Is the universe a cellular automaton? Is the universe ultimately digital or analogue? 24:05 Hilbert's hotel from the point of view of computation 35:18 Seeing the world as a fractal 38:48 Describing human consciousness 51:10 Meaning, purpose, and harvesting negentropy 55:08 The path to aligned AGI 57:37 Bottlenecks to beneficial futures and existential security 1:06:53 A future with one, several, or many AGI systems? How do we maintain appropriate incentive structures? 1:19:39 Non-duality and collective coordination 1:22:53 What difficulties are there for an idealist worldview that involves computation? 1:27:20 Which features of mind and consciousness are necessarily coupled and which aren't? 1:36:40 Joscha's final thoughts on AGI This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety. Topics discussed in this episode include: -Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI -The relationship between AI safety, control, and alignment -Virtual worlds as a proposal for solving multi-multi alignment -AI security You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/ You can find FLI's three new policy focused job postings here: https://futureoflife.org/job-postings/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:35 Roman’s primary research interests 4:09 How theoretical proofs help AI safety research 6:23 How impossibility results constrain computer science systems 10:18 The inability to tell if arbitrary code is friendly or unfriendly 12:06 Impossibility results clarify what we can do 14:19 Roman’s results on unexplainability and incomprehensibility 22:34 Focusing on comprehensibility 26:17 Roman’s results on uncontrollability 28:33 Alignment as a subset of safety and control 30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment 33:40 What does it mean to solve AI safety? 34:19 What do the impossibility results really mean? 37:07 Virtual worlds and AI alignment 49:55 AI security and malevolent agents 53:00 Air gapping, boxing, and other security methods 58:43 Some examples of historical failures of AI systems and what we can learn from them 1:01:20 Clarifying impossibility results 1:06:55 Examples of systems failing and what these demonstrate about AI 1:08:20 Are oracles a valid approach to AI safety? 1:10:30 Roman’s final thoughts This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Stuart Russell, Professor of Computer Science at UC Berkeley, and Zachary Kallenborn, WMD and drone swarms expert, join us to discuss the highest risk and most destabilizing aspects of lethal autonomous weapons. Topics discussed in this episode include: -The current state of the deployment and development of lethal autonomous weapons and swarm technologies -Drone swarms as a potential weapon of mass destruction -The risks of escalation, unpredictability, and proliferation with regards to autonomous weapons -The difficulty of attribution, verification, and accountability with autonomous weapons -Autonomous weapons governance as norm setting for global AI issues You can find the page for this podcast here: https://futureoflife.org/2021/02/25/stuart-russell-and-zachary-kallenborn-on-drone-swarms-and-the-riskiest-aspects-of-lethal-autonomous-weapons/ You can check out the new lethal autonomous weapons website here: https://autonomousweapons.org/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:23 Emilia Javorsky on lethal autonomous weapons 7:27 What is a lethal autonomous weapon? 11:33 Autonomous weapons that exist today 16:57 The concerns of collateral damage, accidental escalation, scalability, control, and error risk 26:57 The proliferation risk of autonomous weapons 32:30 To what extent are global superpowers pursuing these weapons? What is the state of industry's pursuit of the research and manufacturing of this technology 42:13 A possible proposal for a selective ban on small anti-personnel autonomous weapons 47:20 Lethal autonomous weapons as a potential weapon of mass destruction 53:49 The unpredictability of autonomous weapons, especially when swarms are interacting with other swarms 58:09 The risk of autonomous weapons escalating conflicts 01:10:50 The risk of drone swarms proliferating 01:20:16 The risk of assassination 01:23:25 The difficulty of attribution and accountability 01:26:05 The governance of autonomous weapons being relevant to the global governance of AI 01:30:11 The importance of verification for responsibility, accountability, and regulation 01:35:50 Concerns about the beginning of an arms race and the need for regulation 01:38:46 Wrapping up 01:39:23 Outro This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
John Prendergast, former adjunct professor of psychology at the California Institute of Integral Studies, joins Lucas Perry for a discussion about the experience and effects of ego-identification, how to shift to new levels of identity, the nature of non-dual awareness, and the potential relationship between waking up and collective human problems. This is not an FLI Podcast, but a special release where Lucas shares a direction he feels has an important relationship with AI alignment and existential risk issues. Topics discussed in this episode include: -The experience of egocentricity and ego-identification -Waking up into heart awareness -The movement towards and qualities of non-dual consciousness -The ways in which the condition of our minds collectively affect the world -How waking up may be relevant to the creation of AGI You can find the page for this podcast here: https://futureoflife.org/2021/02/09/john-prendergast-on-non-dual-awareness-and-wisdom-for-the-21st-century/ Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 7:10 The modern human condition 9:29 What egocentricity and ego-identification are 15:38 Moving beyond the experience of self 17:38 The origins and structure of self 20:25 A pointing out instruction for noticing ego-identification and waking up out of it 24:34 A pointing out instruction for abiding in heart-mind or heart awareness 28:53 The qualities of and moving into heart awareness and pure awareness 33:48 An explanation of non-dual awareness 40:50 Exploring the relationship between awareness, belief, and action 46:25 Growing up and improving the egoic structure 48:29 Waking up as recognizing true nature 51:04 Exploring awareness as primitive and primary 53:56 John's dream of Sri Nisargadatta Maharaj 57:57 The use and value of conceptual thought and the mind 1:00:57 The epistemics of heart-mind and the conceptual mind as we shift levels of identity 1:17:46 A pointing out instruction for inquiring into core beliefs 1:27:28 The universal heart, qualities of awakening, and the ethical implications of such shifts 1:31:38 Wisdom, waking up, and growing up for the transgenerational issues of the 21st century 1:38:44 Waking up and its applicability to the creation of AGI 1:43:25 Where to find, follow, and reach out to John 1:45:56 Outro This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Beatrice Fihn, executive director of the International Campaign to Abolish Nuclear Weapons (ICAN) and Nobel Peace Prize recipient, joins us to discuss the current risks of nuclear war, policies that can reduce the risks of nuclear conflict, and how to move towards a nuclear-weapons-free world. Topics discussed in this episode include: -The current nuclear weapons geopolitical situation -The risks and mechanics of accidental and intentional nuclear war -Policy proposals for reducing the risks of nuclear war -Deterrence theory -The Treaty on the Prohibition of Nuclear Weapons -Working towards the total elimination of nuclear weapons You can find the page for this podcast here: https://futureoflife.org/2021/01/21/beatrice-fihn-on-the-total-elimination-of-nuclear-weapons/ Timestamps: 0:00 Intro 4:28 Overview of the current nuclear weapons situation 6:47 The 9 nuclear weapons states, and accidental and intentional nuclear war 9:27 Accidental nuclear war and human systems 12:08 The risks of nuclear war in 2021 and nuclear stability 17:49 Toxic personalities and the human component of nuclear weapons 23:23 Policy proposals for reducing the risk of nuclear war 23:55 New START Treaty 25:42 What does it mean to maintain credible deterrence? 26:45 ICAN and working on the Treaty on the Prohibition of Nuclear Weapons 28:00 Deterrence theoretic arguments for nuclear weapons 32:36 The reduction of nuclear weapons, no first use, removing ground-based missile systems, removing hair-trigger alert, removing presidential authority to use nuclear weapons 39:13 Arguments for and against nuclear risk reduction policy proposals 46:02 Moving all of the United States' nuclear weapons to bombers and nuclear submarines 48:27 Working towards the total elimination of nuclear weapons and the theory behind it 1:11:40 The value of the Treaty on the Prohibition of Nuclear Weapons 1:14:26 Elevating activism around nuclear weapons and messaging more skillfully 1:15:40 What the public needs to understand about nuclear weapons 1:16:35 World leaders' views of the treaty 1:17:15 How to get involved This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Max Tegmark and members of the FLI core team come together to discuss favorite projects from 2020, what we've learned from the past year, and what we think is needed for existential risk reduction in 2021. Topics discussed in this episode include: -FLI's perspectives on 2020 and hopes for 2021 -What our favorite projects from 2020 were -The biggest lessons we've learned from 2020 -What we see as crucial and needed in 2021 to ensure and make improvements towards existential safety You can find the page for this podcast here: https://futureoflife.org/2021/01/08/max-tegmark-and-the-fli-team-on-2020-and-existential-risk-reduction-in-the-new-year/ Timestamps: 0:00 Intro 00:52 First question: What was your favorite project from 2020? 1:03 Max Tegmark on the Future of Life Award 4:15 Anthony Aguirre on AI Loyalty 9:18 David Nicholson on the Future of Life Award 12:23 Emilia Javorsky on being a co-champion for the UN Secretary-General's effort on digital cooperation 14:03 Jared Brown on developing comments on the European Union's White Paper on AI through community collaboration 16:40 Tucker Davey on editing the biography of Victor Zhdanov 19:49 Lucas Perry on the podcast and Pindex video 23:17 Second question: What lessons do you take away from 2020? 23:26 Max Tegmark on human fragility and vulnerability 25:14 Max Tegmark on learning from history 26:47 Max Tegmark on the growing threats of AI 29:45 Anthony Aguirre on the inability of present-day institutions to deal with large unexpected problems 33:00 David Nicholson on the need for self-reflection on the use and development of technology 38:05 Emilia Javorsky on the global community coming to awareness about tail risks 39:48 Jared Brown on our vulnerability to low-probability, high-impact events and the importance of adaptability and policy engagement 41:43 Tucker Davey on taking existential risks more seriously and ethics-washing 43:57 Lucas Perry on the fragility of human systems 45:40 Third question: What is needed in 2021 to make progress on existential risk mitigation? 45:50 Max Tegmark on holding Big Tech accountable, repairing geopolitics, and fighting the myth of the technological zero-sum game 49:58 Anthony Aguirre on the importance of spreading understanding of expected value reasoning and fixing the information crisis 53:41 David Nicholson on the need to reflect on our values and relationship with technology 54:35 Emilia Javorsky on the importance of returning to multilateralism and global dialogue 56:00 Jared Brown on the need for robust government engagement 57:30 Lucas Perry on the need for creating institutions for existential risk mitigation and global cooperation 1:00:10 Outro This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
The recipients of the 2020 Future of Life Award, William Foege, Michael Burkinsky, and Victor Zhdanov Jr., join us on this episode of the FLI Podcast to recount the story of smallpox eradication, William Foege's and Victor Zhdanov Sr.'s involvement in the eradication, and their personal experience of the events. Topics discussed in this episode include: -William Foege's and Victor Zhdanov's efforts to eradicate smallpox -Personal stories from Foege's and Zhdanov's lives -The history of smallpox -Biological issues of the 21st century You can find the page for this podcast here: https://futureoflife.org/2020/12/11/future-of-life-award-2020-saving-200000000-lives-by-eradicating-smallpox/ You can watch the 2020 Future of Life Award ceremony here: https://www.youtube.com/watch?v=73WQvR5iIgk&feature=emb_title&ab_channel=FutureofLifeInstitute You can learn more about the Future of Life Award here: https://futureoflife.org/future-of-life-award/ Timestamps: 0:00 Intro 3:13 Part 1: How William Foege got into smallpox efforts and his work in Eastern Nigeria 14:12 The USSR's smallpox eradication efforts and convincing the WHO to take up global smallpox eradication 15:46 William Foege's efforts in and with the WHO for smallpox eradication 18:00 Surveillance and containment as a viable strategy 18:51 Implementing surveillance and containment throughout the world after success in West Africa 23:55 Wrapping up with eradication and dealing with the remnants of smallpox 25:35 Lab escape of smallpox in Birmingham England and the final natural case 27:20 Part 2: Introducing Michael Burkinsky as well as Victor and Katia Zhdanov 29:45 Introducing Victor Zhdanov Sr. and Alissa Zhdanov 31:05 Michael Burkinsky's memories of Victor Zhdanov Sr. 39:26 Victor Zhdanov Jr.'s memories of Victor Zhdanov Sr. 46:15 Mushrooms with meat 47:56 Stealing the family car 49:27 Victor Zhdanov Sr.'s efforts at the WHO for smallpox eradication 58:27 Exploring Alissa's book on Victor Zhdanov Sr.'s life 1:06:09 Michael's view that Victor Zhdanov Sr. is unsung, especially in Russia 1:07:18 Part 3: William Foege on the history of smallpox and biology in the 21st century 1:07:32 The origin and history of smallpox 1:10:34 The origin and history of variolation and the vaccine 1:20:15 West African "healers" who would create smallpox outbreaks 1:22:25 The safety of the smallpox vaccine vs. modern vaccines 1:29:40 A favorite story of William Foege's 1:35:50 Larry Brilliant and people central to the eradication efforts 1:37:33 Foege's perspective on modern pandemics and human bias 1:47:56 What should we do after COVID-19 ends 1:49:30 Bio-terrorism, existential risk, and synthetic pandemics 1:53:20 Foege's final thoughts on the importance of global health experts in politics This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Sean Carroll, theoretical physicist at Caltech, joins us on this episode of the FLI Podcast to comb through the history of human thought, the strengths and weaknesses of various intellectual movements, and how we are to situate ourselves in the 21st century given progress thus far. Topics discussed in this episode include: -Important intellectual movements and their merits -The evolution of metaphysical and epistemological views over human history -Consciousness, free will, and philosophical blunders -Lessons for the 21st century You can find the page for this podcast here: https://futureoflife.org/2020/12/01/sean-carroll-on-consciousness-physicalism-and-the-history-of-intellectual-progress/ You can find the video for this podcast here: https://youtu.be/6HNjL8_fsTk Timestamps: 0:00 Intro 2:06 The problem of beliefs and the strengths and weaknesses of religion 6:40 The Age of Enlightenment and the importance of reason 10:13 The importance of humility and the is-ought gap 17:53 The advantages of religion and mysticism 19:50 Materialism and Newtonianism 28:00 Duality, self, suffering, and philosophical blunders 36:56 Quantum physics as a paradigm shift 39:24 Physicalism, the problem of consciousness, and free will 01:01:50 What does it mean for something to be real? 01:09:40 The hard problem of consciousness 01:14:20 The many-worlds interpretation of quantum mechanics and utilitarianism 01:21:16 The importance of being charitable in conversation 1:24:55 Sean's position in the philosophy of consciousness 01:27:29 Sean's metaethical position 01:29:36 Where to find and follow Sean This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Mohamed Abdalla, PhD student at the University of Toronto, joins us to discuss how Big Tobacco and Big Tech work to manipulate public opinion and academic institutions in order to maximize profits and avoid regulation. Topics discussed in this episode include: -How Big Tobacco uses its wealth to obfuscate the harm of tobacco and appear socially responsible -The tactics shared by Big Tech and Big Tobacco to perform ethics-washing and avoid regulation -How Big Tech and Big Tobacco work to influence universities, scientists, researchers, and policy makers -How to combat the problem of ethics-washing in Big Tech You can find the page for this podcast here: https://futureoflife.org/2020/11/17/mohamed-abdalla-on-big-tech-ethics-washing-and-the-threat-on-academic-integrity/ The Future of Life Institute AI policy page: https://futureoflife.org/AI-policy/ Timestamps: 0:00 Intro 1:55 How Big Tech actively distorts the academic landscape and what counts as Big Tech 6:00 How Big Tobacco has shaped industry research 12:17 The four tactics of Big Tobacco and Big Tech 13:34 Big Tech and Big Tobacco working to appear socially responsible 22:15 Big Tech and Big Tobacco working to influence the decisions made by funded universities 32:25 Big Tech and Big Tobacco working to influence research questions and the plans of individual scientists 51:53 Big Tech and Big Tobacco finding skeptics and critics of them and funding them to give the impression of social responsibility 1:00:24 Big Tech and being authentically socially responsible 1:11:41 Transformative AI, social responsibility, and the race to powerful AI systems 1:16:56 Ethics-washing as systemic 1:17:30 Action items for solving ethics-washing 1:19:42 Has Mohamed received criticism for this paper? 1:20:07 Final thoughts from Mohamed This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Maria Arpa, Executive Director of the Center for Nonviolent Communication, joins the FLI Podcast to share the ins and outs of the powerful needs-based framework of nonviolent communication. Topics discussed in this episode include: -What nonviolent communication (NVC) consists of -How NVC is different from normal discourse -How NVC is composed of observations, feelings, needs, and requests -NVC for systemic change -Foundational assumptions in NVC -An NVC exercise You can find the page for this podcast here: https://futureoflife.org/2020/11/02/maria-arpa-on-the-power-of-nonviolent-communication/ Timestamps: 0:00 Intro 2:50 What is nonviolent communication? 4:05 How is NVC different from normal discourse? 18:40 NVC’s four components: observations, feelings, needs, and requests 34:50 NVC for systemic change 54:20 The foundational assumptions of NVC 58:00 An exercise in NVC This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Stephen Batchelor, a Secular Buddhist teacher and former monk, joins the FLI Podcast to discuss the project of awakening, the facets of human nature which contribute to extinction risk, and how we might better embrace existential threats. Topics discussed in this episode include: -The projects of awakening and growing the wisdom with which to manage technologies -What might become possible by embarking on the project of waking up -Facets of human nature that contribute to existential risk -The dangers of the problem solving mindset -Improving the effective altruism and existential risk communities You can find the page for this podcast here: https://futureoflife.org/2020/10/15/stephen-batchelor-on-awakening-embracing-existential-risk-and-secular-buddhism/ Timestamps: 0:00 Intro 3:40 Albert Einstein and the quest for awakening 8:45 Non-self, emptiness, and non-duality 25:48 Stephen's conception of awakening, and making the wise more powerful vs the powerful more wise 33:32 The importance of insight 49:45 The present moment, creativity, and suffering/pain/dukkha 58:44 Stephen's article, Embracing Extinction 1:04:48 The dangers of the problem solving mindset 1:26:12 Improving the effective altruism and existential risk communities 1:37:30 Where to find and follow Stephen This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change. Topics discussed in this episode include: - The risks of climate change in the short-term - Tipping points and tipping cascades - Climate intervention via marine cloud brightening and releasing particles in the stratosphere - The benefits and risks of climate intervention techniques - The international politics of climate change and weather modification You can find the page for this podcast here: https://futureoflife.org/2020/09/30/kelly-wanser-on-marine-cloud-brightening-for-mitigating-climate-change/ Video recording of this podcast here: https://youtu.be/CEUEFUkSMHU Timestamps: 0:00 Intro 2:30 What is SilverLining’s mission? 4:27 Why is climate change thought to be very risky in the next 10-30 years? 8:40 Tipping points and tipping cascades 13:25 Is climate change an existential risk? 17:39 Earth systems that help to stabilize the climate 21:23 Days when it will be unsafe to work outside 25:03 Marine cloud brightening, stratospheric sunlight reflection, and other climate interventions SilverLining is interested in 41:46 What experiments are happening to understand tropospheric and stratospheric climate interventions? 50:20 International politics of weather modification 53:52 How do efforts to reduce greenhouse gas emissions fit into the project of reflecting sunlight? 57:35 How would you respond to someone who views climate intervention by marine cloud brightening as too dangerous? 59:33 What are the main points of those skeptical of climate intervention approaches? 01:13:21 The international problem of coordinating on climate change 01:24:50 Is climate change a global catastrophic or existential risk, and how does it relate to other large risks? 01:33:20 Should effective altruists spend more time on the issue of climate change and climate intervention? 01:37:48 What can listeners do to help with this issue? 01:40:00 Climate change and Mars colonization 01:44:55 Where to find and follow Kelly This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety and the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as being the most likely source of existential risk: the possibility of externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety issues are not naturally covered by industry incentives. Topics discussed in this episode include: - The mainstream computer science view of AI existential risk - Distinguishing AI safety from AI existential safety - The need for more precise terminology in the field of AI existential safety and alignment - The concept of prepotent AI systems and the problem of delegation - Which alignment problems get solved by commercial incentives and which don’t - The threat of diffusion of responsibility on AI existential safety considerations not covered by commercial incentives - Prepotent AI risk types that lead to unsurvivability for humanity You can find the page for this podcast here: https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/ Timestamps: 0:00 Intro 2:53 Why Andrew wrote ARCHES and what it’s about 6:46 The perspective of the mainstream CS community on AI existential risk 13:03 ARCHES in relation to AI existential risk literature 16:05 The distinction between safety and existential safety 24:27 Existential risk is most likely to obtain through externalities 29:03 The relationship between existential safety and safety for current systems 33:17 Research areas that may not be solved by natural commercial incentives 51:40 What’s an AI system and an AI technology? 53:42 Prepotent AI 59:41 Misaligned prepotent AI technology 01:05:13 Human frailty 01:07:37 The importance of delegation 01:14:11 Single-single, single-multi, multi-single, and multi-multi 01:15:26 Control, instruction, and comprehension 01:20:40 The multiplicity thesis 01:22:16 Risk types from prepotent AI that lead to human unsurvivability 01:34:06 Flow-through effects 01:41:00 Multi-stakeholder objectives 01:49:08 Final words from Andrew This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely explicitly enter the picture. In the realm of AI alignment, however, the normative and technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI carries its own normative and metaethical assumptions that will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and technical in AI alignment and to discuss his recent paper Artificial Intelligence, Values and Alignment. Topics discussed in this episode include: -How moral philosophy and political theory are deeply related to AI alignment -The problem of dealing with a plurality of preferences and philosophical views in AI alignment -How the is-ought problem and metaethics fit into alignment -What we should be aligning AI systems to -The importance of democratic solutions to questions of AI alignment -The long reflection You can find the page for this podcast here: https://futureoflife.org/2020/09/03/iason-gabriel-on-foundational-philosophical-questions-in-ai-alignment/ Timestamps: 0:00 Intro 2:10 Why Iason wrote Artificial Intelligence, Values and Alignment 3:12 What AI alignment is 6:07 The technical and normative aspects of AI alignment 9:11 The normative being dependent on the technical 14:30 Coming up with an appropriate alignment procedure given the is-ought problem 31:15 What systems are subject to an alignment procedure? 39:55 What is it that we're trying to align AI systems to? 01:02:30 Single agent and multi agent alignment scenarios 01:27:00 What is the procedure for choosing which evaluative model(s) will be used to judge different alignment proposals 01:30:28 The long reflection 01:53:55 Where to follow and contact Iason This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow for us to successfully navigate our way through complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning allows us to solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasingly autonomous and active in social situations involving human and non-human agents, AI moral competency via the capacity for moral learning will become more and more critical. On this episode of the AI Alignment Podcast, Peter Railton joins us to discuss the potential role of moral learning and moral epistemology in AI systems, as well as his views on metaethics. Topics discussed in this episode include: -Moral epistemology -The potential relevance of metaethics to AI alignment -The importance of moral learning in AI systems -Peter Railton's, Derek Parfit's, and Peter Singer's metaethical views You can find the page for this podcast here: https://futureoflife.org/2020/08/18/peter-railton-on-moral-learning-and-metaethics-in-ai-systems/ Timestamps: 0:00 Intro 3:05 Does metaethics matter for AI alignment? 22:49 Long-reflection considerations 26:05 Moral learning in humans 35:07 The need for moral learning in artificial intelligence 53:57 Peter Railton's views on metaethics and his discussions with Derek Parfit 1:38:50 The need for engagement between philosophers and the AI alignment community 1:40:37 Where to find Peter's work This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
It's well-established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out and the AI, likely being a powerful optimizer, will take advantage of the dimensions of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals in the objective function, but can have catastrophic consequences for human preferences and values the system fails to consider. Is it possible for misalignment to also occur between the model being trained and the objective function used for training? The answer looks like yes. Evan Hubinger from the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, as well as to evaluate three proposals for building safe advanced AI. Topics discussed in this episode include: -Inner and outer alignment -How and why inner alignment can fail -Training competitiveness and performance competitiveness -Evaluating imitative amplification, AI safety via debate, and microscope AI You can find the page for this podcast here: https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/ Timestamps: 0:00 Intro 2:07 How Evan got into AI alignment research 4:42 What is AI alignment? 7:30 How Evan approaches AI alignment 13:05 What are inner alignment and outer alignment? 24:23 Gradient descent 36:30 Testing for inner alignment 38:38 Wrapping up on outer alignment 44:24 Why is inner alignment a priority? 45:30 How inner alignment fails 01:11:12 Training competitiveness and performance competitiveness 01:16:17 Evaluating proposals for building safe and advanced AI via inner and outer alignment, as well as training and performance competitiveness 01:17:30 Imitative amplification 01:23:00 AI safety via debate 01:26:32 Microscope AI 01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment 01:34:45 Where to follow Evan and find more of his work This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
This is a mix by Barker, a Berlin-based music producer, that was featured on our last podcast: Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix). We hope that you'll find inspiration and well-being in this soundscape. You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/ Tracklist: Delta Rain Dance - 1 John Beltran - A Different Dream Rrose - Horizon Alexandroid - lvpt3 Datassette - Drizzle Fort Conrad Sprenger - Opening JakoJako - Wavetable#1 Barker & David Goldberg - #3 Barker & Baumecker - Organik (Intro) Anthony Linell - Fractal Vision Ametsub - Skydroppin’ LadyfishMewark - Comfortable JakoJako & Barker - [unreleased] Where to follow Sam Barker: Soundcloud: @voltek Twitter: twitter.com/samvoltek Instagram: www.instagram.com/samvoltek/ Website: www.voltek-labs.net/ Bandcamp: sambarker.bandcamp.com/ Where to follow Sam's label, Ostgut Ton: Soundcloud: @ostgutton-official Facebook: www.facebook.com/Ostgut.Ton.OFFICIAL/ Twitter: twitter.com/ostgutton Instagram: www.instagram.com/ostgut_ton/ Bandcamp: ostgut.bandcamp.com/ This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam is the creator of euphoric soundscapes inspired by the writings of David Pearce, largely exemplified in his latest album, aptly named "Utility." Sam's artistry, motivated by blissful visions of the future, pairs naturally with David's philosophical and technological writings on the potential for the biological domestication of heaven, making for a fusion of artistic, moral, and intellectual excellence. This podcast explores what significance Sam found in David's work, how it informed his music production, and Sam and David's optimistic visions of the future; it also features a guest mix by Sam and plenty of musical content. Topics discussed in this episode include: -The relationship between Sam's music and David's writing -Existential hope -Ideas from the Hedonistic Imperative -Sam's albums -The future of art and music You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/ You can find the mix with no interview portion of the podcast here: https://soundcloud.com/futureoflife/barker-hedonic-recalibration-mix Where to follow Sam Barker: Soundcloud: https://soundcloud.com/voltek Twitter: https://twitter.com/samvoltek Instagram: https://www.instagram.com/samvoltek/ Website: https://www.voltek-labs.net/ Bandcamp: https://sambarker.bandcamp.com/ Where to follow Sam's label, Ostgut Ton: Soundcloud: https://soundcloud.com/ostgutton-official Facebook: https://www.facebook.com/Ostgut.Ton.OFFICIAL/ Twitter: https://twitter.com/ostgutton Instagram: https://www.instagram.com/ostgut_ton/ Bandcamp: https://ostgut.bandcamp.com/ Timestamps: 0:00 Intro 5:40 The inspiration around Sam's music 17:38 Barker - Maximum Utility 20:03 David and Sam on their work 23:45 Do any of the tracks evoke specific visions or hopes? 24:40 Barker - Die-Hards Of The Darwinian Order 28:15 Barker - Paradise Engineering 31:20 Barker - Hedonic Treadmill 33:05 The future and evolution of art 54:03 David on how good the future can be 58:36 Guest mix by Barker Tracklist: Delta Rain Dance – 1 John Beltran – A Different Dream Rrose – Horizon Alexandroid – lvpt3 Datassette – Drizzle Fort Conrad Sprenger – Opening JakoJako – Wavetable#1 Barker & David Goldberg – #3 Barker & Baumecker – Organik (Intro) Anthony Linell – Fractal Vision Ametsub – Skydroppin’ LadyfishMewark – Comfortable JakoJako & Barker – [unreleased] This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, bestselling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more. Topics discussed in this episode include: -The historical and intellectual foundations of AI -How AI systems achieve or do not achieve intelligence in the same way as the human mind -The rise of AI and what it signifies -The benefits and risks of AI in both the short and long term -Whether superintelligent AI will pose an existential risk to humanity You can find the page for this podcast here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/ You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3 You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/ Timestamps: 0:00 Intro 4:30 The historical and intellectual foundations of AI 11:11 Moving beyond dualism 13:16 Regarding the objectives of an agent as fixed 17:20 The distinction between artificial intelligence and deep learning 22:00 How AI systems achieve or do not achieve intelligence in the same way as the human mind 49:46 What changes to human society does the rise of AI signal? 54:57 What are the benefits and risks of AI? 01:09:38 Do superintelligent AI systems pose an existential threat to humanity? 01:51:30 Where to find and follow Steve and Stuart This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we're serious both as individuals and as a species about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them. Topics discussed in this episode include: -The problem of communication -Global priorities -Existential risk -Animal suffering in both wild animals and factory farmed animals -Global poverty -Artificial general intelligence risk and AI alignment -Ethics -Sam’s book, The Moral Landscape You can find the page for this podcast here: https://futureoflife.org/2020/06/01/on-global-priorities-existential-risk-and-what-matters-most-with-sam-harris/ You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3 You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/ Timestamps: 0:00 Intro 3:52 What are the most important problems in the world? 13:14 Global priorities: existential risk 20:15 Why global catastrophic risks are more likely than existential risks 25:09 Longtermist philosophy 31:36 Making existential and global catastrophic risk more emotionally salient 34:41 How analyzing the self makes longtermism more attractive 40:28 Global priorities & effective altruism: animal suffering and global poverty 56:03 Is machine suffering the next global moral catastrophe? 59:36 AI alignment and artificial general intelligence/superintelligence risk 01:11:25 Expanding our moral circle of compassion 01:13:00 The Moral Landscape, consciousness, and moral realism 01:30:14 Can bliss and wellbeing be mathematically defined? 01:31:03 Where to follow Sam and concluding thoughts Photo by Christopher Michel: https://www.flickr.com/photos/cmichel67/ This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Progress in synthetic biology and genetic engineering promises to bring advancements in human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents which could pose global catastrophic and existential risks to life on Earth. George Church, a titan of synthetic biology, joins us on this episode of the FLI Podcast to discuss the benefits and risks of our growing knowledge of synthetic biology, its role in the future of life, and what we can do to make sure it remains beneficial. Will our wisdom keep pace with our expanding capabilities? Topics discussed in this episode include: -Existential risk -Computational substrates and AGI -Genetics and aging -Risks of synthetic biology -Obstacles to space colonization -Great Filters, consciousness, and eliminating suffering You can find the page for this podcast here: https://futureoflife.org/2020/05/15/on-the-future-of-computation-synthetic-biology-and-life-with-george-church/ You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3 You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/ Timestamps: 0:00 Intro 3:58 What are the most important issues in the world? 12:20 Collective intelligence, AI, and the evolution of computational systems 33:06 Where we are with genetics 38:20 Timeline on progress for anti-aging technology 39:29 Synthetic biology risk 46:19 George's thoughts on COVID-19 49:44 Obstacles to overcome for space colonization 56:36 Possibilities for "Great Filters" 59:57 Genetic engineering for combating climate change 01:02:00 George's thoughts on the topic of "consciousness" 01:08:40 Using genetic engineering to phase out voluntary suffering 01:12:17 Where to find and follow George This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Essential to our assessment of risk and ability to plan for the future is our understanding of the probability of certain events occurring. If we can estimate the likelihood of risks, then we can evaluate their relative importance and apply our risk mitigation resources effectively. Predicting the future is, obviously, far from easy — and yet a community of "superforecasters" is attempting to do just that. Not only are they trying, but these superforecasters are also reliably outperforming subject matter experts at making predictions in their own fields. Robert de Neufville joins us on this episode of the FLI Podcast to explain what superforecasting is, how it's done, and the ways it can help us with crucial decision making. Topics discussed in this episode include: -What superforecasting is and what the community looks like -How superforecasting is done and its potential use in decision making -The challenges of making predictions -Predictions about and lessons from COVID-19 You can find the page for this podcast here: https://futureoflife.org/2020/04/30/on-superforecasting-with-robert-de-neufville/ You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3 You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/ Timestamps: 0:00 Intro 5:00 What is superforecasting? 7:22 Who are superforecasters and where did they come from? 10:43 How is superforecasting done and what are the relevant skills? 15:12 Developing a better understanding of probabilities 18:42 How is it that superforecasters are better at making predictions than subject matter experts? 21:43 COVID-19 and a failure to understand exponentials 24:27 What organizations and platforms exist in the space of superforecasting? 27:31 What's up for consideration in an actual forecast 28:55 How are forecasts aggregated? Are they used? 31:37 How accurate are superforecasters? 34:34 How is superforecasting complementary to global catastrophic risk research and efforts? 39:15 The kinds of superforecasting platforms that exist 43:00 How accurate can we get around global catastrophic and existential risks? 46:20 How to deal with extremely rare risk and how to evaluate your prediction after the fact 53:33 Superforecasting, expected value calculations, and their use in decision making 56:46 Failure to prepare for COVID-19 and if superforecasting will be increasingly applied to critical decision making 01:01:55 What can we do to improve the use of superforecasting? 01:02:54 Forecasts about COVID-19 01:11:43 How do you convince others of your ability as a superforecaster? 01:13:55 Expanding the kinds of questions we do forecasting on 01:15:49 How to utilize subject experts and superforecasters 01:17:54 Where to find and follow Robert This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Just a year ago we released a two part episode titled An Overview of Technical AI Alignment with Rohin Shah. That conversation provided details on the views of central AI alignment research organizations and many of the ongoing research efforts for designing safe and aligned systems. Much has happened in the past twelve months, so we've invited Rohin — along with fellow researcher Buck Shlegeris — back for a follow-up conversation. Today's episode focuses especially on the state of current research efforts for beneficial AI, as well as Buck's and Rohin's thoughts about the varying approaches and the difficulties we still face. This podcast thus serves as a non-exhaustive overview of how the field of AI alignment has updated and how thinking is progressing. Topics discussed in this episode include: -Rohin's and Buck's optimism and pessimism about different approaches to aligned AI -Traditional arguments for AI as an x-risk -Modeling agents as expected utility maximizers -Ambitious value learning and specification learning/narrow value learning -Agency and optimization -Robustness -Scaling to superhuman abilities -Universality -Impact regularization -Causal models, oracles, and decision theory -Discontinuous and continuous takeoff scenarios -Probability of AI-induced existential risk -Timelines for AGI -Information hazards You can find the page for this podcast here: https://futureoflife.org/2020/04/15/an-overview-of-technical-ai-alignment-in-2018-and-2019-with-buck-shlegeris-and-rohin-shah/ Timestamps: 0:00 Intro 3:48 Traditional arguments for AI as an existential risk 5:40 What is AI alignment? 7:30 Back to a basic analysis of AI as an existential risk 18:25 Can we model agents in ways other than as expected utility maximizers? 19:34 Is it skillful to try and model human preferences as a utility function? 27:09 Suggestions for alternatives to modeling humans with utility functions 40:30 Agency and optimization 45:55 Embedded decision theory 48:30 More on value learning 49:58 What is robustness and why does it matter? 01:13:00 Scaling to superhuman abilities 01:26:13 Universality 01:33:40 Impact regularization 01:40:34 Causal models, oracles, and decision theory 01:43:05 Forecasting as well as discontinuous and continuous takeoff scenarios 01:53:18 What is the probability of AI-induced existential risk? 02:00:53 Likelihood of continuous and discontinuous takeoff scenarios 02:08:08 What would you both do if you had more power and resources? 02:12:38 AI timelines 02:14:00 Information hazards 02:19:19 Where to follow Buck and Rohin and learn more This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
The global spread of COVID-19 has put tremendous stress on humanity’s social, political, and economic systems. The breakdowns triggered by this sudden stress indicate areas where national and global systems are fragile, and where preventative and preparedness measures may be insufficient. The COVID-19 pandemic thus serves as an opportunity for reflecting on the strengths and weaknesses of human civilization and what we can do to help make humanity more resilient. The Future of Life Institute's Emilia Javorsky and Anthony Aguirre join us on this special episode of the FLI Podcast to explore the lessons that might be learned from COVID-19 and the perspective this gives us for global catastrophic and existential risk. Topics discussed in this episode include: -The importance of taking expected value calculations seriously -The need for making accurate predictions -The difficulty of taking probabilities seriously -Human psychological bias around estimating and acting on risk -The massive online prediction solicitation and aggregation engine, Metaculus -The risks and benefits of synthetic biology in the 21st Century You can find the page for this podcast here: https://futureoflife.org/2020/04/08/lessons-from-covid-19-with-emilia-javorsky-and-anthony-aguirre/ Timestamps: 0:00 Intro 2:35 How has COVID-19 demonstrated weakness in human systems and risk preparedness 4:50 The importance of expected value calculations and considering risks over timescales 10:50 The importance of being able to make accurate predictions 14:15 The difficulty of trusting probabilities and acting on low probability high cost risks 21:22 Taking expected value calculations seriously 24:03 The lack of transparency, explanation, and context around how probabilities are estimated and shared 28:00 Diffusion of responsibility and other human psychological weaknesses in thinking about risk 38:19 What Metaculus is and its relevance to COVID-19 45:57 What is the accuracy of predictions on Metaculus and what has it said about COVID-19? 50:31 Lessons for existential risk from COVID-19 58:42 The risk of synthetic bio enabled pandemics in the 21st century 01:17:35 The extent to which COVID-19 poses challenges to democratic institutions This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Toby Ord’s “The Precipice: Existential Risk and the Future of Humanity" has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative analysis and risk estimations, deep dives into the risks themselves, and tangible steps for mitigation. "The Precipice" thus serves as both a tremendous introduction to the topic and a rich source of further learning for existential risk veterans. Toby joins us on this episode of the Future of Life Institute Podcast to discuss this definitive work on what may be the most important topic of our time. Topics discussed in this episode include: -An overview of Toby's new book -What it means to be standing at the precipice and how we got here -Useful arguments for why existential risk matters -The risks themselves and their likelihoods -What we can do to safeguard humanity's potential You can find the page for this podcast here: https://futureoflife.org/2020/03/31/he-precipice-existential-risk-and-the-future-of-humanity-with-toby-ord/ Timestamps: 0:00 Intro 03:35 What the book is about 05:17 What does it mean for us to be standing at the precipice? 06:22 Historical cases of global catastrophic and existential risk in the real world 10:38 The development of humanity’s wisdom and power over time 15:53 Reaching existential escape velocity and humanity’s continued evolution 22:30 On effective altruism and writing the book for a general audience 25:53 Defining “existential risk” 28:19 What is compelling or important about humanity’s potential or future persons? 32:43 Various and broadly appealing arguments for why existential risk matters 50:46 Short overview of natural existential risks 54:33 Anthropogenic risks 58:35 The risks of engineered pandemics 01:02:43 Suggestions for working to mitigate x-risk and safeguard the potential of humanity 01:09:43 How and where to follow Toby and pick up his book This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Lethal autonomous weapons represent the novel miniaturization and integration of modern AI and robotics technologies for military use. This emerging technology thus represents a potentially critical inflection point in the development of AI governance. Whether we allow AI to make the decision to take human life and where we draw lines around the acceptable and unacceptable uses of this technology will set precedents and grounds for future international AI collaboration and governance. Such regulation efforts or lack thereof will also shape the kinds of weapons technologies that proliferate in the 21st century. On this episode of the AI Alignment Podcast, Paul Scharre joins us to discuss autonomous weapons, their potential benefits and risks, and the ongoing debate around the regulation of their development and use. Topics discussed in this episode include: -What autonomous weapons are and how they may be used -The debate around acceptable and unacceptable uses of autonomous weapons -Degrees and kinds of ways of integrating human decision making in autonomous weapons -Risks and benefits of autonomous weapons -Whether there is an arms race for autonomous weapons -How autonomous weapons issues may matter for AI alignment and long-term AI safety You can find the page for this podcast here: https://futureoflife.org/2020/03/16/on-lethal-autonomous-weapons-with-paul-scharre/ Timestamps: 0:00 Intro 3:50 Why care about autonomous weapons? 4:31 What are autonomous weapons? 06:47 What does “autonomy” mean? 09:13 Will we see autonomous weapons in civilian contexts? 11:29 How do we draw lines of acceptable and unacceptable uses of autonomous weapons? 24:34 Defining and exploring human “in the loop,” “on the loop,” and “out of loop” 31:14 The possibility of generating international lethal laws of robotics 36:15 Whether autonomous weapons will sanitize war and psychologically distance humans in detrimental ways 44:57 Are persons studying the psychological aspects of autonomous weapons use? 47:05 Risks of the accidental escalation of war and conflict 52:26 Is there an arms race for autonomous weapons? 01:00:10 Further clarifying what autonomous weapons are 01:05:33 Does the successful regulation of autonomous weapons matter for long-term AI alignment considerations? 01:09:25 Does Paul see AI as an existential risk? This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its beneficiaries. Will this newfound wealth be used to provide for the common good, or will it become increasingly concentrated in the hands of the few who wield AI technologies? Cullen O'Keefe joins us on this episode of the FLI Podcast for a conversation about the Windfall Clause, a mechanism that attempts to ensure the abundance and wealth created by transformative AI benefits humanity globally. Topics discussed in this episode include: -What the Windfall Clause is and how it might function -The need for such a mechanism given AGI-generated economic windfall -Problems the Windfall Clause would help to remedy -The mechanism for distributing windfall profit and the function for defining such profit -The legal permissibility of the Windfall Clause -Objections and alternatives to the Windfall Clause You can find the page for this podcast here: https://futureoflife.org/2020/02/28/distributing-the-benefits-of-ai-via-the-windfall-clause-with-cullen-okeefe/ Timestamps: 0:00 Intro 2:13 What is the Windfall Clause? 4:51 Why do we need a Windfall Clause? 06:01 When we might reach windfall profit and what that profit looks like 08:01 Motivations for the Windfall Clause and its ability to help with job loss 11:51 How the Windfall Clause improves allocation of economic windfall 16:22 The Windfall Clause assisting in a smooth transition to advanced AI systems 18:45 The Windfall Clause as assisting with general norm setting 20:26 The Windfall Clause as serving AI firms by generating goodwill, improving employee relations, and reducing political risk 23:02 The mechanism for distributing windfall profit and desiderata for guiding its formation 25:03 The windfall function and desiderata for guiding its formation 26:56 How the Windfall Clause is different from being a new taxation scheme 30:20 Developing the mechanism for distributing the windfall 32:56 The legal permissibility of the Windfall Clause in the United States 40:57 The legal permissibility of the Windfall Clause in China and the Cayman Islands 43:28 Historical precedents for the Windfall Clause 44:45 Objections to the Windfall Clause 57:54 Alternatives to the Windfall Clause 01:02:51 Final thoughts This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
From Max Tegmark's Life 3.0 to Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence, much has been written and said about the long-term risks of powerful AI systems. When considering concrete actions one can take to help mitigate these risks, governance and policy related solutions become an attractive area of consideration. But just what can anyone do in the present day policy sphere to help ensure that powerful AI systems remain beneficial and aligned with human values? Do today's AI policies matter at all for AGI risk? Jared Brown and Nicolas Moës join us on today's podcast to explore these questions and the importance of AGI-risk sensitive persons' involvement in present day AI policy discourse. Topics discussed in this episode include: -The importance of current AI policy work for long-term AI risk -Where we currently stand in the process of forming AI policy -Why persons worried about existential risk should care about present day AI policy -AI and the global community -The rationality and irrationality around AI race narratives You can find the page for this podcast here: https://futureoflife.org/2020/02/17/on-the-long-term-importance-of-current-ai-policy-with-nicolas-moes-and-jared-brown/ Timestamps: 0:00 Intro 4:58 Why it’s important to work on AI policy 12:08 Our historical position in the process of AI policy 21:54 For long-termists and those concerned about AGI risk, how is AI policy today important and relevant? 33:46 AI policy and shorter-term global catastrophic and existential risks 38:18 The Brussels and Sacramento effects 41:23 Why is racing on AI technology bad? 48:45 The rationality of racing to AGI 58:22 Where is AI policy currently? This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Our perceptions of reality are based on the physics of interactions ranging from millimeters to miles in scale. But when it comes to the very small and the very massive, our intuitions often fail us. Given the extent to which modern physics challenges our understanding of the world around us, how wrong could we be about the fundamental nature of reality? And given our failure to anticipate the counterintuitive nature of the universe, how accurate are our intuitions about metaphysical and personal identity? Just how seriously should we take our everyday experiences of the world? Anthony Aguirre, cosmologist and FLI co-founder, returns for a second episode to offer his perspective on these complex questions. This conversation explores the view that reality fundamentally consists of information and examines its implications for our understandings of existence and identity. Topics discussed in this episode include: - Views on the nature of reality - Quantum mechanics and the implications of quantum uncertainty - Identity, information and description - Continuum of objectivity/subjectivity You can find the page and transcript for this podcast here: https://futureoflife.org/2020/01/31/fli-podcast-identity-information-the-nature-of-reality-with-anthony-aguirre/ Timestamps: 3:35 - General history of views on fundamental reality 9:45 - Quantum uncertainty and observation as interaction 24:43 - The universe as constituted of information 29:26 - What is information and what does the view of reality as information have to say about objects and identity 37:14 - Identity as on a continuum of objectivity and subjectivity 46:09 - What makes something more or less objective? 58:25 - Emergence in physical reality and identity 1:15:35 - Questions about the philosophy of identity in the 21st century 1:27:13 - Differing views on identity changing human desires 1:33:28 - How the reality as information perspective informs questions of identity 1:39:25 - Concluding thoughts This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
In the 1984 book Reasons and Persons, philosopher Derek Parfit asks the reader to consider the following scenario: You step into a teleportation machine that scans your complete atomic structure, annihilates you, and then relays that atomic information to Mars at the speed of light. There, a similar machine recreates your exact atomic structure and composition using locally available resources. Have you just traveled, Parfit asks, or have you committed suicide? Would you step into this machine? Is the person who emerges on Mars really you? Questions like these –– those that explore the nature of personal identity and challenge our commonly held intuitions about it –– are becoming increasingly important in the face of 21st century technology. Emerging technologies empowered by artificial intelligence will increasingly give us the power to change what it means to be human. AI enabled bio-engineering will allow for human-species divergence via upgrades, and as we arrive at AGI and beyond we may see a world where it is possible to merge with AI directly, upload ourselves, copy and duplicate ourselves arbitrarily, or even manipulate and re-program our sense of identity. Are there ways we can inform and shape human understanding of identity to nudge civilization in the right direction? Topics discussed in this episode include: -Identity from epistemic, ontological, and phenomenological perspectives -Identity formation in biological evolution -Open, closed, and empty individualism -The moral relevance of views on identity -Identity in the world today and on the path to superintelligence and beyond You can find the page and transcript for this podcast here: https://futureoflife.org/2020/01/15/identity-and-the-ai-revolution-with-david-pearce-and-andres-gomez-emilsson/ Timestamps: 0:00 - Intro 6:33 - What is identity? 9:52 - Ontological aspects of identity 12:50 - Epistemological and phenomenological aspects of identity 18:21 - Biological evolution of identity 26:23 - Functionality or arbitrariness of identity / whether or not there are right or wrong answers 31:23 - Moral relevance of identity 34:20 - Religion as codifying views on identity 37:50 - Different views on identity 53:16 - The hard problem and the binding problem 56:52 - The problem of causal efficacy, and the palette problem 1:00:12 - Navigating views of identity towards truth 1:08:34 - The relationship between identity and the self model 1:10:43 - The ethical implications of different views on identity 1:21:11 - The consequences of different views on identity on preference weighting 1:26:34 - Identity and AI alignment 1:37:50 - Nationalism and AI alignment 1:42:09 - Cryonics, species divergence, immortality, uploads, and merging. 1:50:28 - Future scenarios from Life 3.0 1:58:35 - The role of identity in the AI itself This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Neither Yuval Noah Harari nor Max Tegmark need much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity’s future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise — in physics, artificial intelligence, history, philosophy and anthropology — to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies and the role of myths and stories in fostering societal collaboration and meaning. We hope that you'll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us. Topics discussed include: -Max and Yuval's views and intuitions about consciousness -How they ground and think about morality -Effective altruism and its cause areas of global health/poverty, animal suffering, and existential risk -The function of myths and stories in human society -How emerging science, technology, and global paradigms challenge the foundations of many of our stories -Technological risks of the 21st century You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/31/on-consciousness-morality-effective-altruism-myth-with-yuval-noah-harari-max-tegmark/ Timestamps: 0:00 Intro 3:14 Grounding morality and the need for a science of consciousness 11:45 The effective altruism community and its main cause areas 13:05 Global health 14:44 Animal suffering and factory farming 17:38 Existential risk and the ethics of the long-term future 23:07 Nuclear war as a neglected global risk 24:45 On the risks of near-term AI and of artificial general intelligence and superintelligence 28:37 On creating new stories for the challenges of the 21st century 32:33 The risks of big data and AI enabled human hacking and monitoring 47:40 What does it mean to be human and what should we want to want? 52:29 On positive global visions for the future 59:29 Goodbyes and appreciations 01:00:20 Outro and supporting the Future of Life Institute Podcast This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
As 2019 comes to an end and the opportunities of 2020 begin to emerge, it's a great time to reflect on the past year and our reasons for hope in the year to come. We spend much of our time on this podcast discussing risks that could lead to the extinction of Earth-originating intelligent life or the permanent and drastic curtailing of its potential. While this is important and useful, much has been done at FLI and in the broader world to address these issues in service of the common good. It can be skillful to reflect on this progress to see how far we've come, to develop hope for the future, and to map out our path ahead. This podcast is a special end-of-year episode focused on meeting and introducing the FLI team, discussing what we've accomplished and are working on, and sharing our feelings and reasons for existential hope going into 2020 and beyond. Topics discussed include: -Introductions to the FLI team and our work -Motivations for our projects and existential risk mitigation efforts -The goals and outcomes of our work -Our favorite projects at FLI in 2019 -Optimistic directions for projects in 2020 -Reasons for existential hope going into 2020 and beyond You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/27/existential-hope-in-2020-and-beyond-with-the-fli-team/ Timestamps: 0:00 Intro 1:30 Meeting the Future of Life Institute team 18:30 Motivations for our projects and work at FLI 30:04 What we hope will result from our work at FLI 44:44 Favorite accomplishments of FLI in 2019 01:06:20 Project directions we are most excited about for 2020 01:19:43 Reasons for existential hope in 2020 and beyond 01:38:30 Outro
Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His team is one of three within DeepMind's technical AGI group; each team focuses on different aspects of ensuring advanced AI systems are aligned and beneficial. Jan's journey in the field of AI has taken him from a PhD on a theoretical reinforcement learning agent called AIXI to empirical AI safety research focused on recursive reward modeling. This conversation explores his movement from theoretical to empirical AI safety research — why empirical safety research is important and how this has led him to his work on recursive reward modeling. We also discuss research directions he's optimistic will lead to safely scalable systems, more facets of his own thinking, and other work being done at DeepMind. Topics discussed in this episode include: -Theoretical and empirical AI safety research -Jan's and DeepMind's approaches to AI safety -Jan's work and thoughts on recursive reward modeling -AI safety benchmarking at DeepMind -The potential modularity of AGI -Comments on the cultural and intellectual differences between the AI safety and mainstream AI communities -Joining the DeepMind safety team You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/16/ai-alignment-podcast-on-deepmind-ai-safety-and-recursive-reward-modeling-with-jan-leike/ Timestamps: 0:00 intro 2:15 Jan's intellectual journey in computer science to AI safety 7:35 Transitioning from theoretical to empirical research 11:25 Jan's and DeepMind's approach to AI safety 17:23 Recursive reward modeling 29:26 Experimenting with recursive reward modeling 32:42 How recursive reward modeling serves AI safety 34:55 Pessimism about recursive reward modeling 38:35 How this research direction fits in the safety landscape 42:10 Can deep reinforcement learning get us to AGI? 42:50 How modular will AGI be? 44:25 Efforts at DeepMind for AI safety benchmarking 49:30 Differences between the AI safety and mainstream AI communities 55:15 Most exciting piece of empirical safety work in the next 5 years 56:35 Joining the DeepMind safety team
We could all be more altruistic and effective in our service of others, but what exactly is it that's stopping us? What are the biases and cognitive failures that prevent us from properly acting in service of existential risks, statistically large numbers of people, and long-term future considerations? How can we become more effective altruists? Stefan Schubert, a researcher at the University of Oxford's Social Behaviour and Ethics Lab, explores questions like these at the intersection of moral psychology and philosophy. This conversation explores the steps that researchers like Stefan are taking to better understand psychology in service of doing the most good we can. Topics discussed include: -The psychology of existential risk, longtermism, effective altruism, and speciesism -Stefan's study "The Psychology of Existential Risks: Moral Judgements about Human Extinction" -Various works and studies Stefan Schubert has co-authored in these spaces -How this enables us to be more altruistic You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/02/the-psychology-of-existential-risk-and-effective-altruism-with-stefan-schubert/ Timestamps: 0:00 Intro 2:31 Stefan's academic and intellectual journey 5:20 How large is this field? 7:49 Why study the psychology of X-risk and EA? 16:54 What does a better understanding of psychology here enable? 21:10 What are the cognitive limitations psychology helps to elucidate? 23:12 Stefan's study "The Psychology of Existential Risks: Moral Judgements about Human Extinction" 34:45 Messaging on existential risk 37:30 Further areas of study 43:29 Speciesism 49:18 Further studies and work by Stefan
In this brief epilogue, Ariel reflects on what she's learned during the making of Not Cool, and the actions she'll be taking going forward.
It’s the Not Cool series finale, and by now we’ve heard from climate scientists, meteorologists, physicists, psychologists, epidemiologists and ecologists. We’ve gotten expert opinions on everything from mitigation and adaptation to security, policy and finance. Today, we’re tackling one final question: why should we trust them? Ariel is joined by Naomi Oreskes, Harvard professor and author of seven books, including the newly released "Why Trust Science?" Naomi lays out her case for why we should listen to experts, how we can identify the best experts in a field, and why we should be open to the idea of more than one type of "scientific method." She also discusses industry-funded science, scientists’ misconceptions about the public, and the role of the media in proliferating bad research. Topics discussed include: -Why Trust Science? -5 tenets of reliable science -How to decide which experts to trust -Why non-scientists can't debate science -Industry disinformation -How to communicate science -Fact-value distinction -Why people reject science -Shifting arguments from climate deniers -Individual vs. structural change -State- and city-level policy change