Podcasts about the Global Catastrophic Risk Institute

  • 9 podcasts
  • 13 episodes
  • 41m average episode duration
  • Infrequent episodes
  • Latest episode: Jun 15, 2024

POPULARITY

[Popularity chart, 2017-2024]


Best podcasts about the Global Catastrophic Risk Institute

Latest podcast episodes about the Global Catastrophic Risk Institute

The Antinatalist Advocacy Podcast
AAP Ep. 12 - Introduction to Artificial Sentience with Space Science Guy

Jun 15, 2024 · 55:04


Welcome to episode 12 of AAP! We're joined this month by return guest Space Science Guy (formerly Vegan Space Scientist) to introduce our fourth and final cause area at AA - Artificial Sentience. What on earth is artificial sentience? What does it have to do with antinatalism? And what, if anything, can we do to protect potentially sentient AI from the harms of coming into existence? Let us know your thoughts below!

TIMESTAMPS
00:00 Intro to the episode
02:18 Intro to Michael
05:02 Defining sentience
12:56 How Michael first became aware of artificial sentience
16:17 Why artificial sentience is an area of concern
28:07 Why this is a particularly challenging issue to address
31:57 Why might antinatalists in particular be concerned about artificial sentience?
37:02 Arguments against being concerned about artificial sentience
39:39 How we can have a positive impact on this issue
44:10 Key players in this space
47:28 Final comments / positive note to end on
53:49 Outro

ANTINATALIST ADVOCACY
Newsletter: https://antinatalistadvocacy.org/newsletter
Website: https://antinatalistadvocacy.org/
YouTube: https://www.youtube.com/@AntinatalistAdvocacy
Twitter / X: https://twitter.com/AN_advocacy
Instagram: https://instagram.com/an_advocacy

Space Science Guy
YouTube: https://www.youtube.com/@spacescienceguy
TikTok: https://www.tiktok.com/@spacescienceguy
Website: https://www.michaeldello.com/

Check out the links below!
AAP Ep. 3 - Introduction to Effective Altruism: https://youtu.be/ewOlZl1yfgM
Rethink Priorities: https://rethinkpriorities.org/
Effective Altruism: https://www.effectivealtruism.org/
Sentience Institute: https://www.sentienceinstitute.org/
Brian Tomasik: https://reducing-suffering.org/
Thomas Metzinger - Benevolent Artificial Anti-Natalism (BAAN): https://www.edge.org/conversation/thomas_metzinger-benevolent-artificial-anti-natalism-baan
Centre for Reducing Suffering: https://centerforreducingsuffering.org/
Center on Long-Term Risk: https://longtermrisk.org/
Global Catastrophic Risk Institute: https://gcrinstitute.org/
Legal Priorities Project: https://forum.effectivealtruism.org/topics/legal-priorities-project

The Nonlinear Library
EA - Population After a Catastrophe by Stan Pinsent

Oct 3, 2023 · 25:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Population After a Catastrophe, published by Stan Pinsent on October 3, 2023 on The Effective Altruism Forum.

This was written in my role as researcher at CEARCH, but any opinions expressed are my own. This report uses population dynamics to explore the effects of a near-existential catastrophe on long-term value.

Summary
Global population would probably not recover to current levels after a major catastrophe. Low-fertility values would largely endure. If we reindustrialize quickly, population will stabilize far lower. Population "peaking lower" after a catastrophe would make it harder to avoid terminal population decline. Tech solutions would be harder to reach, and there would be less time to find a solution. Post-catastrophe worlds that avoid terminal population decline are likely to emerge with values very different to our own. Population could stabilize because of authoritarian governments, prescriptive gender roles or civil strife, or alternatively from increased collective concern for the future.

Conclusion: Near-existential catastrophes are likely to decrease the value of the future through decreased resilience and the lock-in of bad values. Avoiding these catastrophes should rank alongside avoiding existential catastrophes.

Introduction
In this report I use population dynamics to explore the question "What are the long-term existential consequences of a non-existential catastrophe?". I do not claim that population dynamics are the only, or even the most important, consideration. Others have written about the short-term existential effects of a global catastrophe. Luisa Rodriguez argues that even in cases where >90% of the global population is killed, it is unlikely that all viable groups of survivors will fail to make it through the ensuing decades (Rodriguez, 2020). The Global Catastrophic Risk Institute has begun to explore the long-term consequences of catastrophe, although they consider this "rather grim and difficult-to-study topic" to be neglected (GCRI). What comes after the aftermath of a catastrophe is very difficult to predict, as life will be driven by unknown political and cultural forces. However, I argue that many of the familiar features of population dynamics will continue to apply.

Even without a catastrophe, we face a possible population problem. As countries develop, their populations peak and begin to decline. If these trends continue, global population will shrink until either we "master" the problem of population, or we can no longer maintain industrialized civilization (multiple working papers, Population Wellbeing Initiative, 2023). It could be argued that this is not a pressing problem. It will be centuries before global population drops below 1 billion, so we have time to overcome demographic decline or to make it irrelevant by relying on artificial people. But in the aftermath of a global catastrophe there may be less time and fewer people available to solve the problem.

Longtermists may argue that most future value is in the scenarios where we overcome reproductive constraints and expand to the stars (Siegmann & Mota Freitas, 2022). My findings do not contradict this. But such scenarios appear to be significantly less likely in a post-catastrophe world. And the worlds in which we do bounce back seem likely to have values very different from our own.

Population recovery after a catastrophe
In this section I examine three models for determining population growth. I find that full population recovery after a major global catastrophe is unlikely, and that the worlds which do recover are likely to emerge with values very different from those of the pre-catastrophe world. It's worth noting that a catastrophe need not inflict its damage at one point in time. The effects of some historical famines and pandemics have unfurled over many yea...

The Nonlinear Library
EA - Future Matters: March 2022 by Pablo

Mar 22, 2022 · 23:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters: March 2022, published by Pablo on March 22, 2022 on The Effective Altruism Forum.

"We think our civilization near its meridian, but we are yet only at the cock-crowing and the morning star." (Ralph Waldo Emerson)

Welcome to Future Matters, a newsletter about longtermism. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. Future Matters is crossposted to Substack and available as a podcast.

Research
We are typically confident that some things are conscious (humans), and that some things are not (rocks); other things we're very unsure about (insects). In this post, Amanda Askell shares her views about AI consciousness. It seems unlikely that current AI systems are conscious, but they are improving and there's no great reason to think we will never create conscious AI systems. This matters because consciousness is morally relevant, e.g. we tend to think that if something is conscious, we shouldn't harm it for no good reason. Since it's much worse to mistakenly deny something moral status than to mistakenly attribute it, we should take a cautious approach when it comes to AI: if we ever have reason to believe some AI system is conscious, we should start to treat it as a moral patient. This makes it important and urgent that we develop tools and techniques to assess whether AI systems are conscious, and related questions, e.g. whether they are suffering.

The leadership of the Global Catastrophic Risk Institute issued a Statement on the Russian invasion of Ukraine. The authors consider the effects of the invasion on (1) risks of nuclear war and (2) other global catastrophic risks. They argue that the conflict increases the risk of both intentional and inadvertent nuclear war, and that it may increase other risks primarily via its consequences on climate change, on China, and on international relations.

Earlier this year, Hunga Tonga-Hunga Ha'apai—a submarine volcano in the South Pacific—produced what appears to be the largest volcanic eruption of the last 30 years. In What can we learn from a short preview of a super-eruption and what are some tractable ways of mitigating, Mike Cassidy and Lara Mani point out that this event and its cascading impacts provide a glimpse into the possible effects of a much larger eruption, which could be comparable in intensity but much longer in duration. The main lessons the authors draw are that humanity was unprepared for the eruption and that its remote location dramatically minimized its impacts. To better prepare for these risks, the authors propose better identifying the volcanoes capable of large enough eruptions and the regions most affected by them; building resilience by investigating the role that technology could play in disaster response and by enhancing community-led resilience mechanisms; and mitigating the risks by research on removal of aerosols from large explosive eruptions and on ways to reduce the explosivity of eruptions by fracking or drilling.

The second part in a three-part series on great power conflict, Stephen Clare's How likely is World War III? attempts to estimate the probability of great power conflict this century as well as its severity, should it occur.
Tentatively, Clare assigns a 45% chance to a confrontation between great powers by 2100, an 8% chance of a war much worse than World War II, and a 1% chance of a war causing human extinction. Note that some of the key sources in Clare's analysis rely on the Correlates of War dataset, which is less informative about long-run trends in global conflict than is generally assumed; see Ben Garfinkel's comment for discussion.

Malevolent nonstate actors with access to advanced technology can increase the probability of an existential cata...

The Nonlinear Library
EA - Early Reflections and Resources on the Russian Invasion of Ukraine by SethBaum

Mar 19, 2022 · 14:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Early Reflections and Resources on the Russian Invasion of Ukraine, published by SethBaum on March 18, 2022 on The Effective Altruism Forum.

The opening days of the Russian invasion of Ukraine brought a flurry of dramatic changes to the world. Now, three weeks into the war, conditions on the ground have somewhat stabilized. Further major changes remain very much possible, but meanwhile, there is now an opportunity to reflect on what has happened and what that means for the world moving forward. This post provides some reflections oriented mainly, but not exclusively, toward a global catastrophic risk audience, with some emphasis on nuclear weapons.

General Thoughts
This one matters. The war itself poses significant risks and it has a variety of important implications for international affairs. People who work on global catastrophic risk and related issues will benefit from understanding this. One might wonder how much time to spend studying the war. While each person should decide based on their own circumstances, I can say that studying it is not a waste of time.

Whether to try getting involved to help on the immediate situation is a more complex matter. It's already getting massive attention and effort, including from people who are highly trained to work on this sort of situation. I would be cautious about trying to get involved based on a limited understanding of the issues. For example, understanding the long-term moral importance of global catastrophic risk is a weak basis for making specific policy recommendations. There's a lot of nuance that goes into international security policy that also needs to be accounted for.

For my part, I have focused on activities that are clearly beneficial and that are not already getting significant attention. On several occasions, I've gotten an idea for an activity only to find, upon closer inspection, that it's a bad idea or that it's already been done several times over by people more qualified than myself. This is despite the fact that I do have some modest background on international security issues, especially nuclear weapons. I am a generalist who sometimes works on nuclear weapons, whereas there are communities of people who specialize on the topic. A healthy dose of humility has helped me avoid making some inappropriate contributions. More detailed discussion is below. Links to suggested readings are at the end.

Nuclear War Risk
The possibility that nuclear weapons could be used in the ongoing war is a matter of grave concern. Several knowledgeable observers have postulated that the conflict could escalate to nuclear war. From a risk perspective, this raises the questions of what the probability of nuclear war is and how severe it would be. Some estimates of the risk of the Ukraine war escalating to nuclear war have been made by people who are active in geopolitical forecasting. As I discuss in a recent article on nuclear war risk analysis, attempts to quantify the ongoing risk raise significant methodological challenges. First, nuclear war risk is an inherently difficult risk to quantify. It depends on highly complex and geopolitical factors. A lot of the relevant information is not publicly available. Nuclear war is more-or-less unprecedented; the only precedent, World War II, occurred under circumstances that no longer exist.
Second, the ongoing Ukraine war involves constantly changing circumstances. A risk estimate that is good one day may be bad another day. As an illustration, in the run-up to the war, Robert de Neufville's estimate of the probability that Russia attacks Ukraine went from 65% to 25% in around one to three days circa Feb 12-15 due to apparently lower tensions. de Neufville is a superforecaster and a former colleague of mine at the Global Catastrophic Risk Institute, where we co-authored a GCRI paper on ...

Efektiivne Altruism Eesti
#11 With Kristina Mering on banning fur farms

Jul 28, 2021 · 83:30


Today our guest is Kristina Mering, president of the animal protection organization Nähtamatud Loomad. Our focus is on Nähtamatud Loomad's campaign to ban fur farms, since quite recently, on June 2, the Riigikogu passed a historic law banning fur farming in Estonia from 2026. We discuss in detail what made the campaign successful and what advice Kristina has for other organizations trying to influence political decisions. Sources mentioned during the conversation:
- HISTORIC DECISION: The Riigikogu adopted a decision to ban fur farms: https://nahtamatudloomad.ee/ajalooline-otsus-riigikogu-vottis-vastu-otsuse-keelustada-karusloomafarmid
- Kristina speaking about shaping policy at a Hea Kodaniku Klubi event: https://www.facebook.com/heakodanik/videos/1080591469083759/
- Nähtamatud Loomad: https://nahtamatudloomad.ee/
- Taimne Teisipäev: https://taimneteisipaev.ee/
- Support Nähtamatud Loomad: https://nahtamatudloomad.ee/toeta
- Israel's ban on the sale of fur: https://thebeet.com/israel-becomes-first-country-in-the-world-to-completely-ban-the-sale-of-fur/
- Global Catastrophic Risk Institute open call: http://gcrinstitute.org/open-call-for-advisees-and-collaborators-may-2021/
- Giving What We Can YouTube channel: https://www.youtube.com/c/GivingWhatWeCanCommunity/videos

Philosophical Disquisitions
#55 - Baum on the Long-Term Future of Human Civilisation

Mar 14, 2019


In this episode I talk to Seth Baum. Seth is an interdisciplinary researcher working across a wide range of fields in natural and social science, engineering, philosophy, and policy. His primary research focus is global catastrophic risk. He also works in astrobiology. He is the Co-Founder (with Tony Barrett) and Executive Director of the Global Catastrophic Risk Institute. He is also a Research Affiliate of the University of Cambridge Centre for the Study of Existential Risk. We talk about the importance of studying the long-term future of human civilisation, and map out four possible trajectories for the long-term future. You can download the episode here or listen below. You can also subscribe on a variety of different platforms, including iTunes, Stitcher, Overcast, Podbay, Player FM and more. The RSS feed is available here.

Show Notes
0:00 - Introduction
1:39 - Why did Seth write about the long-term future of human civilisation?
5:15 - Why should we care about the long-term future? What is the long-term future?
13:12 - How can we scientifically and ethically study the long-term future?
16:04 - Is it all too speculative?
20:48 - Four possible futures, briefly sketched: (i) status quo; (ii) catastrophe; (iii) technological transformation; and (iv) astronomical
23:08 - The Status Quo Trajectory - Keeping things as they are
28:45 - Should we want to maintain the status quo?
33:50 - The Catastrophe Trajectory - Awaiting the likely collapse of civilisation
38:58 - How could we restore civilisation post-collapse? Should we be working on this now?
44:00 - Are we under-investing in research into post-collapse restoration?
49:00 - The Technological Transformation Trajectory - Radical change through technology
52:35 - How desirable is radical technological change?
56:00 - The Astronomical Trajectory - Colonising the solar system and beyond
58:40 - Is the colonisation of space the best hope for humankind?
1:07:22 - How should the study of the long-term future proceed from here?

Relevant Links
- Seth's homepage
- The Global Catastrophic Risk Institute
- "Long-Term Trajectories for Human Civilisation" by Baum et al
- "The Perils of Short-Termism: Civilisation's Greatest Threat" by Fisher, BBC News
- The Knowledge by Lewis Dartnell
- "Space Colonization and the Meaning of Life" by Baum, Nautilus
- "Astronomical Waste: The Opportunity Cost of Delayed Technological Development" by Nick Bostrom
- "Superintelligence as a Cause or Cure for Risks of Astronomical Suffering" by Kaj Sotala and Lucas Gloor
- "Space Colonization and Suffering Risks" by Phil Torres
- "Thomas Hobbes in Space: The Problem of Intergalactic War" by John Danaher

Analysis
Will humans survive the century?

Mar 11, 2019 · 28:40


What is the chance of the human race surviving the 21st century? There are many dangers – climate change for example, or nuclear war, or a pandemic, or planet Earth being hit by a giant asteroid. Around the world a number of research centres have sprung up to investigate and mitigate what's called existential risk. How precarious is our civilisation and can we all play a part in preventing global catastrophe?

Contributors:
- Anders Sandberg, Future of Humanity Institute
- Phil Torres, Future of Life Institute
- Karin Kuhlemann, University College London
- Simon Beard, Centre for Existential Risk
- Lalitha Sundaram, Centre for Existential Risk
- Seth Baum, Global Catastrophic Risk Institute

Film clip: Armageddon, Touchstone Pictures (1998), Directed by Michael Bay.
Presented (cheerily) by David Edmonds. Producer: Diane Richardson

The Story Collider
Research: Stories about the places studies take us

Feb 3, 2017 · 29:28


Part 1: As a teenager, Bri Riggio struggles to understand her eating disorder and connect with her psychologist father.
Part 2: Seth Baum, an expert in global catastrophic risk, makes waves when he suggests a solution to the threat of nuclear winter.

Bri Riggio has spent the last six years working at various institutions of higher education, from a study abroad program in Greece to George Mason University, where she now supports the Office of Research at the executive level. While not a scientist by training, she has always loved research and the process of learning. She stupidly spent an extra year in graduate school after choosing to base her Master's thesis on a social science methodology that she didn't know and just barely managed to finish her MA in Conflict Resolution this past spring. To keep her sanity, she runs marathons, plays video games, and looks for opportunities to tell her stories.

Dr. Seth Baum is Executive Director of the Global Catastrophic Risk Institute, a nonprofit think tank that Baum co-founded in 2011. His research focuses on risk and policy analysis of catastrophes that could destroy human civilization, such as global warming, nuclear war, and infectious disease outbreaks. Baum received a Ph.D. in Geography from Pennsylvania State University and completed a post-doctoral fellowship with the Columbia University Center for Research on Environmental Decisions. His writing has appeared in the Bulletin of the Atomic Scientists, the Guardian, Scientific American, and a wide range of peer-reviewed scholarly journals. Follow him on Twitter @SethBaum and Facebook @sdbaum.

Kickass News
Artificial Intelligence (Pt. 2) w/ Prof. Nick Bostrom, Dr. Seth Baum, & the Creators of AMC's HUMANS

Oct 7, 2016 · 53:14


In Part 2 on Artificial Intelligence, we examine the possibility of A.I. achieving "the singularity," the moment when technology achieves human-like consciousness, and the possible consequences for humanity if and when machines surpass us.

Sam Vincent and Jonathan Brackley, the creators of AMC's drama series HUMANS, discuss how intelligent machines and people might interact, what the singularity might be like from the perspective of a conscious A.I., and whether a man-made system that thinks and feels like a human being should be entitled to human rights.

Then Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute, will map out several scenarios that could lead to the end of human civilization at the hands of A.I. He talks about how more traditional existential threats like a nuclear disaster or a pandemic might play into it, and we also discuss the more near-term issues of lethal autonomous weapons and hackers infiltrating intelligent systems.

Finally I talk with Professor Nick Bostrom, Director of the Future of Humanity Institute at Oxford University. His best-selling book Superintelligence: Paths, Dangers, Strategies has won acclaim from tech leaders including Elon Musk and Bill Gates. He warns that the singularity could come much faster than we think in the form of an "intelligence explosion," and humanity would be wise to hope for the best but prepare for the worst. He'll outline ideas for the best ways to get A.I. to work WITH humans, rather than against us, and he'll talk about his mission to persuade the tech industry and popular media to take the issue of artificial intelligence seriously.

Special thanks to the Milken Institute for hosting parts of this interview during the 2016 Milken Global Conference. Visit www.milkeninstitute.org to learn more about their work in the areas of science, education and innovation.

If you enjoyed today's podcast, then you can order Nick Bostrom's Superintelligence: Paths, Dangers, Strategies on Amazon. Follow him at www.nickbostrom.com or at www.fhi.ox.ac.uk. Keep up with Seth Baum at www.sethbaum.com and www.gcrinstitute.org, or you can follow him on Twitter at @SethBaum. Follow Sam Vincent and Jonathan Brackley on Twitter at @smavincent and @JonBrackley. Visit the fan site for AMC's HUMANS at www.amc.com/shows/humans, and you can watch the first season of HUMANS at AMC On Demand or Amazon.

Please subscribe to Kickass News and leave us a review. And support the show by donating at www.gofundme.com/kickassnews. Visit www.kickassnewspodcast.com for more fun stuff. Thanks for listening!

Future of Life Institute Podcast
Concrete Problems In AI Safety With Dario Amodei And Seth Baum

Aug 30, 2016 · 43:21


Interview with Dario Amodei of OpenAI and Seth Baum of the Global Catastrophic Risk Institute about studying short-term vs. long-term risks of AI, plus lots of discussion about Amodei's recent paper, Concrete Problems in AI Safety.

Future of Life Institute Podcast
Earthquakes As Existential Risks?

Jul 25, 2016 · 27:39


Could an earthquake become an existential or catastrophic risk that puts all of humanity at risk? Seth Baum of the Global Catastrophic Risk Institute and Ariel Conn of the Future of Life Institute consider extreme earthquake scenarios to figure out if such a risk is plausible. Featuring seismologist Martin Chapman of Virginia Tech. (Edit: This was just for fun, in a similar vein to MythBusters. We wanted to see just how far we could go.)

Future of Life Institute Podcast
Climate interview with Seth Baum

Dec 22, 2015 · 12:33


An interview with Seth Baum, Executive Director of the Global Catastrophic Risk Institute, about whether the Paris Climate Agreement can be considered a success.

Real Talk With Lee
Almost Famous Friday's

Apr 3, 2015 · 101:00


Robert Kopecky is the author of How to Survive Life (and Death), A Guide to Happiness in This World and Beyond, recently released by Conari Press/Weiser Books (and gratefully well-received!). It's a serious, but fun book from someone with unusual qualifications. "The book is based on my three very compelling, and distinctively different Near Death Experiences, and what I learned about life from each of them. My emphasis isn't on scenarios of the afterlife, but on the nature and meaning of each of the three experiences; not about what heaven looks like, but about how to make this life look a bit more heavenly."

David Denkenberger is a research associate at the Global Catastrophic Risk Institute. "My book is called Feeding Everyone No Matter What and presents alternate food supplies if agriculture is disrupted by events like abrupt climate change or nuclear winter. Solutions range from the conventional (e.g. growing mushrooms on dead trees) to the exotic (e.g. eating bacteria grown in natural gas)."