Podcasts about global catastrophic risks

  • 32 podcasts
  • 62 episodes
  • 28m avg duration
  • 1 new episode per month
  • Latest episode: Sep 30, 2024

POPULARITY

[Popularity trend chart, 2017–2024]



Latest podcast episodes about global catastrophic risks

Making Sense with Sam Harris - Subscriber Content

Share this episode: https://www.samharris.org/podcasts/making-sense-episodes/385-ai-utopia

Sam Harris speaks with Nick Bostrom about ongoing progress in artificial intelligence. They discuss the twin concerns about the failure of alignment and the failure to make progress, why smart people don't perceive the risk of superintelligent AI, the governance risk, path dependence and "knotty problems," the idea of a solved world, Keynes's predictions about human productivity, the uncanny valley of utopia, the replacement of human labor and other activities, meaning and purpose, digital isolation and plugging into something like the Matrix, pure hedonism, the asymmetry between pleasure and pain, increasingly subtle distinctions in experience, artificial purpose, altering human values at the level of the brain, ethical changes in the absence of extreme suffering, our cosmic endowment, longtermism, problems with consequentialism, the ethical conundrum of dealing with small probabilities of large outcomes, and other topics.

Nick Bostrom is a professor at Oxford University, where he is the founding director of the Future of Humanity Institute. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked the global conversation about the future of AI. His work has framed much of the current thinking around humanity's future (such as the concept of existential risk, the simulation argument, the vulnerable world hypothesis, astronomical waste, and the unilateralist's curse). He has been on Foreign Policy's Top 100 Global Thinkers list twice, and was the youngest person to rank among the top 15 in Prospect's World Thinkers list. He has an academic background in theoretical physics, AI, computational neuroscience, and philosophy. His most recent book is Deep Utopia: Life and Meaning in a Solved World. Website: https://nickbostrom.com/

Learning how to train your mind is the single greatest investment you can make in life. That's why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life's most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

Into the Impossible
Are We Living in a Simulation? Nick Bostrom (2022)

Into the Impossible

Sep 17, 2024 · 63:50


What if everything you know is just a simulation? In 2022, I was joined by the one and only Nick Bostrom to discuss the simulation hypothesis and the prospects of superintelligence. Nick is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the most-cited professional philosopher in the world aged 50 or under and is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller. With his background in theoretical physics, computational neuroscience, logic, and artificial intelligence, there is no one better suited to answer this question. Tune in.

Key Takeaways:
00:00:00 Intro
00:00:44 Judging a book by its cover
00:05:22 How could an AI have emotions and be creative?
00:08:22 How could a computing device / AI feel pain?
00:13:09 The Turing test
00:20:02 The simulation hypothesis
00:22:27 Is there a "Drake Equation" for the simulation hypothesis?
00:27:16 Penrose's orchestrated objective reduction
00:34:11 SETI and the prospect of extraterrestrial life
00:49:20 Are computers really getting "smarter"?
00:53:59 Audience questions
01:01:09 Outro

Additional resources:
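For context on the "Drake Equation" question at 22:27: Bostrom's 2003 paper "Are You Living in a Computer Simulation?" does offer one core expression. The following is a sketch of that published formula (taken from the paper, not from this episode's transcript):

$$ f_{\text{sim}} = \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} + \bar{H}} = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1} $$

where \(f_P\) is the fraction of human-level civilizations that reach a posthuman stage, \(\bar{N}\) is the average number of ancestor-simulations run by such a civilization, and \(\bar{H}\) is the average number of individuals who live before a civilization reaches that stage. The trilemma follows: unless \(f_P\) or \(\bar{N}\) is close to zero, the fraction of simulated observers \(f_{\text{sim}}\) is close to one.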

Clearer Thinking with Spencer Greenberg
The path to utopia (with Nick Bostrom)

Clearer Thinking with Spencer Greenberg

Aug 24, 2024 · 63:54


Why do there seem to be more dystopias than utopias in our collective imagination? Why is it easier to find agreement on what we don't want than on what we do want? Do we simply not know what we want? What are "solved worlds", "plastic worlds", and "vulnerable worlds"? Given today's technologies, why aren't we working less than we potentially could? Can humanity reach a utopia without superintelligent AI? What will humans do with their time, and/or how will they find purpose in life, if AIs take over all labor? What are "quiet" values? With respect to AI, how important is it to us that our conversation partners be conscious? Which factors will likely make the biggest differences in terms of moving the world towards utopia or dystopia? What are some of the most promising strategies for improving global coordination? How likely are we to end life on earth? How likely is it that we're living in a simulation?

Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, along with philosophy. He's been a Professor at Oxford University, where he served as the founding Director of the Future of Humanity Institute from 2005 until its closure in April 2024. He is currently the founder and Director of Research of the Macrostrategy Research Initiative. Bostrom is the author of over 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014). His work has pioneered many of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. His most recent book, Deep Utopia: Life and Meaning in a Solved World, was published in March of 2024. Learn more about him at his website, nickbostrom.com.

Staff: Spencer Greenberg (Host / Director), Josh Castle (Producer), Ryan Kessler (Audio Engineer), Uri Bram (Factotum), WeAmplify (Transcriptionists)
Music: Broke for Free, Josh Woodward, Lee Rosevere, Quiet Music for Tiny Robots, wowamusic, zapsplat.com
Affiliates: Clearer Thinking, GuidedTrack, Mind Ease, Positly, UpLift

The Nonlinear Library
EA - Taking Uncertainty Seriously (or, Why Tools Matter) by Bob Fischer

The Nonlinear Library

Jul 19, 2024 · 13:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taking Uncertainty Seriously (or, Why Tools Matter), published by Bob Fischer on July 19, 2024 on The Effective Altruism Forum.

Executive Summary

We should take uncertainty seriously. Rethink Priorities' Moral Parliament Tool, for instance, highlights that whether a worldview favors a particular project depends on relatively small differences in empirical assumptions and the way we characterize the commitments of that worldview.

We have good reason to be uncertain:
- The relevant empirical and philosophical issues are difficult. We're largely guessing when it comes to most of the key empirical claims associated with Global Catastrophic Risks and Animal Welfare.
- As a community, EA has some objectionable epistemic features - e.g., it can be an echo chamber - that should probably make us less confident of the claims that are popular within it.

The extent of our uncertainty is a reason to build models more like the Portfolio Builder and Moral Parliament Tools and less like traditional BOTECs (back-of-the-envelope calculations). This is because:
- Our models allow you to change parameters systematically to see how those changes affect allocations, permitting sensitivity analyses. BOTECs don't deliver optimizations.
- BOTECs don't systematically incorporate alternative decision theories or moral views.
- Building a general tool requires you to formulate general assumptions about the functional relationships between different parameters. If you don't build general tools, then it's easier to make ad hoc assumptions (or ad hoc adjustments to your assumptions).

Introduction

Most philanthropic actors, whether individuals or large charitable organizations, support a variety of cause areas and charities. How should they prioritize between altruistic opportunities in light of their beliefs and decision-theoretic commitments? The CRAFT Sequence explores the challenge of constructing giving portfolios. Over the course of this sequence - and, in particular, through Rethink Priorities' Portfolio Builder and Moral Parliament Tools - we've investigated the factors that influence our views about optimal giving. For instance, we may want to adjust our allocations based on the diminishing returns of particular projects, to hedge against risk, to accommodate moral uncertainty, or based on our preferred procedure for moving from our commitments to an overall portfolio.

In this final post, we briefly recap the CRAFT Sequence, discuss the importance of uncertainty, and argue that we should be quite uncertain about any particular combination of empirical, normative, and metanormative judgments. We think that there is a good case for developing and using frameworks and tools like the ones CRAFT offers to help us navigate our uncertainty.

Recapping CRAFT

We can be uncertain about a wide range of empirical questions, ranging from the probability that an intervention has a positive effect of some magnitude to the rate at which returns diminish. We can be uncertain about a wide range of normative questions, ranging from the amount of credit that an actor can take to the value we ought to assign to various possible futures. We can be uncertain about a wide range of metanormative questions, ranging from the correct decision theory to the correct means of resolving disagreements among our normative commitments.

Over the course of this sequence - and, in particular, through Rethink Priorities' Portfolio Builder and Moral Parliament Tools - we've tried to do two things. First, we've tried to motivate some of these uncertainties:
- We've explored alternatives to EV maximization's use as a decision procedure. Even if EV maximization is the correct criterion of rationality, it's questionable as a decision procedure that ordinary, fallible people can use to make decisions given all their uncertainties and limitations.
- We've explored the problems and prom...
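The contrast between a one-off BOTEC and a parameterized tool is easy to see in miniature. Below is a toy sketch in Python (not Rethink Priorities' actual Portfolio Builder; the project, numbers, and parameter ranges are invented for illustration) of how writing a BOTEC as a function permits the kind of systematic sensitivity analysis the post describes:

```python
# Toy illustration only: a hypothetical cost-effectiveness BOTEC written as a
# parameterized function, so assumptions can be varied systematically instead
# of being fixed once and forgotten.
import itertools

def cost_effectiveness(p_success, impact_if_success, cost):
    """Expected units of impact per dollar for a hypothetical project."""
    return p_success * impact_if_success / cost

# A single BOTEC commits to one set of guesses:
baseline = cost_effectiveness(p_success=0.1, impact_if_success=1e6, cost=5e4)

# A tool sweeps the guesses, exposing how sensitive the conclusion is:
for p, impact in itertools.product([0.05, 0.1, 0.2], [5e5, 1e6, 2e6]):
    ce = cost_effectiveness(p, impact, cost=5e4)
    print(f"p_success={p:.2f}, impact={impact:.0e}: "
          f"{ce:.1f} units/$ ({ce / baseline:.2f}x baseline)")
```

Across these modest ranges the bottom line varies by a factor of 16, which is the post's point: a conclusion that looks crisp in a single BOTEC may be an artifact of one particular set of assumptions.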

The Irish Tech News Podcast
AI insights in a modern world with Professor Nick Bostrom, Oxford University

The Irish Tech News Podcast

May 14, 2024 · 34:52


For decades, philosopher Nick Bostrom (director of the Future of Humanity Institute at Oxford) has led the conversation around technology and human experience (and grabbed the attention of the tech titans who are developing AI - Bill Gates, Elon Musk, and Sam Altman). Now, a decade after his NY Times bestseller Superintelligence warned us of what could go wrong with AI development, he flips the script in his new book Deep Utopia: Life and Meaning in a Solved World (March 27), asking us to instead consider "What could go well?"

Ronan recently spoke to Professor Nick Bostrom. Professor Bostrom talks about his background, his new book Deep Utopia: Life and Meaning in a Solved World, why he thinks advanced AI systems could automate most human jobs, and more.

More about Nick Bostrom: Swedish-born philosopher Nick Bostrom was founder and director of the Future of Humanity Institute at Oxford University. He is the most-cited professional philosopher in the world aged 50 or under and is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller. With a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, his work has pioneered some of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. His writings have been translated into more than 30 languages; he is a repeat main-stage TED speaker; and he has been interviewed more than 1,000 times by media outlets around the world. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit.

Into the Impossible
Nick Bostrom: Will Artificial Intelligence Lead Us to a Utopian Future?

Into the Impossible

May 12, 2024 · 66:41


The Jeff Bullas Show
Philosophy and AI: What is the Future of Creativity?

The Jeff Bullas Show

May 9, 2024 · 65:35


Nick Bostrom is a Professor at Oxford University and the founding director of the Future of Humanity Institute. Nick is also the world's most cited philosopher aged 50 or under. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked a global conversation about the future of AI.

His work has pioneered many of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list. He has just published a new book called "Deep Utopia: Life and Meaning in a Solved World."

What you will learn:
- Find out why Nick is spending time in seclusion in Portugal
- Nick shares the big ideas from his new book "Deep Utopia", which dreams up a world perfectly fixed by AI
- Discover why Nick got hooked on AI way before the internet was a big deal, and how those big future questions sparked his path
- What would happen to our jobs and hobbies if AI races ahead in the creative industries? Nick shares his thoughts
- Gain insights into whether AI is going to make our conversations better or just make it easier for people to push ads and political agendas
- Plus loads more!

Dinis Guarda citiesabc openbusinesscouncil Thought Leadership Interviews
Nick Bostrom - Philosopher at the University of Oxford - Author of Deep Utopia and Superintelligence

Dinis Guarda citiesabc openbusinesscouncil Thought Leadership Interviews

Apr 22, 2024 · 62:57


Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, along with philosophy. He is known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. Nick is a Professor at Oxford University, where he heads the Future of Humanity Institute as its Founding Director. He is also the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), Superintelligence: Paths, Dangers, Strategies (2014), which became a New York Times bestseller and sparked a global conversation about the future of AI, and Deep Utopia: Life and Meaning in a Solved World (Ideapress, 2024).

Nick's work has pioneered some of the ideas that frame current thinking about humanity's future: the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, to name a few. To know more about Nick Bostrom, visit https://businessabc.net/wiki/nick-bostrom

Deep Utopia: Life and Meaning in a Solved World

In his latest book, "Deep Utopia: Life and Meaning in a Solved World," Nick Bostrom shifts the focus from the potential dangers of artificial intelligence explored in his previous work, "Superintelligence: Paths, Dangers, Strategies," to envisioning a future where AI development unfolds positively. As the conversation around AI continues to evolve, Bostrom probes the profound philosophical and spiritual implications of a world where superintelligence is safely developed, effectively governed, and utilised for the benefit of humanity.

In this hypothetical scenario of a "solved world," where human labour becomes obsolete due to advanced AI systems, Bostrom raises existential questions about the essence of human existence and the pursuit of meaning. With the advent of technologies capable of fulfilling practical needs and desires beyond human capabilities, society would enter a state of "post-instrumentality," where the traditional purposes of human endeavour are rendered obsolete.

About citiesabc.com: https://www.citiesabc.com/
About businessabc.net: https://www.businessabc.net/
About fashionabc.org: https://www.fashionabc.org/
About Dinis Guarda: https://www.dinisguarda.com/ and https://businessabc.net/wiki/dinis-guarda

Support the show

Big Think
The intelligence explosion: Nick Bostrom on the future of AI

Big Think

Mar 7, 2024 · 11:23


We may build incredible AI. But can we contain our cruelty? Oxford professor Nick Bostrom explains.

Nick Bostrom, a professor at the University of Oxford and director of the Future of Humanity Institute, discusses the development of machine superintelligence and its potential impact on humanity. Bostrom believes that, in this century, we will create the first general intelligence that will be smarter than humans. He sees this as the most important thing humanity will ever do, but it also comes with an enormous responsibility. Bostrom notes that there are existential risks associated with the transition to the machine intelligence era, such as the possibility of an unaligned superintelligence that overrides human civilization with its own value structures. In addition, there is the question of how to ensure that conscious digital minds are treated well. However, if we succeed in ensuring the well-being of artificial intelligence, we could have vastly better tools for dealing with everything from diseases to poverty. Ultimately, Bostrom believes that the development of machine superintelligence is crucial for a truly great future.

Chapters:
0:00 Smarter than humans
0:57 Brains: From organic to artificial
1:39 The birth of superintelligence
2:58 Existential risks
4:22 The future of humanity

Go Deeper with Big Think:
- Become a Big Think Member: get exclusive access to full interviews, early access to new releases, Big Think merch and more
- Get Big Think+ for Business: guide, inspire and accelerate leaders at all levels of your company with the biggest minds in business

About Nick Bostrom: Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument (2003) and the concept of existential risk (2002). Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been interviewed more than 1,000 times by various media. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

About Big Think | Smarter Faster™: Big Think is the leading source of expert-driven, educational content. With thousands of videos, featuring experts ranging from Bill Clinton to Bill Nye, Big Think helps you get smarter, faster by exploring the big ideas and core skills that define knowledge in the 21st century.

Follow the podcast and turn on notifications. Share this episode if you found it valuable, and leave a 5-star review. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Nonlinear Library
EA - Review of EA Global Bay Area 2024 (Global Catastrophic Risks) by frances lorenz

The Nonlinear Library

Mar 1, 2024 · 6:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of EA Global Bay Area 2024 (Global Catastrophic Risks), published by frances lorenz on March 1, 2024 on The Effective Altruism Forum.

EA Global: Bay Area (Global Catastrophic Risks) took place February 2-4. We hosted 820 attendees, 47 of whom volunteered over the weekend to help run the event. Thank you to everyone who attended and a special thank you to our volunteers - we hope it was a valuable weekend!

Photos and recorded talks

You can now check out photos from the event. Recorded talks, such as the media panel on impactful GCR communication, Tessa Alexanian's talk on preventing engineered pandemics, Joe Carlsmith's discussion of scheming AIs, and more, are now available on our Youtube channel.

A brief summary of attendee feedback

Our post-event feedback survey received 184 responses. This is lower than our average completion rate - we're still accepting feedback responses and would love to hear from all our attendees. Each response helps us get better summary metrics, and we look through each short answer. To submit your feedback, you can visit the Swapcard event page and click the Feedback Survey button. The survey link can also be found in a post-event email sent to all attendees with the subject line, "EA Global: Bay Area 2024 | Thank you for attending!"

Key metrics

The EA Global team uses several key metrics to estimate the impact of our events. These metrics, and the questions we use in our feedback survey to measure them, include:
- Likelihood to recommend (How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own? Discrete scale from 0 to 10, 0 being not at all likely and 10 being extremely likely)
- Number of new connections[1] (How many new connections did you make at this event?)
- Number of impactful connections[2] (Of those new connections, how many do you think might be impactful connections?)
- Number of Swapcard meetings per person (This data is pulled from Swapcard)
- Counterfactual use of attendee time (To what extent was this EA Global a good use of your time, compared to how you would have otherwise spent your time? A discrete scale ranging from "a waste of my time" to "10x the counterfactual")

The likelihood to recommend for this event was higher compared to last year's EA Global: Bay Area and our EA Global 2023 average (i.e. the average across the three EA Globals we hosted in 2023) (see Table 1). Number of new connections was slightly down compared to the 2023 average, while the number of impactful connections was slightly up. The counterfactual use of time reported by attendees was slightly higher overall than Boston 2023 (the first EA Global we used this metric at), though there was also an increase in the number of reports that the event was a worse use of attendees' time (see Figure 1).

Metric (average of all respondents) | EAG BA 2024 (GCR) | EAG BA 2023 | EAG 2023 average
Likelihood to recommend (0-10)      | 8.78              | 8.54        | 8.70
Number of new connections           | 9.05              | 11.5        | 9.72
Number of impactful connections     | 4.15              | 4.8         | 4.09
Swapcard meetings per person        | 6.73              | 5.26        | 5.24

Table 1. A summary of key metrics from the post-event feedback surveys for EA Global: Bay Area 2024 (GCRs), EA Global: Bay Area 2023, and the average from all three EA Globals hosted in 2023.

Feedback on the GCRs focus

37% of respondents rated this event more valuable than a standard EA Global, 34% rated it roughly as valuable, and 9% as less valuable. 20% of respondents had not attended an EA Global event previously (Figure 2). If the event had been a regular EA Global (i.e. not focussed on GCRs), most respondents predicted they would have still attended. To be more precise, approximately 90% of respondents reported having over 50% probability of attending the event in the absence of a GCR ...

The Nonlinear Library
EA - New Open Philanthropy Grantmaking Program: Forecasting by Open Philanthropy

The Nonlinear Library

Feb 20, 2024 · 3:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Open Philanthropy Grantmaking Program: Forecasting, published by Open Philanthropy on February 20, 2024 on The Effective Altruism Forum.

Written by Benjamin Tereick

We are happy to announce that we have added forecasting as an official grantmaking focus area. As of January 2024, the forecasting team comprises two full-time employees: myself and Javier Prieto. In August 2023, I joined Open Phil to lead our forecasting grantmaking and internal processes. Prior to that, I worked on forecasts of existential risk and the long-term future at the Global Priorities Institute. Javier recently joined the forecasting team in a full-time capacity from Luke Muehlhauser's AI governance team, which was previously responsible for our forecasting grantmaking.

While we are just now launching a dedicated cause area, Open Phil has long endorsed forecasting as an important way of improving the epistemic foundations of our decisions and the decisions of others. We have made several grants to support the forecasting community in the last few years, e.g., to Metaculus, the Forecasting Research Institute, and ARLIS. Moreover, since the launch of Open Phil, grantmakers have often made predictions about core outcomes for grants they approve. Now, with increased staff capacity, the forecasting team wants to build on this work.

Our main goal is to help realize the promise of forecasting as a way to improve high-stakes decisions, as outlined in our focus area description. We are excited both about projects aiming to increase the adoption rate of forecasting as a tool by relevant decision-makers, and about projects that provide accurate forecasts on questions that could plausibly influence the choices of these decision-makers. We are interested in such work across both of our portfolios: Global Health and Wellbeing and Global Catastrophic Risks.[1]

We are as yet uncertain about the most promising type of project in the forecasting focus area, and we will likely fund a variety of different approaches. We will also continue our commitment to forecasting research and to the general support of the forecasting community, as we consider both to be prerequisites for high-impact forecasting. Supported by other Open Phil researchers, we plan to continue exploring the most plausible theories of change for forecasting. I aim to regularly update the forecasting community on the development of our thinking.

Besides grantmaking, the forecasting team is also responsible for Open Phil's internal forecasting processes, and for managing forecasting services for Open Phil staff. This part of our work will be less public, but we will occasionally publish insights from our own processes, like Javier's 2022 report on the accuracy of our internal forecasts.

[1] It should be noted that administratively, the forecasting team is part of the Global Catastrophic Risks portfolio, and historically, our forecasting work has had closer links to that part of the organization.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Summer Internships at Open Philanthropy - Global Catastrophic Risks (due March 4) by Hazel Browne

The Nonlinear Library

Feb 16, 2024 · 8:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summer Internships at Open Philanthropy - Global Catastrophic Risks (due March 4), published by Hazel Browne on February 16, 2024 on The Effective Altruism Forum.

We're excited to announce that the Global Catastrophic Risks Cause Prioritization team at Open Philanthropy will be hiring several interns this summer to help us explore new causes and advance our research. We think this is a great way for us to grow our capacity, develop research talent, and expand our pipeline for future full-time roles. The key points are:
- Applications are due at 11:59 PM Pacific on Monday, March 4, 2024.
- Applicants must be currently enrolled in a degree program or working in a position that offers externship/secondment opportunities.
- The internship runs from June 10 to August 16-30 (with limited adjustments based on academic calendars) and is paid ($2,000 per week) and fully remote.
- We're open to a wide variety of backgrounds, but expect some of the strongest candidates to be enrolled in master's or doctoral programs.

We aim to employ people with many different experiences, perspectives and backgrounds who share our passion for accomplishing as much good as we can. We particularly encourage applications from people of color, self-identified women, non-binary individuals, and people from low and middle income countries.

Full details (and a link to the application) are available here and are also copied below. We hope that you'll apply and share the news with others!

About the internship

We're looking for students currently enrolled in degree programs (or non-students whose jobs offer externship/secondment opportunities) to apply for a research internship from June - August 2024 and help us investigate important questions and causes. We see the internship as a way to grow our capacity, develop promising research talent, and expand our hiring pipeline for full-time roles down the line. We want to support interns as team members to work on our core priorities, while also showing them how Open Philanthropy works and helping them build skills important for cause prioritization research. As such, interns will directly increase our team's capacity to do research that informs our Global Catastrophic Risks strategy and grantmaking. Ultimately, this will help us get money to where it can have the most impact. We anticipate that interns will collaborate closely with the team; at the same time, we expect a high degree of independence and encourage self-directed work.

Our internship tracks

We are hiring interns for either the Research or Strategy track. The responsibilities for these tracks largely overlap, and the two positions will be evaluated using the same application materials. The main difference is one of emphasis: while research track interns primarily focus on direct research, strategy track interns are sometimes tasked with working on non-research projects (such as helping run a contest or a request for proposals). You will be asked to indicate which track you'd like to be considered for in the application.

Interns will work on multiple projects at different levels of depth, in the same way as full-time team members. They will report to an existing cause prioritization team member and participate in team meetings and discussions, including presenting their work to the team for feedback. Specific projects will depend on the team's needs and the intern's skills, but will fall under the following core responsibilities:

Searching for new program areas. We believe there are promising giving opportunities that don't currently fall within the purview of our existing program areas. Finding them involves blending theoretical models with concrete investigation into the tractability of new interventions to reduce catastrophic risk. Much of this research is informed by conversations with relevant exp...

The Nonlinear Library
EA - The Intergovernmental Panel On Global Catastrophic Risks (IPGCR) by DannyBressler

The Nonlinear Library

Feb 7, 2024 · 35:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Intergovernmental Panel On Global Catastrophic Risks (IPGCR), published by DannyBressler on February 7, 2024 on The Effective Altruism Forum.

Summary

This post motivates and describes a potential Intergovernmental Panel on Global Catastrophic Risks (IPGCR). The IPGCR will focus only on GCRs: risks that could cause a global collapse of human civilization or human extinction. The IPGCR seeks to fill an important and currently unoccupied niche: an international expert organization whose only purview is to produce expert reports and summaries for the international community on risks that could cause a global collapse of human civilization or human extinction. The IPGCR will produce reports across scientific and technical domains, and it will focus on the ways in which risks may intersect and interact. This will aid policymakers in constructing policy that coordinates and prioritizes responses to different threats, and minimizes the chance that any GCR occurs, regardless of its origin.

The IPGCR will work in some areas where there is more consensus among experts and some areas where there is less consensus. Unlike consensus-seeking organizations like the Intergovernmental Panel on Climate Change (IPCC), the IPGCR will not necessarily seek consensus. Instead, it will seek to accurately convey areas of consensus, disagreement, and uncertainty among experts. The IPGCR will draw on leadership and expertise from around the world and across levels of economic development to ensure that it promotes the interests of all humanity in helping to avoid and mitigate potential global catastrophes.

You can chat with the post here: Chat with IPGCR (although let me know if this GPT seems unaligned with this post as you chat with it).

1. Introduction and Rationale

Global catastrophic risks (GCRs) are risks that could cause a global collapse of human civilization or human extinction (Bostrom 2013, Bostrom & Cirkovic 2011, Posner 2004). Addressing these risks requires good policy, which requires a good understanding of the risks and options for mitigating them. However, primary research is not enough: policymakers must be informed by objective summaries of the existing scholarship and expert-assessed policy options.

This post proposes the creation of the Intergovernmental Panel on Global Catastrophic Risks (IPGCR), an international organization that synthesizes scientific understanding and makes policy recommendations related to global catastrophic risks. The IPGCR will report on the scientific, technological, and socioeconomic bases of GCRs, the potential impacts of GCRs, and options for the avoidance and mitigation of GCRs. The IPGCR will synthesize previously published research into reports that summarize the state of relevant knowledge. It will sit under the auspices of the United Nations, and its reports will include explicit policy recommendations aimed at informing decision-making by the UN and other bodies. To draw an analogy: the IPGCR does not put out forest fires; it surveys the forest, and it advises precautionary measures to minimize the chance of a forest fire occurring. The IPGCR's reports will aim to be done in a comprehensive, objective, open, and transparent manner, including fully communicating uncertainty or incomplete consensus around the findings.
The mechanisms for how this will be accomplished are described throughout this document. The IPGCR draws on best practices from other international organizations and adopts those that best fit within the IPGCR's purview. Like the US National Academy of Sciences, the UK Royal Society, and the Intergovernmental Panel on Climate Change, the IPGCR will primarily operate through expert volunteers from academia, industry, and government, who will write and review the reports. In contrast to these other institutions, the ...

The Nonlinear Library
EA - Observatorio de Riesgos Catastróficos Globales (ORCG) Recap 2023 by JorgeTorresC

The Nonlinear Library

Dec 14, 2023 · 6:46


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Observatorio de Riesgos Catastróficos Globales (ORCG) Recap 2023, published by JorgeTorresC on December 14, 2023 on The Effective Altruism Forum.

The Global Catastrophic Risks Observatory (ORCG) is a scientific diplomacy organization that emerged in February 2023 to formulate governance proposals that allow the comprehensive management of different global risks in Spanish-speaking countries. We connect decision-makers with experts to achieve our mission, producing evidence-based publications. In this context, we have worked on several projects on advanced artificial intelligence risks, biological risks, and food risks such as nuclear winter.

Since its inception, the organization has accumulated valuable experience and generated extensive output. This includes four reports, one produced in collaboration with the Alliance to Feed the Earth in Disasters (ALLFED). In addition, we have produced four academic articles, three of which have been accepted for publication in specialized journals. We have also created three policy recommendations and/or working documents and four notes in collaboration with institutions such as the Simon Institute for Long-term Governance and The Future Society. In addition, the organization has developed abundant informative material, such as web articles, videos, conferences, and infographics.

During these nine months of activity, the Observatory has established relationships with actors in Spanish-speaking countries, especially highlighting the collaboration with the regional cooperation spaces of the United Nations Office for Disaster Risk Reduction (UNDRR) and the Economic Commission for Latin America and the Caribbean (ECLAC), as well as with risk management offices at the national level. In this context, we have supported the formulation of Argentina's National Plan for Disaster Risk Reduction 2024-2030. Our contribution stands out with a specific chapter on extreme food catastrophes, which was incorporated into the work manual of the Information and Risk Scenarios Commission (Technical Commission No. 7).

We invite you to send any questions and/or requests to info@riesgoscatastroficosglobales.com. You can contribute to the mitigation of Global Catastrophic Risks by donating.

Documents

Reports:
- Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS), DOI: 10.13140/RG.2.2.11906.96969
- Artificial intelligence risk management in Spain, DOI: 10.13140/RG.2.2.18451.86562
- Proposal for the prevention and detection of emerging infectious diseases in Guatemala, DOI: 10.13140/RG.2.2.28217.75365
- Latin America and global catastrophic risks: transforming risk management, DOI: 10.13140/RG.2.2.25294.02886

Papers:
- Resilient food solutions to avoid mass starvation during a nuclear winter in Argentina, REDER Journal, accepted, pending publication
- Systematic review of taxonomies of risks associated with artificial intelligence, Analecta Política Journal, accepted, pending publication
- The EU AI Act: A pioneering effort to regulate frontier AI?, Journal IberamIA, accepted, pending publication
- Operationalizing AI Global Governance Democratization, submitted to a call for papers of the Office of the Envoy of the Secretary-General for Technology, non-public document

Policy briefs and working documents:
- RCG position paper: AI Act trilogue
- Operationalising the definition of highly capable AI
- PNRRD Argentina 2024-2030 chapter proposal "Scenarios for Abrupt Reduction of Solar Light" (published as an internal government document)

Collaborations:
- [Simon Institute] Response to Our Common Agenda Policy Brief 1: "To Think and Act for Future Generations"
- [Simon Institute] Response to Our Common Agenda Policy Brief 2: "Strengthening the International Response to Complex Global Shocks - An Emergency Platform"
- [Simon Institute] Respons...

Origin Story
Effective Altruism: Morality by numbers

Origin Story

Dec 11, 2023 · 70:11


In the last episode of season four, Ian Dunt and Dorian Lynskey discuss effective altruism. Last month the US entrepreneur Sam Bankman-Fried was convicted on multiple counts of fraud and conspiracy related to the dramatic collapse of his cryptocurrency exchange FTX. Bankman-Fried was also a prominent advocate of effective altruism, a philanthropic movement based on utilitarian philosophy, and the scandal has thrown the EA community into crisis.

Dorian and Ian explain how two maverick young Oxford philosophers ended up creating a multi-billion-dollar movement, explore the ideas behind it, and track its journey towards longtermism: the philosophy of safeguarding the future of the human race from threats such as hostile AI. Are the principles of EA sound? Did the influx of billionaires and the obsession with existential risk knock it off course? Was Bankman-Fried a true believer who blew it or just a grifter who took the idealists for a ride? And can EA survive one of the biggest financial scandals of this century? When big ideas collide with big money and big tech, things get messy.

Support Origin Story on Patreon for exclusive benefits: www.Patreon.com/originstorypod

Reading list

Books:
- Carol J. Adams, Alice Crary, Lori Gruen (eds.) — The Good it Promises, the Harm it Does: Critical Essays on Effective Altruism (2023)
- Nick Bostrom and Milan M. Ćirković (eds.) — Global Catastrophic Risks (2008)
- Nick Bostrom — Superintelligence: Paths, Dangers, Strategies (2014)
- Zeke Faux — Number Go Up: Inside Crypto's Wild Rise and Staggering Fall (2023)
- John Leslie — The End of the World: The Science and Ethics of Human Extinction (1996)
- Michael Lewis — Going Infinite: The Rise and Fall of a New Tycoon (2023)
- William MacAskill — Doing Good Better: Effective Altruism and How You Can Make a Difference (2015)
- William MacAskill — What We Owe the Future (2022)
- Toby Ord — The Precipice: Existential Risk and the Future of Humanity (2020)

Online:
- Core EA Principles, Centre for Effective Altruism
- Peter Singer — Famine, Affluence and Morality, 1971
- Peter Singer — TED talk, 2013
- William MacAskill — The history of the term 'effective altruism', Effective Altruism Forum, 2014
- Raffi Khatchadourian — The Doomsday Invention, New Yorker, 2015
- Gideon Lewis-Kraus — The Reluctant Prophet of Effective Altruism, New Yorker, 2022
- Charlotte Alter — Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed, Time, 2023
- Sophie McBain — Sam Bankman-Fried and the effective altruism delusion, New Statesman, 2023

Podcasts:
- 80,000 Hours: Sam Bankman-Fried, 2022
- 80,000 Hours: Toby Ord, 2023

Written and presented by Dorian Lynskey and Ian Dunt. Audio production by Simon Williams. Music by Jade Bailey. Logo art by Mischa Welsh. Lead Producer is Anne-Marie Luff. Group Editor: Andrew Harrison. Origin Story is a Podmasters production.

Learn more about your ad choices. Visit megaphone.fm/adchoices

The Best of Making Sense with Sam Harris
#151 — Will We Destroy the Future?

The Best of Making Sense with Sam Harris

Nov 20, 2023 · 93:05


Sam Harris speaks with Nick Bostrom about the problem of existential risk. They discuss public goods, moral illusions, the asymmetry between happiness and suffering, utilitarianism, “the vulnerable world hypothesis,” the history of nuclear deterrence, the possible need for “turnkey totalitarianism,” whether we're living in a computer simulation, the Doomsday Argument, the implications of extraterrestrial life, and other topics. Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller. Episodes that have been re-released as part of the Best of Making Sense series may have been edited for relevance since their original airing.  

The Nonlinear Library
EA - Open Philanthropy is hiring for multiple roles across our Global Catastrophic Risks teams by Open Philanthropy

The Nonlinear Library

Sep 30, 2023 · 6:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy is hiring for multiple roles across our Global Catastrophic Risks teams, published by Open Philanthropy on September 30, 2023 on The Effective Altruism Forum.

It's been another busy year at Open Philanthropy; after nearly doubling the size of our team in 2022, we've added over 30 new team members so far in 2023. Now we're launching a number of open applications for roles in all of our Global Catastrophic Risks (GCR) cause area teams (AI Governance and Policy, Technical AI Safety, Biosecurity & Pandemic Preparedness, GCR Cause Prioritization, and GCR Capacity Building). The application, job descriptions, and general team information are available here. Notably, you can apply to as many of these positions as you'd like with a single application form!

We're hiring because our GCR teams feel pinched and really need more capacity. Program Officers in GCR areas think that growing their teams will lead them to make significantly more grants at or above our current bar. We've had to turn down potentially promising opportunities because we didn't have enough time to investigate them; on the flip side, we're likely currently allocating tens of millions of dollars suboptimally in ways that more hours could reveal and correct. On the research side, we've had to triage important projects that underpin our grantmaking and inform others' work, such as work on the value of Open Phil's last dollar and deep dives into various technical alignment agendas. And on the operational side, maintaining flexibility in grantmaking at our scale requires significant creative logistical work.

Both last year's reduction in capital available for GCR projects (in the near term) and the uptick in opportunities following the global boom of interest in AI risk make our grantmaking look relatively more important; compared to last year, we're now looking at more opportunities in a space with less total funding.

GCR roles we're now hiring for include:
- Program associates to make grants in technical AI governance mechanisms, US AI policy advocacy, general AI governance, technical AI safety, biosecurity & pandemic preparedness, EA community building, AI safety field building, and EA university groups.
- Researchers to identify and evaluate new areas for GCR grantmaking, conduct research on catastrophic risks beyond our current grantmaking areas, and oversee a range of research efforts in biosecurity. We're also interested in researchers to analyze issues in technical AI safety and (separately) the natural sciences.
- Operations roles embedded within our GCR grantmaking teams: the Biosecurity & Pandemic Preparedness team is looking for an infosec specialist, an ops generalist, and an executive assistant (who may also support some other teams); the GCR Capacity Building team is looking for an ops generalist.

Most of these hires have multiple possible seniority levels; whether you're just starting in your field or have advanced expertise, we encourage you to apply. If you know someone who would be great for one of these roles, please refer them to us. We welcome external referrals and have found them extremely helpful in the past. We also offer a $5,000 referral bonus; more information here.

How we're approaching these hires

You only need to apply once to opt into consideration for as many of these roles as you're interested in. A checkbox on the application form will ask which roles you'd like to be considered for. We've also made efforts to streamline work tests and use the same tests for multiple roles where possible; however, some roles do use different work tests, so it's possible you'll still have to take different work tests for different roles, especially if you're interested in roles across a wide array of skillsets (e.g., both research and operations). You may also have interviews with mu...

Effective Altruism Forum Podcast
“Open Philanthropy is hiring for multiple roles across our Global Catastrophic Risks teams” by Open Philanthropy

Effective Altruism Forum Podcast

Sep 29, 2023 · 6:31


It's been another busy year at Open Philanthropy; after nearly doubling the size of our team in 2022, we've added over 30 new team members so far in 2023. Now we're launching a number of open applications for roles in all of our Global Catastrophic Risks (GCR) cause area teams (AI Governance and Policy, Technical AI Safety, Biosecurity & Pandemic Preparedness, GCR Cause Prioritization, and GCR Capacity Building[1]). The application, job descriptions, and general team information are available here.

We're hiring because our GCR teams feel pinched and really need more capacity. Program Officers in GCR areas think that growing their teams will lead them to make significantly more grants at or above our current bar. We've had to turn down potentially promising opportunities because we didn't have enough time to investigate them[2]; on the flip side, we're likely currently allocating tens of millions of dollars suboptimally in ways that more [...]

The original text contained 3 footnotes which were omitted from this narration.

First published: September 29th, 2023
Source: https://forum.effectivealtruism.org/posts/bBefhAXpCFNswNr9m/open-philanthropy-is-hiring-for-multiple-roles-across-our

Narrated by TYPE III AUDIO.

The Nonlinear Library
EA - Riesgos Catastróficos Globales needs funding by Jaime Sevilla

The Nonlinear Library

Aug 2, 2023 · 5:08


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Riesgos Catastróficos Globales needs funding, published by Jaime Sevilla on August 2, 2023 on The Effective Altruism Forum.

Riesgos Catastróficos Globales (RCG) is a science-policy nonprofit investigating opportunities to improve the management of Global Catastrophic Risks in Spanish-speaking countries. I wrote a previous update back in May. Since then, the organisation has published seven more articles, including a report on Artificial Intelligence regulation in the context of the EU AI Act sandbox. We have also been invited to contribute to the 2024-2030 National Risk Management Plan of Argentina, which will consequently be the world's first to include a section on abrupt sunlight reduction scenarios (ASRS).

Unfortunately, our major fundraising efforts have been unsuccessful. We are only able to keep operating due to some incredibly generous donations by private individuals. We are looking to raise $87k to support our operations between October 2023 and March 2024. If you are a funder, you can contact us through info@riesgoscatastroficosglobales.com. Individuals can help extend our runway through a donation.

Reasons to support Riesgos Catastróficos Globales

I believe that RCG is an incredible opportunity for impact. Here are some reasons why.

- We have already found promising avenues to impact. We have officially joined the public risk management network in Argentina, and we have been invited to contribute an entry on abrupt sun-reducing scenarios (ASRS) to the 2024-2030 national risk management plan.
- RCG has shown itself to be amazingly productive. Since the new team started operating in March we have published two large reports and ten articles. Another large report is currently undergoing review, and we are working on three articles we plan to submit to academic journals. This is an unusually high rate of output for a new organization.
- RCG is the only Spanish-speaking organisation producing work on Global Catastrophic Risks studies. I believe that our reports on Artificial Winter and Artificial Intelligence are the best produced in the language. Of particular significance is our active engagement with Latin American countries, which are otherwise not well represented in conversations about global risk.
- We are incubating some incredible talent. Our staff includes competent profiles who in a short span of time have gained in-depth expertise in Global Catastrophic Risks. This would have been hard to acquire elsewhere, and I am very excited about their careers.

In sum, I am very excited about the impact we are having and the work that is happening in Riesgos Catastróficos Globales. Keep reading to learn more about it!

Status update

Here are updates on our main lines of work.

Artificial Winter. We have joined the Argentinian Register of Associations for Comprehensive Risk Management (RAGIR), and we will be contributing a section on managing abrupt sunlight reduction scenarios (ASRS) to the 2024-2030 National Risk Management Plan. We continue promoting public engagement with the topic, having recently published a summary infographic of our report. We are also preparing a related submission to an academic journal.

Artificial Intelligence. We have published our report on AI governance in the context of the EU AI Act sandbox, as well as companion infographics. A member of the European Parliament has agreed to write a prologue for the report. In parallel, we have been engaging with the discussion around the AI Act through calls for feedback. We are also currently preparing two submissions to academic journals related to risks and regulation of AI.

Biosecurity. We have drafted a report on biosurveillance and containment of emergent infectious diseases in Guatemala, which is currently undergoing expert review. It will be published in August. We are also writing a short article o...

The Philosopher's Nest
S1E31 - Elliott Thornley on Population Ethics, Global Catastrophic Risks, and Writing an Integrated Thesis

The Philosopher's Nest

May 8, 2023 · 21:36


Today we're going to be joined by Elliott Thornley, a DPhil student at the University of Oxford. We'll be talking about Elliott's work on population ethics and global priorities research, as well as his thoughts on writing an integrated thesis rather than a monograph thesis. If, after listening, you'd like to get in touch with Elliott, you can find his email address on his website: www.elliott-thornley.com, and you can follow him on Twitter at @ElliottThornley. Click here to learn more about Effective Thesis.

The Nonlinear Library
EA - Retrospective on recent activity of Riesgos Catastróficos Globales by Jaime Sevilla

The Nonlinear Library

May 2, 2023 · 7:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Retrospective on recent activity of Riesgos Catastróficos Globales, published by Jaime Sevilla on May 1, 2023 on The Effective Altruism Forum.

The new team of Riesgos Catastróficos Globales started their job two months ago. During this time, they have been working on two reports on what we have identified as top priorities for the management of Global Catastrophic Risks from Spanish-speaking countries: food security during Abrupt Sunlight-Reduction Scenarios (e.g. nuclear winter) and AI regulation. In this article, I will cover their output in more depth and future plans, with some reflections on how the project is going. The short version is that I am reasonably pleased, and the directive board has decided to continue the project for two more months. The team's productivity has exceeded my expectations, though I see opportunities for improvement in our quality assurance, training and outreach. We remain short of funding; if you want to support our work you can donate through our donation portal.

Intellectual output

In the last two months, the team has been working on two major reports and several minor outputs.

1) Report on food security in Argentina during abrupt sun-reducing scenarios (ASRS), in collaboration with ALLFED. In this report, we explain the important role Argentina could have during ASRS to mitigate global famine. We sketch several policies that would be useful inclusions in an emergency plan, such as resilient food deployment, together with suggestions on which public bodies could implement them.

2) Report on AI regulation for the EU AI Act Spanish sandbox (forthcoming). We are interviewing and eliciting opinions from several experts, to compile an overview of AI risk for Spanish policymakers and proposals to make the most out of the upcoming EU AI sandbox.

3) An article about AI regulation in Spain. In this short article, we explain the relevance of Spain for AI regulation in the context of the EU AI Act. We propose four policies that could be tested in the upcoming sandbox. It serves as a preview of the report I mentioned above.

4) An article about the new GCR mitigation law in the USA, reporting on its meaning and proposing similar initiatives for Spanish-speaking countries.

5) Two statements about Our Common Agenda Policy Briefs, in collaboration with the Simon Institute.

Overall, I think we have done a good job of contextualizing the research done in the international GCR community. However, I feel we rely a lot on the involvement of the directive board for quality assurance, and our limited time means that some mistakes and misconceptions will likely have made it to publication. Having said that, I am pleased with the results. The team has been amazingly productive, publishing a 60-page report in two months and several minor publications alongside it. In the future, we will be involving more experts for a more thorough review process. This also means that we will be erring towards producing shorter reports, which can be more thoroughly checked and are better for engaging policy-makers.

Training

Early in the project, we identified the training of our staff as a key challenge to overcome. Our staff has work experience and credentials, but their exposure to the GCR literature was limited. We undertook several activities to address this gap:
- Knowledge transfer talks with Spanish-speaking experts from our directive board and advisory network (Juan García from ALLFED, Jaime Sevilla from Epoch, Clarissa Rios Rojas from CSER).
- A GCR reading group with curated reading recommendations.
- An online course taught by Sandra Malagón from Carreras con Impacto.
- A dedicated course on the basics of Machine Learning.

I am satisfied with the results, and I see a clear progression in the team. In hindsight, I think we erred on the side of too much form...

Superintelligence by Nick Bostrom | Book Summary, Review and Quotes | Free Audiobook

Play Episode Listen Later Apr 7, 2023 19:27


Learn on your own terms. Get the PDF, infographic, full ad-free audiobook and animated version of this summary and a lot more on the top-rated StoryShots app: https://www.getstoryshots.com Help us grow and create more amazing content for you! ⭐️⭐️⭐️⭐️⭐️ Don't forget to subscribe, rate and review the StoryShots podcast now. What should our next book be? Suggest and vote it up on the StoryShots app.

StoryShots Book Summary and Review of Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

Life gets busy. Has Superintelligence been on your reading list? Learn the key insights now. We're scratching the surface here. If you don't already have Nick Bostrom's popular book on artificial intelligence and technology, order it here or get the audiobook for free to learn the juicy details.

Introduction

What happens when artificial intelligence surpasses human intelligence? Machines can think, learn, and solve complex problems faster and more accurately than we can. This is the world that Nick Bostrom explores in his book, Superintelligence. Advances in artificial intelligence are bringing us closer to creating superintelligent beings.

Big tech companies like Microsoft, Google, and Facebook are all racing to create a super-powerful AI. They're pouring a lot of resources into research and development to make it happen. But here's the catch: without the right safety measures and rules in place, things might go haywire. That's why it's important to step in and make sure AI stays under control.

Imagine a world where machines are not only cheaper but also far better at doing jobs than humans. In that world, machines might take over human labor, leaving people wondering, "What now?" So it's important to come up with creative solutions to make sure everyone's taken care of.

The book shows what happens after superintelligence emerges. It examines the growth of intelligence, the forms and powers of superintelligence, and its strategic choices. We have to prepare now to avoid disasters later. Bostrom offers strategies to navigate the dangers and challenges superintelligence presents.

Superintelligence examines the history of artificial intelligence and the trajectory of technological growth. The book describes how AI is growing faster than its technological predecessors. It also looks at surveys of expert opinion regarding its future progress. Sam Altman, a co-founder of OpenAI, calls Superintelligence a must-read for anyone who cares about the future of humanity. He even included it on his list of the nine books he thinks everyone should read.

This summary will delve into the fascinating and sometimes frightening world of superintelligence. It provides you with an engaging overview of Bostrom's key ideas.

About Nick Bostrom

Nick Bostrom is a Swedish philosopher and futurist. He is known for his groundbreaking work in artificial intelligence and its impact on humanity. Bostrom is a professor at the University of Oxford, where he founded the Future of Humanity Institute. In particular, he conducts research into how advanced technologies and AI can benefit and harm society. In addition to Superintelligence, Bostrom has authored other influential works, including Anthropic Bias: Observation Selection Effects in Science and Philosophy and Global Catastrophic Risks. His work has contributed to the ongoing discussion of humanity's future.
StoryShot #1: We Are Not Ready for Superintelligence
StoryShot #2: There Are Three Forms of Superintelligence
StoryShot #3: There Are Two Sources of Advantage for Digital Intelligence
StoryShot #4: Uncontrolled Superintelligence Poses Significant Risks to Society

The Nonlinear Library
EA - Global catastrophic risks law approved in the United States by JorgeTorresC

The Nonlinear Library

Play Episode Listen Later Mar 7, 2023 1:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Global catastrophic risks law approved in the United States, published by JorgeTorresC on March 7, 2023 on The Effective Altruism Forum.

Executive Summary

The enactment of the Global Catastrophic Risk Management Act represents a significant step forward in global catastrophic risk management. It is the first time a nation has undertaken a detailed analysis of these risks. The law orders the United States government to establish actions for prevention, preparation, and resilience in the face of catastrophic risks. Specifically, the United States government will be required to:

Present a global catastrophic risk assessment to the US Congress.
Develop a comprehensive risk mitigation plan involving the collaboration of sixteen designated US national agencies.
Formulate a strategy for risk management under the leadership of the Secretary of Homeland Security and the Administrator of the Federal Emergency Management Agency (FEMA).
Conduct a national exercise to test the strategy.
Provide recommendations to the US Congress.

This legislation recognizes the following as global catastrophic risks: global pandemics, nuclear war, asteroid and comet impacts, supervolcanoes, sudden and severe changes in climate, and threats arising from the use and development of emerging technologies (such as artificial intelligence or engineered pandemics).

Our article presents an overview of the legislation, followed by a comparative discussion of international legislation on GCRs. Furthermore, we recommend considering similar laws for adoption within the Spanish-speaking context. Read more (in Spanish).

Riesgos Catastróficos Globales is a science-advocacy and research organization working on improving the management of global risks in Spanish-speaking countries. You can support our organization with a donation.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Nick Bostrom should step down as Director of FHI by BostromAnonAccount

The Nonlinear Library

Play Episode Listen Later Mar 4, 2023 7:30


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nick Bostrom should step down as Director of FHI, published by BostromAnonAccount on March 4, 2023 on The Effective Altruism Forum.

Nick Bostrom should step down as Director of FHI. He should move into a role as a Senior Research Fellow at FHI, and remain a Professor of Philosophy at Oxford University. I don't seek to minimize his intellectual contribution. His seminal 2002 paper on existential risk launched a new sub-field of existential risk research (building on many others). The 2008 book on Global Catastrophic Risks he co-edited was an important part of bringing together this early field. 2014's Superintelligence put AI risk squarely onto the agenda. And he has made other contributions across philosophy, from human enhancement to the simulation hypothesis. I'm not denying that. I'm not seeking to cancel him and prevent him from writing further papers and books. In fact, I want him to spend more time on that. But I don't think he's been a particularly good Director of FHI. These difficulties are demonstrated and reinforced by his Apology. I think he should step down for the good of FHI and the field. This post has some hard truths and may be uncomfortable reading, but FHI and the field are more important than that discomfort.

Pre-existing issues

Bostrom was already struggling as Director. In the past decade, he's churned through 5-10 administrators due to his persistent micromanagement. He discouraged investment in the relationship with the University and sought to get around/streamline/reduce the bureaucracy involved with being part of the University. All of this contributed to the breakdown of the relationship with the Philosophy Faculty (which FHI is a part of). This led the Faculty to impose a hiring freeze a few years ago, preventing FHI from hiring more people until they had resolved administrative problems. Until then, FHI could rely on a constant churn of new people to replace the people burnt out and/or moving on. The hiring freeze stopped the churn. The hiring freeze also contributed in part to the end of the Research Scholars Program and Cotton-Barratt's resignation from FHI. It also contributed in part to the switch of almost all of the AI Governance Research Group to the Center for the Governance of AI.

Apology

Then in January 2023, Bostrom posted an Apology for an Old Email. In my personal opinion, this statement demonstrated his lack of aptitude and lack of concern for his important role. These are sensitive topics that need to be handled with care. But the Apology had a glib tone, reused the original racial slur, seemed to indicate he was still open to discredited 'race science' hypotheses, and had an irrelevant digression on eugenics. I personally think these are disqualifying views for someone in his position as Director. But also, any of these issues would presumably have been flagged by colleagues or a communications professional. It appears he didn't check this major statement with anyone or seek feedback. Being Director of a major research center in an important but controversial field requires care, tact, leadership and attention to downside risks. The Apology failed to demonstrate that. The Apology has had the effect of complicating many important relationships for FHI: with the University, with staff, with funders and with collaborators. Bostrom will now struggle even more to lead the center.
First, the University. The Faculty was already concerned, and Oxford University is now investigating. Oxford University released a statement to The Daily Beast: “The University and Faculty of Philosophy is currently investigating the matter but condemns in the strongest terms possible the views this particular academic expressed in his communications. Neither the content nor language are in line with our strong commitment to diversity and equality.” B...

Thruline to the 4th Sector
Tackling Mass Climate Migration by Enacting Global Change with Aaron Berger, Research Scientist for Global Catastrophic Risks and Geopolitical Spillover Events

Thruline to the 4th Sector

Play Episode Listen Later Feb 2, 2023 51:57


This episode features a conversation between Phil Dillard, Founder of Thruline Networks, and Aaron Berger, a research scientist specializing in pattern recognition, domestic and foreign current events, and technology trends. He serves as a strategic advisor to individuals and organizations interested in finding important solutions to difficult problems. Aaron's experience includes government relations, high-level representation and negotiation, systems-thinking, scenario planning, learning transfer, and other research skills. He focuses on global catastrophic risks, national security, and geopolitical spillover events, with more of his recent work assisting in solving the climate crisis. Aaron serves the Rainey Center as special advisor, is Head of Research for Sharemeister, a co-chair for the NEXUS Working Group on Energy Innovation & Environment, and an international advisor for the Sunrise Movement.

In this episode, Aaron talks about his role in forecasting geopolitical outcomes, the worrying effects of mass climate migration causing internal displacement, and how he's able to define and measure impact through systems-thinking in research.

Guest Quote
“Shakespeare has this quote from one of his plays, I think it's Handler or something like that, there are three ways to achieve for greatness to happen. One is to be born great, and I think they were all born great. One is to achieve greatness, and I think we all have the potential to achieve greatness. And the third is to have greatness thrust upon us. I think that truly great people go through all of those stages, and the third one, of course, being a choice. If we are presented with a great opportunity, are we able to go and rise up to that? And I think that we can. So for me, I was taking stock of the meager amount of resources that I had and the big network that I had developed. I came to the realization that, if I did nothing, no one would do anything for me, and I just couldn't live with myself if I didn't look back and say, you know what, at the very least, I did everything that I could.” - Aaron Berger

Episode Timestamps
(02:11) Aaron's role
(04:00) Forecasting geopolitical outcomes
(10:19) Climate change as a geopolitical challenge
(13:54) Mass climate migration
(19:40) Internal displacement
(23:47) Systems thinking
(33:16) Inspirational projects
(39:09) Defining and measuring impact
(45:58) Final thoughts

Links
Aaron Berger's LinkedIn
Phil Dillard's LinkedIn
Thruline Networks

The Nonlinear Library
EA - Announcing Interim CEOs of EVF by Owen Cotton-Barratt

The Nonlinear Library

Play Episode Listen Later Jan 30, 2023 10:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Interim CEOs of EVF, published by Owen Cotton-Barratt on January 30, 2023 on The Effective Altruism Forum.

Overview

Effective Ventures Foundation USA (“EVF US”) has appointed Zachary Robinson as its Interim CEO as of today. We're also taking this time to announce that Effective Ventures Foundation (“EVF UK”) appointed Howie Lempel as its Interim CEO back in November. EVF UK and EVF US together act as hosts and fiscal sponsors of many key projects in effective altruism, including the Centre for Effective Altruism (CEA), 80,000 Hours (80k), and Longview Philanthropy, among others. These charities (EVF UK and EVF US) previously did not have the position of CEO, and project leads (for CEA, 80k, etc.) reported directly to EVF boards for management oversight.

The FTX situation has given rise to a number of new challenges. In particular, there are financial and legal issues that occur at the level of the charities (EVF UK and EVF US), not at the project level, because the projects are not separate legal entities. Because of this, we've chosen to coordinate the legal, financial, and communications strategies of the projects under EVF much more than before. In response to the new challenges from FTX, the boards became much more involved in the day-to-day operations of the charities. But it's not ideal to have boards playing the role of executives, so the boards have now also appointed Interim CEOs to each charity.

The Interim CEO roles are about handling crises and helping the entities transition to an improved long-term structure. They are naturally time-limited roles; we aren't sure how they might change, or when it will make sense for Howie and/or Zach to hand the reins off. The announcement of these Interim CEOs doesn't constitute any change of project leadership; for example, Max Dalton will continue as leader of CEA, the community-building project which is part of EVF.

Meta remarks

This post is written on behalf of the boards of EVF. It's difficult to write or act as a group without either doing things that not everyone is totally behind, or constraining ourselves to the bland. In the case of this post, it is largely the work of the primary authors; other board members might not agree with the more subjective judgements. The impetus for getting this post out now was a desire to empower the CEOs to speak on behalf of the entities; there are some time-sensitive updates they expect to share soon. (Edit: Howie has now shared one.)

There's a lot to say about what FTX's collapse means for EA and EVF; in many ways we're still in the early days of wrestling with what this means for us and our communities. This post isn't about that. However, we know that this is the first major public communication from the EVF boards, and we don't want the implicature to be that we don't think this is important or worth discussing. We're hoping to write more soon about why we haven't said more, how we're seeing the situation, and how EVF's choices may have impacted EA discourse about FTX.

About Howie and Zach

Howie Lempel has been Interim CEO of EVF UK since mid-November. To take this on, Howie took leave from his role as CEO of 80k, one of the largest EVF projects. In light of Howie's move, Brenton Mayer was appointed as Interim CEO of 80k in December. Brenton was previously Director of Internal Systems at 80k.

Before he worked at 80k, Howie went to Yale Law for two years. He left Yale to join Open Philanthropy while it was still being incubated at GiveWell, working as their first Program Officer for Global Catastrophic Risks. Howie has also worked on white-collar crime at the Manhattan DA's office and on U.S. economic policy as a research assistant at the Brookings Institution. In addition to his role as CEO of 80k, he's also known in the EA community for his personal podcast episode on mental he...

The Nonlinear Library
EA - Applications for EAGx LatAm closing on the 20th of December by LGlez

The Nonlinear Library

Play Episode Listen Later Dec 4, 2022 3:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications for EAGx LatAm closing on the 20th of December, published by LGlez on December 4, 2022 on The Effective Altruism Forum.

This is just a short announcement that you can now apply to EAGx LatAm, which will take place in Mexico City on January 6-8. For more information, you can read our previous announcement here. Apologies for the delay in getting this post out, but we had to postpone it due to circumstances beyond our control.

“How to do the most good” is a very hard problem and no isolated community will solve it on its own. We are excited about opening spaces to discuss this question in a variety of contexts, with diversity and inclusion in mind. The unfortunate fact is that it's easier for some to hear about EA and be heard in EA, for reasons that have nothing to do with talent on one side or ill intent on the other, and everything to do with visas, the language you happened to be raised with, or your socioeconomic background or environment. We believe that EA's current blindspots can be identified by brilliant minds all over the world, and we want to promote spaces for more people to come together, learn and bring their unique perspectives to the conversation.

We have almost finalised the programme. You can expect to see sessions to discuss community building in Low and Middle Income Countries and to meet organisations working on top EA areas outside of the UK and the US. We will also have the usual EAGx suspects, including intro talks about Global Health and Development, AI Safety, Animal Welfare and Global Catastrophic Risks. We have confirmed speakers from GWWC, CEA, Charity Entrepreneurship, JPAL, Open Philanthropy, IPA, Rethink Priorities and the World Bank, among many others. Rob Wiblin will come to practice his Spanish and talk about Career Prioritisation, and Toby Ord will join us virtually to discuss longtermism in the context of the Global South.

This conference is mainly for those who are from Latin America or have ties to the continent, because they are the ones for whom it's most difficult to attend events elsewhere and connect with people who face similar struggles. But as we were saying before, we don't believe in isolated communities, and we certainly wouldn't want to build a segregated EA LatAm community! On the contrary, we see this as an opportunity for connection. We're therefore looking forward to receiving experienced members of the international community who are excited to meet talented people who might be under their radar: to talk with them, mentor them, learn from them, collaborate with them, hire them. On the other hand, if you're new to EA, willing to learn more and based in a region that hosts regular EA conferences (e.g. the US, Europe), we suggest one of those might be a better fit for you to be introduced to the movement. If in doubt, err on the side of applying. And no, you don't need to speak Spanish to join us.

If you're asking for funding to cover your travel expenses, bear in mind that costs in Mexico are cheap compared to Europe and the US. You should be able to find a good hotel for ~$60. Regarding transport, Uber is available and very cheap.

If you have any questions or comments, don't hesitate to write to latam@eaglobalx.org. We think that EA with a Latin American spice can be great. Join us in January to discuss doing good over tacos and get the coolest EA merch the world has ever seen.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

EARadio
Food Security and Global Catastrophic Risks from an EA perspective | Michael Hinge | EAGxOxford 22

EARadio

Play Episode Listen Later Dec 2, 2022 38:57


Significant progress has been made on food security issues globally over the previous decades. However, substantial risks remain, particularly in the form of under-analysed tail risks that could reduce global food output by 10-90%, placing billions at risk of starvation. This threat presents an opportunity for EA-aligned researchers and funding to make a real difference, and in this talk Michael Hinge lays out some of the recently completed research on the most severe catastrophic food risks, what remains to be done, and how people can get involved.

View the original talk and video here. You can also listen to this talk along with its accompanying video on YouTube.

Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet.

The Nonlinear Library
EA - Announcing the Founders Pledge Global Catastrophic Risks Fund by christian.r

The Nonlinear Library

Play Episode Listen Later Oct 26, 2022 5:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Founders Pledge Global Catastrophic Risks Fund, published by christian.r on October 26, 2022 on The Effective Altruism Forum.

At Founders Pledge, we just launched a new addition to our funds: the Global Catastrophic Risks Fund. This post gives a brief overview of the fund.

Key Points

The fund will focus on global catastrophic risks with a special emphasis on risk pathways through international stability and great power relations.
The fund's shorter giving timelines are complementary to our investing-to-give Patient Philanthropy Fund; we are publishing a short write-up on this soon.
The fund is designed to offer high-impact giving opportunities for both longtermists and non-longtermists who care about catastrophic risks (see the section on “Our Perspective” in the Prospectus).
You can find more information, including differences and complementarity with other funds and longtermist funders, in our Fund Prospectus.

Overview

The GCR Fund will build on Founders Pledge's recent research into great power conflict and risks from frontier military and civilian technologies, with a special focus on international stability, a pathway that we believe shapes a number of the biggest risks facing humanity. The fund will work on:

War between great powers, like a U.S.-China clash over Taiwan, or U.S.-Russia war;
Nuclear war, especially emerging threats to nuclear stability, like vulnerabilities of nuclear command, control, and communications;
Risks from artificial intelligence (AI), including risks from both machine learning applications (like autonomous weapon systems) and from transformative AI;
Catastrophic biological risks, such as naturally-arising pandemics, engineered pathogens, laboratory accidents, and the misuse of new advances in synthetic biology; and
Emerging threats from new technologies and in new domains.

Moreover, the Fund will support field-building activities around the study and mitigation of global catastrophic risks, as well as methodological interventions, including new ways of studying these risks, such as probabilistic forecasting and experimental wargaming. The focus on international security is a current specialty, and we expect the fund's areas of expertise to expand as we build capacity.

Current and Future Generations

This Fund is designed both to tackle threats to humanity's long-term future and to take action now to protect every human being alive today. We believe both that some interventions on global catastrophic risks can be justified on a simple cost-benefit analysis alone, and that safeguarding the long-term future of humanity is among the most important things we can work on (and that in practice, the two often converge). Whether or not you share our commitment to longtermism or believe that reducing existential risks is particularly important, you may still be interested in the Fund for the simple reason that you want to help prevent the deaths and suffering of millions of people.

To illustrate this, the Fund may support the development of confidence-building measures on AI, like an International Autonomous Incidents Agreement, with the aim both of mitigating the destabilizing impact of near-term military AI applications and of providing a focal point for longtermist AI governance. Some grants will focus mainly on near-term risks; others mainly on longtermist concerns.

Like our other Funds, this will be a philanthropic co-funding vehicle designed to enable us to pursue a number of grantmaking opportunities, including:

Active grantmaking, working with organizations to shape their plans for the future;
Seeding new organizations and projects with high expected value;
Committing to multi-year funding to give stability to promising projects and decrease their fundraising costs;
Filling small funding gaps that fall between the cr...

The Nonlinear Library
EA - Sixty years after the Cuban Missile Crisis, a new era of global catastrophic risks by christian.r

The Nonlinear Library

Play Episode Listen Later Oct 13, 2022 2:24


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sixty years after the Cuban Missile Crisis, a new era of global catastrophic risks, published by christian.r on October 13, 2022 on The Effective Altruism Forum.

Linkpost for a short op-ed I wrote in the Bulletin of the Atomic Scientists in light of the upcoming 60th anniversary of the Cuban Missile Crisis and President Biden's recent comments that "For the first time since the Cuban Missile Crisis, we have a direct threat to the use of nuclear weapons, if in fact things continue down the path they'd been going." I was asked to keep it mostly nuclear (i.e., only some narrow AI and no bio, which was in my first draft), but managed to keep in some broader points about technology development and deployment, like "artificial intelligence and other new technologies, if thoughtlessly deployed, could increase the risk of accidents and miscalculation even further."

First couple of paragraphs:

This month marks the 60th anniversary of the Cuban Missile Crisis. For two tense weeks from October 16 to October 29, 1962, the United States and the Soviet Union teetered on the brink of nuclear war. Sixty years later, tensions between the world's major militaries are uncomfortably high once again. In recent weeks, Russian President Vladimir Putin's nuclear-charged threats to use “all available means” in the Russo-Ukrainian war have again raised the prospect of nuclear war. And on October 6, US President Joe Biden reportedly told a group of Democratic donors: “For the first time since the Cuban Missile Crisis, we have a direct threat to the use of nuclear weapons, if in fact things continue down the path they'd been going.”

Any uncontrolled escalation of these existing conflicts could end in global catastrophe, and the history of the Cuban Missile Crisis suggests that such escalation may be more likely to happen through miscalculation and accidents. Lists of nuclear close calls show the variety of pathways that could have led to disaster during the Cuban crisis. Famously, Soviet naval officer Vasili Arkhipov vetoed the captain of a nuclear submarine who wanted to launch a nuclear-armed torpedo in response to what turned out to be non-lethal depth charges fired by US forces; had Arkhipov not been on this particular vessel, the captain might have had the two other votes he needed to order a launch. Today, artificial intelligence and other new technologies, if thoughtlessly deployed, could increase the risk of accidents and miscalculation even further [...]

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

John Michael Godier's Event Horizon
Are We Part of a Simulation? with guest Nick Bostrom

John Michael Godier's Event Horizon

Play Episode Listen Later Sep 14, 2022 47:18


Nick Bostrom https://nickbostrom.com/ Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument (2003) and the concept of existential risk (2002).

The Creative Process Podcast
Nick Bostrom - Philosopher, Founding Director, Future of Humanity Institute, Oxford


The Creative Process Podcast

Play Episode Listen Later Sep 6, 2022 42:22


Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.

Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."

https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org

The Creative Process Podcast
Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

The Creative Process Podcast

Play Episode Listen Later Sep 6, 2022 11:19


"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org

One Planet Podcast
Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

One Planet Podcast

Play Episode Listen Later Sep 6, 2022 11:19


"On the one hand, if AI actually worked out in the ideal way, then it could be an extremely powerful tool for developing solutions to climate change and many other environmental problems that we have, for example, in developing more efficient clean energy technologies. There are efforts on the way now to try to get fusion reactors to work using AI tools, to sort of guide the containment of the plasma. Recent work with AlphaFold by DeepMind, which is a subsidiary of Alphabet, they're working on developing AI tools that can be used for molecular modeling, and you could imagine various uses of that for developing better solar panels or other kinds of remedial technologies to clean up or reduce pollution. So certainly the potential from AI to the environment are manyfold and will increase over time."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org

One Planet Podcast
Nick Bostrom - Philosopher, Founding Director, Future of Humanity Institute, Oxford


One Planet Podcast

Play Episode Listen Later Sep 6, 2022 42:22


Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.

Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

"On the one hand, if AI actually worked out in the ideal way, then it could be an extremely powerful tool for developing solutions to climate change and many other environmental problems that we have, for example, in developing more efficient clean energy technologies. There are efforts on the way now to try to get fusion reactors to work using AI tools, to sort of guide the containment of the plasma. Recent work with AlphaFold by DeepMind, which is a subsidiary of Alphabet, they're working on developing AI tools that can be used for molecular modeling, and you could imagine various uses of that for developing better solar panels or other kinds of remedial technologies to clean up or reduce pollution. So certainly the potential from AI to the environment are manyfold and will increase over time."

https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org

Books & Writers · The Creative Process
Nick Bostrom - Philosopher, Author of Superintelligence: Paths, Dangers, Strategies

Books & Writers · The Creative Process

Play Episode Listen Later Sep 6, 2022 42:22


Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.

Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."

https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org

Books & Writers · The Creative Process
Highlights - Nick Bostrom - Philosopher, Author of Superintelligence: Paths, Dangers, Strategies

Books & Writers · The Creative Process

Play Episode Listen Later Sep 6, 2022 11:19


"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org

Spirituality & Mindfulness · The Creative Process
Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

Spirituality & Mindfulness · The Creative Process

Play Episode Listen Later Sep 6, 2022 11:19


"If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially that sounds kind of frightening. How would we earn an income? What would we do all day long? I think it's also a big opportunity to rethink what it means to be human and what gives meaning in our lives. I think because we have been forced to work since the rise of our species, we had to earn our bread by the sweat of our brows.We have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting an effort and achieving some useful aims, but in this hypothetical future where that's not needed anymore. We would have to find some other basis for our human worth. Not what we can do to produce instrumental, useful outcomes, but maybe rather what we can be and experience to add value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things, happy conversation, appreciation for art, for natural beauty.All of these things that are now seen as kind of gratuitous extras, little frills around the existence of the universe, maybe we would have to build those into the center. That would have profound consequences for how we educate people, the kinds of culture that we encourage, the habits and characters that we celebrate. That will require a big transition. But I think ultimately that is also an enormous opportunity to make the human experience much better than it currently is."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org

Spirituality & Mindfulness · The Creative Process
Nick Bostrom - Philosopher, Founding Director, Future of Humanity Institute, Oxford


Spirituality & Mindfulness · The Creative Process

Play Episode Listen Later Sep 6, 2022 42:22


Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.

Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

"If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially that sounds kind of frightening. How would we earn an income? What would we do all day long? I think it's also a big opportunity to rethink what it means to be human and what gives meaning in our lives. I think because we have been forced to work since the rise of our species, we had to earn our bread by the sweat of our brows. We have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting an effort and achieving some useful aims, but in this hypothetical future where that's not needed anymore. We would have to find some other basis for our human worth. Not what we can do to produce instrumental, useful outcomes, but maybe rather what we can be and experience to add value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things, happy conversation, appreciation for art, for natural beauty. All of these things that are now seen as kind of gratuitous extras, little frills around the existence of the universe, maybe we would have to build those into the center. That would have profound consequences for how we educate people, the kinds of culture that we encourage, the habits and characters that we celebrate. That will require a big transition. But I think ultimately that is also an enormous opportunity to make the human experience much better than it currently is."

https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org

Sustainability, Climate Change, Politics, Circular Economy & Environmental Solutions · One Planet Podcast
Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

Sustainability, Climate Change, Politics, Circular Economy & Environmental Solutions · One Planet Podcast

Play Episode Listen Later Sep 6, 2022 11:19


"I think maybe the critical issue here is the governance aspect which I think is one of the core sources of many of the greatest threats to human civilization on the planet. The difficulties we have in effectively tackling these global governance challenges. So global warming, I think, at its core is really a problem of the global commons. So we all share the same atmosphere and the same global climate, ultimately. And we have a certain reservoir, the environment can absorb a certain amount of carbon dioxide without damage, but if we put out too much, then we together face a negative consequence."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org

Sustainability, Climate Change, Politics, Circular Economy & Environmental Solutions · One Planet Podcast
Nick Bostrom - Philosopher, Founding Director, Future of Humanity Institute, Oxford


Sustainability, Climate Change, Politics, Circular Economy & Environmental Solutions · One Planet Podcast

Play Episode Listen Later Sep 6, 2022 42:22


Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.

Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

"I think maybe the critical issue here is the governance aspect which I think is one of the core sources of many of the greatest threats to human civilization on the planet. The difficulties we have in effectively tackling these global governance challenges. So global warming, I think, at its core is really a problem of the global commons. So we all share the same atmosphere and the same global climate, ultimately. And we have a certain reservoir, the environment can absorb a certain amount of carbon dioxide without damage, but if we put out too much, then we together face a negative consequence."

https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org

Art · The Creative Process
Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

Art · The Creative Process

Play Episode Listen Later Sep 6, 2022 11:19


"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org

Art · The Creative Process
Nick Bostrom - Philosopher, Founding Director, Future of Humanity Institute, Oxford


Art · The Creative Process

Play Episode Listen Later Sep 6, 2022 42:22


Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk. Bostrom's academic work has been translated into more than 30 languages. He is a repeat main-stage TED speaker, has been on Foreign Policy's Top 100 Global Thinkers list twice, and was the youngest person in the top 15 of Prospect's World Thinkers list. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up: what is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain, because they're part of what makes you, you? And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."

https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org

The Creative Process in 10 minutes or less · Arts, Culture & Society
Nick Bostrom - Philosopher, Founding Director, Future of Humanity Institute, Oxford


The Creative Process in 10 minutes or less · Arts, Culture & Society

Play Episode Listen Later Sep 6, 2022 11:19


"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org

Tech, Innovation & Society - The Creative Process
Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

Tech, Innovation & Society - The Creative Process

Play Episode Listen Later Sep 6, 2022 11:19


"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org

Tech, Innovation & Society - The Creative Process
Nick Bostrom - Philosopher, Founding Director, Future of Humanity Institute, Oxford


Tech, Innovation & Society - The Creative Process

Play Episode Listen Later Sep 6, 2022 42:22


Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk. Bostrom's academic work has been translated into more than 30 languages. He is a repeat main-stage TED speaker, has been on Foreign Policy's Top 100 Global Thinkers list twice, and was the youngest person in the top 15 of Prospect's World Thinkers list. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up: what is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain, because they're part of what makes you, you? And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."

https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org

Education · The Creative Process
Nick Bostrom - Philosopher, Founding Director, Future of Humanity Institute, Oxford


Education · The Creative Process

Play Episode Listen Later Sep 6, 2022 42:22


Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk. Bostrom's academic work has been translated into more than 30 languages. He is a repeat main-stage TED speaker, has been on Foreign Policy's Top 100 Global Thinkers list twice, and was the youngest person in the top 15 of Prospect's World Thinkers list. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

"If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially that sounds kind of frightening. How would we earn an income? What would we do all day long? I think it's also a big opportunity to rethink what it means to be human and what gives meaning in our lives. Because we have been forced to work since the rise of our species, having to earn our bread by the sweat of our brows, we have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting in effort and achieving some useful aims, but in this hypothetical future where that's not needed anymore, we would have to find some other basis for our human worth: not what we can do to produce instrumental, useful outcomes, but maybe rather what we can be and experience, to add value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things: happy conversation, appreciation for art, for natural beauty. All of these things that are now seen as kind of gratuitous extras, little frills around the existence of the universe, maybe we would have to build those into the center. That would have profound consequences for how we educate people, the kinds of culture that we encourage, the habits and characters that we celebrate. That will require a big transition. But I think ultimately that is also an enormous opportunity to make the human experience much better than it currently is."

https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org

Education · The Creative Process
Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

Education · The Creative Process

Play Episode Listen Later Sep 6, 2022 11:19


"If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially that sounds kind of frightening. How would we earn an income? What would we do all day long? I think it's also a big opportunity to rethink what it means to be human and what gives meaning in our lives. I think because we have been forced to work since the rise of our species, we had to earn our bread by the sweat of our brows.We have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting an effort and achieving some useful aims, but in this hypothetical future where that's not needed anymore. We would have to find some other basis for our human worth. Not what we can do to produce instrumental, useful outcomes, but maybe rather what we can be and experience to add value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things, happy conversation, appreciation for art, for natural beauty.All of these things that are now seen as kind of gratuitous extras, little frills around the existence of the universe, maybe we would have to build those into the center. That would have profound consequences for how we educate people, the kinds of culture that we encourage, the habits and characters that we celebrate. That will require a big transition. But I think ultimately that is also an enormous opportunity to make the human experience much better than it currently is."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org

The Nonlinear Library
EA - [Crosspost]: Huge volcanic eruptions: time to prepare (Nature) by Mike Cassidy

The Nonlinear Library

Play Episode Listen Later Aug 19, 2022 2:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost]: Huge volcanic eruptions: time to prepare (Nature), published by Mike Cassidy on August 19, 2022 on The Effective Altruism Forum.

Lara Mani and I have a comment article published in Nature this week about large-magnitude volcanic eruptions. TLDR: I also wrote a twitter thread here: This is a more condensed focus piece, but it contains elements we've covered in these posts too.

This is really the start of the work we've been doing in this area; we're hoping to quantify how globally catastrophic large eruptions would be for our global food, water and critical systems. From there, we'll have a better idea of the most effective mitigation strategies. But because this is such a neglected area (screenshot below), we know that even modest investment and effort will go a long way. We highlight several ways we think we could help save a lot of lives, both in the near term (smaller, more frequent eruptions) and in the future (large-magnitude and super-eruptions): a) pinpointing where the biggest-risk areas/volcanoes are; b) increasing and improving monitoring; c) increasing preparedness (e.g. nowcasting - see below); and d) researching volcano geoengineering (the ethics of which we're working on with Anders Sandberg).

The last point may interest some others in the x-risk community, as potential solutions like these (screenshot below) could help mitigate the effects of nuclear and asteroid winters too. We're having conversations with atmospheric scientists about this type of research. Another way tech-savvy EAs might be able to help is with the creation of 'nowcasting' technology, which again would be useful for a range of Global Catastrophic Risks.

The paper has been covered a fair bit in the international media (e.g.), and we feel like we could use this momentum to make some tractable improvements to global volcanic risk. If you'd like to help fund our work or discuss any of these ideas with us, then get in touch! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Results of a Spanish-speaking essay contest about Global Catastrophic Risk by Jaime Sevilla

The Nonlinear Library

Play Episode Listen Later Jul 15, 2022 9:37


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Results of a Spanish-speaking essay contest about Global Catastrophic Risk, published by Jaime Sevilla on July 15, 2022 on The Effective Altruism Forum.

Over June I have been evaluating entries for the essay contest organised by Riesgos Catastróficos Globales, an organisation supporting Spanish-speaking work on Global Catastrophic Risks (GCRs). The contest ran from 4 March 2022 to 30 May 2022, and we received 141 valid applications, all written in Spanish. I am immensely grateful to our support team (Cristina Schmidt Ibañez, Claudette Salinas, Alison Díaz and Emilio Bazan) for running the operations of the contest and helping me evaluate the entries; to the rest of the RCG team (Juan García, Ángela Aristizábal and Pablo Stafforini), the Spanish-speaking EA community (especially coordinators Sandra Malagón and Laura González) and many university professors for their support and help promoting the contest; and to the FTX Future Fund Regranting Program for financing the contest.

The essay contest has been a great opportunity to promote Spanish content and writers. It has also been an interesting exercise in learning how Spanish speakers relate to GCRs. In this article I share some observations about the contest: in particular, the reception, content, format and quality of the entries.

Key highlights

The writing contest seems to have been a very cost-effective way of promoting engagement with GCRs. We spent $10k and got ~39k visits to our website and received ~141 valid essays. Unintuitively, the contest drove a lot of web traffic from Latin American countries with little EA presence, like the Dominican Republic, Guatemala, Venezuela, Perú, Honduras, Ecuador and Nicaragua, and relatively little from places with moderate community presence, like Spain, Mexico and Colombia. People mostly wrote about Global Catastrophic Risks in general and climate change; there were very few entries on other specific GCRs. Most essays were of rather poor quality: 26 essays met a minimum bar of quality and, of those, 11 I considered good quality. You can read the winning entries of the contest (in Spanish) here.

Reception and geographical information

I am very pleased with the reception of the contest. I was expecting to receive twenty-odd entries. Instead we received 141 valid entries. While the contest was up (March, April and May) our webpage received 39,377 visits. In comparison, in January and February we received
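As a quick sanity check on the cost-effectiveness claim above, here is a back-of-the-envelope sketch in Python. The spend, visit and essay counts are taken directly from the post; the per-unit division (and the rounded figures in the comments) is our own illustration:

    # Unit costs implied by the post's headline numbers.
    spend_usd = 10_000      # total contest spend reported
    site_visits = 39_377    # webpage visits while the contest was up
    valid_essays = 141      # valid entries received
    decent_essays = 26      # entries meeting the minimum quality bar
    good_essays = 11        # entries judged good quality

    print(f"Cost per site visit:   ${spend_usd / site_visits:.2f}")   # ~$0.25
    print(f"Cost per valid essay:  ${spend_usd / valid_essays:.2f}")  # ~$70.92
    print(f"Cost per decent essay: ${spend_usd / decent_essays:.2f}") # ~$384.62
    print(f"Cost per good essay:   ${spend_usd / good_essays:.2f}")   # ~$909.09

By any of these denominators the contest looks cheap relative to typical paid outreach, which is the post's central claim.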

The Nonlinear Library
EA - Concurso de ensayos sobre Riesgos Catastróficos Globales en Español by Jaime Sevilla

The Nonlinear Library

Play Episode Listen Later Mar 18, 2022 1:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concurso de ensayos sobre Riesgos Catastróficos Globales en Español, published by Jaime Sevilla on March 17, 2022 on The Effective Altruism Forum.

La estabilidad de nuestra sociedad depende de nuestra pericia para prevenir y sobrellevar eventos, de origen natural o humano, que puedan afectar a nuestro mundo a gran escala. Estos Riesgos Catastróficos Globales (RCGs) incluyen pandemias, cambio climático, riesgo nuclear, asteroides, volcanes y riesgos asociados a la inteligencia artificial avanzada, entre otros. Queremos promover el talento dedicado a estudiar Riesgos Catastróficos Globales y la creación de contenido en español sobre el tema. Para ello, presentamos un concurso de ensayos sobre Riesgos Catastróficos Globales. El concurso está abierto hasta el 30 de mayo de 2022. Se repartirán premios de hasta 1000€. Más información en nuestra página web.

[English translation] The stability of our society depends on our ability to prevent and cope with events, natural or man-made, that may affect our world on a large scale. These Global Catastrophic Risks (GCRs) include pandemics, climate change, nuclear risk, asteroids, volcanoes, and risks associated with advanced artificial intelligence, among others. We want to promote talent dedicated to studying Global Catastrophic Risks and the creation of content in Spanish on the subject. To this end, we present an essay contest on Global Catastrophic Risks. The contest is open until May 30, 2022. Prizes of up to 1000€ will be awarded. More information on our website.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Retraice
Re17: Hypotheses to Eleven

Retraice

Play Episode Listen Later Mar 17, 2022 14:09


On 'current history', or what might be going on out there. Subscribe at: paid.retraice.com Details: what's GOOT; current history; hypotheses [and some predictions]; What's next? Complete notes and video at: https://www.retraice.com/segments/re17 Air date: Monday, 7th Mar. 2022, 4 : 20 PM Eastern/US. 0:00:00 what's GOOT; 0:01:35 current history; 0:04:30 hypotheses [and some predictions]; 0:13:38 What's next? References: Allison, G. (2018). Destined for War: Can America and China Escape Thucydides's Trap? Mariner Books. ISBN: 978-1328915382. Searches: https://www.amazon.com/s?k=9781328915382 https://www.google.com/search?q=isbn+9781328915382 https://lccn.loc.gov/2017005351 Andrew, C. (2018). The Secret World: A History of Intelligence. Yale University Press. ISBN in paperback edition printed as "978-0-300-23844-0 (hardcover : alk. paper)". Searches: https://www.amazon.com/s?k=978-0300238440 https://www.google.com/search?q=isbn+978-0300238440 https://lccn.loc.gov/2018947154 Baumeister, R. F. (1999). Evil: Inside Human Violence and Cruelty. Holt Paperbacks, revised ed. ISBN: 978-0805071658. Searches: https://www.amazon.com/s?k=9780805071658 https://www.google.com/search?q=isbn+9780805071658 https://lccn.loc.gov/96041940 Bostrom, N. (2011). Information Hazards: A Typology of Potential Harms from Knowledge. Review of Contemporary Philosophy, 10, 44-79. Citations are from Bostrom's website copy: https://www.nickbostrom.com/information-hazards.pdf Retrieved 9th Sep. 2020. Bostrom, N. (2019). The vulnerable world hypothesis. Global Policy, 10(4), 455-476. Nov. 2019. https://nickbostrom.com/papers/vulnerable.pdf Retrieved 24th Mar. 2020. Bostrom, N., & Cirkovic, M. M. (Eds.) (2008). Global Catastrophic Risks. Oxford University Press. ISBN: 978-0199606504. Searches: https://www.amazon.com/s?k=978-0199606504 https://www.google.com/search?q=isbn+978-0199606504 https://lccn.loc.gov/2008006539 Brockman, J. (Ed.) (2015). What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence. Harper Perennial. ISBN: 978-0062425652. Searches: https://www.amazon.com/s?k=978-0062425652 https://www.google.com/search?q=isbn+978-0062425652 https://lccn.loc.gov/2016303054 Chomsky, N. (1970). For Reasons of State. The New Press, revised ed. ISBN: 1565847946. Originally published 1970; this revised ed. 2003. Searches: https://www.amazon.com/s?k=1565847946 https://www.google.com/search?q=isbn+1565847946 https://catalog.loc.gov/vwebv/search?searchArg=1565847946 Chomsky, N. (2017). Requiem for the American Dream: The 10 Principles of Concentration of Wealth & Power. Seven Stories Press. ISBN: 978-1609807368. Searches: https://www.amazon.com/s?k=978-1609807368 https://www.google.com/search?q=isbn+978-1609807368 https://lccn.loc.gov/2016054121 Cirkovic, M. M. (2008). Observation selection effects and global catastrophic risks. (pp. 120-145). In Bostrom & Cirkovic (2008). de Grey, A. (2007). Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime. St. Martin's Press. ISBN: 978-0312367060. Searches: https://www.amazon.com/s?k=978-0312367060 https://www.google.com/search?q=isbn+978-0312367060 https://lccn.loc.gov/2007020217 Deary, I. J. (2001). Intelligence: A Very Short Introduction. Oxford. ISBN: 978-0192893215. Searches: https://www.amazon.com/s?k=978-0192893215 https://www.google.com/search?q=isbn+978-0192893215 https://lccn.loc.gov/2001269139 Diamond, J. (1997). Guns, Germs, and Steel: The Fates of Human Societies. Norton. ISBN: 0393317552. 
Searches: https://www.amazon.com/s?k=0393317552 https://www.google.com/search?q=isbn+0393317552 https://catalog.loc.gov/vwebv/search?searchArg=0393317552 Dolan, R. M. (2000). UFOs and the National Security State Vol. 1: An Unclassified History. Keyhole, 1st ed. ISBN: 0967799503. Searches: https://www.amazon.com/s?k=0967799503 https://www.google.com/search?q=isbn+0967799503 https://catalog.loc.gov/vwebv/search?searchArg=0967799503 Dolan, R. M. (2009). UFOs and the National Security State Vol. 2: The Cover-Up Exposed, 1973-1991. Keyhole. ISBN: 978-0967799513. Searches: https://www.amazon.com/s?k=978-0967799513 https://www.google.com/search?q=isbn+978-0967799513 Durant, W., & Durant, A. (1968). The Lessons of History. Simon and Schuster. No ISBN. Searches: https://www.amazon.com/s?k=lessons+of+history+durant https://www.google.com/search?q=lessons+of+history+durant https://lccn.loc.gov/68019949 Dyson, G. (2015). Analog, the revolution that dares not speak its name. (pp. 255-256). In Brockman (2015). Dyson, G. (2020). Analogia: The Emergence of Technology Beyond Programmable Control. Farrar, Straus and Giroux. ISBN: 978-0374104863. Searches: https://www.amazon.com/s?k=9780374104863 https://www.google.com/search?q=isbn+9780374104863 https://catalog.loc.gov/vwebv/search?searchArg=9780374104863 Dyson, G. B. (1997). Darwin Among The Machines: The Evolution Of Global Intelligence. Basic Books. ISBN: 978-0465031627. Searches: https://www.amazon.com/s?k=978-0465031627 https://www.google.com/search?q=isbn+978-0465031627 https://lccn.loc.gov/2012943208 Frank, R., & Bernanke, B. (2001). Principles of Economics. Mcgraw-Hill. ISBN: 0072289627. Searches: https://www.amazon.com/s?k=0072289627 https://www.google.com/search?q=isbn+0072289627 https://catalog.loc.gov/vwebv/search?searchArg=0072289627 Frankfurt, H. G. (1988). The Importance of What We Care About. Cambridge. ISBN: 978-0521336116. Searches: https://www.amazon.com/s?k=978-0521336116 https://www.google.com/search?q=isbn+978-0521336116 https://lccn.loc.gov/87026941 Gawande, A. (2014). Being Mortal: Medicine and What Matters in the End. Metropolitan Books. ISBN: 978-0805095159. Searches: https://www.amazon.com/s?k=9780805095159 https://www.google.com/search?q=isbn+9780805095159 https://catalog.loc.gov/vwebv/search?searchArg=9780805095159 Grabo, C. M. (2002). Anticipating Surprise: Analysis for Strategic Warning. Center for Strategic Intelligence Research. ISBN: 0965619567 https://www.ni-u.edu/ni_press/pdf/Anticipating_Surprise_Analysis.pdf Retrieved 7th Sep. 2020. Griffiths, P. J. (1971). Vietnam, Inc.. Phaidon, 2nd ed. ISBN: 978-0714846033. Originally published 1971. This edition 2006. Link and searches: http://philipjonesgriffiths.org/photography/selected-work/vietnam-inc/ Retrieved 10 Mar. 2022. https://www.amazon.com/s?k=978-0714846033 https://www.google.com/search?q=isbn+978-0714846033 https://lccn.loc.gov/2006283959 Hamming, R. W. (2020). The Art of Doing Science and Engineering: Learning to Learn. Stripe Press. ISBN: 978-1732265172. Searches: https://www.amazon.com/s?k=9781732265172 https://www.google.com/search?q=isbn+9781732265172 Hawking, S. (2018). Brief Answers to the Big Questions. Bantam. ISBN: 978-1984819192. Searches: https://www.amazon.com/s?k=9781984819192 https://www.google.com/search?q=isbn+9781984819192 https://catalog.loc.gov/vwebv/search?searchArg=9781984819192 Herrnstein, R. J., & Murray, C. (1996). The Bell Curve: Intelligence and Class Structure in American Life. Free Press. ISBN: 978-0684824291. 
Searches: https://www.amazon.com/s?k=9780684824291 https://www.google.com/search?q=isbn+9780684824291 https://catalog.loc.gov/vwebv/search?searchArg=9780684824291 Johnson, S. (2014). How We Got to Now: Six Innovations That Made the Modern World. Riverhead Books. ISBN: 978-1594633935. Searches: https://www.amazon.com/s?k=9781594633935 https://www.google.com/search?q=isbn+9781594633935 https://lccn.loc.gov/2014018412 Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. ISBN: 978-0374533557. Searches: https://www.amazon.com/s?k=978-0374533557 https://www.google.com/search?q=isbn+978-0374533557 https://lccn.loc.gov/2012533187 Kaplan, F. (2016). Dark Territory: The Secret History of Cyber War. Simon & Schuster. ISBN: 978-1476763255. Searches: https://www.amazon.com/s?k=9781476763255 https://www.google.com/search?q=isbn+9781476763255 https://catalog.loc.gov/vwebv/search?searchArg=9781476763255 Kelleher, C. A., & Knapp, G. (2005). Hunt for the Skinwalker: Science Confronts the Unexplained at a Remote Ranch in Utah. Paraview Pocket Books. ISBN: 978-1416505211. Searches: https://www.amazon.com/s?k=978-1416505211 https://www.google.com/search?q=isbn+978-1416505211 https://lccn.loc.gov/2005053457 Keyhoe, D. (1950). The Flying Saucers Are Real. Forgotten Books. ISBN: 978-1605065472. Originally published 1950; this edition 2008. Searches: https://www.amazon.com/s?k=9781605065472 https://www.google.com/search?q=isbn+9781605065472 https://lccn.loc.gov/50004886 Kilcullen, D. (2020). The Dragons And The Snakes: How The Rest Learned To Fight The West. Oxford University Press. ISBN: 978-0190265687. Searches: https://www.amazon.com/s?k=9780190265687 https://www.google.com/search?q=isbn+9780190265687 https://catalog.loc.gov/vwebv/search?searchArg=9780190265687 Lazar, B. (2019). Dreamland: An Autobiography. Interstellar. ISBN: 978-0578437057. Searches: https://www.amazon.com/s?k=9780578437057 https://www.google.com/search?q=isbn+9780578437057 Lee, K.-F. (2018). AI Superpowers: China, Silicon Valley, and the New World Order. Houghton Mifflin Harcourt. ISBN: 978-1328546395. Searches: https://www.amazon.com/s?k=9781328546395 https://www.google.com/search?q=isbn+9781328546395 https://catalog.loc.gov/vwebv/search?searchArg=9781328546395 Mitter, R. (2008). Modern China: A Very Short Introduction. Oxford University Press, kindle ed. ISBN: 978-0199228027. Searches: https://www.amazon.com/s?k=9780199228027 https://www.google.com/search?q=isbn+9780199228027 https://catalog.loc.gov/vwebv/search?searchArg=9780199228027 Nouri, A., & Chyba, C. F. (2008). Biotechnology and biosecurity. (pp. 450-480). In Bostrom & Cirkovic (2008). O'Donnell, P. K. (2004). Operatives, Spies, and Saboteurs: The Unknown Story of the Men and Women of World War II's OSS. Free Press / Simon & Schuster. ISBN: 074323572X. Edition and searches: https://archive.org/details/operativesspiess00odon https://www.amazon.com/s?k=074323572X https://www.google.com/search?q=isbn+074323572X https://catalog.loc.gov/vwebv/search?searchArg=074323572X Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette. ISBN: 978-0316484916. Searches: https://www.amazon.com/s?k=978-0316484916 https://www.google.com/search?q=isbn+978-0316484916 https://lccn.loc.gov/2019956459 Orlov, D. (2008). Reinventing Collapse: The Soviet Example and American Prospects. New Society. ISBN: 978-0865716063. 
Searches: https://www.amazon.com/s?k=9780865716063 https://www.google.com/search?q=isbn+9780865716063 https://catalog.loc.gov/vwebv/search?searchArg=9780865716063 Osnos, E. (2020/01/06). The Future of America's Contest with China. The New Yorker. https://www.newyorker.com/magazine/2020/01/13/the-future-of-americas-contest-with-china Retrieved 22 April, 2020. Perlroth, N. (2020). This Is How They Tell Me the World Ends: The Cyberweapons Arms Race. Bloomsbury. ISBN: 978-1635576054. Searches: https://www.amazon.com/s?k=978-1635576054 https://www.google.com/search?q=isbn+978-1635576054 https://lccn.loc.gov/2020950713 Phoenix, C., & Treder, M. (2008). Nanotechnology as global catastrophic risk. (pp. 481-503). In Bostrom & Cirkovic (2008). Pillsbury, M. (2015). The Hundred-Year Marathon: China's Secret Strategy to Replace America as the Global Superpower. St. Martin's Griffin. ISBN: 978-1250081346. Searches: https://www.amazon.com/s?k=9781250081346 https://www.google.com/search?q=isbn+9781250081346 https://lccn.loc.gov/2014012015 Pinker, S. (2011). The Better Angels of Our Nature: Why Violence Has Declined. Penguin Publishing Group. ISBN: 978-0143122012. Searches: https://www.amazon.com/s?k=978-0143122012 https://www.google.com/search?q=isbn+978-0143122012 https://lccn.loc.gov/2011015201 Pogue, D. (2021). How to Prepare for Climate Change: A Practical Guide to Surviving the Chaos. Simon & Schuster. ISBN: 978-1982134518. Searches: https://www.amazon.com/s?k=9781982134518 https://www.google.com/search?q=isbn+9781982134518 https://catalog.loc.gov/vwebv/search?searchArg=9781982134518 Putnam, R. D. (2015). Our Kids: The American Dream in Crisis. Simon & Schuster. ISBN: 978-1476769905. Searches: https://www.amazon.com/s?k=9781476769905 https://www.google.com/search?q=isbn+9781476769905 https://lccn.loc.gov/2015001534 Rees, M. (2003). Our Final Hour: A Scientist's Warning. Basic Books. ISBN: 0465068634. Searches: https://www.amazon.com/s?k=0465068634 https://www.google.com/search?q=isbn+0465068634 https://lccn.loc.gov/2004556001 Rees, M. (2008). Foreword to Bostrom & Cirkovic (2008). (pp. iii-vii). Reid, T. R. (2017). A Fine Mess: A Global Quest for a Simpler, Fairer, and More Efficient Tax System. Penguin Press. ISBN: 978-1594205514. Searches: https://www.amazon.com/s?k=9781594205514 https://www.google.com/search?q=isbn+9781594205514 https://catalog.loc.gov/vwebv/search?searchArg=9781594205514 Retraice (2020/09/07). Re1: Three Kinds of Intelligence. retraice.com. https://www.retraice.com/segments/re1 Retrieved 22nd Sep. 2020. Retraice (2020/11/10). Re13: The Care Factor. retraice.com. https://www.retraice.com/segments/re13 Retrieved 10th Nov. 2020. Romm, J. (2016). Climate Change: What Everyone Needs to Know. Oxford University Press. ISBN: 978-0190250171. Searches: https://www.amazon.com/s?k=9780190250171 https://www.google.com/search?q=isbn+9780190250171 https://catalog.loc.gov/vwebv/search?searchArg=9780190250171 Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498 Salter, A. (2003). Predators. Basic Books. ISBN: 978-0465071732. Searches: https://www.amazon.com/s?k=978-0465071739 https://www.google.com/search?q=isbn+978-0465071739 https://lccn.loc.gov/2002015846 Sanger, D. E. (2018). The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age. Broadway Books. ISBN: 978-0451497901. 
Searches: https://www.amazon.com/s?k=9780451497901 https://www.google.com/search?q=isbn+9780451497901 https://catalog.loc.gov/vwebv/search?searchArg=9780451497901 Sapolsky, R. M. (2018). Behave: The Biology of Humans at Our Best and Worst. Penguin Books. ISBN: 978-0143110910. Searches: https://www.amazon.com/s?k=9780143110910 https://www.google.com/search?q=isbn+9780143110910 https://lccn.loc.gov/2016056755 Shirer, W. L. (1959). The Rise and Fall of the Third Reich: A History of Nazi Germany. Simon & Schuster, 50th anniv. ed. ISBN: 978-1451651683. Originally published 1959; this ed. 2011. Searches: https://www.amazon.com/s?k=9781451651683 https://www.google.com/search?q=isbn+9781451651683 https://lccn.loc.gov/60006729 Shorrocks, A., Davies, J., Lluberas, R., & Rohner, U. (2019). Global wealth report 2019. Credit Suisse Research Institute. Oct. 2019. https://www.credit-suisse.com/about-us/en/reports-research/global-wealth-report.html Retrieved 4 July, 2020. Simler, K., & Hanson, R. (2018). The Elephant in the Brain: Hidden Motives in Everyday Life. Oxford University Press. ISBN: 9780190495992. Searches: https://www.amazon.com/s?k=9780190495992 https://www.google.com/search?q=isbn+9780190495992 https://lccn.loc.gov/2017004296 Spalding, R. (2019). Stealth War: How China Took Over While America's Elite Slept. Portfolio. ISBN: 978-0593084342. Searches: https://www.amazon.com/s?k=9780593084342 https://www.google.com/search?q=isbn+9780593084342 https://catalog.loc.gov/vwebv/search?searchArg=9780593084342 Stephens-Davidowitz, S. (2018). Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. Dey Street Books. ISBN: 978-0062390868. Searches: https://www.amazon.com/s?k=9780062390868 https://www.google.com/search?q=isbn+9780062390868 https://lccn.loc.gov/2017297094 Sternberg, R. J. (Ed.) (2020). The Cambridge Handbook of Intelligence (Cambridge Handbooks in Psychology) (2 vols.). Cambridge University Press, 2nd ed. ISBN: 978-1108719193. Searches: https://www.amazon.com/s?k=9781108719193 https://www.google.com/search?q=isbn+9781108719193 https://lccn.loc.gov/2019019464 Vallee, J. (1979). Messengers of Deception: UFO Contacts and Cults. And/Or Press. ISBN: 0915904381. Different edition and searches: https://archive.org/details/MessengersOfDeceptionUFOContactsAndCultsJacquesValle1979/mode/2up https://www.amazon.com/s?k=0915904381 https://www.google.com/search?q=isbn+0915904381 https://catalog.loc.gov/vwebv/search?searchArg=0915904381 Walter, B. F. (2022). How Civil Wars Start. Crown. ISBN: 978-0593137789. Searches: https://www.amazon.com/s?k=978-0593137789 https://www.google.com/search?q=isbn+978-0593137789 https://lccn.loc.gov/2021040090 Walter, C. (2020). Immortality, Inc.: Renegade Science, Silicon Valley Billions, and the Quest to Live Forever. National Geographic. ISBN: 978-1426219801. Searches: https://www.amazon.com/s?k=9781426219801 https://www.google.com/search?q=isbn+9781426219801 https://catalog.loc.gov/vwebv/search?searchArg=9781426219801 Zubrin, R. (1996). The Case for Mars: The Plan to Settle the Red Planet and Why We Must. Free Press. First published in 1996. This 25th anniv. edition 2021. ISBN: 978-0684827575. Searches: https://www.amazon.com/s?k=978-0684827575 https://www.google.com/search?q=isbn+978-0684827575 https://lccn.loc.gov/2011005417 Zubrin, R. (2019). The Case for Space: How the Revolution in Spaceflight Opens Up a Future of Limitless Possibility. Prometheus Books. ISBN: 978-1633885349. 
Searches: https://www.amazon.com/s?k=978-1633885349 https://www.google.com/search?q=isbn+978-1633885349 https://lccn.loc.gov/2018061068 Copyright: 2022 Retraice, Inc. https://retraice.com

The 2020 Network
@Risk: Global Catastrophic Risks with Jens Orback

The 2020 Network

Play Episode Listen Later Mar 10, 2022 38:11


On this episode of @Risk, Jodi Butts is joined by Jens Orback, Swedish economist, former politician, and Executive Director of the Global Challenges Foundation, to discuss the world's top global catastrophic risks.

@Risk
Global Catastrophic Risks with Jens Orback

@Risk

Play Episode Listen Later Mar 10, 2022 38:11


On this episode of @Risk, Jodi Butts is joined by Jens Orback, Swedish economist, former politician, and Executive Director of the Global Challenges Foundation, to discuss the world's top global catastrophic risks.

The Nonlinear Library: LessWrong Top Posts
Scholarship: How to Do It Efficiently by lukeprog

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 8:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scholarship: How to Do It Efficiently, published by lukeprog on LessWrong.

Scholarship is an important virtue of rationality, but it can be costly. Its major costs are time and effort. Thus, if you can reduce the time and effort required for scholarship - if you can learn to do scholarship more efficiently - then scholarship will be worth your effort more often than it previously was. As an autodidact who now consumes whole fields of knowledge in mere weeks, I've developed efficient habits that allow me to research topics quickly. I'll share my research habits with you now.

Review articles and textbooks are king

My first task is to find scholarly review (or 'survey') articles on my chosen topic from the past five years (the more recent, the better). A good review article provides: an overview of the subject matter of the field and the terms being used (for scholarly googling later); an overview of the open and solved problems in the field, and which researchers are working on them; and pointers to the key studies that give researchers their current understanding of the topic.

If you can find a recent scholarly edited volume of review articles on the topic, then you've hit the jackpot. (Edited volumes are better than single-author volumes, because when starting out you want to avoid reading only one particular researcher's perspective.) Examples from my own research of just this year include: Affective neuroscience: Pleasures of the Brain (2009). Neuroeconomics: Decision Making and the Brain (2008). Dual process theories of psychology: In Two Minds (2009). Intuition and unconscious learning: Intuition in Judgment and Decision Making (2007). Goals: The Psychology of Goals (2009). Catastrophic risks: Global Catastrophic Risks (2008).

If the field is large enough, there may exist an edited 'Handbook' on the subject, which is basically just a very large scholarly edited volume of review articles. Examples: Oxford Handbook of Evolutionary Psychology (2007), Oxford Handbook of Positive Psychology (2009), Oxford Handbook of Philosophy and Neuroscience (2009), Handbook of Developmental Cognitive Neuroscience (2008), Oxford Handbook of Neuroethics (2011), Handbook of Relationship Initiation (2008), and Handbook of Implicit Social Cognition (2010). For the humanities, see the Blackwell Companions and Cambridge Companions.

If your questions are basic enough, a recent entry-level textbook on the subject may be just as good. Textbooks are basically book-length review articles written for undergrads. Textbooks I purchased this year include: Evolutionary Psychology: The New Science of Mind, 4th edition (2011). Artificial Intelligence: A Modern Approach, 3rd edition (2009). Psychology Applied to Modern Life, 10th edition (2011). Psychology, 9th edition (2009).

Use Google Books and Amazon's 'Look Inside' feature to see if the books appear to be of high quality and likely to answer the questions you have. Also check the textbook recommendations here. You can save money by checking Library Genesis and library.nu for a PDF copy first, or by buying used books, or by buying ebook versions from Amazon, B&N, or Google. Keep in mind that if you take the virtue of scholarship seriously, you may need to change how you think about the cost of obtaining knowledge. Purchasing the right book can save you dozens of hours of research.

Because a huge part of my life these days is devoted to scholarship, a significant portion of my monthly budget is set aside for purchasing knowledge. So far this year I've averaged over $150/mo spent on textbooks and scholarly edited volumes. Recent scholarly review articles can also be found on Google Scholar. Search for key terms, and review articles will often be listed near the top of the results because review articles are cited widely. For example, result #9 on Google sch...
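The triage rule the post describes (recent review articles, edited volumes and handbooks first; single-author monographs and primary studies later) is mechanical enough to sketch as a toy scoring heuristic. A minimal illustration in Python: the type weights, the recency decay and the example entries are invented stand-ins for the post's informal advice, not anything lukeprog specifies:

    # Toy source-triage heuristic in the spirit of the post: prefer recent
    # review-style sources; weights and decay are invented for illustration.
    from dataclasses import dataclass

    TYPE_WEIGHT = {
        "edited_volume": 3.0,   # "you've hit the jackpot"
        "handbook": 3.0,        # a very large edited volume of review articles
        "review_article": 2.5,
        "textbook": 2.0,        # book-length review article for undergrads
        "monograph": 1.0,       # one researcher's perspective
        "primary_study": 0.5,   # reach these via pointers from reviews
    }

    @dataclass
    class Source:
        title: str
        kind: str
        year: int

    def score(src: Source, current_year: int = 2011) -> float:
        """Type weight, discounted linearly with age (the post favours the last ~5 years)."""
        age = max(current_year - src.year, 0)
        recency = max(1.0 - age / 10.0, 0.1)
        return TYPE_WEIGHT.get(src.kind, 0.5) * recency

    candidates = [
        Source("Pleasures of the Brain", "edited_volume", 2009),
        Source("Psychology, 9th edition", "textbook", 2009),
        Source("A 1998 fMRI study (hypothetical)", "primary_study", 1998),
    ]
    for s in sorted(candidates, key=score, reverse=True):
        print(f"{score(s):.2f}  {s.title}")

Run on the sample list, the edited volume outranks the textbook, and the old primary study drops to the bottom, which matches the reading order the post recommends.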

The Nonlinear Library: EA Forum Top Posts
GCRs mitigation: the missing Sustainable Development Goal by AmAristizábal

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 12, 2021 46:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GCRs mitigation: the missing Sustainable Development Goal, published by AmAristizábal on the Effective Altruism Forum. Thanks to Luca Righetti, Elise Bohan, Fin Moorhouse, Max Daniel, Konrad Seifert and Maxime Stauffer for their valuable comments and thoughts on the draft and future steps. Thanks to Owen for telling me to turn this idea into a post!

Summary and introduction

Throughout this post, I will explore some overlaps between sustainability (focusing on Sustainable Development Goals) and longtermism (focusing on Global Catastrophic Risks mitigation). I wrote this over a two-week period to get some tentative thoughts out. My goal with posting this is to find other people interested in thinking about the intersection of sustainable development and GCRs mitigation, as well as to invite feedback on how/if to proceed with research or practical projects in this area. The write-up doesn't represent any strongly held and stable views, but is meant to explore whether there is a policy opportunity for longtermists to work with sustainable development policies. More specifically, I want to see if it is worth pushing the next SDGs agenda with a bigger focus on GCRs mitigation. Roughly, I want to explore if this is a bridge worth building: In the first section, I will briefly overview the Sustainable Development Goals (SDGs) and Global Catastrophic Risks (GCRs), explaining why it could make sense to start building a link between these. In section 2, I will explore how GCRs mitigation fits into the SDGs, using COVID-19 as an example of how risk mitigation is foundational to sustainable development. In section 3, I will quickly point out some potential overlaps between longtermism and sustainability and mention the idea of a risk budget as the ultimate non-renewable resource. In section 4, I will portray a way of understanding SDGs in terms of longtermist grand strategies, followed by a final section exploring the pros and cons of building this bridge and possible future steps if it is worth pursuing.

1. Overview of SDGs and GCRs

SDGs

I am broadly interested in the overlap between sustainability (with its many definitions) and longtermism, but first I want to explore the SDGs primarily for their policy opportunities (other sustainability frameworks could be explored in future posts). In 2015 the United Nations General Assembly set up the Sustainable Development Goals: 17 goals meant to be accomplished by 2030. They were adopted by all UN members and are a "call for action" for member states to eradicate poverty and improve different quality-of-life indicators whilst also tackling climate change and environmental damage. Here is a summarized timeline of sustainability and sustainable development:

The concept of sustainability can be traced back at least to 1700, when it was applied to forestry in Saxony. It emerged in a time of scarcity, when the mining industry had consumed whole forests and trees had been cut down at unsustainable rates for decades, threatening the livelihood of thousands.

20th century: environmental movements started to point out that there were environmental costs associated with the many material benefits now being enjoyed due to the industrial revolution of the 18th and 19th centuries.

In 1973 and 1979 there were energy crises that demonstrated the extent to which the global community had become dependent on non-renewable energy resources.

1970s: The concept of "degrowth" (somewhat related to sustainability) properly appeared. It was a political, economic, and social movement that critiqued "productivism", the paradigm of economic growth, pointing out the social and ecological harm caused by the pursuit of infinite growth and Western "development" imperatives.

1987: The modern concept of sustainable development derived from the Brun...

The Munk Debates Podcast
Be it resolved: To realize humanity's full potential, requires settling worlds beyond our own

The Munk Debates Podcast

Play Episode Listen Later Oct 5, 2021 45:16


This past year has seen an onslaught of disruptions that call into question our ability to coexist with our environment. The devastating effects of climate change have arrived and show no signs of abating. Flash flooding has swept across China and Northern Europe. The Eastern United States has been inundated by hurricanes of historic size. Record-breaking heat waves and wildfires have decimated large swaths of Western North America. And a global pandemic continues to rage on. All of this raises the question: must we look elsewhere in our universe to ensure the survival of humanity? A growing movement of astrophysicists, biologists, and billionaire space enthusiasts believes our salvation does indeed lie off-planet. Supporters of this movement argue that we are on the cusp of technology that puts this possibility within reach, and that exploration and settlement to deal with issues of environmental instability and scarcity is nothing new. Settling the reachable regions of our universe is merely an extension of this age-old trend. But detractors of the plans to settle space dismiss it as an immeasurably expensive fever dream. In their minds, it would be far more prudent to invest our time and resources into fixing the problems here on Earth, the only known planet to host life. Beyond the massive technological advancements required, there are simply far too many unknowns about how and where life originated to assume it can simply be transported through the cosmos. Arguing for the motion is Milan Cirkovic, Research Professor at the Astronomical Observatory of Belgrade and co-editor of Global Catastrophic Risks. Arguing against the motion is Lord Martin Rees, Astronomer Royal and former President of the Royal Society. He is the author of On the Future, whose updated paperback edition is due out in October, and The End of Astronauts, due out in March 2022. Milan Cirkovic: "There are many human achievements which, almost by definition, could never be realized if humanity remains bound to Earth." Lord Martin Rees: "It is a dangerous delusion to think that we could escape the Earth's problems by going to Mars." Sources: Engadget, Blue Origin, SpaceX, European Space Agency, World Government Summit and 60 Minutes Australia. The host of the Munk Debates is Rudyard Griffiths - @rudyardg. Tweet your comments about this episode to @munkdebate or comment on our Facebook page https://www.facebook.com/munkdebates/ To sign up for a weekly email reminder for this podcast, send an email to podcast@munkdebates.com. To support civil and substantive debate on the big questions of the day, consider becoming a Munk Member at https://munkdebates.com/membership Members receive access to our 10+ year library of great debates in HD video, a free Munk Debates book, newsletter and ticketing privileges at our live events. This podcast is a project of the Munk Debates, a Canadian charitable organization dedicated to fostering civil and substantive public dialogue - https://munkdebates.com/ The Munk Debates podcast is produced by Antica, Canada's largest private audio production company - https://www.anticaproductions.com/ Executive Producer: Stuart Coxe, CEO Antica Productions. Senior Producer: Jacob Lewis. Editor: Kieran Lynch. Associate Producer: Abhi Raheja

EARadio
Industrial alternative foods for global catastrophic risks | Juan García Martínez

EARadio

Play Episode Listen Later Jul 2, 2021 23:28


Juan presents the latest research on industrial food solutions for feeding everyone in the case of food-related global catastrophic risks. He focuses on sun-blocking global food catastrophes such as large asteroid impacts, supervolcanic eruptions, and nuclear winter. The solutions presented include single-cell protein (SCP) from natural gas or from hydrogen and CO2, sugar from lignocellulosic biomass, and synthetic margarine from petroleum. Juan García Martínez is a Research Assistant at the Alliance to Feed the Earth in Disasters (ALLFED). He obtained his master's degree in chemical engineering from the University of Twente and went on to join ALLFED as a research associate, having volunteered there prior to finishing his studies. He did research on carbon dioxide capture and utilization through his MSc thesis and an internship at the Energy Research Center of the Netherlands, and is eager to apply his energy and knowledge to new research on making humanity's food system more resilient. This talk was taken from EA Global Asia and Pacific 2020.

EARadio
Australians’ perceptions of global catastrophic risks | Emily Grundy

EARadio

Play Episode Listen Later Apr 9, 2021 24:32


Emily provides an introduction to the research collaboration READI – an organisation that conducts collaborative research to further the aims of the effective altruism movement. She outlines recent findings from the Survey of COVID-19 Responses to Understand Behaviour (SCRUB), which grew out of READI, regarding what the Australian public thinks about global catastrophic risks.

Making Sense with Sam Harris - Subscriber Content
Bonus Questions: Nick Bostrom

Making Sense with Sam Harris - Subscriber Content

Play Episode Listen Later Mar 19, 2019 9:41


Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller. Website: nickbostrom.com

Making Sense with Sam Harris - Subscriber Content
#151 - Will We Destroy the Future?

Making Sense with Sam Harris - Subscriber Content

Play Episode Listen Later Mar 18, 2019 92:45


Sam Harris speaks with Nick Bostrom about the problem of existential risk. They discuss public goods, moral illusions, the asymmetry between happiness and suffering, utilitarianism, “the vulnerable world hypothesis,” the history of nuclear deterrence, the possible need for “turnkey totalitarianism,” whether we’re living in a computer simulation, the Doomsday Argument, the implications of extraterrestrial life, and other topics. Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller. Website: nickbostrom.com Episodes that have been re-released as part of the Best of Making Sense series may have been edited for relevance since their original airing.

The Future And You
October 28, 2009 Episode

The Future And You

Play Episode Listen Later Oct 28, 2009 30:00


Eliezer Yudkowsky (co-founder and research fellow of the Singularity Institute for Artificial Intelligence) is today's featured guest. Topics: the Singularity and the creation of Friendly AI; his estimate of the probability of success in making a Friendly AI; and why achieving AI using evolutionary software might be monumentally dangerous. He also talks about human rationality, including: the percentage of humans today who can be considered rational; his own efforts to increase that number; how the listener can seek the path to greater rationality in his or her own thinking; the benefits of greater rationality; and the amount of success that can be expected in this pursuit. Hosted by Stephen Euin Cobb, this is the October 28, 2009 episode of The Future And You. [Running time: 30 minutes] (This interview was recorded on October 4, 2009 at the Singularity Summit in New York City.) Eliezer Yudkowsky is an artificial intelligence researcher concerned with the Singularity and an advocate of Friendly Artificial Intelligence. He is the author of the Singularity Institute for Artificial Intelligence publications Creating Friendly AI (2001) and Levels of Organization in General Intelligence (2002). His most recent academic contributions include two chapters in Oxford philosopher Nick Bostrom's edited volume Global Catastrophic Risks. Aside from research, he is also notable for his explanations of technical subjects in non-academic language, particularly on rationality, such as his article An Intuitive Explanation of Bayesian Reasoning. Along with Robin Hanson, he was one of the principal contributors to the blog Overcoming Bias, sponsored by the Future of Humanity Institute at Oxford University. In early 2009, he helped to found LessWrong.com, a community blog devoted to refining the art of human rationality.