
Latest episodes from Radio Bostrom

AI Creation and the Cosmic Host (2024)

Aug 8, 2024 · 1:21


By Nick Bostrom.

Abstract: There may well exist a normative structure, based on the preferences or concordats of a cosmic host, and which has high relevance to the development of AI. In particular, we may have both moral and prudential reason to create superintelligence that becomes a good cosmic citizen—that is, one that conforms to cosmic norms and contributes positively to the cosmopolis. An exclusive focus on promoting the welfare of the human species and other terrestrial beings, or an insistence that our own norms must at all cost prevail, may be objectionable and unwise. Such attitudes might be analogized to the selfishness of one who exclusively pursues their own personal interest, or the arrogance of one who acts as if their own convictions entitle them to run roughshod over social norms—though arguably they would be worse, given our present inferior status relative to the membership of the cosmic host. An attitude of humility may be more appropriate.

Read the full paper: https://nickbostrom.com/papers/ai-creation-and-the-cosmic-host.pdf
More episodes at: https://radiobostrom.com

Deep Utopia: Life and Meaning in a Solved World (2024)

Mar 18, 2024 · 1:37


Nick Bostrom's latest book, Deep Utopia: Life and Meaning in a Solved World, will be published on 27th March, 2024. It's available to pre-order now: https://nickbostrom.com/deep-utopia/

The publisher describes the book as follows:

A greyhound catching the mechanical lure—what would he actually do with it? Has he given this any thought?

Bostrom's previous book, Superintelligence: Paths, Dangers, Strategies, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong. But what if things go right? Suppose that we develop superintelligence safely, govern it well, and make good use of the cornucopian wealth and near-magical technological powers that this technology can unlock. If this transition to the machine intelligence era goes well, human labor becomes obsolete. We would thus enter a condition of “post-instrumentality”, in which our efforts are not needed for any practical purpose. Furthermore, at technological maturity, human nature becomes entirely malleable.

Here we confront a challenge that is not technological but philosophical and spiritual. In such a solved world, what is the point of human existence? What gives meaning to life? What do we do all day? Deep Utopia shines new light on these old questions, and gives us glimpses of a different kind of existence, which might be ours in the future.

The Unilateralist's Curse and the Case for a Principle of Conformity (2016)

Oct 4, 2022 · 41:35


By Nick Bostrom, Thomas Douglas & Anders Sandberg.

Abstract: In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal. We suggest that this phenomenon, which we call the unilateralist's curse, arises in many contexts, including some that are important for public policy. To lift the curse, we propose a principle of conformity, which would discourage unilateralist action. We consider three different models for how this principle could be implemented, and respond to an objection that could be raised against it.

Read the full paper: https://nickbostrom.com/papers/unilateralist.pdf
More episodes at: https://radiobostrom.com/

In Defense of Posthuman Dignity (2005)

Sep 28, 2022 · 35:13


By Nick Bostrom.

Abstract: Positions on the ethics of human enhancement technologies can be (crudely) characterized as ranging from transhumanism to bioconservatism. Transhumanists believe that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves, and that parents should normally have the right to choose enhancements for their children-to-be. Bioconservatives (whose ranks include such diverse writers as Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben) are generally opposed to the use of technology to modify human nature. A central idea in bioconservatism is that human enhancement technologies will undermine our human dignity. To forestall a slide down the slippery slope towards an ultimately debased ‘posthuman’ state, bioconservatives often argue for broad bans on otherwise promising human enhancements. This paper distinguishes two common fears about the posthuman and argues for the importance of a concept of dignity that is inclusive enough to also apply to many possible posthuman beings. Recognizing the possibility of posthuman dignity undercuts an important objection against human enhancement and removes a distortive double standard from our field of moral vision.

Read the full paper: https://nickbostrom.com/ethics/dignity
More episodes at: https://radiobostrom.com/

A Primer on the Doomsday Argument (1999)

Sep 27, 2022 · 12:30


By Nick Bostrom.

Abstract: Rarely does philosophy produce empirical predictions. The Doomsday argument is an important exception. From seemingly trivial premises it seeks to show that the risk that humankind will go extinct soon has been systematically underestimated. Nearly everybody's first reaction is that there must be something wrong with such an argument. Yet despite being subjected to intense scrutiny by a growing number of philosophers, no simple flaw in the argument has been identified.

Read the full paper: https://anthropic-principle.com/q=anthropic_principle/doomsday_argument/
More episodes at: https://radiobostrom.com/

Propositions Concerning Digital Minds and Society (2022)

Sep 23, 2022 · 71:44


By Nick Bostrom & Carl Shulman. Draft version 1.10.

AIs with moral status and political rights? We'll need a modus vivendi, and it's becoming urgent to figure out the parameters for that. This paper makes a load of specific claims that begin to stake out a position.

Read the full paper: https://nickbostrom.com/propositions.pdf
More episodes at: https://radiobostrom.com/

Base Camp for Mt. Ethics (2022)

Sep 17, 2022 · 58:20


By Nick Bostrom.

Metametaethics/preamble:

1. Many traditional debates in metaethics, such as between realism and antirealism, may not carve the possibility space of nature at its joints. Even if one perfectly understood ethics, it might be unclear which side would be the winner in these debates, or the adjudication might turn out to depend on seemingly minor and semi-arbitrary definitional stipulations that were made along the way.

2. There is not a sharp separation between morality, law, and custom. These may be broadly the same sort of thing, albeit with some potentially important differences in emphasis, including, but not limited to:

a. Custom vs. morality. Policed by distinct kinds of passion: externally, morality is enforced with anger and indignation; custom with disdain. Internally, morality is enforced by feelings of remorse; custom with embarrassment.

b. Law vs. morality. Law is typically codified and enforced by formal institutions; morality is typically not codified (although different authors may have various theories and claims), and is enforced decentrally by informal institutions and social sentiment.

i. If law were perfect and comprehensive, there would be little or no need for morality. This may become increasingly feasible with technological advances.

ii. (Are modern societies with well-functioning legal systems and stable high-capacity states less moralistic than societies that are less technologically advanced or more anarchic?)

iii. I'm aware of the descriptive/normative distinction that is commonly made. The approach taken here may seem to ignorantly blur it, because it does not start by taking this distinction for granted.

iv. Underlying both law and morality there is a deeper layer—abstract properties of arrangements that regulate conduct. Law and morality are roughly symmetric with respect to this layer.

Read the full paper: https://nickbostrom.com/papers/mountethics.pdf
More episodes at: https://radiobostrom.com/

The Transhumanist FAQ (2003)

Sep 13, 2022 · 182:22


By Nick Bostrom.

Abstract: Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase. We formally define it as follows:

(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.

(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies.

Transhumanism can be viewed as an extension of humanism, from which it is partially derived. Humanists believe that humans matter, that individuals matter. We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings. Transhumanists agree with this but also emphasize what we have the potential to become. Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as “human”.

Read the full paper: https://nickbostrom.com/views/transhumanist.pdf
More episodes at: https://radiobostrom.com/

The Future of Human Evolution (2004)

Sep 12, 2022 · 63:17


By Nick Bostrom.

Abstract: Evolutionary development is sometimes thought of as exhibiting an inexorable trend towards higher, more complex, and normatively worthwhile forms of life. This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being that we care about. We then consider how such catastrophic outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated policy to control human evolution by modifying the fitness function of future intelligent life forms.

Read the full paper: https://nickbostrom.com/fut/evolution
More episodes at: https://radiobostrom.com/

Predictions from Philosophy? (1997)

Sep 8, 2022 · 103:05


By Nick Bostrom.

Abstract: The purpose of this paper, boldly stated, is to propose a new type of philosophy, a philosophy whose aim is prediction. The pace of technological progress is increasing very rapidly: it looks as if we are witnessing exponential growth, the growth rate being proportional to the size already obtained, with scientific knowledge doubling every 10 to 20 years since the Second World War, and with computer processor speed doubling every 18 months or so. It is argued that this technological development makes urgent many empirical questions which a philosopher could be well suited to help answer. I try to cover a broad range of interesting problems and approaches, which means that I won't go at all deeply into any of them; I only try to say enough to show what some of the problems are, how one can begin to work with them, and why philosophy is relevant. My hope is that this will whet your appetite to deal with these questions, or at least increase general awareness that they are worthy tasks for first-class intellects, including ones which might belong to philosophers.

Read the full paper: https://nickbostrom.com/old/predict
More episodes at: https://radiobostrom.com/

[French Translation] Letter from Utopia (2008)

Sep 8, 2022 · 19:16


By Nick Bostrom. Translated by Jill Drouillard.

Abstract: The good life: just how good could it be? A vision of the future from the future.

Read the full paper: https://www.nickbostrom.com/translations/utopie.pdf
More episodes at: https://radiobostrom.com/

What is a Singleton? (2005)

Sep 7, 2022 · 13:49


By Nick Bostrom.

Abstract: This note introduces the concept of a "singleton" and suggests that this concept is useful for formulating and analyzing possible scenarios for the future of humanity.

Read the full paper: https://nickbostrom.com/fut/singleton
More episodes at: https://radiobostrom.com/

Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer? (2014)

Sep 6, 2022 · 41:16


By Carl Shulman and Nick Bostrom.

Abstract: Human capital is an important determinant of individual and aggregate economic outcomes, and a major input to scientific progress. It has been suggested that advances in genomics may open up new avenues to enhance human intellectual abilities genetically, complementing environmental interventions such as education and nutrition. One way to do this would be via embryo selection in the context of in vitro fertilization (IVF). In this article, we analyze the feasibility, timescale, and possible societal impacts of embryo selection for cognitive enhancement. We find that embryo selection, on its own, may have significant (but likely not drastic) impacts over the next 50 years, though large effects could accumulate over multiple generations. However, there is a complementary technology – stem cell-derived gametes – which has been making rapid progress and which could amplify the impact of embryo selection, enabling very large changes if successfully applied to humans.

Read the full paper: https://nickbostrom.com/papers/embryo.pdf
More episodes at: https://radiobostrom.com/

Letter from Utopia (2008)

Aug 29, 2022 · 19:46


By Nick Bostrom.

Abstract: The good life: just how good could it be? A vision of the future from the future.

Read the full paper: https://nickbostrom.com/utopia
More episodes at: https://radiobostrom.com/

Technological Revolutions: Ethics and Policy in the Dark (2006)

Aug 29, 2022 · 84:22


By Nick Bostrom.

Abstract: Technological revolutions are among the most important things that happen to humanity. Ethical assessment in the incipient stages of a potential technological revolution faces several difficulties, including the unpredictability of their long-term impacts, the problematic role of human agency in bringing them about, and the fact that technological revolutions rewrite not only the material conditions of our existence but also reshape culture and even – perhaps – human nature. This essay explores some of these difficulties and the challenges they pose for a rational assessment of the ethical and policy issues associated with anticipated technological revolutions.

Read the full paper: https://nickbostrom.com/revolutions.pdf
More episodes at: https://radiobostrom.com/

Human Enhancement Ethics: The State of the Debate (2008), Introduction Chapter

Aug 28, 2022 · 55:10


By Nick Bostrom and Julian Savulescu.

Background: Are we good enough? If not, how may we improve ourselves? Must we restrict ourselves to traditional methods like study and training? Or should we also use science to enhance some of our mental and physical capacities more directly?

Over the last decade, human enhancement has grown into a major topic of debate in applied ethics. Interest has been stimulated by advances in the biomedical sciences, advances which to many suggest that it will become increasingly feasible to use medicine and technology to reshape, manipulate, and enhance many aspects of human biology even in healthy individuals. To the extent that such interventions are on the horizon (or already available), there is an obvious practical dimension to these debates. This practical dimension is underscored by an outcrop of think tanks and activist organizations devoted to the biopolitics of enhancement.

Read the full paper: https://nickbostrom.com/ethics/human-enhancement-ethics.pdf
More episodes at: https://radiobostrom.com/

How Vulnerable is the World? (2021)

Aug 25, 2022 · 23:09


By Nick Bostrom and Matthew van der Merwe.

Abstract: Sooner or later a technology capable of wiping out human civilisation might be invented. How far would we go to stop it?

Read the full paper: https://aeon.co/essays/none-of-our-technologies-has-managed-to-destroy-humanity-yet

Links:
- The Vulnerable World Hypothesis (2019) (original academic paper)
- The Vulnerable World Hypothesis (2019) (narration by Radio Bostrom)

Notes:
This article is an adaptation of Bostrom's academic paper "The Vulnerable World Hypothesis" (2019). The article was first published in Aeon Magazine. The narration was provided by Curio. We are grateful to Aeon and Curio for granting us permission to re-use the audio. Curio are offering Radio Bostrom listeners a 25% discount on their annual subscription.

Crucial Considerations and Wise Philanthropy (2014)

Aug 25, 2022 · 35:05


By Nick Bostrom.

Abstract: Within a utilitarian context, one can perhaps try to explicate [crucial considerations] as follows: a crucial consideration is a consideration that radically changes the expected value of pursuing some high-level subgoal. The idea here is that you have some evaluation standard that is fixed, and you form some overall plan to achieve some high-level subgoal. This is your idea of how to maximize this evaluation standard. A crucial consideration, then, would be a consideration that radically changes the expected value of achieving this subgoal, and we will see some examples of this. Now if you stop limiting your view to some utilitarian context, then you might want to retreat to these earlier more informal formulations, because one of the things that could be questioned is utilitarianism itself. But for most of this talk we will be thinking about that component.

Read the full paper: https://www.effectivealtruism.org/articles/crucial-considerations-and-wise-philanthropy-nick-bostrom
More episodes at: https://radiobostrom.com/

Astronomical Waste: The Opportunity Cost of Delayed Technological Development (2003)

Aug 25, 2022 · 20:28


By Nick Bostrom.

Abstract: With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore an opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large. However, the lesson for utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur.

Read the full paper: https://nickbostrom.com/astronomical/waste
More episodes at: https://radiobostrom.com/

Are You Living In A Computer Simulation? (2003)

Aug 23, 2022 · 37:55


By Nick Bostrom.

Abstract: This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

Read the full paper: https://www.simulation-argument.com/simulation.pdf
More episodes at: https://radiobostrom.com/

The Ethics of Artificial Intelligence (2011)

Aug 22, 2022 · 59:39


By Nick Bostrom and Eliezer Yudkowsky.

Abstract: The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.

Read the full paper: https://nickbostrom.com/ethics/artificial-intelligence.pdf
More episodes at: https://radiobostrom.com/

Information Hazards: A Typology of Potential Harms from Knowledge (2011)

Aug 19, 2022 · 108:22


By Nick Bostrom.

Abstract: Information hazards are risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. Such hazards are often subtler than direct physical threats, and, as a consequence, are easily overlooked. They can, however, be important. This paper surveys the terrain and proposes a taxonomy.

Read the full paper: https://nickbostrom.com/information-hazards.pdf
More episodes at: https://radiobostrom.com/

The Evolutionary Optimality Challenge (2021)

Aug 17, 2022 · 70:10


By Nick Bostrom, Anders Sandberg, and Matthew van der Merwe.

This is an updated version of The Wisdom of Nature, first published in the book Human Enhancement (Oxford University Press, 2009).

Abstract: Human beings are a marvel of evolved complexity. When we try to enhance poorly-understood complex evolved systems, our interventions often fail or backfire. It can appear as if there is a “wisdom of nature” which we ignore at our peril. A recognition of this reality can manifest as a vaguely normative intuition, to the effect that it is “hubristic” to try to improve on nature, or that biomedical therapy is ok while enhancement is morally suspect. We suggest that one root of these moral intuitions may be fundamentally prudential rather than ethical. More importantly, we develop a practical heuristic, the “evolutionary optimality challenge”, for evaluating the plausibility that specific candidate biomedical interventions would be safe and effective. This heuristic recognizes the grain of truth contained in “nature knows best” attitudes while providing criteria for identifying the special cases where it may be feasible, with present or near-future technology, to enhance human nature.

Read the full paper: https://www.nickbostrom.com/evolutionary-optimality.pdf
More episodes at: https://radiobostrom.com/

Where Are They? Why I hope the search for extraterrestrial life finds nothing (2008)

Aug 13, 2022 · 32:59


By Nick Bostrom.

Abstract: When water was discovered on Mars, people got very excited. Where there is water, there may be life. Scientists are planning new missions to study the planet up close. NASA's next Mars rover is scheduled to arrive in 2010. In the decade following, a Mars Sample Return mission might be launched, which would use robotic systems to collect samples of Martian rocks, soils, and atmosphere, and return them to Earth. We could then analyze the sample to see if it contains any traces of life, whether extinct or still active. Such a discovery would be of tremendous scientific significance. What could be more fascinating than discovering life that had evolved entirely independently of life here on Earth? Many people would also find it heartening to learn that we are not entirely alone in this vast cold cosmos.

But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.

Conversely, if we discovered traces of some simple extinct life form—some bacteria, some algae—it would be bad news. If we found fossils of something more advanced, perhaps something looking like the remnants of a trilobite or even the skeleton of a small mammal, it would be very bad news. The more complex the life we found, the more depressing the news of its existence would be. Scientifically interesting, certainly, but a bad omen for the future of the human race.

Read the full paper: https://nickbostrom.com/extraterrestrial.pdf
More episodes at: https://radiobostrom.com

The Reversal Test: Eliminating Status Quo Bias in Applied Ethics (2006)

Aug 12, 2022 · 76:09


By Nick Bostrom and Toby Ord.

Abstract: In this article we argue that one prevalent cognitive bias, status quo bias, may be responsible for much of the opposition to human enhancement in general and to genetic cognitive enhancement in particular. Our strategy is as follows: first, we briefly review some of the psychological evidence for the pervasiveness of status quo bias in human decision making. This evidence provides some reason for suspecting that this bias may also be present in analyses of human enhancement ethics. We then propose two versions of a heuristic for reducing status quo bias. Applying this heuristic to consequentialist objections to genetic cognitive enhancements, we show that these objections are affected by status quo bias. When the bias is removed, the objections are revealed as extremely implausible. We conclude that the case for developing and using genetic cognitive enhancements is much stronger than commonly realized.

Read the full paper: https://nickbostrom.com/ethics/statusquo.pdf
More episodes at: https://radiobostrom.com/

Existential Risk Prevention as Global Priority (2012)

Aug 8, 2022 · 92:08


By Nick Bostrom.

Abstract: Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this paper, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability.

Read the full paper: https://existential-risk.org/concept
More episodes at: https://radiobostrom.com/

The Fable of The Dragon Tyrant (2005)

Aug 8, 2022 · 44:24


By Nick Bostrom.

Abstract: Recounts the Tale of a most vicious Dragon that ate thousands of people every day, and of the actions that the King, the People, and an assembly of Dragonologists took with respect thereto.

Read the full paper: https://nickbostrom.com/fable/dragon
More episodes at: https://radiobostrom.com/

Sharing the World with Digital Minds (2020)

Aug 8, 2022 · 59:54


By Carl Shulman & Nick Bostrom.

Abstract: The minds of biological creatures occupy a small corner of a much larger space of possible minds that could be created once we master the technology of artificial intelligence. Yet many of our moral intuitions and practices are based on assumptions about human nature that need not hold for digital minds. This points to the need for moral reflection as we approach the era of advanced machine intelligence. Here we focus on one set of issues, which arise from the prospect of digital minds with superhumanly strong claims to resources and influence. These could arise from the vast collective benefits that mass-produced digital minds could derive from relatively small amounts of resources. Alternatively, they could arise from individual digital minds with superhuman moral status or ability to benefit from resources. Such beings could contribute immense value to the world, and failing to respect their interests could produce a moral catastrophe, while a naive way of respecting them could be disastrous for humanity. A sensible approach requires reforms of our moral norms and institutions along with advance planning regarding what kinds of digital minds we bring into existence.

Read the full paper: https://nickbostrom.com/papers/digital-minds.pdf
More episodes at: https://radiobostrom.com/

The Vulnerable World Hypothesis (2019)

Aug 8, 2022 · 150:13


By Nick Bostrom.

Abstract: Scientific and technological progress might change people's capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the ‘semi-anarchic default condition’. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.

Read the full paper: https://nickbostrom.com/papers/vulnerable.pdf
More episodes at: https://radiobostrom.com/
