Podcast appearances and mentions of Anders Sandberg

  • 95 podcasts
  • 228 episodes
  • 47m average duration
  • 1 new episode per month
  • Latest episode: May 9, 2025
Anders Sandberg

POPULARITY (trend chart, 2017–2024)


Best podcasts about Anders Sandberg

Latest podcast episodes about Anders Sandberg

London Futurists
The case for a conditional AI safety treaty, with Otto Barten

London Futurists

May 9, 2025 · 38:12


How can a binding international treaty be agreed and put into practice, when many parties are strongly tempted to break its rules for commercial or military advantage, and when cheating may be hard to detect? That's the dilemma we examine in this episode, concerning possible treaties to govern the development and deployment of advanced AI.

Our guest is Otto Barten, Director of the Existential Risk Observatory, which is based in the Netherlands but operates internationally. In November last year, Time magazine published an article by Otto advocating what his organisation calls a Conditional AI Safety Treaty. In March this year, these ideas were expanded into a 34-page preprint which we discuss today, "International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty".

Before co-founding the Existential Risk Observatory in 2021, Otto had roles as a sustainable energy engineer, data scientist, and entrepreneur. He has a BSc in Theoretical Physics from the University of Groningen and an MSc in Sustainable Energy Technology from Delft University of Technology.

Selected follow-ups:
  • Existential Risk Observatory
  • There Is a Solution to AI's Existential Risk Problem - Time
  • International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty - Otto Barten and colleagues
  • The Precipice: Existential Risk and the Future of Humanity - book by Toby Ord
  • Grand futures and existential risk - lecture by Anders Sandberg in London attended by Otto
  • PauseAI
  • StopAI
  • Responsible Scaling Policies - METR
  • Meta warns of 'worse' experience for European users - BBC News
  • Accidental Nuclear War: a Timeline of Close Calls - FLI
  • The Vulnerable World Hypothesis - Nick Bostrom
  • Semiconductor Manufacturing Optics - Zeiss
  • California Institute for Machine Consciousness
  • Tipping point for large-scale social change? Just 25 percent - Penn Today

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Le monde de demain - The Flares [PODCASTS]
Humain, Demain #56 - Avenir Lointain, Superintelligence et Économie Post-Rareté avec Anders Sandberg

Le monde de demain - The Flares [PODCASTS]

Apr 26, 2025 · 68:02


French version available! Auto-dubbed from YouTube with AI (quality may vary). To enable or disable the French-dubbed track of this English-language episode, click the ⚙️ icon at the bottom of the video, then select your preferred audio track. French subtitles are also available in the settings. ⚠️ Discover EXCLUSIVE content (not on the channel) ⚠️ ⇒ https://the-flares.com/y/bonus/ ⬇️⬇️⬇️ Additional information: sources, references, links... ⬇️⬇️⬇️ Interested in the content? Subscribe and click the

London Futurists
Human extinction: thinking the unthinkable, with Sean ÓhÉigeartaigh

London Futurists

Apr 23, 2025 · 42:27


Our subject in this episode may seem grim – the potential extinction of the human species, either from a natural disaster, like a supervolcano or an asteroid, or from our own activities, such as nuclear weapons, greenhouse gas emissions, engineered biopathogens, misaligned artificial intelligence, or high-energy physics experiments causing a cataclysmic rupture in space and time.

These scenarios aren't pleasant to contemplate, but there's a school of thought that urges us to take them seriously – to think about the unthinkable, in the phrase coined in 1962 by pioneering futurist Herman Kahn. Over the last couple of decades, few people have been thinking about the unthinkable more carefully and systematically than our guest today, Sean ÓhÉigeartaigh. Sean is the author of a recent summary article from Cambridge University Press that we discuss, "Extinction of the human species: What could cause it and how likely is it to occur?"

Sean is presently based in Cambridge, where he is a Programme Director at the Leverhulme Centre for the Future of Intelligence. Previously he was founding Executive Director of the Centre for the Study of Existential Risk, and before that he managed research activities at the Future of Humanity Institute in Oxford.

Selected follow-ups:
  • Seán Ó hÉigeartaigh - Leverhulme Centre profile
  • Extinction of the human species - by Sean ÓhÉigeartaigh
  • Herman Kahn - Wikipedia
  • Moral.me - by Conscium
  • Classifying global catastrophic risks - by Shahar Avin et al
  • Defence in Depth Against Human Extinction - by Anders Sandberg et al
  • The Precipice - book by Toby Ord
  • Measuring AI Ability to Complete Long Tasks - by METR
  • Cold Takes - blog by Holden Karnofsky
  • What Comes After the Paris AI Summit? - article by Sean
  • ARC-AGI - by François Chollet
  • Henry Shevlin - Leverhulme Centre profile
  • Eleos (includes Rosie Campbell and Robert Long)
  • NeurIPS talk by David Chalmers
  • Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
  • The Unilateralist's Curse - by Nick Bostrom and Anders Sandberg

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

LEVITY
#19 What are the chances humanity will survive this century? | Anders Sandberg

LEVITY

Mar 11, 2025 · 129:50


We have another amazing guest for this episode: Anders Sandberg is a visionary philosopher, futurist, and transhumanist thinker whose work pushes the boundaries of human potential and the future of intelligence. A senior research fellow at Oxford University's Future of Humanity Institute until its closure in 2024, Sandberg explores everything from cognitive enhancement and artificial intelligence to existential risks and space colonization. With a background in computational neuroscience, he bridges science and philosophy to tackle some of the most profound questions of our time: How can we expand our cognitive capacities? What are the ethical implications of radical life extension? Could we one day transcend biological limitations entirely? Known for his sharp intellect, playful curiosity, and fearless speculation, Sandberg challenges conventional wisdom, inviting us to imagine – and shape – a future where humanity thrives beyond its current constraints.

00:00 Introduction
04:18 Exercise & David Sinclair
06:10 Will we survive the century?
18:18 Who can we trust? Knowledge and humility
23:17 Nuclear armageddon
39:51 Technology as a double-edged sword
44:30 Sandberg origin story
56:54 Computational neuroscience
01:00:30 Personal identity and neural simulation
01:05:24 Personal identity and reasons to want to continue living
01:09:39 The psychology behind different philosophical intuitions and judgments
01:17:48 Is death bad for Anders Sandberg?
01:25:00 Altruism and individual rights
01:31:29 Elon Musk says we must die for progress
01:35:10 Artificial Intelligence
01:55:08 AI civilization
02:02:07 Cryonics
02:04:00 Book recommendations

Hosted on Acast. See acast.com/privacy for more information.

Stranded Technologies Podcast
Ep. 86: Anders Sandberg on Meta-Innovation, Governance Futurism and Approaches to Existential Risk

Stranded Technologies Podcast

Jan 15, 2025 · 74:17


Our guest is Anders Sandberg. Anders is a Swedish researcher, futurist, and transhumanist. He holds a PhD in computational neuroscience from Stockholm University, and is a former senior research fellow at the Future of Humanity Institute at the University of Oxford. This conversation is about the governance of innovation, and the innovation of governance.

Explore Infinita City:
  • Website: www.infinita.city
  • X: @InfinitaCity
  • The Infinita City Times
  • Join Events

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.strandedtechnologies.com

What is The Future for Cities?
282I_Keygan Huckleberry, Emergency Management Officer in Christchurch

What is The Future for Cities?

Dec 18, 2024 · 74:36


"Some people are aware of the hazards, but unaware of the consequences." Are you interested in disaster resilience planning? What do you think about 15-minute cities as a tool for disaster resilience? How can we create safe, informed and resilient cities? Interview with Keygan Huckleberry, Emergency Management Officer in Christchurch. We talk about his vision for the future of cities, disaster resilience, urban planning with evacuation in mind, safe cities, agreement on urban challenges, and more.

Keygan Huckleberry is a local Planning Coordinator for Civil Defence Emergency Management in Christchurch. His day-to-day role is predominantly focused on developing and writing plans and strategies to address the vast array of hazards that Christchurch and Banks Peninsula face. In an activated Emergency Operations Centre, Keygan will either be the Response Manager – managing the response and the response staff, ensuring each team understands their role, response objectives, and actions required under the Action Plan, and providing leadership and guidance to other functions in the response – or he will lead the planning team in creating Action Plans to address novel problems and support managing the consequences facing the citizens of Christchurch and Banks Peninsula.

Find out more about Keygan through these links:
  • Keygan Huckleberry on LinkedIn
  • getready.govt.nz website with preparation templates

Connecting episodes you might be interested in:
  • No.213R - Defence in depth against human extinction
  • No.214 - Interview with Anders Sandberg about consecutive disasters
  • No.260 - Interview with Haydn Read about risk and resilience
  • No.280 - Interview with Hudson Worsley about sustainability vs resilience
  • No.281R - Misfortunes never come singly. A holistic approach to urban resilience and sustainability challenges

What was the most interesting part for you? What questions arose for you? Let me know on Twitter @WTF4Cities or on the wtf4cities.com website, where the shownotes are also available. I hope this was an interesting episode for you, and thanks for tuning in. Music by Lesfm from Pixabay

What is The Future for Cities?
280I_Hudson Worsley, co-founder and director of Presync

What is The Future for Cities?

Dec 11, 2024 · 43:43


"At the theoretical level, the discipline of resilience is very human-centric, and I think the discipline of sustainability is more nature-centric." Are you interested in the difference between sustainability and resilience professionals? What will you tell your grandkids when they ask what you did about climate change? How can we ensure ecosystem services? Interview with Hudson Worsley, co-founder and director of Presync. We talk about his vision for the future of cities, ecosystem services, nature as infrastructure, answering to the next generation, and more.

Hudson Worsley is a co-founder and director of Presync, a sustainability consultancy and certified B Corp, and the Chair of MECLA, the Materials & Embodied Carbon Leaders' Alliance. Hudson works with organisations on their transition to the zero-carbon economy and adaptation to the changing climate. He supports organisations by identifying opportunities for energy efficiency and the adoption of renewables, both on-site and through the grid via renewable power purchase agreements. Hudson's consultancy, Presync, has many years of relevant experience behind its integrated approach to climate change – both adaptation to changes that are now unavoidable, and mitigation to prevent further, unimaginable changes. Presync is small and nimble, with deep professional experience in energy, innovation, property development, sustainability, emission reduction and climate change.

Find out more about Hudson through these links:
  • Hudson Worsley on LinkedIn
  • Presync website
  • Presync on LinkedIn
  • MECLA website
  • MECLA on LinkedIn
  • MECLA on YouTube
  • Hudson Worsley on the People Planet Profit podcast
  • Hudson Worsley at the Decarbonising the Building Industry forum

Connecting episodes you might be interested in:
  • No.208 - Interview with Professor Rudolf Giffinger about the sustainability principles
  • No.214 - Interview with Anders Sandberg about risk multiplication
  • No.216 - Interview with Sara Stace about the public living room
  • No.220 - Interview with Simon Burt about the importance of bees
  • No.279R - How ecosystems services drive urban growth: Integrating nature-based solutions

What was the most interesting part for you? What questions arose for you? Let me know on Twitter @WTF4Cities or on the wtf4cities.com website, where the shownotes are also available. I hope this was an interesting episode for you, and thanks for tuning in. Music by Lesfm from Pixabay

Har vi åkt till Mars än?
58. Har vi uppfunnit kryosömn än?

Har vi åkt till Mars än?

Oct 12, 2024 · 39:52 · transcription available


In this exciting episode of "Har vi åkt till Mars än?" ("Have We Gone to Mars Yet?"), we dive into the future of space travel together with our guest, Anders Sandberg, futures researcher and transhumanist. With him we explore fascinating concepts for handling the challenges posed by long journeys within and beyond our solar system. We work through the technical and biological challenges involved in space travel, including hibernation, cryosleep and cryonics. "Freezing down the body is not the same thing as going into hibernation," Anders explains, opening up a deeper understanding of the various methods that could help us overcome the effects of time on the human body during these extraordinary journeys. We also get a glimpse of how animals hibernate and what we can learn from their adaptations for future spaceflight.

We also talk with Edwin Mulder, project leader at the German space agency DLR, where he works on bed-rest studies in which participants lie in bed for months at a stretch so researchers can examine how the body is affected by long periods of rest.

So lean back and prepare to be fascinated by ideas about space research, space strategy and the space stations of the future. "Har vi åkt till Mars än?" is your guide to understanding the complex questions surrounding space travel and what may await us in the Milky Way and beyond. Don't miss this episode, where we explore space, the future and humanity's place in the universe!

"Har vi åkt till Mars än?" is made on Beppo by Rundfunk Media in collaboration with Saab. Hosted by Ausha. See ausha.co/privacy-policy for more information.

The Foresight Institute Podcast
Anders Sandberg | Whole Brain Emulation Ethics

The Foresight Institute Podcast

Oct 11, 2024 · 12:07


Anders Sandberg's research centres on estimating the capabilities and underlying science of future technologies, methods of reasoning about long-term futures, existential and global catastrophic risk, the search for extraterrestrial intelligence (SETI), as well as societal and ethical issues surrounding human enhancement. Topics of particular interest include management of systemic risk, reasoning under uncertainty, enhancement of cognition, neuroethics and public policy. He has worked on these within the EU project ENHANCE, where he was also responsible for public outreach and online presence, and the ERC UnPredict project. Besides scientific publications in neuroscience, ethics and future studies, he has also participated internationally in the public debate about human enhancement, existential risk and SETI.

About Foresight Institute: Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.

Allison Duettmann, the President and CEO of Foresight Institute, directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future".

Get involved with Foresight: apply to our virtual technical seminars, join our in-person events and workshops, or donate to support our work – if you enjoy what we do, please consider it, as we are entirely funded by your donations! Follow us: Twitter | Facebook | LinkedIn. Note: explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine. Hosted on Acast. See acast.com/privacy for more information.

Vetenskapsradion Hälsa
Att ladda upp hjärnan med AI - vissa menar att det kan innebära evigt liv(r)

Vetenskapsradion Hälsa

Oct 10, 2024 · 19:26


This year's Nobel Prize in Physics went to the discoveries behind machine learning and AI. Some researchers believe that eternal life could be possible with the help of AI, by uploading human brains into computers. Listen to all episodes in Sveriges Radio Play.

The dream of eternal life has existed among humans for thousands of years. With new technology, some researchers believe eternal life could become possible. It would happen by scanning and uploading all the connections of a human brain to a computer.

A supercomputer with brain-level capacity: So far there is one supercomputer with a capacity of 1.7 billion billion calculations per second, which is a little more than a human brain manages, according to futures researcher Anders Sandberg at Oxford University in England. Although almost two years have passed since the programme first aired, supercomputer capacity remains at this level, according to Anders Sandberg. But that computing power requires vastly more energy than our human brains, which can perhaps be compared to the few watts of a lamp.

New perspectives on eternal life: In a winding journey in Vetenskapsradion Hälsa, a researcher says things she has never said before. That happened to Ylva Söderfeldt, historian of ideas in medicine and director of the Centre for Medical Humanities at Uppsala University. New perspectives are also offered by brain researcher Martin Ingvar, Karolinska Institutet in Solna; biologist Tom Kirkwood, a British ageing researcher at Newcastle University in the UK; and Helga Nowotny, former head of the European Research Council and professor emerita of science studies at ETH Zurich in Switzerland.

The programme was first broadcast 29 Dec 2022. Presenter and producer: Annika Östman, annika.ostman@sverigesradio.se

What is The Future for Cities?
250P_Space cities with Dr Anders Sandberg, Xavier de Kestelier and Thomas Gooch

What is The Future for Cities?

Aug 28, 2024 · 81:32


Are you interested in space cities? What do you think about cities versus communities? How can we create great space communities? The panellists, Dr Anders Sandberg, Xavier de Kestelier and Thomas Gooch, talk about their vision for the future of cities and space cities, the ideal size of cities, sports in space, and more.

Anders Sandberg has a background in computational neuroscience and mathematical modelling, but has for more than a decade worked in the philosophy faculty of the University of Oxford, doing research on topics such as the ethics and social impact of human enhancement, emerging technology, global catastrophic and existential risks, applied epistemology, and analysing the far future. His research is extremely interdisciplinary: it often combines hard science with philosophy, uses quantitative methods to understand qualitative issues, and typically deals with under-researched topics. Anders enjoys academic outreach and policy impact. Find out more about Anders through these links:
  • Anders Sandberg on LinkedIn
  • @anderssandberg as Anders Sandberg on X
  • Anders Sandberg at the Future of Humanity Institute
  • Anders Sandberg at University of Oxford
  • Anders Sandberg website
  • Anders Sandberg on Google Scholar
  • Anders Sandberg on Wikipedia
  • Anders Sandberg at The Conversation

Xavier De Kestelier holds a BArch and MSc in Architectural Engineering from the University of Ghent and an MSc in Urban Design from The Bartlett School of Architecture. He co-headed the Specialist Modelling Group at Foster+Partners, focusing on computational design and digital fabrication, and led the implementation of rapid prototyping technology. Xavier worked with NASA and the ESA on 3D-printed Moon and Mars habitats. Currently, he is the Global Head of Design and Innovation at Hassell Studio, overseeing design and innovation worldwide. He also directs Smartgeometry, a nonprofit organizing international digital design and fabrication workshops. Xavier has taught at the University of Ghent, Syracuse University, and The Bartlett School of Architecture, and is a member of the RIBA. Find out more about Xavier through these links:
  • Xavier De Kestelier on LinkedIn
  • @xdekeste as Xavier De Kestelier on Instagram
  • Xavier De Kestelier at Hassell Studio
  • Xavier De Kestelier at Space Architect
  • Life in Design: Xavier De Kestelier - Architect (YouTube video)
  • Adventures of an interplanetary architect - Xavier De Kestelier (TEDx Talk)

With a background in landscape architecture and practice across multiple scales, Thomas Gooch is the Founder of Office of Planetary Observations (OPO), a start-up providing nature data software, powered by AI, for built environment professionals. Innovating in the space industry has also led him to contribute to writing a 'Declaration of Rights of the Moon', and to OPO building out point cloud analysis technology for 'sensing' the Moon. You can find out more about Thomas through these links:
  • Thomas Gooch on LinkedIn
  • @MrThomasGooch as Thomas Gooch on X
  • Office of Planetary Observations website
  • Office of Planetary Observations on LinkedIn
  • @OPObservations as Office of Planetary Observations on X
  • @officeofplanetaryobservations as Office of Planetary Observations on Instagram
  • Lunar musings by Thomas Gooch
  • Revolutionising Urban Greening with Office of Planetary Observations

Connecting episodes you might be interested in:
  • No.214 - Interview with Anders Sandberg
  • No.233R - Platinum group metals extraction from asteroids vs Earth
  • No.234 - Interview with Tenzin Crouch about space robots
  • No.249R - Space colonization: A study of supply and demand

What was the most interesting part for you? What questions arose for you? Let me know on Twitter @WTF4Cities or on the wtf4cities.com website, where the shownotes are also available. I hope this was an interesting episode for you, and thanks for tuning in. Music by Lesfm from Pixabay

What is The Future for Cities?
249R_Space colonization: A study of supply and demand (research summary)

What is The Future for Cities?

Aug 26, 2024 · 9:43


Are you interested in space colonisation? Summary of the article titled "Space colonization: A study of supply and demand" from 2011 by Dr Dana Andrews, Gordon R. Woodcock, and Brian Bloudek, presented at the 62nd International Astronautical Congress. This is a great preparation for our next panel conversation with Dr Anders Sandberg, Xavier de Kestelier, and Thomas Gooch in episode 250, talking about space cities and their different aspects. Since we are investigating the future of cities, I thought it would be interesting to see how space colonization, and thus space cities, can be approached from the supply and demand perspective. This article looks at the fundamental economics of people working (and playing) in space, and shows scenarios that should result in successful colonies on the Moon.

As the most important things, I would like to highlight 3 aspects:
  • The development of lunar colonies is a natural progression, beginning with tele-operated mining and evolving into human presence as the need for maintenance and oversight grows.
  • Lunar mining offers a solution to Earth's dwindling metal resources, with the Moon's vast reserves becoming economically viable as terrestrial supplies run low.
  • Achieving affordable space access through reusable launch systems and infrastructure like Tether Upper Stages (TUS) and Space Operations Centers (SOC) is crucial for successful lunar operations.

You can find the article through this link. Abstract: "This paper steps back and looks at the fundamental economics of people working (and playing) in space, and shows scenarios that should result in successful colonies on the moon. The basic premise is the ever-increasing cost of industrial metals necessary to generate renewable energy for a growing world population, and the relative abundance of those same metals on the near side of the moon. There is a crossover point, relatively soon, where it is cheaper and more environmentally friendly to mine the moon instead of the increasingly poor ores remaining on earth. At that point government and industry can form a partnership, much like the Railroad Act of 1862, to incentivise construction of the transportation infrastructure and lunar mining equipment. The economics say the initial mining equipment will be tele-operated from earth, but over time the requirement for human maintenance and repair seems inescapable. We foresee a government presence on the moon almost from the start of the prospector phase, to enhance safety and ensure law and order, and those initial bases will eventually grow into towns and colonies."

Connecting episodes you might be interested in:
  • No.214 - Interview with Anders Sandberg about space colonisation
  • No.233R - Platinum group metals extraction from asteroids vs Earth
  • No.234 - Interview with Tenzin Crouch about space robotics

You can find the transcript through this link. What was the most interesting part for you? What questions arose for you? Let me know on Twitter @WTF4Cities or on the wtf4cities.com website, where the shownotes are also available. I hope this was an interesting episode for you, and thanks for tuning in. Music by Lesfm from Pixabay

What is The Future for Cities?
250P_Trailer_Space cities with Dr Anders Sandberg, Xavier de Kestelier and Thomas Gooch

What is The Future for Cities?

Aug 24, 2024 · 2:13


Are you interested in space cities? What do you think about cities versus communities? How can we create great space communities? Trailer for the panel discussion in episode 250 with panellists Dr Anders Sandberg, Xavier de Kestelier and Thomas Gooch, who talk about their vision for the future of cities and space cities, the ideal size of cities, sports in space, and more. Find out more in the episode. Music by Lesfm from Pixabay

Clearer Thinking with Spencer Greenberg
Physical limits and the long-term future (with Anders Sandberg)

Clearer Thinking with Spencer Greenberg

Aug 14, 2024 · 104:04


Read the full transcript here. How much energy is needed for GDP growth? Would our civilization have developed at the same rate without fossil fuels? Could we potentially do the same things we're currently doing but with significantly less energy? How different would the world look if we'd developed nuclear energy much earlier? Why can't anything go faster than light? Will the heat death of the universe really be "the end" for everything? How can difficult concepts be communicated in simple ways that nevertheless avoid being misleading or confusing? Is energy conservation an unbreakable law? How likely is it that advanced alien civilizations exist? What are S-risks? Can global civilizations be virtuous? What is panspermia? How can we make better backups of our knowledge and culture?

Anders Sandberg is a researcher at the Institute for Futures Studies in Sweden. He was formerly a senior research fellow at the Future of Humanity Institute at the University of Oxford. His research deals with emerging technologies, the ethics of human enhancement, global and existential risks, and very long-range futures. Follow him on Twitter / X at @anderssandberg, or find him via his various links here.

Staff:
  • Spencer Greenberg — Host / Director
  • Josh Castle — Producer
  • Ryan Kessler — Audio Engineer
  • Uri Bram — Factotum
  • WeAmplify — Transcriptionists

Music: Broke for Free, Josh Woodward, Lee Rosevere, Quiet Music for Tiny Robots, wowamusic, zapsplat.com

Affiliates: Clearer Thinking, GuidedTrack, Mind Ease, Positly, UpLift

[Read more]

Spectator Radio
The Edition: is Donald Trump now unstoppable?

Spectator Radio

Jul 18, 2024 · 37:02


This week: bulletproof Trump. The failed assassination attempt on Donald Trump means that his supporters, more than ever, view him as America's Chosen One. Joe Biden's candidacy has been falling apart since his disastrous performance in the first presidential debate last month. Trump is now ahead in the polls in all the battleground states. The whispers in Washington are that the Democrats are already giving up on stopping a second Trump term – and eyeing up the presidential election of 2028 instead. Freddy Gray, deputy editor at The Spectator, and Amber Duke, Washington editor at Spectator World, join the podcast to discuss. (02:45)

Next: meeting the mega MAGA fans. The Spectator's political correspondent James Heale reports from Milwaukee, Wisconsin, where the Republican National Convention is under way. 'Brash, flash and full of flair,' reports James, as he meets Donald Trump supporters who are, he says, wearing their MAGA politics with pride. Border control is a common complaint, while other Trumpists hope his near-death experience will see him embrace his faith. James has kindly shared with us a couple of the interviews that informed his piece in the magazine. (14:43)

Then: Will and Lara take us through some of their favourite pieces in the magazine this week, including Sir David Hempleman-Adams' notebook and Gus Carter's scoop on Reform's 'Wimpy' voters.

And finally: how techno-optimism became fashionable. Max Jeffery writes in the magazine this week about the 'New Solutions', a trio of new ideologies that rich, tech-savvy 'geeks' – as he calls them – have adopted in order to make the world a better place. These are: techno-optimism, effective altruism and effective accelerationism. He joined us on the podcast to discuss, alongside Anders Sandberg, effective altruist and senior research fellow at the University of Oxford. (24:49)

Hosted by William Moore and Lara Prendergast. Produced by Oscar Edmondson and Patrick Gibbons.

The Edition
Is Donald Trump now unstoppable?

The Edition

Jul 18, 2024 · 37:02


This week: bulletproof Trump. The failed assassination attempt on Donald Trump means that his supporters, more than ever, view him as America's Chosen One. Joe Biden's candidacy has been falling apart since his disastrous performance in the first presidential debate last month. Trump is now ahead in the polls in all the battleground states. The whispers in Washington are that the Democrats are already giving up on stopping a second Trump term – and eyeing up the presidential election of 2028 instead. Freddy Gray, deputy editor at The Spectator, and Amber Duke, Washington editor at Spectator World, join the podcast to discuss. (02:45)

Next: meeting the mega MAGA fans. The Spectator's political correspondent James Heale reports from Milwaukee, Wisconsin, where the Republican National Convention is under way. 'Brash, flash and full of flair,' reports James, as he meets Donald Trump supporters who are, he says, wearing their MAGA politics with pride. Border control is a common complaint, while other Trumpists hope his near-death experience will see him embrace his faith. James has kindly shared with us a couple of the interviews that informed his piece in the magazine. (14:43)

Then: Will and Lara take us through some of their favourite pieces in the magazine this week, including Sir David Hempleman-Adams' notebook and Gus Carter's scoop on Reform's 'Wimpy' voters.

And finally: how techno-optimism became fashionable. Max Jeffery writes in the magazine this week about the 'New Solutions', a trio of new ideologies that rich, tech-savvy 'geeks' – as he calls them – have adopted in order to make the world a better place. These are: techno-optimism, effective altruism and effective accelerationism. He joined us on the podcast to discuss, alongside Anders Sandberg, effective altruist and senior research fellow at the University of Oxford. (24:49)

Hosted by William Moore and Lara Prendergast. Produced by Oscar Edmondson and Patrick Gibbons.

Trans Resister Radio
No More Business As Usual, AoT#421

Trans Resister Radio

Play Episode Listen Later May 6, 2024 56:20


An unexpected hero arises to take on our old enemies, the Neocons. Not the hero we expected, but most likely, the hero we deserve.  Topics include: Future of Humanity Institute closing, Oxford, transhumanism, Nick Bostrom, Anders Sandberg, WTA, h+, eugenics, Silicon Valley billionaires, Simulation Hypothesis, racist emails, dysgenics, artificial intelligence, progressive version of transhumanism, fringe ideologies and groups, IEET, Martine Rothblatt, EA, Longtermism, national economic systems, technological development, Neoliberals, Neocons, establishment in crisis, Erik Prince, Blackwater, Xe, Bush Administration, War on Terror, MIC, Indo-Pacific military theater, pivot away from Middle East, defense spending, Reagan, Cold War, post communist Russia, focus on private sector to save governmental failure, Eric Schmidt, national security shift, Boeing, whistleblowers' deaths, basic corruption, focus on profits over all else, financialization, wealth gap increasing, money isn't real, lack of economic philosophy, no accountability, dismantling of legitimate protest, Israel, student protests, banning campus protests, 2024 presidential election, AGI, major governments want to steer their own new world order, space vs the desert

The Ochelli Effect
The Age of Transitions and Uncle 5-3-2024

The Ochelli Effect

Play Episode Listen Later May 6, 2024 115:28


Neocon Future Stratego BBQ
The Age of Transitions and Uncle 5-3-2024
AOT #421
An unexpected hero arises to take on our old enemies, the Neocons. Not the hero we expected, but most likely, the hero we deserve. Topics include: Future of Humanity Institute closing, Oxford, transhumanism, Nick Bostrom, Anders Sandberg, WTA, h+, eugenics, Silicon Valley billionaires, Simulation Hypothesis, racist emails, dysgenics, artificial intelligence, progressive version of transhumanism, fringe ideologies and groups, IEET, Martine Rothblatt, EA, Longtermism, national economic systems, technological development, Neoliberals, Neocons, establishment in crisis, Erik Prince, Blackwater, Xe, Bush Administration, War on Terror, MIC, Indo-Pacific military theater, pivot away from Middle East, defense spending, Reagan, Cold War, post communist Russia, focus on private sector to save governmental failure, Eric Schmidt, national security shift, Boeing, whistleblowers' deaths, basic corruption, focus on profits over all else, financialization, wealth gap increasing, money isn't real, lack of economic philosophy, no accountability, dismantling of legitimate protest, Israel, student protests, banning campus protests, 2024 presidential election, AGI, major governments want to steer their own new world order, space vs the desert
UTP #331
Topics include: Spirit of Texas BBQ restaurant, Inland Empire, livestream videos, new iPad, Stratego, Minesweeper, cowboy hats, flea market products, apps, YouTuber neighbors, podcasting, favorite things, Jabber Jaw, Groffdale Machine Co., Amish scooters, exercise, cycling, dynamo hub, bike commuting, podcast studio setup, phone call line, Sam Smith vs Petty and Lynne, squirrels, fruit trees, dragon fruit, citrus fruit flies
FRANZ MAIN HUB: https://theageoftransitions.com/
PATREON: https://www.patreon.com/aaronfranz
UNCLE: https://unclethepodcast.com/ OR https://theageoftransitions.com/category/uncle-the-podcast/
FRANZ and UNCLE Merch: https://theageoftransitions.com/category/support-the-podcasts/
KEEP OCHELLI GOING. You are the EFFECT if you support OCHELLI: https://ochelli.com/donate/
Ochelli Link Tree: https://linktr.ee/chuckochelli

The Nonlinear Library
LW - Express interest in an "FHI of the West" by habryka

The Nonlinear Library

Play Episode Listen Later Apr 18, 2024 5:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Express interest in an "FHI of the West", published by habryka on April 18, 2024 on LessWrong.
TLDR: I am investigating whether to found a spiritual successor to FHI, housed under Lightcone Infrastructure, providing a rich cultural environment and financial support to researchers and entrepreneurs in the intellectual tradition of the Future of Humanity Institute. Fill out this form or comment below to express interest in being involved either as a researcher, entrepreneurial founder-type, or funder.
The Future of Humanity Institute is dead: I knew that this was going to happen in some form or another for a year or two, having heard through the grapevine and private conversations of FHI's university-imposed hiring freeze and fundraising block, and so I have been thinking about how best to fill the hole in the world that FHI left behind.
I think FHI was one of the best intellectual institutions in history. Many of the most important concepts[1] in my intellectual vocabulary were developed and popularized under its roof, and many crucial considerations that form the bedrock of my current life plans were discovered and explained there (including the concept of crucial considerations itself). With the death of FHI (as well as MIRI moving away from research towards advocacy), there no longer exists a place for broadly-scoped research on the most crucial considerations for humanity's future. The closest place I can think of that currently houses that kind of work is the Open Philanthropy worldview investigation team, which houses e.g. Joe Carlsmith, but my sense is Open Philanthropy is really not the best vehicle for that kind of work.
While many of the ideas that FHI was working on have found traction in other places in the world (like right here on LessWrong), I do think that with the death of FHI, there no longer exists any place where researchers who want to think about the future of humanity in an open-ended way can work with other people in a high-bandwidth context, or get operational support for doing so. That seems bad. So I am thinking about fixing it.
Anders Sandberg, in his oral history of FHI, wrote the following as his best guess of what made FHI work:
What would it take to replicate FHI, and would it be a good idea? Here are some considerations for why it became what it was:
Concrete object-level intellectual activity in core areas and finding and enabling top people were always the focus. Structure, process, plans, and hierarchy were given minimal weight (which sometimes backfired - flexible structure is better than little structure, but as organization size increases more structure is needed).
Tolerance for eccentrics. Creating a protective bubble to shield them from larger University bureaucracy as much as possible (but do not ignore institutional politics!).
Short-term renewable contracts. [...] Maybe about 30% of people given a job at FHI were offered to have their contracts extended after their initial contract ran out. A side-effect was to filter for individuals who truly loved the intellectual work we were doing, as opposed to careerists.
Valued: insights, good ideas, intellectual honesty, focusing on what's important, interest in other disciplines, having interesting perspectives and thoughts to contribute on a range of relevant topics. Deemphasized: the normal academic game, credentials, mainstream acceptance, staying in one's lane, organizational politics.
Very few organizational or planning meetings. Most meetings were only to discuss ideas or present research, often informally.
Some additional things that came up in a conversation I had with Bostrom himself about this:
A strong culture that gives people guidance on what things to work on, and helps researchers and entrepreneurs within the organization coordinate
A bunch of logistical and operation...

The Nonlinear Library
EA - Future of Humanity Institute 2005-2024: Final Report by Pablo

The Nonlinear Library

Play Episode Listen Later Apr 17, 2024 7:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future of Humanity Institute 2005-2024: Final Report, published by Pablo on April 17, 2024 on The Effective Altruism Forum.
Anders Sandberg has written a "final report" released simultaneously with the announcement of FHI's closure. The abstract and an excerpt follow.
Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse: an epitaph summarizing what the Future of Humanity Institute was, what we did and why, what we learned, and what we think comes next. It can be seen as an oral history of FHI from some of its members. It will not be unbiased, nor complete, but hopefully a useful historical source. I have received input from other people who worked at FHI, but it is my perspective and others would no doubt place somewhat different emphasis on the various strands of FHI work.
What we did well
One of the most important insights from the successes of FHI is to have a long-term perspective on one's research. While working on currently fashionable and fundable topics may provide success in academia, aiming for building up fields that are needed, writing papers about topics before they become cool, and staying in the game allows for creating a solid body of work that is likely to have actual meaning and real-world effect. The challenge is obviously to create enough stability to allow such long-term research. This suggests that long-term funding and less topically restricted funding is more valuable than big funding.
Many academic organizations are turned towards other academic organizations and recognized research topics. However, pre-paradigmatic topics are often valuable, and relevant research can occur in non-university organizations or even in emerging networks that only later become organized. Having the courage to defy academic fashion and "investing" wisely in such pre-paradigmatic or neglected domains (and networks) can reap good rewards.
Having a diverse team, both in terms of backgrounds and in disciplines, proved valuable. But this was not always easy to achieve within the rigid administrative structure that we operated in. Especially senior hires with a home discipline in a faculty other than philosophy were nearly impossible to arrange. Conversely, making it impossible to hire anyone not from a conventional academic background (i.e., elite university postdocs) adversely affects minorities, and resulted in instances where FHI was practically blocked from hiring individuals from under-represented groups. Hence, try to avoid credentialist constraints.
In order to do interdisciplinary work, it is necessary to also be curious about what other disciplines are doing and why, as well as to be open to working on topics one never considered before. It also opens the surface to the rest of the world. Unusually for a research group based in a philosophy department, FHI members found themselves giving tech support to the pharmacology department; participating in demography workshops, insurance conferences, VC investor events, geopolitics gatherings; hosting artists and civil servant delegations studying how to set up high-performing research institutions in their own home country, etc. - often with interesting results.
It is not enough to have great operations people; they need to understand what the overall aim is even as the mission grows more complex. We were lucky to have had many amazing and mission-oriented people make the Institute function. Often there was an overlap between operations and research: most of the really successful ops people participated in our discussions and paper-writing. Try to hire people who are curious.
Where we failed
Any organization embedded in a larger organization or community needs to invest to a certain degree in establishing the right kind of...

Effective Altruism Forum Podcast
Future of Humanity Institute 2005-2024: Final Report

Effective Altruism Forum Podcast

Play Episode Listen Later Apr 17, 2024 9:29


This is a link post. Anders Sandberg has written a "final report" released simultaneously with the announcement of FHI's closure. The abstract and an excerpt follow. Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse: an epitaph summarizing what the Future of Humanity Institute was, what we did and why, what we learned, and what we think comes next. It can be seen as an oral history of FHI from some of its members. It will not be unbiased, nor complete, but hopefully a useful historical source. I have received input from other people who worked at FHI, but it is my perspective and others would no doubt place somewhat different emphasis on the various strands of FHI work. What we did well: One of the most important insights from the successes of FHI is to have a long-term perspective [...]
---
Outline:
(01:00) What we did well
(03:52) Where we failed
(05:10) So, you want to start another FHI?
---
First published: April 17th, 2024
Source: https://forum.effectivealtruism.org/posts/uK27pds7J36asqJPt/future-of-humanity-institute-2005-2024-final-report
Linkpost URL: https://www.dropbox.com/scl/fi/ml8d3ubi3ippxs4yon63n/FHI-Final-Report.pdf?rlkey=2c94czhgagy27d9don7pvbc26&dl=0
---
Narrated by TYPE III AUDIO.

Effective Altruism Forum Podcast
[Linkpost] “Future of Humanity Institute 2005-2024: Final Report” by Pablo

Effective Altruism Forum Podcast

Play Episode Listen Later Apr 17, 2024 9:25


Anders Sandberg has written a "final report" released simultaneously with the announcement of FHI's closure. The abstract and an excerpt follow. Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse: an epitaph summarizing what the Future of Humanity Institute was, what we did and why, what we learned, and what we think comes next. It can be seen as an oral history of FHI from some of its members. It will not be unbiased, nor complete, but hopefully a useful historical source. I have received input from other people who worked at FHI, but it is my perspective and others would no doubt place somewhat different emphasis on the various strands of FHI work. What we did well: One of the most important insights from the successes of FHI is to have a long-term perspective on one's research. While [...]
---
Outline:
(00:57) What we did well
(03:48) Where we failed
(05:06) So, you want to start another FHI?
---
First published: April 17th, 2024
Source: https://forum.effectivealtruism.org/posts/uK27pds7J36asqJPt/future-of-humanity-institute-2005-2024-final-report
Linkpost URL: https://www.dropbox.com/scl/fi/ml8d3ubi3ippxs4yon63n/FHI-Final-Report.pdf?rlkey=2c94czhgagy27d9don7pvbc26&dl=0
---
Narrated by TYPE III AUDIO.

Framgångspodden
784. Anders Sandberg - What our future could look like: On AI, life extension & human colonies in space, Original

Framgångspodden

Play Episode Listen Later Apr 10, 2024 93:03


Researcher Anders Sandberg, based at the Future of Humanity Institute at Oxford University, looks into the future for a living. Every day he thinks about technologies that could transform humanity, how humanity can improve its chances of surviving over time, and which global catastrophes we need to watch out for. Together we talk about exactly that future, and about what he and other researchers think may happen. We discuss the global risks he sees with AI, the possibilities of connecting your brain to a computer, and when he believes humans will start colonising space. We also find time to talk about various ways of extending life, MDMA, his conversations with Elon Musk, and why Anders has chosen to be cryopreserved when he dies. Thank you so much for listening!
Take part in our courses at Framgångsakademin. Order "Mitt Framgångsår". Follow Alexander Pärleros on Instagram. Follow Alexander Pärleros on TikTok. The best tips from the episode in the Newsletter. In partnership with Convendum. Hosted on Acast. See acast.com/privacy for more information.

Framgångspodden
784. Anders Sandberg - What our future could look like: On AI, life extension & human colonies in space, Short

Framgångspodden

Play Episode Listen Later Apr 10, 2024 31:52


Researcher Anders Sandberg, based at the Future of Humanity Institute at Oxford University, looks into the future for a living. Every day he thinks about technologies that could transform humanity, how humanity can improve its chances of surviving over time, and which global catastrophes we need to watch out for. Together we talk about exactly that future, and about what he and other researchers think may happen. We discuss the global risks he sees with AI, the possibilities of connecting your brain to a computer, and when he believes humans will start colonising space. We also find time to talk about various ways of extending life, MDMA, his conversations with Elon Musk, and why Anders has chosen to be cryopreserved when he dies. Thank you so much for listening!
Take part in our courses at Framgångsakademin. Order "Mitt Framgångsår". Follow Alexander Pärleros on Instagram. Follow Alexander Pärleros on TikTok. The best tips from the episode in the Newsletter. In partnership with Convendum. Hosted on Acast. See acast.com/privacy for more information.

The Foresight Institute Podcast
Vision Weekend France: Energy and Space Panel

The Foresight Institute Podcast

Play Episode Listen Later Mar 1, 2024 39:23


Panellists: Andre Losekrug, Michael Gibson, Anders Sandberg, and Robin Hanson. Recorded at Vision Weekend France 2023.
Energy Key Highlights:
Excitement about energy advancements, specifically in the field of fusion. This could lead to an increased focus on policy changes to address climate change and the building of reactors.
The growing entrepreneurial revolution in fusion, with various start-ups and innovative reactor designs emerging. China's recent approval of a molten salt thorium breeder reactor was highlighted as a further acceptance of nuclear power.
Space Key Highlights:
Advancements in satellite technology and constellations are anticipated, along with the need to explore intermediate solutions between current space exploration and future ambitious projects. It is noted that space exploration offers opportunities for freedom and innovation, with new approaches to manufacturing and remote sensing potentially leading to breakthroughs in various industries.
Other Topics:
Offshore water restoration, international disputes, prioritizing space development, launch services, space tourism, addressing space debris, and the prediction of detecting extraterrestrial intelligent life within the next century.
About Foresight Institute
Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.
Allison Duettmann
The President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future".
Get Involved with Foresight:
Apply: Virtual Salons & in-person Workshops
Donate: Support Our Work – If you enjoy what we do, please consider this, as we are entirely funded by your donations!
Follow Us: Twitter | Facebook | LinkedIn
Note: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine.
Hosted on Acast. See acast.com/privacy for more information.

Morgonpasset i P3 – Gästen
When love in pill form could become reality

Morgonpasset i P3 – Gästen

Play Episode Listen Later Dec 29, 2023 33:28


2024 is fast approaching, and we talk with futures researcher Anders Sandberg about the very latest research and what it means for the coming year. Anders Sandberg is a researcher at Oxford University and says that the AI developments of 2023 will be nothing compared to those of 2024. We talk about love in pill form, defusing volcanoes, and halting ageing. Listen to all episodes in Sveriges Radio Play. Hosts: David Druid and Nanna Olasdotter Hallberg

Reviewer 2 does geoengineering
Ethics of Volcano Geoengineering - Cassidy

Reviewer 2 does geoengineering

Play Episode Listen Later Dec 13, 2023 63:54


Should the army be allowed to blow people up with a volcano? If your geothermal power plant triggers an eruption, is that just a risk of doing business? Should we fiddle with volcanoes to make them safer? Is even researching this opening a can of worms? Gideon Futurman interviews Michael Cassidy (giving @geoengineering1 a month-long editing nightmare, but with results we hope you'll like).
Paper: The Ethics of Volcano Geoengineering, by Michael Cassidy, Anders Sandberg, and Lara Mani. https://doi.org/10.1029/2023EF003714

The Foresight Institute Podcast
Dr. Anders Sandberg | WBE History and Overview

The Foresight Institute Podcast

Play Episode Listen Later Dec 8, 2023 25:52


Dr. Anders Sandberg is a Senior Research Fellow at the Future of Humanity Institute at Oxford University, where he specializes in the management of low-probability, high-impact risks and future technology capabilities. With a Ph.D. in computational neuroscience from Stockholm University, his expertise spans computer science, neuroscience, and medical engineering. A cofounder of the think tank Eudoxa and former chairman of the Swedish Transhumanist Association, Sandberg is recognized for his contributions to neuroethics, cognitive enhancement, and global catastrophic risk.
Key Highlights:
The impact and growth of the mind uploading research field since its foundational workshop.
The influence of the workshop's roadmap, evidenced by citations across disciplines from humanities to AI safety.
The astounding advancements in biotech, signaling qualitative leaps beyond traditional tech scaling.
Neurotech achievements in setting clear goals, enhancing data interoperability, and developing detailed brain structure simulations.
Hosted on Acast. See acast.com/privacy for more information.

FT Tech Tonic
Superintelligent AI: Transhumanism etc.

FT Tech Tonic

Play Episode Listen Later Dec 5, 2023 25:59


What are the ideas driving the pursuit of human-level AI? In the penultimate episode of this Tech Tonic series, hosts Madhumita Murgia and John Thornhill look at some of the futuristic objectives at the centre of the AI industry's quest for superintelligence, and hear about the Extropians, a surprisingly influential group of futurists from the early 1990s. Anders Sandberg, senior research fellow at Oxford University's Future of Humanity Institute, sets out some of the ideas developed on the Extropians mailing list, while Connor Leahy, co-founder of Conjecture, and Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), explain why they worry about the Extropians' continued influence today.
Free links:
OpenAI and the rift at the heart of Silicon Valley
We need to examine the beliefs of today's tech luminaries
OpenAI's secrecy imperils public trust
Big tech companies cut AI ethics staff, raising safety concerns
Tech Tonic is presented by Madhumita Murgia and John Thornhill. Senior producer is Edwin Lane and the producer is Josh Gabert-Doyon. Executive producer is Manuela Saragosa. Sound design by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. The FT's head of audio is Cheryl Brumley.
Clips: Alcor Cryonics
Read a transcript of this episode on FT.com
Hosted on Acast. See acast.com/privacy for more information.

Why?
What will humans look like in a million years?

Why?

Play Episode Listen Later Nov 16, 2023 31:13


Humans now have more control over our long-term evolution than ever before. Developments in DNA research could mean we can by-pass natural selection and dictate our own biological destiny. But where is it leading? We time-travel forward a million years and check in on our (very) distant cousins to see if they resemble us at all. Will they even be recognisably human? Dr Anders Sandberg of Oxford University's Future of Humanity Institute discusses the future of our species with Ananyo Bhattacharya. Every Monday and Thursday WHY? takes you on an adventure to the edge of knowledge, asking the questions that puzzle and perplex us, from the inner workings of the universe to the far reaches of our dreams. Follow on your favourite app so you never miss an episode. WHY? is written and presented by Ananyo Bhattacharya. Audio production by Jade Bailey. Artwork by James Parrett. Music by DJ Food. Exec Producer: Jacob Jarvis. Lead Producer: Anne-Marie Luff. Group Editor: Andrew Harrison. WHY? is a Podmasters Production. Instagram | Twitter Learn more about your ad choices. Visit megaphone.fm/adchoices

Philosophy for our times
Ancient traits in a modern world | Sunetra Gupta, Anders Sandberg, Subrena Smith

Philosophy for our times

Play Episode Listen Later Nov 14, 2023 54:26


Is our neurobiology at odds with the modern world?
Looking for a link we mentioned? Find it here: https://linktr.ee/philosophyforourtimes
We see the remarkable evolution of the human brain as one of the driving factors behind our success as a species. Our neurobiology evolved, though, to solve challenges in a drastically different world from the one we find ourselves in today. Might our evolved traits, once advantageous, now be our Achilles heel? Human aggression, inventiveness and a determination to overcome enemies, once evolutionarily effective, now risk resource, technology, and nuclear crises, each with the potential to bring our species to an end. Can we find ways to change our behaviour before it is too late? Professor of Theoretical Epidemiology at the University of Oxford Sunetra Gupta, research fellow at the Future of Humanity Institute at Oxford Anders Sandberg, and philosopher of biology Subrena Smith debate whether our neurobiology is inadequate to deal with the challenges of the 21st century. Güneş Taylor hosts.
There are thousands of big ideas to discover at IAI.tv – videos, articles, and courses waiting for you to explore. Find out more: https://iai.tv/podcast-offers?utm_source=podcast&utm_medium=shownotes&utm_campaign=ancient-traits-in-a-modern-world
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

80k After Hours
Highlights: #165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe

80k After Hours

Play Episode Listen Later Nov 1, 2023 28:31


This is a selection of highlights from episode #165 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining, parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe.
And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.
Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

Morgonpasset i P3
David Druid blackmails SVT, Erdogan says yaaas to Sweden, and are we living in a simulation?

Morgonpasset i P3

Play Episode Listen Later Oct 24, 2023 93:12


David Druid has done a sexy thing and thrown it away! Margret Atladottir can't stop thinking about the Britney book that comes out today! We talk with Daniel Helldén, who has been presented as a new candidate for the Green Party spokesperson post. Oxford researcher Anders Sandberg answers ALL our questions about simulations, AI and parallel universes! Babs Drougge from P3 Nyheter reports that Erdogan says yaaas to Sweden and that biathletes disagree about taping their mouths! Listen to all episodes in Sveriges Radio Play. Hosts: David Druid and Margret Atladottir

Morgonpasset i P3 – Gästen
Signs that we are living in a simulation

Morgonpasset i P3 – Gästen

Play Episode Listen Later Oct 24, 2023 28:02


Are we living in a simulation? Is reality real? What is the meaning of life? New research has emerged that tries to explain our reality, and we discuss the simulation theories with Anders Sandberg, who researches the future of humanity at Oxford University! Anders Sandberg also shares his thoughts on AI, and on whether the threat to humanity is as real as researchers seem to suggest. Listen to all episodes in Sveriges Radio Play. Hosts: David Druid and Margret Atladottir

80,000 Hours Podcast with Rob Wiblin
#165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Oct 6, 2023 168:33


"Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future? Right now, if somebody's sitting on Mars and you're going to war against them, it's very hard to hit them. You don't have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it's going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it's actually very hard to hit you. So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you're in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast. So my general conclusion has been that war looks unlikely on some size scales but not on others." — Anders Sandberg
In today's episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things that could be achieved in our universe given the laws of physics.
Links to learn more, summary and full transcript.
They cover:
The epic new book Anders is working on, and whether he'll ever finish it
Whether there's a best possible world or we can just keep improving forever
What wars might look like if the galaxy is mostly settled
The impediments to AI or humans making it to other stars
How the universe will end a million trillion years in the future
Whether it's useful to wonder about whether we're living in a simulation
The grabby aliens theory
Whether civilizations get more likely to fail the older they get
The best way to generate energy that could ever exist
Black hole bombs
Whether superintelligence is necessary to get a lot of value
The likelihood that life from elsewhere has already visited Earth
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

The Nonlinear Library
AF - Projects I would like to see (possibly at AI Safety Camp) by Linda Linsefors

The Nonlinear Library

Play Episode Listen Later Sep 27, 2023 6:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Projects I would like to see (possibly at AI Safety Camp), published by Linda Linsefors on September 27, 2023 on The AI Alignment Forum. I recently discussed with my AISC co-organiser Remmelt some possible project ideas I would be excited about seeing at the upcoming AISC, and I thought these would be valuable to share more widely. Thanks to Remmelt for helpful suggestions and comments. What is AI Safety Camp? AISC in its current form is primarily a structure to help people find collaborators. As a research lead, we give your project visibility and help you recruit a team. As a regular participant, we match you up with a project you can help with. I want to see more good projects happening. I know there is a lot of unused talent wanting to help with AI safety. If you want to run one of these projects, it doesn't matter to me if you do it as part of AISC or independently, or as part of some other program. The purpose of this post is to highlight these projects as valuable things to do, and to let you know AISC can support you, if you think what we offer is helpful. Project ideas These are not my after-long-consideration top picks of the most important things to do, just some things I think would be net positive if someone did them. I typically don't spend much cognitive effort on absolute rankings anyway, since I think personal fit is more important for ranking your personal options. I don't claim originality for anything here. It's possible there is work on one or several of these topics that I'm not aware of. Please share links in the comments if you know of such work. Is substrate-needs convergence inevitable for any autonomous system, or is it preventable with sufficient error-correction techniques? This can be done as an adversarial collaboration (see below) but doesn't have to be. 
The risk from substrate-needs convergence can be summarised as follows: If AI is complex enough to self-sufficiently maintain its components, natural selection will sneak in. This would select for components that cause the environmental conditions needed for artificial self-replication. An AGI will necessarily be complex enough. Therefore natural selection will push the system towards self-replication. Therefore it is not possible for an AGI to be stably aligned with any other goal. Note that this line of reasoning does not necessitate that the AI will come to represent self-replication as its goal (although that is a possible outcome), only that natural selection will push it towards this behaviour. I'm simplifying and skipping over a lot of steps! I don't think there currently is a great writeup of the full argument, but if you're interested you can read more here, watch this talk by Remmelt, or reach out to me or Remmelt. Remmelt has a deeper understanding of the arguments for substrate-needs convergence than I do, but my communication style might be better suited for some people. I think substrate-needs convergence is pointing at a real risk. I don't know yet if the argument (which I summarised above) proves that building an AGI that stays aligned is impossible, or if it points to one more challenge to be overcome. Figuring out which of these is the case seems very important. I've talked to a few people about this problem, and identified what I think is the main crux: how well can you execute error-correction mechanisms? When Forrest Landry and Anders Sandberg discussed substrate-needs convergence, they ended up with a similar crux, but unfortunately did not have time to address it. Here's a recording of their discussion; however, Landry's mic breaks about 20 minutes in, which makes it hard to hear him from that point onward. 
Any alignment-relevant adversarial collaboration What are adversarial collaborations: [link to some Scott Post] Possible topic: For and against some alignment plan. Maybe y...

The Nonlinear Library
EA - The Case for AI Safety Advocacy to the Public by Holly Elmore

The Nonlinear Library

Play Episode Listen Later Sep 20, 2023 21:51


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Case for AI Safety Advocacy to the Public, published by Holly Elmore on September 20, 2023 on The Effective Altruism Forum. tl;dr: Advocacy to the public is a large and neglected opportunity to advance AI Safety. AI Safety as a field is unfamiliar with advocacy, and it has reservations, some founded and others not. A deeper understanding of the dynamics of social change reveals the promise of pursuing outside game strategies to complement the already strong inside game strategies. I support an indefinite global Pause on frontier AI and I explain why Pause AI is a good message for advocacy. Because I'm American and focused on US advocacy, I will mostly be drawing on examples from the US. Please bear in mind, though, that for Pause to be a true solution it will have to be global. The case for advocacy in general Advocacy can work I've encountered many EAs who are skeptical about the role of advocacy in social change. While it is difficult to prove causality in social phenomena like this, there is a strong historical case that advocacy has been effective at bringing about the intended social change through time (whether that change ended up being desirable or not). A few examples: Though there were many other economic and political factors that contributed, it is hard to make a case that the US Civil War had nothing to do with humanitarian concern for enslaved people - concern that was raised by advocacy. The people's, and ultimately the US government's, will to abolish slavery was bolstered by a diverse array of advocacy tactics, from Harriet Beecher Stowe's writing of Uncle Tom's Cabin to Frederick Douglass's oratory to the uprisings of John Brown. 
The US National Women's Party is credited with pressuring Woodrow Wilson and federal and state legislators into supporting the 19th Amendment, which guaranteed women the right to vote, through its "aggressive agitation, relentless lobbying, clever publicity stunts, and creative examples of civil disobedience and nonviolent confrontation". The nationwide prohibition of alcohol in the US (1920-1933) is credited to the temperance movement, which had all manner of advocacy gimmicks including the slogan "the lips that touch liquor shall never touch mine", and the stigmatization of drunk driving and the legal drinking age of 21 are directly linked to Mothers Against Drunk Driving. Even if advocacy only worked a little of the time or only served to tip the balance of larger forces, the stakes of AI risk are so high and AI risk advocacy is currently so neglected that I see a huge opportunity. We can now talk to the public about AI risk With the release of ChatGPT and other advances in state-of-the-art artificial intelligence in the last year, the topic of AI risk has entered the Overton window and is no longer dismissed as "sci-fi". But now, as Anders Sandberg put it, the Overton window is moving so fast it's "breaking the sound barrier". The below poll from the AI Policy Institute and YouGov (released 8/11/23) shows comfortable majorities among US adults on questions about AI x-risk (76% worry about extinction risks from machine intelligence), slowing AI (82% say we should go slowly and deliberately), and government regulation of the AI industry (82% say tech executives can't be trusted to self-regulate). What having the public's support gets us Opinion polls and voters that put pressure on politicians. 
Constituent pressure on politicians gives the AI Safety community more power to get effective legislation passed - that is, legislation which addresses safety concerns and requires us to compromise less with other interests - and it gives the politicians more power against the AI industry lobby. The ability to leverage external pressure to improve existing strategies. With external pressure, ARC, for example, wouldn't have to worry as m...

Philosophy for our times
Getting everything, losing everything | Anders Sandberg, Massimo Pigliucci, Mazviita Chirimuuta

Philosophy for our times

Play Episode Listen Later Sep 5, 2023 41:02


Is our future reality a digital utopia or impending nightmare? Looking for a link we mentioned? It's here: https://linktr.ee/philosophyforourtimes As tech giants promise a lavish digital existence and unparalleled virtual experiences, there's a rising concern. Will we be trading real-life relationships for virtual ones? Substituting nature for mere simulation? And at the forefront, will control rest solely with corporations like Meta? Dive into this pressing debate and navigate the line between digital advancement and the essence of human experience. Hosted by Maria Balaska. Maria Balaska is currently a research fellow at the University of Hertfordshire and at Åbo Akademi University. Anders Sandberg is a researcher, popular science debater, transhumanist and author of Superhuman: Exploring Human Enhancement from 600 BCE to 2050. Massimo Pigliucci is a philosophy professor at the City College of New York and former co-host of the Rationally Speaking Podcast. His research interests include the Philosophy of Science and the Philosophy of Biology. Mazviita Chirimuuta is a Senior Lecturer in Philosophy at the University of Edinburgh. She is a self-described techno-pessimist and anti-transhumanist. There are thousands of big ideas to discover at IAI.tv – videos, articles, and courses waiting for you to explore. Find out more: https://iai.tv/podcast-offers?utm_source=podcast&utm_medium=shownotes&utm_campaign=getting-everything-losing-everything See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Abels tårn
Abels serie - Drømmen om evig liv #4: Evighetsmennesket

Abels tårn

Play Episode Listen Later Aug 4, 2023 26:02


Anders Sandberg is a philosopher at Oxford who has signed an agreement to be cryonically frozen when he dies. He believes that humans in the future will become a kind of cyborg, gliding between the physical world and cyberspace. Featuring: Anders Sandberg and Annelin Eriksen. Listen to the episode in the NRK Radio app.

FUTURES Podcast
Our Superhuman Future w/ Elise Bohan, Prof. Steve Fuller & Anders Sandberg

FUTURES Podcast

Play Episode Listen Later Jul 31, 2023 93:25


Transhumanists Elise Bohan, Prof. Steve Fuller and Anders Sandberg share their thoughts on the future of humanity, the role artificial intelligence will play in society, and the radical ways advanced technology may redefine what it means to be human. Recorded in front of a live audience at Kings Place, London on 16 February 2023. Elise Bohan is a Senior Research Scholar at the University of Oxford's Future of Humanity Institute (FHI). She holds a PhD in evolutionary macrohistory, wrote the world's first book-length history of transhumanism as a doctoral student, and recently launched her debut book Future Superhuman: Our transhuman lives in a make-or-break century (NewSouth, 2022). Prof. Steve Fuller is Auguste Comte Professor of Social Epistemology at the University of Warwick, UK. Originally trained in history and philosophy of science, he is the author of more than twenty books. From 2011 to 2014 he published three books with Palgrave on ‘Humanity 2.0'. His most recent book is Nietzschean Meditations: Untimely Thoughts at the Dawn of Transhuman Era (Schwabe Verlag, 2020). Anders Sandberg is a Senior Research Fellow at the Future of Humanity Institute (FHI) at Oxford University where his research focuses on the societal and ethical issues surrounding human enhancement and new technologies. He is also research associate at the Oxford Uehiro Centre for Practical Ethics and the Oxford Centre for Neuroethics. Find out more: futurespodcast.net FOLLOW Twitter: twitter.com/futurespodcast Instagram: instagram.com/futurespodcast Facebook: facebook.com/futurespodcast ABOUT THE HOST Luke Robert Mason is a British-born futures theorist who is passionate about engaging the public with emerging scientific theories and technological developments. He hosts documentaries for Futurism, and has contributed to BBC Radio, BBC One, The Guardian, Discovery Channel, VICE Motherboard and Wired Magazine. 
Follow him on Twitter: twitter.com/lukerobertmason CREDITS Produced by FUTURES Podcast Recorded, Mixed & Edited by Luke Robert Mason

Learning With Lowell
Anders Sandberg: Myanmar, Brain Emulation, Biohacking vs AI Terrorism | Learning with Lowell 198

Learning With Lowell

Play Episode Listen Later Jul 4, 2023 88:38


Anders Sandberg is a Swedish researcher, futurist and transhumanist. He holds a PhD in computational neuroscience from Stockholm University, and is currently a senior research fellow at the Future of Humanity Institute at the University... Source

Philosophy for our times
Love and other drugs | Rupert Sheldrake, Anders Sandberg, Ella Whelan

Philosophy for our times

Play Episode Listen Later May 16, 2023 49:12


Can synthetic drugs induce true feelings of love? Looking for a link we mentioned? It's here: https://linktr.ee/philosophyforourtimes From the Christian tenet 'God is love' to the plots of countless novels and films, love is seen as central to our lives. Yet from scientific studies, along with anecdotal accounts, we know that psychoactive substances, and MDMA in particular, can enhance and even induce feelings of love. If love can be hacked by a change in brain chemistry, might our romanticised idea of love itself be the distortion? Should we use drugs to encourage, initiate and repair relationships, as some therapists advocate? Or are such experiences false, damaging, and potentially socially dangerous? Is love a product of brain chemistry, or is it something deeper that a drug could never replicate? There are thousands of big ideas to discover at IAI.tv – videos, articles, and courses waiting for you to explore. Find out more: https://iai.tv/podcast-offers?utm_source=podcast&utm_medium=shownotes&utm_campaign=love-and-other-drugs See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Hear This Idea
#62 – Anders Sandberg on Exploratory Engineering, Value Diversity, and Grand Futures

Hear This Idea

Play Episode Listen Later Apr 20, 2023 52:52


Anders Sandberg is a researcher, futurist, transhumanist and author. He holds a PhD in computational neuroscience from Stockholm University, and is currently a Senior Research Fellow at the Future of Humanity Institute at the University of Oxford. His research covers human enhancement, exploratory engineering, and 'grand futures' for humanity. This episode is a recording of a live interview at EAGx Cambridge (2023). You can find upcoming effective altruism conferences here: www.effectivealtruism.org/ea-global
We talk about:
* What is exploratory engineering and what is it good for?
* Progress on whole brain emulation
* Are we near the end of humanity's tech tree?
* Is diversity intrinsically valuable in grand futures?
* How Anders does research
* Virtue ethics for civilisations
* Anders' takes on AI risk and whether LLMs are close to general intelligence
* And much more!
Further reading and a transcript are available on our website: hearthisidea.com/episodes/sandberg-live
If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

The Foresight Institute Podcast
Existential Hope Podcast: Anders Sandberg | Grand Futures & The Post-Human Coral Reef

The Foresight Institute Podcast

Play Episode Listen Later Mar 31, 2023 59:27


In this episode of the Existential Hope Podcast, we have the pleasure of interviewing Anders Sandberg, a Swedish philosopher and researcher at the Future of Humanity Institute at Oxford University. During our conversation with Anders, we explore the concept of grand futures and what it means to strive for them as a society. We discuss the potential benefits and risks of emerging technologies such as artificial intelligence, nanotechnology, and biotechnology, and how we can navigate these developments responsibly. Full transcript, list of resources, and art piece: Anders Sandberg | Grand Futures & The Post-Human Coral Reef. Existential Hope was created to collect positive and possible scenarios for the future, so that we can have more people commit to the creation of a brighter future, and to start mapping out the main developments and challenges that need to be navigated to reach it. Find all previous podcast episodes here. The Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support. Allison Duettmann is the president and CEO of Foresight Institute. She directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, Fellowships, Prizes, and Tech Trees, and shares this work with the public. She founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, co-authored Gaming the Future, and co-initiated The Longevity Prize. Beatrice Erkers is Chief of Operations at Foresight Institute, and program manager of the Existential Hope group. She has a background in publishing and years of experience working with communication at Foresight and at a publishing house. 
Apply to Foresight's virtual salons and in-person workshops here! We are entirely funded by your donations. If you enjoy what we do, please consider donating through our donation page. Visit our website for more content, or join us here: Twitter, Facebook, LinkedIn. Every word ever spoken on this podcast is now AI-searchable using Fathom.fm, a search engine for podcasts. Hosted on Acast. See acast.com/privacy for more information.

The Nonlinear Library
EA - Predicting what future people value: A terse introduction to Axiological Futurism by Jim Buhler

The Nonlinear Library

Play Episode Listen Later Mar 25, 2023 6:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting what future people value: A terse introduction to Axiological Futurism, published by Jim Buhler on March 24, 2023 on The Effective Altruism Forum. Why this is worth researching Humanity might develop artificial general intelligence (AGI), colonize space, and create astronomical amounts of things in the future (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016). But what things? How (dis)valuable? And how does this compare with things grabby aliens would eventually create if they colonize our corner of the universe? What does this imply for our work aimed at impacting the long-term future? While this depends on many factors, a crucial one will likely be the values of our successors. Here's a position that might tempt us while considering whether it is worth researching this topic: Our descendants are unlikely to have values that are both different from ours in a very significant way and predictable. Either they have values similar to ours or they have values we can't predict. Therefore, trying to predict their values is a waste of time and resources. While I see how this can seem compelling, I think this is very ill-informed. First, predicting the values of our successors – what John Danaher (2021) calls axiological futurism – in worlds where these are meaningfully different from ours doesn't seem intractable at all. Significant progress has already been made in this research area and there seems to be room for much more (see the next section and the Appendix). Second, a scenario where the values of our descendants don't significantly differ from ours appears quite unlikely to me. We should watch out for things like the End of History illusion here. Values seem to evolve notably through history, and there is no reason to assume we are special enough to make us drop that prior. 
Besides being tractable, I believe axiological futurism to be uncommonly important given its instrumentality in answering the crucial questions mentioned earlier. It therefore also seems unwarrantedly neglected as of today. How to research this Here are examples of broad questions that could be part of a research agenda on this topic: What are the best predictors of future human values? What can we learn from usual forecasting methods? How have people's values changed throughout History? Why? What can we learn from this? (see, e.g., MacAskill 2022, Chapter 3; Harris 2019; Hopster 2022) Are there reasons to think we'll observe less change in the future? Why? Value lock-in? Some form of moral convergence happening soon? Are there reasons to expect more change? Would that be due to the development of AGI, whole brain emulation, space colonization, and/or accelerated value drift? More broadly, what impact will future technological progress have on values? (see Hanson 2016 for a forecast example). Should we expect some values to be selected for? (see, e.g., Christiano 2013; Bostrom 2009; Tomasik 2017) Might a period of “long reflection” take place? If yes, can we get some idea of what could result from it? Does something like coherent extrapolated volition have any chance of being pursued and if so, what could realistically result from it? Are there futures – where humanity has certain values – that are unlikely but worth wagering on? Might our research on this topic affect the values we should expect our successors to have by, e.g., triggering a self-defeating or self-fulfilling prophecy effect? (Danaher 2021, section 2) What do/will aliens value (see my forthcoming next post) and what does that tell us about ourselves? John Danaher (2021) gives examples of methodologies that could be used to answer these questions. Also, my Appendix references examples and other relevant work, including the (forthcoming) next posts in this sequence. 
Acknowledgment Thanks to Anders Sandberg for pointing m...

The Nonlinear Library
EA - Apply to attend EA conferences in Europe by OllieBase

The Nonlinear Library

Play Episode Listen Later Feb 28, 2023 2:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to attend EA conferences in Europe, published by OllieBase on February 28, 2023 on The Effective Altruism Forum. Europe is about to get significantly warmer and lighter. People like warmth and light, so we (CEA) have been busy organising several EA conferences in Europe over the next few months in partnership with local community-builders and EA groups: EAGxCambridge will take place at Guildhall, 17–19 March. Applications are open now and will close on Friday (3 March). Speakers include Lord Martin Rees, Saloni Dattani (Our World In Data) and Anders Sandberg (including a live interview for the Hear This Idea podcast). EAGxNordics will take place at Munchenbryggeriet, Stockholm 21–23 April. Applications are open now and will close 28 March. If you register by 5 March, you can claim a discounted early bird ticket. EA Global: London will take place at Tobacco Dock, 19–21 May 2023. Applications are open now. If you were already accepted to EA Global: Bay Area, you can register for EAG London now; you don't need to apply again. EAGxWarsaw will take place at POLIN, 9–11 June 2023. Applications will open in the coming weeks. You can apply to all of these events using the same application details, bar a few small questions specific to each event. Which events should I apply to? (mostly pulled from our FAQ page) EA Global is mostly aimed at people who have a solid understanding of the core ideas of EA and who are taking significant actions based on those ideas. Many EA Global attendees are already professionally working on effective-altruism-inspired projects or working out how best to work on such projects. EA Global is for EAs around the world and has no location restrictions (though we recommend applying ASAP if you will need a visa to enter the UK). EAGx conferences have a lower bar. 
They are for people who are: Familiar with the core ideas of effective altruism; Interested in learning more about what to do with these ideas. EAGx events also have a more regional focus: EAGxCambridge is for people who are based in the UK or Ireland, or have plans to move to the UK within the next year; EAGxNordics is primarily for people in the Nordics, but also welcomes international applications; EAGxWarsaw is primarily for people based in Eastern Europe but also welcomes international applications. If you want to attend but are unsure about whether to apply, please err on the side of applying! See e.g. Expat Explore on the “Best Time to Visit Europe”. Pew Research Center surveyed Americans on this matter (n = 2,260) and concluded that “Most Like It Hot”. There seem to be significant health benefits, though some people dislike sunlight. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours by david reinstein

The Nonlinear Library

Play Episode Listen Later Feb 6, 2023 5:02


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours, published by david reinstein on February 6, 2023 on The Effective Altruism Forum. The Unjournal: reporting some progress (Link: our main page, see also our post Unjournal: Call for participants and research.) Our group (curating articles and evaluations) is now live on Sciety HERE. The first evaluated research project (paper) has now been posted HERE. First evaluation: Denkenberger et al Our first evaluation is for "Long Term Cost-Effectiveness of Resilient Foods for Global Catastrophes Compared to Artificial General Intelligence Safety", by David Denkenberger, Anders Sandberg, Ross Tieman, and Joshua M. Pearce, published in the International Journal of Disaster Risk Reduction. These three reports and ratings (see a sample below) come from three experts with (what we believe to be) complementary backgrounds (note, these evaluators agreed to be identified rather than remain anonymous): Alex Bates: An award-winning cost-effectiveness analyst with some background in considering long-term and existential risks. Scott Janzwood: A political scientist and Research Director at the Cascade Institute. Anca Hanea: A senior researcher and applied probabilist based at the Centre of Excellence for Biosecurity Risk Analysis (CEBRA) at the University of Melbourne. She has done prominent research into eliciting and aggregating (expert) judgments, working with the RepliCATS project. These evaluations were, overall, fairly involved. They engaged with specific details of the paper as well as overall themes, directions, and implications. While they were largely positive about the paper, they did not seem to pull punches. Some examples of their feedback and evaluation below (direct quotes). 
Extract of evaluation content Bates: "I'd be surprised if I ever again read a paper with such potential importance to global priorities. My view is that it would be premature to reallocate funding from AI Risk reduction to resilient food on the basis of this paper alone. I think the paper would have benefitted from more attention being paid to the underlying theory of cost-effectiveness motivating the investigation. Decisions made in places seem to have multiplied uncertainty which could have been resolved with a more consistent approach to analysis. The most serious conceptual issue which I think needs to be resolved before this can happen is to demonstrate that ‘do nothing' would be less cost-effective than investing $86m in resilient foods, given that the ‘do nothing' approach would potentially include strong market dynamics leaning towards resilient foods." Janzwood: "The authors' cost-effectiveness model, which attempts to decrease uncertainty about the potential uncertainty-reducing and harm/likelihood-reducing 'power' of resilient food R&D and compare it to R&D on AGI safety, is an important contribution. It would have been useful to see a brief discussion of some of these acknowledged epistemic uncertainties (e.g., the impact of resilient foods on public health, immunology, and disease resistance) to emphasize that some epistemic uncertainty could be reduced by exactly the kind of resilient food R&D they are advocating for." Hanea: "The structure of the models is not discussed. How did [they] decide that this is a robust structure (no sensitivity to structure performed as far as I understood). It is unclear if the compiled data sets are compatible. I think the quantification of the model should be documented better or in a more compact way." The authors also responded in detail. 
Some excerpts: The evaluations provided well-thought-out and constructively critical analysis of the work, pointing out several assumptions which could impact the findings of the paper while also recognizing the value of the work in spite of s...

London Futurists
Questioning the Fermi Paradox, with Anders Sandberg

London Futurists

Play Episode Listen Later Jan 4, 2023 36:16


In the summer of 1950, the physicist Enrico Fermi and some colleagues at the Los Alamos Lab in New Mexico were walking to lunch, and casually discussing flying saucers, when Fermi blurted out “But where is everybody?” He was not the first to pose the question, and the precise phrasing is disputed, but the mystery he was referring to remains compelling. We appear to live in a vast universe, with billions of galaxies, each with billions of stars, mostly surrounded by planets, including many like the Earth. The universe appears to be 13.7 billion years old, and even if intelligent life requires an Earth-like planet, and even if it can only travel and communicate at the speed of light, we ought to see lots of evidence of intelligent life. But we don't. No beams of light from stars occluded by artificial satellites spelling out pi. No signs of galactic-scale engineering. No clear evidence of little green men demanding to meet our leaders. Numerous explanations have been advanced to explain this discrepancy, and one man who has spent more brainpower than most exploring them is the always-fascinating Anders Sandberg. 
Anders is a computational neuroscientist who got waylaid by philosophy, which he pursues at Oxford University, where he is a senior research fellow.Topics in this episode include:* The Drake equation for estimating the number of active, communicative extraterrestrial civilizations in our galaxy* Changes in recent decades in estimates of some of the factors in the Drake equation* The amount of time it would take self-replicating space probes to spread across the galaxy* The Dark Forest hypothesis - that all extraterrestrial civilizations are deliberately quiet, out of fear* The likelihood of extraterrestrial civilizations emitting observable signs of their existence, even if they try to suppress them* The implausibility of all extraterrestrial civilizations converging to the same set of practices, rather than at least some acting in ways where we would notice their existence - and a counter argument* The possibility of civilisations opting to spend all their time inside virtual reality computers located in deep interstellar space* The Aestivation hypothesis, in which extraterrestrial civilizations put themselves into a "pause" mode until the background temperature of the universe has become much lower* The Quarantine or Zoo hypothesis, in which extraterrestrial civilizations are deliberately shielding their existence from an immature civilization like ours* The Great Filter hypothesis, in which life on other planets has a high probability, either of failing to progress to the level of space-travel, or of failing to exist for long after attaining the ability to self-destruct* Possible examples of "great filters"* Should we hope to find signs of life on Mars?* The Simulation hypothesis, in which the universe is itself a kind of video game, created by simulators, who had no need (or lacked sufficient resources) to create more than one intelligent civilization* Implications of this discussion for the wisdom of the METI project - Messaging to Extraterrestrial 
IntelligenceSelected follow-up reading:* Anders' website at FHI Oxford: https://www.fhi.ox.ac.uk/team/anders-sandberg/* The Great Filter, by Robin Hanson: http://mason.gmu.edu/~rhanson/greatfilter.html* "Seventy-Five Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life" - a book by Stephen Webb: https://link.springer.com/book/10.1007/978-3-319-13236-5* The aestivation hypothesis: https://www.fhi.ox.ac.uk/aestivation-hypothesis-resolving-fermis-paradox/* Should We Message ET? by David Brin: http://www.davidbrin.com/nonfiction/meti.html
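For listeners who want to experiment with the numbers themselves, the Drake equation mentioned in the topic list is simply a product of seven factors: N = R* × fp × ne × fl × fi × fc × L. A minimal Python sketch follows; every parameter value in the example call is an illustrative assumption for demonstration, not a figure from the episode:

```python
# Minimal sketch of the Drake equation discussed in this episode.
# All numeric values below are illustrative assumptions only.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N = R* x fp x ne x fl x fi x fc x L: the expected number of
    active, communicative civilizations in our galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Example with hypothetical, moderately optimistic inputs.
n = drake(r_star=1.5,     # new stars formed per year in the galaxy
          f_p=1.0,        # fraction of stars with planets
          n_e=0.4,        # habitable planets per planet-bearing star
          f_l=0.1,        # fraction of those that develop life
          f_i=0.1,        # fraction of those that develop intelligence
          f_c=0.1,        # fraction of those emitting detectable signals
          lifetime=1000)  # years a civilization remains detectable
print(n)
```

Multiplying the seven terms is trivial; the interesting part, as the episode discusses, is how widely the estimates of each factor have shifted in recent decades.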

Philosophy for our times
If it doesn't kill you | Susie Orbach, Anders Sandberg, and Havi Carel

Philosophy for our times

Play Episode Listen Later Oct 4, 2022 45:50


Do we need suffering to lead a meaningful life? Looking for a link we mentioned? It's here: https://linktr.ee/philosophyforourtimes

From the plots of Hollywood movies to the roots of Christianity, many see value in adversity and suffering, be it in character-building boot camps or in overcoming the trials of a difficult childhood or adult life. Yet the great majority of us do our very best to avoid suffering in our own lives. Should we conclude that the value of adversity and suffering is an illusion? A hangover from Christianity that modernity needs to excise? Or is it a vital element in building personality and enabling a meaningful, fulfilling and significant life?

Britain's most beloved psychotherapist and author of "Fat is a Feminist Issue" Susie Orbach, renowned transhumanist Anders Sandberg, and Professor of Philosophy at the University of Bristol Havi Carel explore the significance of suffering in modern society. Hosted by philosopher Julian Baggini.

There are thousands of big ideas to discover at IAI.tv – videos, articles, and courses waiting for you to explore. Find out more: https://iai.tv/podcast-offers?utm_source=podcast&utm_medium=shownotes&utm_campaign=if-it-doesn't-kill-you

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Dave Glover Show
Dr. Anders Sandberg warns us about sending messages to space- hour 2

The Dave Glover Show

Play Episode Listen Later Apr 25, 2022 31:17