Podcasts about existential risk

Hypothetical future events that could cause human extinction or permanently and drastically curtail humanity's potential

  • 220 podcasts
  • 546 episodes
  • 48m average duration
  • 1 weekly episode
  • Latest episode: Apr 28, 2025
Popularity trend for "existential risk" podcasts, 2017–2024 (chart)


Latest podcast episodes about existential risk

Artificial Intelligence and You
254 - Guest: Seth Baum, Global Catastrophic Risks Institute, part 2

Apr 28, 2025 · 32:25


This and all episodes at: https://aiandyou.net/ . We're talking about catastrophic risks, something that can be depressing for people who haven't confronted these things before, and so I have had to be careful in talking about those with most audiences. Yet the paradox is that the more you do look at those risks, the more that effect fades, and that's a good thing, because my guest today is someone who takes on the onerous task of thinking about and doing something about those risks every day. Seth Baum is the co-founder and Executive Director of the Global Catastrophic Risks Institute in New York, which has tackled the biggest of big problems since 2011. He is also a research affiliate at the Cambridge Centre for the Study of Existential Risk. He's authored papers on pandemics, nuclear winter, and notably for our show, AI. We talk about national bias in models, coherent extrapolated volition – like, what is it – the risks inherent in a world of numerous different models, and using AI itself to solve some of these problems. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.        

London Futurists
Human extinction: thinking the unthinkable, with Sean ÓhÉigeartaigh

Apr 23, 2025 · 42:27


Our subject in this episode may seem grim – it's the potential extinction of the human species, either from a natural disaster, like a supervolcano or an asteroid, or from our own human activities, such as nuclear weapons, greenhouse gas emissions, engineered biopathogens, misaligned artificial intelligence, or high energy physics experiments causing a cataclysmic rupture in space and time. These scenarios aren't pleasant to contemplate, but there's a school of thought that urges us to take them seriously – to think about the unthinkable, in the phrase coined in 1962 by pioneering futurist Herman Kahn. Over the last couple of decades, few people have been thinking about the unthinkable more carefully and systematically than our guest today, Sean ÓhÉigeartaigh. Sean is the author of a recent summary article from Cambridge University Press that we'll be discussing, “Extinction of the human species: What could cause it and how likely is it to occur?” Sean is presently based in Cambridge, where he is a Programme Director at the Leverhulme Centre for the Future of Intelligence. Previously he was founding Executive Director of the Centre for the Study of Existential Risk, and before that, he managed research activities at the Future of Humanity Institute in Oxford.
Selected follow-ups:
Seán Ó hÉigeartaigh - Leverhulme Centre Profile
Extinction of the human species - by Sean ÓhÉigeartaigh
Herman Kahn - Wikipedia
Moral.me - by Conscium
Classifying global catastrophic risks - by Shahar Avin et al
Defence in Depth Against Human Extinction - by Anders Sandberg et al
The Precipice - book by Toby Ord
Measuring AI Ability to Complete Long Tasks - by METR
Cold Takes - blog by Holden Karnofsky
What Comes After the Paris AI Summit? - Article by Sean
ARC-AGI - by François Chollet
Henry Shevlin - Leverhulme Centre profile
Eleos (includes Rosie Campbell and Robert Long)
NeurIPS talk by David Chalmers
Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
The Unilateralist's Curse - by Nick Bostrom and Anders Sandberg
Music: Spike Protein, by Koi Discovery

Artificial Intelligence and You
253 - Guest: Seth Baum, Global Catastrophic Risks Institute, part 1

Apr 21, 2025 · 31:40


This and all episodes at: https://aiandyou.net/ . We're talking about catastrophic risks, something that can be depressing for people who haven't confronted these things before, and so I have had to be careful in talking about those with most audiences. Yet the paradox is that the more you do look at those risks, the more that effect fades, and that's a good thing, because my guest today is someone who takes on the onerous task of thinking about and doing something about those risks every day. Seth Baum is the co-founder and Executive Director of the Global Catastrophic Risks Institute in New York, which has tackled the biggest of big problems since 2011. He is also a research affiliate at the Cambridge Centre for the Study of Existential Risk. He's authored papers on pandemics, nuclear winter, and notably for our show, AI. We talk about how it feels to work on existential threats every day, AI as a horizontal risk as well as a vertical one, near-term value versus long-term value, AI being used to change the decisions of populations or voting blocs, and AI as a dual-use technology. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.        

80k After Hours
Highlights: #214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway

Apr 18, 2025 · 41:26


Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we're unlikely to know we've solved the problem before the arrival of human-level and superhuman systems in as little as three years. So some — including Buck Shlegeris, CEO of Redwood Research — are developing a backup plan to safely deploy models we fear are actively scheming to harm us: so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.
These highlights are from episode #214 of The 80,000 Hours Podcast: Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway, and include:
What is AI control? (00:00:15)
One way to catch AIs that are up to no good (00:07:00)
What do we do once we catch a model trying to escape? (00:13:39)
Team Human vs Team AI (00:18:24)
If an AI escapes, is it likely to be able to beat humanity from there? (00:24:59)
Is alignment still useful? (00:32:10)
Could 10 safety-focused people in an AGI company do anything useful? (00:35:34)
These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!
And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.
Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong

80k After Hours
Off the Clock #8: Leaving Las London with Matt Reardon

Apr 1, 2025 · 103:21


Watch this episode on YouTube! https://youtu.be/fJssGodnCQg
Conor and Arden sit down with Matt in his farewell episode to discuss the law, their team retreat, his lessons learned from 80k, and the fate of the show.

Artificial Intelligence and You
250 - Special: Military Use of AI

Mar 31, 2025 · 50:03


This and all episodes at: https://aiandyou.net/ . In this special episode we are focused on the military use of AI, and making it even more special, we have not one guest but nine: Peter Asaro, co-founder and co-chair of the International Committee for Robot Arms Control; Stuart Russell, Computer Science professor at UC Berkeley, renowned co-author of the leading text on AI, and influential AI Safety expert; Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and member of the International Committee for Robot Arms Control; Tony Gillespie, author of Systems Engineering for Ethical Autonomous Systems, and a fellow in avionics and mission systems in the UK's Defence Science and Technology Laboratory; Rajiv Malhotra, author of  “Artificial Intelligence and the Future of Power: 5 Battlegrounds.” and Chairman of the Board of Governors of the Center for Indic Studies at the University of Massachusetts; David Brin, scientist and science fiction author famous for the Uplift series and Earth; Roman Yampolskiy, Associate Professor of Computer Science at the University of Louisville in Kentucky and author of AI: Unexplainable, Unpredictable, Uncontrollable; Jaan Tallinn, founder of Skype and billionaire funder of the Centre for the Study of Existential Risk and the Future of Life Institute; Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI; I've collected together portions of their appearances on earlier episodes of this show to create one interwoven narrative about the military use of AI. We talk about autonomy, killer drones, ethics of hands-off decision making, treaties, the perspectives of people and countries outside the major powers, risks of losing control, data center monitoring, and more.  All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.        

80k After Hours
Highlights: #213 – Will MacAskill on AI causing a “century in a decade” — and how we're completely unprepared

Mar 25, 2025 · 33:35


The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years. That's the future Will MacAskill — philosopher and researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.
These highlights are from episode #213 of The 80,000 Hours Podcast: Will MacAskill on AI causing a “century in a decade” — and how we're completely unprepared, and include:
Rob's intro (00:00:00)
A century of history crammed into a decade (00:00:17)
What does a good future with AGI even look like? (00:04:48)
AI takeover might happen anyway — should we rush to load in our values? (00:09:29)
Lock-in is plausible where it never was before (00:14:40)
ML researchers are feverishly working to destroy their own power (00:20:07)
People distrust utopianism for good reason (00:24:30)
Non-technological disruption (00:29:18)
The 3 intelligence explosions (00:31:10)
These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!
And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.
Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Mar 12, 2025 · 29:21


Technology doesn't force us to do anything — it merely opens doors. But military and economic competition pushes us through. That's how Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don't. Those who resist too much can find themselves taken over or rendered irrelevant.
These highlights are from episode #212 of The 80,000 Hours Podcast: Allan Dafoe on why technology is unstoppable & how to shape AI development anyway, and include:
Who's Allan Dafoe? (00:00:00)
Astounding patterns in macrohistory (00:00:23)
Are humans just along for the ride when it comes to technological progress? (00:03:58)
Flavours of technological determinism (00:07:11)
The super-cooperative AGI hypothesis and backdoors (00:12:50)
Could having more cooperative AIs backfire? (00:19:16)
The offence-defence balance (00:24:23)
These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!
And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.
Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

Effective Altruism Forum Podcast
“From Comfort Zone to Frontiers of Impact: Pursuing A Late-Career Shift to Existential Risk Reduction” by Jim Chapman

Mar 8, 2025 · 28:34


By Jim Chapman, LinkedIn.
TL;DR: In 2023, I was a 57-year-old urban planning consultant and non-profit professional with 30 years of leadership experience. After talking with my son about rationality, effective altruism, and AI risks, I decided to pursue a pivot to existential risk reduction work. The last time I had to apply for a job was in 1994. By the end of 2024, I had spent ~740 hours on courses, conferences, meetings with ~140 people, and 21 job applications. I hope that by sharing my experiences, you can gain practical insights, inspiration, and resources to navigate your career transition, especially for those who are later in their career and interested in making an impact in similar fields. I share my experience in 5 sections - sparks, take stock, start, do, meta-learnings, and next steps. [Note - as of 03/05/2025, I am still pursuing my career shift.] Sparks – [...]
---
Outline:
(01:16) Sparks - 2022
(02:29) Take Stock - 2023
(03:36) Start
(04:15) Do - 2023 and 2024
(05:13) Learn
(10:46) Get a Job
(14:21) Create a Job
(16:49) Contractor
(18:16) Meta-Learnings
(19:50) Next Steps
(20:48) Appendix A - Helpful Feedback
The original text contained 30 footnotes which were omitted from this narration. The original text contained 9 images which were described by AI.
---
First published: March 4th, 2025
Source: https://forum.effectivealtruism.org/posts/FcKpAGn75pRLsoxjE/from-comfort-zone-to-frontiers-of-impact-pursuing-a-late-1
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Principle of Charity
Should We Care About Existential Risk? Pt. 2 On the Couch

Mar 4, 2025 · 29:09


This week the Honorable Dr Andrew Leigh MP, and philosopher Peter Singer, join host Lloyd Vogelman on the couch for an unfiltered conversation that digs into the personal side of the Principle of Charity.
Peter Singer - Bio
Peter Singer is emeritus professor of bioethics at Princeton University. He has a background in philosophy and works mostly in practical ethics. He is best known for Animal Liberation and for his writings about global poverty.
In 2021, Peter received the Berggruen Prize for Philosophy and Culture. The prize comes with $1 million, which Peter donated to the most effective organizations working to assist people in extreme poverty and to reduce the suffering of animals in factory farms.
Peter is the founder of The Life You Can Save, an organization based on his book of the same name.
His writings in this area include the 1972 essay “Famine, Affluence, and Morality”, in which Peter argues for donating to help the global poor, and two books that make the case for effective giving, The Life You Can Save (2009, 2nd edition 2019) and The Most Good You Can Do (2015).
Andrew Leigh
Andrew Leigh is the Assistant Minister for Competition, Charities, Treasury and Employment, and Federal Member for Fenner in the ACT. Prior to being elected in 2010, Andrew was a professor of economics at the Australian National University. He holds a PhD in Public Policy from Harvard, having graduated from the University of Sydney with first class honours in Arts and Law. Andrew is a past recipient of the Economic Society of Australia's Young Economist Award and a Fellow of the Australian Academy of Social Sciences.
His books include Innovation + Equality: How to Create a Future That Is More Star Trek Than Terminator (with Joshua Gans) (2019), Reconnected: A Community Builder's Handbook (with Nick Terrell) (2020), What's the Worst That Could Happen? Existential Risk and Extreme Politics (2021), Fair Game: Lessons From Sport for a Fairer Society and a Stronger Economy (2022) and The Shortest History of Economics (2024).
Andrew is a keen Ironman triathlete and marathon runner, and hosts a podcast called The Good Life: Andrew Leigh in Conversation, about living a happier, healthier and more ethical life.
CREDITS
Your hosts are Lloyd Vogelman and Emile Sherman.
This podcast is proud to partner with The Ethics Centre.
Find Lloyd @LloydVogelman on LinkedIn.
Find Emile @EmileSherman on LinkedIn and X.
This podcast is produced by Jonah Primo and Sabrina Organo.
Find Jonah at jonahprimo.com or @JonahPrimo on Instagram.
Hosted on Acast. See acast.com/privacy for more information.

Principle of Charity
Should We Care About Existential Risk?

Feb 24, 2025 · 57:10


In this episode we're joined by Federal Member for Fenner, the Honorable Dr Andrew Leigh MP, and philosopher and emeritus professor of bioethics at Princeton University, Peter Singer, to consider if we should value the lives of unborn future generations more than we value those of us alive today. The consideration of lives unborn sits at the heart of ‘existential risk’. It asks us to take seriously all the future generations who, if humanity gets it right, could end up far, far more numerous than every life lived to date. We could, in fact, be just at the beginning of our beautiful journey as a species. But we do face a number of very real risks that could literally destroy us all - biowarfare, climate change and AI to name but a few.
So, should we spend our limited resources helping the poorest and most in need today, wherever they live? Or should we divert resources to reduce the sorts of risks which, if left unchecked, could prevent countless generations from coming into existence at all?
Peter Singer - Bio
Peter Singer is emeritus professor of bioethics at Princeton University. He has a background in philosophy and works mostly in practical ethics. He is best known for Animal Liberation and for his writings about global poverty. In 2021, Peter received the Berggruen Prize for Philosophy and Culture. The prize comes with $1 million, which Peter donated to the most effective organizations working to assist people in extreme poverty and to reduce the suffering of animals in factory farms.
Peter is the founder of The Life You Can Save, an organization based on his book of the same name. His writings in this area include the 1972 essay “Famine, Affluence, and Morality”, in which Peter argues for donating to help the global poor, and two books that make the case for effective giving, The Life You Can Save (2009, 2nd edition 2019) and The Most Good You Can Do (2015).
Andrew Leigh
Andrew Leigh is the Assistant Minister for Competition, Charities, Treasury and Employment, and Federal Member for Fenner in the ACT. Prior to being elected in 2010, Andrew was a professor of economics at the Australian National University. He holds a PhD in Public Policy from Harvard, having graduated from the University of Sydney with first class honours in Arts and Law. Andrew is a past recipient of the Economic Society of Australia's Young Economist Award and a Fellow of the Australian Academy of Social Sciences.
His books include Innovation + Equality: How to Create a Future That Is More Star Trek Than Terminator (with Joshua Gans) (2019), Reconnected: A Community Builder's Handbook (with Nick Terrell) (2020), What's the Worst That Could Happen? Existential Risk and Extreme Politics (2021), Fair Game: Lessons From Sport for a Fairer Society and a Stronger Economy (2022) and The Shortest History of Economics (2024).
Andrew is a keen Ironman triathlete and marathon runner, and hosts a podcast called The Good Life: Andrew Leigh in Conversation, about living a happier, healthier and more ethical life.
CREDITS
Your hosts are Lloyd Vogelman and Emile Sherman.
This podcast is proud to partner with The Ethics Centre.
Find Lloyd @LloydVogelman on LinkedIn.
Find Emile @EmileSherman on LinkedIn and X.
This podcast is produced by Jonah Primo and Sabrina Organo.
Find Jonah at jonahprimo.com or @JonahPrimo on Instagram.
Hosted on Acast. See acast.com/privacy for more information.

BAOS: Beer & Other Shhh Podcast
Episode #192: Living On The Edge, Baby with Rob Hern of Short Finger Brewing | Adjunct Series

Feb 19, 2025 · 172:00


Kitchener, ON is a tight-knit brewing scene, and one man has been a pillar of the community for around a decade now. Rob Hern launched Short Finger Brewing in 2019 after running a successful home brew shop out of the same space, and he jumped back on the pod after two years to catch Cee and Nate up. They touched on his love for his "bastard Gueuze", why he transitioned from a home brew shop to a brewery, his connection with Bebo from Third Moon, how he curates his taproom lineup, the full story behind Pulp (his legendary collab with the now-defunct Barncat Artisan Ales) and successful launch event, his Eisbock collab with the also now-defunct Half Hours On Earth, his relationship with Arabella Park, the story behind their flagship Lando, and they also talked about the Short Finger x BAOS collab for the BAOS 10th Anniversary this spring. They got into five killer beers - Lil' Sippy mixed fermentation session saison collab with Escarpment Labs, True Believer Pale Ale, Existential Risk golden sour aged in peach brandy barrels, Fresh Hell Helles Lager, and Lando (AI) variant, their neutral oak blended barrel-aged farmhouse "bastard" Gueuze. This was fantastic - cheers! BAOS Podcast Subscribe to the podcast on YouTube | Website | Theme tune: Cee - BrewHeads

Faster, Please! — The Podcast

The 2020s have so far been marked by pandemic, war, and startling technological breakthroughs. Conversations around climate disaster, great-power conflict, and malicious AI are seemingly everywhere. It's enough to make anyone feel like the end might be near. Toby Ord has made it his mission to figure out just how close we are to catastrophe — and maybe not close at all!
Ord is the author of the 2020 book, The Precipice: Existential Risk and the Future of Humanity. Back then, I interviewed Ord on the American Enterprise Institute's Political Economy podcast, and you can listen to that episode here. In 2024, he delivered his talk, The Precipice Revisited, in which he reassessed his outlook on the biggest threats facing humanity.
Today on Faster, Please — The Podcast, Ord and I address the lessons of Covid, our risk of nuclear war, potential pathways for AI, and much more.
Ord is a senior researcher at Oxford University. He has previously advised the UN, World Health Organization, World Economic Forum, and the office of the UK Prime Minister.
In This Episode
* Climate change (1:30)
* Nuclear energy (6:14)
* Nuclear war (8:00)
* Pandemic (10:19)
* Killer AI (15:07)
* Artificial General Intelligence (21:01)
Below is a lightly edited transcript of our conversation.
Climate change (1:30)
. . . the two worst pathways, we're pretty clearly not on, and so that's pretty good news that we're kind of headed more towards one of the better pathways in terms of the emissions that we'll put out there.
Pethokoukis: Let's just start out by taking a brief tour through the existential landscape and how you see it now versus when you first wrote the book The Precipice, which I've mentioned frequently in my writings. I love that book, love to see a sequel at some point, maybe one's in the works . . . but let's start with the existential risk, which has dominated many people's thinking for the past quarter-century, which is climate change.
My sense is, not just you, but many people are somewhat less worried than they were five years ago, 10 years ago. Perhaps they see at least the most extreme outcomes less likely. How do you see it?
Ord: I would agree with that. I'm not sure that everyone sees it that way, but there were two really big and good pieces of news on climate that were rarely reported in the media. One of them is that there's the question about how many emissions there'll be. We don't know how much carbon humanity will emit into the atmosphere before we get it under control, and there are these different emissions pathways, these RCP 4.5 and things like this you'll have heard of. And often, when people would give a sketch of how bad things could be, they would talk about RCP 8.5, which is the worst of these pathways, and we're very clearly not on that, and we're also, I think pretty clearly now, not on RCP 6, either. So the two worst pathways, we're pretty clearly not on, and so that's pretty good news that we're kind of headed more towards one of the better pathways in terms of the emissions that we'll put out there.
What are we doing right?
Ultimately, some of those pathways were based on business-as-usual ideas that there wouldn't be climate change as one of the biggest issues in the international political sphere over decades. So ultimately, nations have been switching over to renewables and low-carbon forms of power, which is good news. They could be doing much more of it, but it's still good news.
Back when we initially created these things, I think we would've been surprised and happy to find out that we were going to end up among the better two pathways instead of the worst ones.
The other big one is that, as well as how much we'll emit, there's the question of how bad is it to have a certain amount of carbon in the atmosphere? In particular, how much warming does it produce? And this is something of which there's been massive uncertainty. The general idea is that we're trying to predict, if we were to double the amount of carbon in the atmosphere compared to pre-industrial times, how many degrees of warming would there be? The best guess since the year I was born, 1979, has been three degrees of warming, but the uncertainty has been somewhere between one and a half degrees and four and a half.
Is that Celsius or Fahrenheit, by the way?
This is all Celsius. The climate community has kept the same uncertainty from 1979 all the way up to 2020, and it's a wild level of uncertainty: Four and a half degrees of warming is three times one and a half degrees of warming, so the range is up to triple these levels of degrees of warming based on this amount of carbon. So massive uncertainty that hadn't changed over many decades.
Now they've actually revised that and have actually brought in the range of uncertainty. Now they're pretty sure that it's somewhere between two and a half and four degrees, and this is based on better understanding of climate feedbacks. This is good news if you're concerned about worst-case climate change. It's saying it's closer to the central estimate than we'd previously thought, whereas previously we thought that there was a pretty high chance that it could even be higher than four and a half degrees of warming.
When you hear these targets of one and a half degrees of warming or two degrees of warming, they sound quite precise, but in reality, we were just so uncertain of how much warming would follow from any particular amount of emissions that it was very hard to know. And that could mean that things are better than we'd thought, but it could also mean things could be much worse. And if you are concerned about existential risks from climate change, then those kind of tail events where it's much worse than we would've thought the things would really get, and we're now pretty sure that we're not on one of those extreme emissions pathways and also that we're not in a world where the temperature is extremely sensitive to those emissions.
Nuclear energy (6:14)
Ultimately, when it comes to the deaths caused by different power sources, coal . . . killed many more people than nuclear does — much, much more . . .
What do you make of this emerging nuclear power revival you're seeing across Europe, Asia, and in the United States? At least in the United States it's partially being driven by the need for more power for these AI data centers. How does it change your perception of risk in a world where many rich countries, or maybe even not-so-rich countries, start re-embracing nuclear energy?
In terms of the local risks with the power plants, so risks of meltdown or other types of harmful radiation leak, I'm not too concerned about that. Ultimately, when it comes to the deaths caused by different power sources, coal, even setting aside global warming, just through particulates being produced in the soot, killed many more people than nuclear does — much, much more, and so nuclear is a pretty safe form of energy production as it happens, contrary to popular perception.
So I'm in favor of that. But the proliferation concerns, if it is countries that didn't already have nuclear power, then the possibility that they would be able to use that to start a weapons program would be concerning.And as sort of a mechanism for more clean energy. Do you view nuclear as clean energy?Yes, I think so. It's certainly not carbon-producing energy. I think that it has various downsides, including the difficulty of knowing exactly what to do with the fuel, that will be a very long lasting problem. But I think it's become clear that the problems caused by other forms of energy are much larger and we should switch to the thing that has fewer problems, rather than more problems.Nuclear war (8:00)I do think that the Ukraine war, in particular, has created a lot of possible flashpoints.I recently finished a book called Nuclear War: A Scenario, which is kind of a minute-by-minute look at how a nuclear war could break out. If you read the book, the book is terrifying because it really goes into a lot of — and I live near Washington DC, so when it gives its various scenarios, certainly my house is included in the blast zone, so really a frightening book. But when it tried to explain how a war would start, I didn't find it a particularly compelling book. The scenarios for actually starting a conflict, I didn't think sounded particularly realistic.Do you feel — and obviously we have Russia invade Ukraine and loose talk by Vladimir Putin about nuclear weapons — do you feel more or less confident that we'll avoid a nuclear war than you did when you wrote the book?Much less confident, actually. I guess I should say, when I wrote the book, it came out in 2020, I finished the writing in 2019, and ultimately we were in a time of relatively low nuclear risk, and I feel that the risk has risen. That said, I was trying to provide estimates for the risk over the next hundred years, and so I wasn't assuming that the low-risk period would continue indefinitely, but it was quite a shock to end up so quickly back in this period of heightened tensions and threats of nuclear escalation, the type of thing I thought was really from my parents' generation. So yes, I do think that the Ukraine war, in particular, has created a lot of possible flashpoints. That said, the temperature has come down on the conversation in the last year, so that's something.Of course, the conversation might heat right back up if we see a Chinese invasion of Taiwan. I've been very bullish about the US economy and world economy over the rest of this decade, but the exception is as long as we don't have a war with China, from an economic point of view, but certainly also a nuclear point of view. Two nuclear armed powers in conflict? That would not be an insignificant event from the existential-risk perspective.It is good that China has a smaller nuclear arsenal than the US or Russia, but there could easily be a great tragedy.Pandemic (10:19)Overall, a lot of countries really just muddled through not very well, and the large institutions that were supposed to protect us from these things, like the CDC and the WHO, didn't do a great job either.The book comes out during the pandemic. Did our response to the pandemic make you more or less confident in our ability and willingness to confront that kind of outbreak? The worst one that saw in a hundred years?Yeah, overall, it made me much less confident. 
There'd been general thought by those who look at these large catastrophic risks that when the chips are down and the threat is imminent, that people will see it and will band together and put a lot of effort into it; that once you see the asteroid in your telescope and it's headed for you, then things will really get together — a bit like in the action movies or what have you.That's where I take my cue from, exactly.And with Covid, it was kind of staring us in the face. Those of us who followed these things closely were quite alarmed a long time before the national authorities were. Overall, a lot of countries really just muddled through not very well, and the large institutions that were supposed to protect us from these things, like the CDC and the WHO, didn't do a great job either. That said, scientists, particularly developing RNA vaccines, did better than I expected.In the years leading up to the pandemic, certainly we'd seen other outbreaks, they'd had the avian flu outbreak, and you know as well as I do, there were . . . how many white papers or scenario-planning exercises for just this sort of event. I think I recall a story where, in 2018, Bill Gates had a conversation with President Trump during his first term about the risk of just such an outbreak. So it's not as if this thing came out of the blue. In many ways we saw the asteroid, it was just pretty far away. But to me, that says something again about as humans, our ability to deal with severe, but infrequent, risks.And obviously, not having a true global, nasty outbreak in a hundred years, where should we focus our efforts? On preparation? Making sure we have enough ventilators? Or our ability to respond? Because it seems like the preparation route will only go so far, and the reason it wasn't a much worse outbreak is because we have a really strong ability to respond.I'm not sure if it's the same across all risks as to how preparation versus ability to respond, which one is better. In some risks, there's also other possibilities like avoiding an outbreak, say, an accidental outbreak happening at all, or avoiding a nuclear war starting and not needing to actually respond at all. I'm not sure if there's an overall rule as to which one was better.Do you have an opinion on the outbreak of Covid?I don't know whether it was a lab leak. I think it's a very plausible hypothesis, but plausible doesn't mean it's proven.And does the post-Covid reaction, at least in the United States, to vaccines, does that make you more or less confident in our ability to deal with . . . the kind of societal cohesion and confidence to tackle a big problem, to have enough trust? Maybe our leaders don't deserve that trust, but what do you make from this kind of pushback against vaccines and — at least in the United States — our medical authorities?When Covid was first really striking Europe and America, it was generally thought that, while China was locking down the Wuhan area, that Western countries wouldn't be able to lock down, that it wasn't something that we could really do, but then various governments did order lockdowns. 
That said, if you look at the data on movement of citizens, it turns out that citizens stopped moving around prior to the lockdowns, so the lockdown announcements were more kind of like the tail, rather than the dog.But over time, citizens wanted to kind of get back out and interact more, and the rules were preventing them, and if a large fraction of the citizens were under something like house arrest for the better part of a year, would that lead to some fairly extreme resentment and some backlash, some of which was fairly irrational? Yeah, that is actually exactly the kind of thing that you would expect. It was very difficult to get a whole lot of people to row together and take the same kind of response that we needed to coordinate the response to prevent the spread, and pushing for that had some of these bad consequences, which are also going to make it harder for next time. We haven't exactly learned the right lessons.Killer AI (15:07)If we make things that are smarter than us and are not inherently able to control their values or give them moral rules to work within, then we should expect them to ultimately be calling the shots.We're more than halfway through our chat and now we're going to get to the topic probably most people would like to hear about: After the robots take our jobs, are they going to kill us? What do you think? What is your concern about AI risk?I'm quite concerned about it. Ultimately, when I wrote my book, I put AI risk as the biggest existential risk, albeit the most uncertain, as well, and I would still say that. That said, some things have gotten better since then.I would assume what makes you less confident is one, what seems to be the rapid advance — not just the rapid advance of the technology, but you have the two leading countries in a geopolitical globalization also being the leaders in the technology and not wanting to slow it down. I would imagine that would make you more worried that we will move too quickly. What would make you more confident that we would avoid any serious existential downsides?I agree with your supposition that the attempts by the US and China to turn this into some kind of arms race are quite concerning. But here are a few things: Back when I was writing the book, the leading AI systems with things like AlphaGo, if you remember that, or the Atari plane systems.Quaint. Quite quaint.It was very zero-sum, reinforcement-learning-based game playing, where these systems were learning directly to behave adversarially to other systems, and they could only understand the kind of limited aspect about the world, and struggle, and overcoming your adversary. That was really all they could do, and the idea of teaching them about ethics, or how to treat people, and the diversity of human values seemed almost impossible: How do you tell a chess program about that?But then what we've ended up with is systems that are not inherently agents, they're not inherently trying to maximize something. Rather, you ask them questions and they blurt out some answers. These systems have read more books on ethics and moral philosophy than I have, and they've read all kinds of books about the human condition. Almost all novels that have ever been published, and pretty much every page of every novel involves people judging the actions of other people and having some kind of opinions about them, and so there's a huge amount of data about human values, and how we think about each other, and what's inappropriate behavior. 
And if you ask the systems about these things, they're pretty good at judging whether something's inappropriate behavior, if you describe it.The real challenge remaining is to get them to care about that, but at least the knowledge is in the system, and that's something that previously seemed extremely difficult to do. Also, these systems, there are versions that do reasoning and that spend longer with a private text stream where they think — it's kind of like sub-vocalizing thoughts to themselves before they answer. When they do that, these systems are thinking in plain English, and that's something that we really didn't expect. If you look at all of the weights of a neural network, it's quite inscrutable, famously difficult to know what it's doing, but somehow we've ended up with systems that are actually thinking in English and where that could be inspected by some oversight process. There are a number of ways in which things are better than I'd feared.So what is your actual existential risk scenario look like? This is what you're most concerned about happening with AI.I think it's quite hard to be all that concrete on it at the moment, partly because things change so quickly. I don't think that there's going to be some kind of existential catastrophe from AI in the next couple of years, partly because the current systems require so much compute in order to run them that they can only be run at very specialized and large places, of which there's only a few in the world. So that means the possibility that they break out and copy themselves into other systems is not really there, in which case, the possibility of turning them off is much possible as well.Also, they're not yet intelligent enough to be able to execute a lengthy plan. If you have some kind of complex task for them, that requires, say, 10 steps — for example, booking a flight on the internet by clicking through all of the appropriate pages, and finding out when the times are, and managing to book your ticket, and fill in the special codes they sent to your email, and things like that. That's a somewhat laborious task and the systems can't do things like that yet. There's still the case that, even if they've got a, say, 90 percent chance of completing any particular step, that the 10 percent chances of failure add up, and eventually it's likely to fail somewhere along the line and not be able to recover. They'll probably get better at that, but at the moment, the inability to actually execute any complex plans does provide some safety.Ultimately, the concern is that, at a more abstract level, we're building systems which are smarter than us at many things, and we're attempting to make them much more general and to be smarter than us across the board. If you know that one player is a better chess player than another, suppose Magnus Carlsen's playing me at chess, I can't predict exactly how he's going to beat me, but I can know with quite high likelihood that he will end up beating me. I'll end up in checkmate, even though I don't know what moves will happen in between here and there, and I think that it's similar with AI systems. 
If we make things that are smarter than us and are not inherently able to control their values or give them moral rules to work within, then we should expect them to ultimately be calling the shots.Artificial General Intelligence (21:01)Ultimately, existential risks are global public goods problems.I frequently check out the Metaculus online prediction platform, and I think currently on that platform, 2027 for what they would call “weak AGI,” artificial general intelligence — a date which has moved up two months in the past week as we're recording this, and then I think 2031 also has accelerated for “strong AGI,” so this is pretty soon, 2027 or 2031, quite soon. Is that kind of what you're assuming is going to happen, that we're going to have to deal with very powerful technologies quite quickly?Yeah, I think that those are good numbers for the typical case, what you should be expecting. I think that a lot of people wouldn't be shocked if it turns out that there is some kind of obstacle that slows down progress and takes longer before it gets overcome, but it's also wouldn't be surprising at this point if there are no more big obstacles and it's just a matter of scaling things up and doing fairly simple processes to get it to work.It's now a multi-billion dollar industry, so there's a lot of money focused on ironing out any kinks or overcoming any obstacles on the way. So I expect it to move pretty quickly and those timelines sound very realistic. Maybe even sooner.When you wrote the book, what did you put as the risk to human existence over the next a hundred years, and what is it now?When I wrote the book, I thought it was about one in six.So it's still one in six . . . ?Yeah, I think that's still about right, and I would say that most of that is coming from AI.This isn't, I guess, a specific risk, but, to the extent that being positive about our future means also being positive on our ability to work together, countries working together, what do you make of society going in the other direction where we seem more suspicious of other countries, or more even — in the United States — more suspicious of our allies, more suspicious of international agreements, whether they're trade or military alliances. To me, I would think that the Age of Globalization would've, on net, lowered that risk to one in six, and if we're going to have less globalization, to me, that would tend to increase that risk.That could be right. Certainly increased suspicion, to the point of paranoia or cynicism about other nations and their ability to form deals on these things, is not going to be helpful at all. Ultimately, existential risks are global public goods problems. This continued functioning of human civilization is this global public good and existential risk is the opposite. And so these are things where, one way to look at it is that the US has about four percent of the world's people, so one in 25 people live in the US, and so an existential risk is hitting 25 times as many people as. So if every country is just interested in themself, they'll undervalue it by a factor of 25 or so, and the countries need to work together in order to overcome that kind of problem. Ultimately, if one of us falls victim to these risks, then we all do, and so it definitely does call out for international cooperation. And I think that it has a strong basis for international cooperation. It is in all of our interests. 
There are also verification possibilities and so on, and I'm actually quite optimistic about treaties and other ways to move forward.On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were PromisedMicro Reads▶ Economics* Tech tycoons have got the economics of AI wrong - Economist* Progress in Artificial Intelligence and its Determinants - Arxiv* The role of personality traits in shaping economic returns amid technological change - CEPR▶ Business* Tech CEOs try to reassure Wall Street after DeepSeek shock - Wapo* DeepSeek Calls for Deep Breaths From Big Tech Over Earnings - Bberg Opinion* Apple's AI Moment Is Still a Ways Off - WSJ* Bill Gates Isn't Like Those Other Tech Billionaires - NYT* OpenAI's Sam Altman and SoftBank's Masayoshi Son Are AI's New Power Couple - WSJ* SoftBank Said to Be in Talks to Invest as Much as $25 Billion in OpenAI - NYT* Microsoft sheds $200bn in market value after cloud sales disappoint - FT▶ Policy/Politics* ‘High anxiety moment': Biden's NIH chief talks Trump 2.0 and the future of US science - Nature* Government Tech Workers Forced to Defend Projects to Random Elon Musk Bros - Wired* EXCLUSIVE: NSF starts vetting all grants to comply with Trump's orders - Science* Milei, Modi, Trump: an anti-red-tape revolution is under way - Economist* FDA Deregulation of E-Cigarettes Saved Lives and Spurred Innovation - Marginal Revolution* Donald Trump revives ideas of a Star Wars-like missile shield - Economist▶ AI/Digital* Is DeepSeek Really a Threat? - PS* ChatGPT vs. Claude vs. DeepSeek: The Battle to Be My AI Work Assistant - WSJ* OpenAI teases “new era” of AI in US, deepens ties with government - Ars* AI's Power Requirements Under Exponential Growth - Rand* How DeepSeek Took a Chunk Out of Big AI - Bberg* DeepSeek poses a challenge to Beijing as much as to Silicon Valley - Economist▶ Biotech/Health* Creatine shows promise for treating depression - NS* FDA approves new, non-opioid painkiller Journavx - Wapo▶ Clean Energy/Climate* Another Boffo Energy Forecast, Just in Time for DeepSeek - Heatmap News* Column: Nuclear revival puts uranium back in the critical spotlight - Mining* A Michigan nuclear plant is slated to restart, but Trump could complicate things - Grist▶ Robotics/AVs* AIs and Robots Should Sound Robotic - IEEE Spectrum* Robot beauticians touch down in California - FT Opinion▶ Space/Transportation* A Flag on Mars? Maybe Not So Soon. - NYT* Asteroid triggers global defence plan amid chance of collision with Earth in 2032 - The Guardian* Lurking Inside an Asteroid: Life's Ingredients - NYT▶ Up Wing/Down Wing* An Ancient 'Lost City' Is Uncovered in Mexico - NYT* Reflecting on Rome, London and Chicago after the Los Angeles fires - Wapo Opinion▶ Substacks/Newsletters* I spent two days testing DeepSeek R1 - Understanding AI* China's Technological Advantage -overlapping tech-industrial ecosystems - AI Supremacy* The state of decarbonization in five charts - Exponential View* The mistake of the century - Slow Boring* The Child Penalty: An International View - Conversable Economist* Deep Deepseek History and Impact on the Future of AI - next BIG futureFaster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

Highlights from Newstalk Breakfast
What is the Doomsday clock?

Jan 28, 2025 · 5:27


Now, you might consider adjusting your watches today, as the Doomsday Clock is being updated to tell us how close we are to the end of the world. But what does the clock actually mean, and is there any chance of pushing it back? We find out with SJ Beard, Senior Research Associate at the Centre for the Study of Existential Risk.

80k After Hours
Off the Clock #7: Getting on the Crazy Train with Chi Nguyen

Jan 13, 2025 · 84:27


Watch this episode on YouTube! https://youtu.be/IRRwHCK279E
Matt, Bella, and Huon sit down with Chi Nguyen to discuss cooperating with aliens, elections of future past, and Bad Billionaires pt. 2.
Check out:
Matt's summer appearance on the BBC on funding for the arts
Chi's ECL Explainer (get in touch to support!)

AI AMA – Part 2: AI Utopia, Consciousness, and the Future of Work

Jan 8, 2025 · 121:36


In this second part of the special AMA episode, Nathan explores profound questions about AI's future and its impact on society. From painting a picture of AI utopia to discussing the challenges of consciousness and potential doom scenarios, Nathan shares insights on how we might adapt and thrive in an AI-transformed world. Join us for a thought-provoking conversation that delves into the practical strategies for engaging with AI, the role of safety measures, and the importance of maintaining ethical considerations as we navigate this technological revolution. Check out http://aipodcast.ing for AI-powered podcast production services or reach out to Adithyan for more information. Help shape our show by taking our quick listener survey at https://bit.ly/TurpentinePulse SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders like Vodafone and Thomson Reuters with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before March 31, 2024 at https://oracle.com/cognitive 80,000 Hours: 80,000 Hours is dedicated to helping you find a fulfilling career that makes a difference. With nearly a decade of research, they offer in-depth material on AI risks, AI policy, and AI safety research. Explore their articles, career reviews, and a podcast featuring experts like Anthropic CEO Dario Amodei. Everything is free, including their Career Guide. Visit https://80000hours.org/cognitiverevolution to start making a meaningful impact today. NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive. CHAPTERS: (00:00:00) Teaser (00:00:56) AI Utopia (00:05:48) Adapting to AI (00:08:01) Probability of Utopia (00:10:51) Challenging Worldviews (Part 1) (00:11:02) Sponsors: Oracle Cloud Infrastructure (OCI) | 80,000 Hours (00:13:42) Challenging Worldviews (Part 2) (00:23:50) Audience Questions (Part 1) (00:24:07) Sponsors: NetSuite (00:25:39) Audience Questions (Part 2) (00:30:15) AI in Various Fields (00:33:16) AI in Psychiatry (00:36:16) Superintelligence (00:40:50) Societal Shift with ASI (00:49:27) Doom Discourse (00:57:05) Existential Risk (01:05:53) AI Takeover (01:14:30) AI Safety Efforts (01:18:36) Model Release Secrecy (01:27:20) AI Consciousness (01:37:51) Practical AI Strategies (01:50:34) Book Recommendation (01:59:34) Outro SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://www.linkedin.com/in/nathanlabenz/ Youtube: https://www.youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.appl...

80k After Hours
Highlights: #211 – Sam Bowman on why housing still isn't fixed and what would actually work

Jan 6, 2025 · 61:20


Economist and editor of Works in Progress Sam Bowman isn't content to just condemn the Not In My Back Yard (NIMBY) mentality behind rich countries' construction stagnation. He wants to actually get a tonne of stuff built, and by that standard the strategy of attacking ‘NIMBYs’ has been an abject failure. They are too politically powerful, and if you try to crush them, sooner or later they crush you.
So Sam lays out three alternative strategies in our full interview with him — including highlights like:
Rich countries have a crisis of underconstruction (00:00:19)
The UK builds shockingly little because of its planning permission system (00:04:57)
Overcoming NIMBYism means fixing incentives (00:07:21)
NIMBYs aren't wrong: they are often harmed by development (00:10:44)
Street votes give existing residents a say (00:16:29)
It's essential to define in advance who gets a say (00:24:37)
Property tax distribution might be the most important policy you've never heard of (00:28:55)
Using aesthetics to get buy-in for new construction (00:35:48)
Locals actually really like having nuclear power plants nearby (00:44:14)
It can be really useful to let old and new institutions coexist for a while (00:48:27)
Ozempic and living in the decade that we conquered obesity (00:53:05)
Northern latitudes still need nuclear power (00:55:30)
These highlights are from episode #211 of The 80,000 Hours Podcast: Sam Bowman on why housing still isn't fixed and what would actually work. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!
And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org. (And you may have noticed this episode is longer than most of our highlights episodes — let us know if you liked that or not!)
Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #210 – Cameron Meyer Shorb on dismantling the myth that we can't do anything to help wild animals

Dec 13, 2024 · 29:56


We explored the cutting edge of wild animal welfare science in our full interview with Cameron Meyer Shorb, executive director of Wild Animal Initiative, including highlights like:
One concrete example of how we might improve wild animal welfare (00:00:16)
How many wild animals are there, and which animals are they? (00:04:24)
Why might wild animals be suffering? (00:08:40)
The objection that we shouldn't meddle in nature because nature is good (00:12:25)
Vaccines for wild animals (00:17:37)
Gene drive technologies (00:20:50)
Optimising for high-welfare landscapes (00:24:52)
These highlights are from episode #210 of The 80,000 Hours Podcast: Cameron Meyer Shorb on dismantling the myth that we can't do anything to help wild animals. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!
And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.
Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #209 – Rose Chan Loui on OpenAI's gambit to ditch its nonprofit

80k After Hours

Play Episode Listen Later Dec 11, 2024 24:13


Nonprofit legal expert Rose Chan Loui lays out the legal case and implications of OpenAI's attempt to shed its nonprofit parent. This episode is a selection of highlights from our full interview with Rose, including:How OpenAI carefully chose a complex nonprofit structure (00:00:26)The nonprofit board is out-resourced and in a tough spot (00:04:09)Is control of OpenAI 'priceless' to the nonprofit in pursuit of its mission? (00:06:47)Control of OpenAI is independently incredibly valuable and requires compensation (00:11:06)It's very important that the nonprofit gets cash and not just equity (00:16:06)How the nonprofit board can best play their hand (00:21:20)These highlights are from episode #209 of The 80,000 Hours Podcast: Rose Chan Loui on OpenAI's gambit to ditch its nonprofit. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #208 – Elizabeth Cox on the case that TV shows, movies, and novels can improve the world

80k After Hours

Play Episode Listen Later Dec 5, 2024 29:15


Elizabeth Cox — founder of the independent production company Should We Studio — makes the case that storytelling can improve the world. This episode is a selection of highlights from our full interview with Elizabeth, including:Keiran's intro (00:00:00)Empirical evidence of the impact of storytelling (00:00:16)The hits-based approach to storytelling (00:03:35)Debating the merits of thinking about target audiences (00:07:48)Ada vs other approaches to impact-focused storytelling (00:13:15)Why animation? (00:18:56)How long will humans stay relevant as creative writers, given AI advances? (00:22:40)These highlights are from episode #208 of The 80,000 Hours Podcast: Elizabeth Cox on the case that TV shows, movies, and novels can improve the world. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

Macro Musings with David Beckworth
Zachary Mazlish on the Political Implications of Inflation and the Impact of Transformative AI

Macro Musings with David Beckworth

Play Episode Listen Later Dec 2, 2024 50:02


Zachary Mazlish is an economist at the University of Oxford, and he joins David on Macro Musings to explain some recent and important macroeconomic developments, specifically the inflation linkages to the 2024 presidential election and the macroeconomic implications of transformative AI. David and Zach also discuss transformative AI's impact on asset pricing, optimal monetary policy in a world of high growth, the causes of the slowdown in trend productivity, and more. Transcript for this week's episode. Zach's Twitter: @ZMazlish Zach's Substack Zach's website David Beckworth's Twitter: @DavidBeckworth Follow us on Twitter: @Macro_Musings Check out our new AI chatbot: the Macro Musebot! Join the new Macro Musings Discord server! Join the Macro Musings mailing list! Check out our Macro Musings merch! Related Links: *Yes, Inflation Made the Median Voter Poorer* by Zachary Mazlish *Transformative AI, Existential Risk, and Real Interest Rates* by Trevor Chow, Basil Halperin, and Zachary Mazlish *Decomposing the Great Stagnation: Baumol's Cost Disease vs. “Ideas Are Getting Hard to Find”* by Basil Halperin and Zachary Mazlish *The Unexpected Compression: Competition at Work in the Low Wage Labor Market* by David Autor, Arin Dube, and Annie McGrew Timestamps: (00:00:00) – Intro (00:04:03) – Inflation Made the Median Voter Poorer: Comparing Periods of Wage Growth (00:15:26) – Inflation Made the Median Voter Poorer: The Median Change in the Wage (00:22:19) – Assessing the Feedback to Zachary's Article (00:25:05) – The Significance of Transformative AI and its Double-Edged Sword (00:27:02) – The Impact of Transformative AI on Asset Pricing and its Policy Challenges (00:38:07) – The Broader Macroeconomic Effects of Rapid Growth (00:41:05) – Optimal Monetary Policy in a World of High Growth (00:43:19) – Exploring the Causes of the Productivity Slowdown (00:49:21) – Outro

80k After Hours
Highlights: #207 – Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead

80k After Hours

Play Episode Listen Later Dec 2, 2024 22:31


Charity founder Sarah Eustis-Guthrie has a candid conversation about her experience starting and running her maternal health charity, and ultimately making the difficult decision to shut down when the programme wasn't as impactful as they expected. This episode is a selection of highlights from our full interview with Sarah:Luisa's intro (00:00:00)What it's like to found a charity (00:00:14)Yellow flags and difficult calls (00:03:17)Disappointing results (00:06:28)The ups and downs of founding an organisation (00:08:37)Entrepreneurship and being willing to make risky bets (00:12:58)Why aren't more charities shutting down? (00:16:50)How to think about shutting down (00:19:39)These highlights are from episode #207 of The 80,000 Hours Podcast: Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

21st Talks
#24 - Critiques of X-Risk, AI, and EA with Titotal

21st Talks

Play Episode Listen Later Nov 29, 2024 61:03


Become a Patron on Patreon and support the show! Check out Titotal's Articles if you enjoyed the conversation! In this conversation, Coleman and Titotal discuss critiques surrounding Effective Altruism and the rationality community. They emphasize the complexities of computational physics and the limitations of AI in making significant scientific breakthroughs. The discussion also touches on the importance of peer review, the challenges of nanotechnology, and the need for humility in scientific discourse. The speakers then delve into various themes surrounding nanotechnology, bio risk, global catastrophic risks, and critiques of effective altruism. They explore the theoretical foundations of nanotechnology, the implications of bio risk, and the potential dangers posed by omnicidal actors. The discussion also critiques the effective altruism community and emphasizes the importance of welcoming constructive criticism. Finally, they touch on the Leroy Jenkins principle, discussing how faulty AI could serve as a warning shot for future risks. Big thank you to Bilal and Jacob for helping to make this show happen! On What Matters is a Kairos.FM production.

Thoughtful Money with Adam Taggart
Grant Williams: The Perversion Of Money Is The Great Existential Risk Investors Now Face

Thoughtful Money with Adam Taggart

Play Episode Listen Later Nov 21, 2024 77:57


The markets are locked in a battle of dangerous extremes vs high complacency. Yes, there are many reasons to be concerned about today's record high valuations. But those concerns haven't mattered yet. And there's no guarantee they will on any relevant time frame to today's investors. And so the party in stocks continues on...for now. In today's interview, macro analyst Grant Williams discusses how we've reached this point: we debauched the value of money. And in doing so, we re-focused the goal of every company towards boosting the stock price, and away from creating sustainable value. How long can this continue before the markets reprice, downwards, to valuations that can be sustained by fundamentals? This is THE existential question facing investors today, Grant posits. Your financial well-being will be determined by the choice you make here. For one of the more important discussions you'll hear on investing this year, watch this interview with the great Grant Williams. WORRIED ABOUT THE MARKET? SCHEDULE YOUR FREE PORTFOLIO REVIEW with Thoughtful Money's endorsed financial advisors at https://www.thoughtfulmoney.com --- Support this podcast: https://podcasters.spotify.com/pod/show/thoughtful-money/support

Increments
#77 (Bonus) - AI Doom Debate (w/ Liron Shapira)

Increments

Play Episode Listen Later Nov 19, 2024 141:22


Back on Liron's Doom Debates podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208). We discuss Definitions of "new knowledge" The reliance of deep learning on induction Can AIs be creative? The limits of statistical prediction Predictions of what deep learning cannot accomplish Can ChatGPT write funny jokes? Trends versus principles The psychological consequences of doomerism Socials Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron Come join our discord server! DM us on twitter or send us an email to get a supersecret link The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments). Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ) Was Vaden's two week anti-debate bro reeducation camp successful? Tell us at incrementspodcast@gmail.com Special Guest: Liron Shapira.

80k After Hours
Highlights: #206 – Anil Seth on the predictive brain and how to study consciousness

80k After Hours

Play Episode Listen Later Nov 15, 2024 19:37


Neuroscientist Anil Seth explains how much we can learn about consciousness by studying the brain in these highlights from our full interview — including:Luisa's intro (00:00:00)How our brain interprets reality (00:00:15)How our brain experiences our organs (00:04:04)What psychedelics teach us about consciousness (00:07:37)The physical footprint of consciousness in the brain (00:12:10)How to study the neural correlates of consciousness (00:15:37)This is a selection of highlights from episode #206 of The 80,000 Hours Podcast: Anil Seth on the predictive brain and how to study consciousness. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

21st Talks
#23 - National Security and Surviving Disinformation with Ryan McBeth

21st Talks

Play Episode Listen Later Nov 14, 2024 91:44


Become a Patron on Patreon and support the show! Check out Ryan McBeth's channel if you enjoyed the conversation! Ryan McBeth, a software engineer and cybersecurity expert, discusses his work in combating disinformation and misinformation online. He explains his methodology of using Occam's razor to analyze information and debunk conspiracy theories. Ryan also talks about the importance of internet literacy and the promotion of critical thinking. He shares examples of disinformation he has encountered, particularly in relation to the conflict between Israel and Palestine. Ryan emphasizes the need to take information warfare seriously and calls for a stronger response to combat disinformation. The conversation covers various topics related to cybersecurity, nuclear weapons, and international relations. Some key themes include the use of nuclear weapons against naval targets, the influence of China and Russia, the role of AI in military operations, the spread of misinformation, and the importance of community engagement. The conversation also touches on the vulnerabilities of internet-connected devices and the need for robust cybersecurity measures. The conversation explores the relationship between disinformation and human error, as well as the challenges of building vulnerability into critical structures. It discusses the role of social media platforms in combating disinformation and suggests implementing tools such as vectorizing imagery and truth scores. The importance of critical thinking and fact-checking before sharing emotional content online is emphasized. The book 'The Field Guide to Understanding Human Error' is recommended for understanding the factors contributing to errors.On What Matters is a Kairos.FM production. 

80k After Hours
Highlights: #205 – Sébastien Moro on the most insane things fish can do

80k After Hours

Play Episode Listen Later Nov 12, 2024 30:55


Science writer and video blogger Sébastien Moro blows our minds with the latest research on fish consciousness, intelligence, and potential sentience.This is a selection of highlights from episode #205 of The 80,000 Hours Podcast: Sébastien Moro on the most insane things fish can do. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode.And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights:Luisa's intro (00:00:00)The ingenious innovations of Atlantic cod (00:00:19)The mirror test triumph of cleaner wrasses (00:05:46)The astounding accuracy of archerfish (00:10:30)The magnificent memory of gobies (00:14:25)The tactical teamwork of the grouper and moray eel (00:17:42)The remarkable relationships of wild guppies (00:22:01)Sébastien's take on fish consciousness (00:26:48)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

Machine Learning Street Talk
Eliezer Yudkowsky and Stephen Wolfram on AI X-risk

Machine Learning Street Talk

Play Episode Listen Later Nov 11, 2024 258:30


Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks. They traverse fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. The discourse centered on Yudkowsky's argument that advanced AI systems pose an existential threat to humanity, primarily due to the challenge of alignment and the potential for emergent goals that diverge from human values. Wolfram, while acknowledging potential risks, approached the topic from his signature measured perspective, emphasizing the importance of understanding computational systems' fundamental nature and questioning whether AI systems would necessarily develop the kind of goal-directed behavior Yudkowsky fears. *** MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested?! Please go to https://tufalabs.ai/ *** TOC: 1. Foundational AI Concepts and Risks [00:00:01] 1.1 AI Optimization and System Capabilities Debate [00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations [00:20:09] 1.3 Existential Risk and Species Succession [00:23:28] 1.4 Consciousness and Value Preservation in AI Systems 2. Ethics and Philosophy in AI [00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation [00:36:30] 2.2 Ethics and Moral Philosophy Debate [00:39:58] 2.3 Existential Risks and Digital Immortality [00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation 3. Truth and Logic in AI Systems [00:54:39] 3.1 AI Persuasion Ethics and Truth [01:01:48] 3.2 Mathematical Truth and Logic in AI Systems [01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics [01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate 4. AI Capabilities and Constraints [01:21:21] 4.1 AI Perception and Physical Laws [01:28:33] 4.2 AI Capabilities and Computational Constraints [01:34:59] 4.3 AI Motivation and Anthropomorphization Debate [01:38:09] 4.4 Prediction vs Agency in AI Systems 5. AI System Architecture and Behavior [01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction [01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior [02:09:41] 5.3 Machine Learning as Assembly of Computational Components [02:29:52] 5.4 AI Safety and Predictability in Complex Systems 6. Goal Optimization and Alignment [02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems [02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior [03:02:18] 6.3 Optimization Goals and Human Existential Risk [03:08:49] 6.4 Emergent Goals and AI Alignment Challenges 7. AI Evolution and Risk Assessment [03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory [03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate [03:56:05] 7.3 AI Risk and Biological System Analogies [04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality 8. Future Implications and Economics [04:13:01] 8.1 Economic and Proliferation Considerations SHOWNOTES (transcription, references, summary, best quotes etc): https://www.dropbox.com/scl/fi/3st8dts2ba7yob161dchd/EliezerWolfram.pdf?rlkey=b6va5j8upgqwl9s2muc924vtt&st=vemwqx7a&dl=0

Wild with Sarah Wilson
LUKE KEMP: Will our global civilisation go the way of the Roman Empire?

Wild with Sarah Wilson

Play Episode Listen Later Nov 5, 2024 74:07


Luke Kemp (historical collapse expert; associate at the Centre for the Study of Existential Risk) has studied past civilisations and mapped out a picture of how long they tend to last before they collapse, what tends to tip them and what (if anything) can be done to stall their demise. Luke works alongside Lord Martin Rees and Yuval Noah Harari, is an honorary lecturer in environmental policy at the Australian National University and his collapse insights have been covered by the BBC, the New York Times and the New Yorker. His first book, 'Goliath's Curse: The History and Future of Societal Collapse', will be published in June 2025. In this episode I get Luke to provide a bit of a 101 on how civilisations do indeed decline and perish and to update us on the latest theories on how and whether ours might make it through. The answer is surprising. SHOW NOTES: Here's Luke's original report on complex civilisation's lifespans. Keep up to date with Luke's work here. A few past Wild guests are referenced by Luke. You can catch the episode on Moloch with Liv Boeree here, the interview with Adam Mastroianni here and my chat with Nate Hagens here. The first chapter of my book serialisation – about hope – is available to everyone here. And here are the two chapters that I reference at the end. If you need to know a bit more about me… head to my "about" page. For more such conversations subscribe to my Substack newsletter, it's where I interact the most! Get your copy of my book, This One Wild and Precious Life. Let's connect on Instagram. Hosted on Acast. See acast.com/privacy for more information.

80k After Hours
Highlights: #204 – Nate Silver on making sense of SBF, and his biggest critiques of effective altruism

80k After Hours

Play Episode Listen Later Oct 30, 2024 19:20


Election forecaster Nate Silver gives his takes on: how effective altruism could be better, the stark tradeoffs we faced with COVID, whether the 13 Keys to the White House is "junk science," how to tell whose election predictions are better, and if venture capitalists really take risks.This is a selection of highlights from episode #204 of The 80,000 Hours Podcast: Nate Silver on making sense of SBF, and his biggest critiques of effective altruism. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode. And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights:Rob's intro (00:00:00)Is anyone doing better at "doing good better"? (00:00:29)Is effective altruism too big to succeed? (00:02:19)The stark tradeoffs we faced with COVID (00:06:02)The 13 Keys to the White House (00:07:53)Can we tell whose election predictions are better? (00:11:40)Do venture capitalists really take risks? (00:16:29)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame

80k After Hours

Play Episode Listen Later Oct 21, 2024 13:15


This is a selection of highlights from our April 2023 episode with host Luisa Rodriguez and producer Keiran Harris on 80k After Hours. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shameAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights:Keiran's intro (00:00:00)Jerk Syndrome (00:00:53)The basic case for free will being an illusion (00:05:10)Feeling bad about not being a different person (00:08:29)Implications for the criminal justice system (00:10:57)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation

80k After Hours

Play Episode Listen Later Oct 18, 2024 33:46


This is a selection of highlights from episode #203 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisationAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights:Luisa's intro (00:00:00)Thinking about death (00:00:24)Uploads of ourselves (00:05:32)Against intervening in wild nature (00:12:36)Eliminating the worst experiences in wild nature (00:16:15)To be human or wild animal? (00:21:46)Challenges for water-based animals (00:27:38)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Off the Clock #6: Starting Small with Conor Barnes

80k After Hours

Play Episode Listen Later Oct 15, 2024 65:43


Watch this episode on YouTube! https://youtu.be/yncw2T77OAc. Matt, Bella, and Huon sit down with Conor Barnes to discuss unlikely journeys, EA criticism, discipline, timeless decision theory, and how to do the most good with a degree in classics. Check out: Conor's 100 Tips for a Better Life (https://www.lesswrong.com/posts/7hFeMWC6Y5eaSixbD/100-tips-for-a-better-life), Conor's writing (https://parhelia.conorbarnes.com/), and Zvi on timeless decision theory (https://www.lesswrong.com/posts/scwoBEju75C45W5n3/how-i-lost-100-pounds-using-tdt).

80k After Hours
Highlights: #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science

80k After Hours

Play Episode Listen Later Oct 4, 2024 23:10


This is a selection of highlights from episode #202 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Venki Ramakrishnan on the cutting edge of anti-ageing scienceAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights:Luisa's intro (00:00:00)Is death an inevitable consequence of evolution? (00:00:15)How much additional healthspan will the next 20 to 30 years of ageing research buy us? (00:03:10)The social impacts of radical life extension (00:05:46)Could increased longevity increase inequality? (00:10:06)Does injecting an old body with young blood slow ageing? (00:14:23)Freezing cells, organs, and bodies (00:18:35)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #201 – Ken Goldberg on why your robot butler isn't here yet

80k After Hours

Play Episode Listen Later Sep 30, 2024 22:25


This is a selection of highlights from episode #201 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Ken Goldberg on why your robot butler isn't here yetAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights:Luisa's intro (00:00:00)Moravec's paradox (00:00:22)Successes in robotics to date (00:03:51)Why perception is a big challenge for robotics (00:07:02)Why low fault tolerance makes some skills extra hard to automate (00:12:29)How might robot labour affect the job market? (00:17:19)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #200 – Ezra Karger on what superforecasters and experts think about existential risks

80k After Hours

Play Episode Listen Later Sep 18, 2024 22:54


This is a selection of highlights from episode #200 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Ezra Karger on what superforecasters and experts think about existential risksAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights:Luisa's intro (00:00:00)Why we need forecasts about existential risks (00:00:26)Headline estimates of existential and catastrophic risks (00:02:43)What explains disagreements about AI risks? (00:06:18)Learning more doesn't resolve disagreements about AI risks (00:08:59)A lot of disagreement about AI risks is about when AI will pose risks (00:11:31)Cruxes about AI risks (00:15:17)Is forecasting actually useful in the real world? (00:18:24)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

The Disagreement
17: AI and Existential Risk

The Disagreement

Play Episode Listen Later Sep 12, 2024 50:43


Today's disagreement is on Artificial Intelligence and Existential Risk. In this episode, we ask the most consequential question we've asked so far on this show: Do rapidly advancing AI systems pose an existential threat to humanity?To have this conversation, we've brought together two experts: a world class computer scientist and a Silicon Valley AI entrepreneur.Roman Yampolskiy is an associate professor of Computer Engineering and Computer Science at the University of Louisville. His most recent book is: AI: Unexplainable, Unpredictable, Uncontrollable.Alan Cowen is the Chief Executive Officer of Hume AI, a startup developing “emotionally intelligent AI.” His company recently raised $50M from top-tier venture capitalists to pursue the first fully empathic AI – an AI that can both understand our emotional states and replicate them. Alan has a PhD in computational psychology from Berkeley and previously worked at Google in the DeepMind AI lab.What did you think about this episode? Email us at podcast@thedisagreement.com. You can also DM us on Instagram @thedisagreementhq.

80k After Hours
Highlights: #199 – Nathan Calvin on California's AI bill SB 1047 and its potential to shape US AI policy

80k After Hours

Play Episode Listen Later Sep 12, 2024 15:18


This is a selection of highlights from episode #199 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Nathan Calvin on California's AI bill SB 1047 and its potential to shape US AI policyAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights:Luisa's intro (00:00:00)Why we can't count on AI companies to self-regulate (00:00:21)SB 1047's impact on open source models (00:04:24)Why it's not "too early" for AI policies (00:07:54)Why working on state-level policy could have an outsized impact (00:11:47)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #198 – Meghan Barrett on challenging our assumptions about insects

80k After Hours

Play Episode Listen Later Sep 9, 2024 23:53


This is a selection of highlights from episode #198 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Meghan Barrett on challenging our assumptions about insectsAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights:Luisa's intro (00:00:00)Size diversity (00:00:16)Offspring, parental investment, and lifespan (00:03:18)Headless cockroaches (00:06:13)Is self-protective behaviour a reflex? (00:08:50)If insects feel pain, is it mild or severe? (00:11:54)Evolutionary perspective on insect sentience (00:16:53)How likely is insect sentience? (00:20:25)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task

80k After Hours

Play Episode Listen Later Sep 5, 2024 22:10


This is a selection of highlights from episode #197 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Nick Joseph on whether Anthropic's AI safety policy is up to the taskAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights:Rob's intro (00:00:00)What Anthropic's responsible scaling policy commits the company to doing (00:00:17)Why Nick is a big fan of the RSP approach (00:02:13)Are RSPs still valuable if the people using them aren't bought in? (00:05:07)Nick's biggest reservations about the RSP approach (00:08:01)Should Anthropic's RSP have wider safety buffers? (00:11:17)Alternatives to RSPs (00:14:57)Should concerned people be willing to take capabilities roles? (00:19:22)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

Keen On Democracy
Episode 2182: Andrew Leigh on how economics explains the world

Keen On Democracy

Play Episode Listen Later Sep 5, 2024 45:35


Andrew Leigh is a minister in the Australian parliament with a doctorate in economics from Harvard. Unlike many academic economists, however, Leigh has the gift of simplifying economics for all of us. His new book, How Economics Explains the World, presents economics as the prism to understand the human story. From the dawn of agriculture to AI, Leigh tells the story of how ingenuity, greed, and desire for betterment have, to an astonishing degree, determined humanity's past, present, and future. Andrew Leigh is the Assistant Minister for Competition, Charities, Treasury and Employment, and Federal Member for Fenner in the Australian Parliament. Prior to being elected in 2010, Andrew was a professor of economics at the Australian National University. He holds a PhD in Public Policy from Harvard, having graduated from the University of Sydney with first class honours in Arts and Law. Andrew is a past recipient of the Economic Society of Australia's Young Economist Award and a Fellow of the Australian Academy of Social Sciences. His books include Disconnected (2010), Battlers and Billionaires: The Story of Inequality in Australia (2013), The Economics of Just About Everything (2014), The Luck of Politics (2015), Choosing Openness: Why Global Engagement is Best for Australia (2017), Randomistas: How Radical Researchers Changed Our World (2018), Innovation + Equality: How to Create a Future That Is More Star Trek Than Terminator (with Joshua Gans) (2019), Reconnected: A Community Builder's Handbook (with Nick Terrell) (2020), What's the Worst That Could Happen? Existential Risk and Extreme Politics (2021) and Fair Game: Lessons From Sport for a Fairer Society & a Stronger Economy (2022). Andrew is a keen triathlete and marathon runner, and hosts a podcast called The Good Life: Andrew Leigh in Conversation, about living a happier, healthier and more ethical life. Andrew is the father of three sons - Sebastian, Theodore and Zachary, and lives with his wife Gweneth in Canberra.Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

80k After Hours
Highlights: #196 – Jonathan Birch on the edge cases of sentience and why they matter

80k After Hours

Play Episode Listen Later Aug 30, 2024 25:34


This is a selection of highlights from episode #196 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Jonathan Birch on the edge cases of sentience and why they matterAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Chapters:Luisa's intro (00:00:00)The history of neonatal surgery without anaesthetic (00:00:23)Overconfidence around disorders of consciousness (00:03:17)Separating abortion from the issue of foetal sentience (00:07:26)The cases for and against neural organoids (00:11:30)Artificial sentience arising from whole brain emulations of roundworms and fruit flies (00:15:45)Using citizens' assemblies to do policymaking (00:22:00)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them

80k After Hours

Play Episode Listen Later Aug 19, 2024 18:03


This is a selection of highlights from episode #195 of The 80,000 Hours Podcast.These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Sella Nevo on who's trying to steal frontier AI models, and what they could do with themAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Chapters:Luisa's intro (00:00:00)Why protect model weights? (00:00:23)SolarWinds hack (00:03:51)Zero-days (00:08:16)Side-channel attacks (00:11:45)USB cables (00:15:11)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

Crazy Wisdom
Episode #381: Why We Still Matter: Human Decision-Making in an AI-Driven Future

Crazy Wisdom

Play Episode Listen Later Aug 12, 2024 40:29


In this episode of the Crazy Wisdom Podcast, Stewart Alsop speaks with Francisco D'Agostino, a business development expert focused on helping entrepreneurs expand into new markets. They discuss a range of topics, from the ethical implications of emerging technologies like AI to the historical parallels with figures like Oppenheimer, who grappled with the consequences of their creations. The conversation also touches on the human aspects of business development, cultural considerations in market expansion, and the potential future of AI in shaping societal structures. To learn more about Francisco and his work, you can follow him on Instagram at @Pancho_D'Agostino or connect with him on LinkedIn here. Check out this GPT we trained on the conversation! Timestamps: 00:00 Introduction and Guest Welcome; 00:29 The Ethical Dilemma of New Technologies; 04:58 AI vs. Human Intelligence; 14:41 The Role of Religion and Philosophy in Technology; 28:32 Business Development Insights; 38:48 Conclusion and Contact Information. Key Insights: The Ethical Dilemmas of Innovation: The episode draws parallels between historical figures like Oppenheimer, who grappled with the moral implications of creating the atomic bomb, and modern technologists dealing with AI. Both scenarios highlight the ethical complexities that arise when powerful new technologies are developed, forcing creators to consider the broader consequences of their innovations. AI as a Tool, Not an Inherent Threat: Francisco D'Agostino emphasizes that AI, like any technology, is neutral by nature. Its impact depends entirely on how it is applied by humans. Just as a hammer can be used to build a house or cause harm, AI's effects on society will be determined by the intentions and decisions of those who control it. Human Decision-Making Remains Central in Business: Despite the rise of AI and automation, the conversation underscores that human decision-making is still the core driver of business success. Markets are fundamentally shaped by human behavior, emotions, and cultural contexts, which cannot be entirely predicted or replicated by machines. Cultural Sensitivity in Market Expansion: A key insight from Francisco's experience in business development is the importance of understanding local cultures when entering new markets. Success in one country does not guarantee the same results in another, as seen in the example of Paraguay, where cultural conservatism posed unexpected challenges. The Potential for AI to Redefine Power Structures: The discussion touches on the idea that AI could significantly alter societal power dynamics. Those who master AI technology could wield unprecedented influence, potentially creating new forms of dependency or even challenging traditional concepts of power and authority. The Role of AI in the Future of Religion: The episode explores the provocative idea that AI might either become a new object of worship or contribute to the further decline of traditional religious beliefs. This reflects a broader question about whether technology will fill the existential void left by the diminishing role of religion in modern life. The Inevitable Dependency on Technology: Both Stewart Alsop and Francisco D'Agostino reflect on how deeply integrated technology has become in daily life, creating new dependencies. This reliance on technology, while making life more convenient, also raises concerns about losing essential human skills and connections, as people increasingly turn to machines to solve problems that once required human effort and interaction.

80k After Hours
Highlights: #194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

80k After Hours

Play Episode Listen Later Aug 12, 2024 35:19


This is a selection of highlights from episode #194 of The 80,000 Hours Podcast.These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Vitalik Buterin on defensive acceleration and how to regulate AI when you fear governmentAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Chapters:Rob's intro (00:00:00)Vitalik's "d/acc" alternative (00:00:14)Biodefence (00:05:31)How much do people actually disagree? (00:09:49)Distrust of authority is a big deal (00:15:09)Info defence and X's Community Notes (00:19:35)Quadratic voting and funding (00:26:22)Vitalik's philosophy of half-assing everything (00:30:32)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #193 – Sihao Huang on the risk that US–China AI competition leads to war

80k After Hours

Play Episode Listen Later Jul 31, 2024 24:48


This is a selection of highlights from episode #193 of The 80,000 Hours Podcast.These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Sihao Huang on the risk that US–China AI competition leads to warAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Chapters:Luisa's intro (00:00:00)How advanced is Chinese AI? (00:00:25)Is China catching up to the US and UK? (00:05:14)Could China be a source of catastrophic AI risk? (00:07:50)AI enabling human rights abuses and undermining democracy (00:13:53)China's attempts at indigenising its semiconductor supply chain (00:18:14)How the US and UK might coordinate with China (00:20:32)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Highlights: #192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US

80k After Hours

Play Episode Listen Later Jul 25, 2024 23:33


This is a selection of highlights from episode #192 of The 80,000 Hours Podcast.These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the USAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Chapters:Luisa's intro (00:00:00)The minutes after an incoming nuclear attack is detected (00:00:22)Deciding whether to retaliate (00:04:24)Russian misperception of US counterattack (00:07:37)The nuclear launch plans that would kill millions in neighbouring countries (00:11:38)The war games that suggest escalation is inevitable (00:15:31)A super-electromagnetic pulse (00:19:12)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80k After Hours
Off the Clock #5: Leaving 80k with Maria Gutierrez Rojas

80k After Hours

Play Episode Listen Later Jul 23, 2024 83:23


You can check out the video version of this episode on YouTube at https://youtu.be/AUuEaYltONg. Matt, Bella, and Cody sit down with Maria Gutierrez Rojas to discuss 80k's aesthetics, religion (again), bad billionaires, and why it's hard to be an org that both gives advice and has opinions.

80k After Hours
Highlights: #191 (Part 2) – Carl Shulman on government and society after AGI

80k After Hours

Play Episode Listen Later Jul 19, 2024 33:06


This is a selection of highlights from episode #191 of The 80,000 Hours Podcast.These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Carl Shulman on government and society after AGIAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Chapters:How AI advisors could have saved us from COVID-19 (00:00:05)Why Carl doesn't support enforced pauses on AI research (00:06:34)Value lock-in (00:12:58)How democracies avoid coups (00:17:11)Building trust between adversaries about which models you can believe (00:24:00)Opportunities for listeners (00:30:11)Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong