When METR says something like "Claude Opus 4.5 has a 50% time horizon of 4 hours and 50 minutes", what does that mean? In this episode David Rein, METR researcher and co-author of the paper "Measuring AI ability to complete long tasks", talks about METR's work on measuring time horizons, the methodology behind those numbers, and what work remains to be done in this domain.

Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
Transcript: https://axrp.net/episode/2026/01/03/episode-47-david-rein-metr-time-horizons.html

Topics we discuss, and timestamps:
0:00:32 Measuring AI Ability to Complete Long Tasks
0:10:54 The meaning of "task length"
0:19:27 Examples of intermediate and hard tasks
0:25:12 Why the software engineering focus
0:32:17 Why task length as difficulty measure
0:46:32 Is AI progress going superexponential?
0:50:58 Is AI progress due to increased cost to run models?
0:54:45 Why METR measures model capabilities
1:04:10 How time horizons relate to recursive self-improvement
1:12:58 Cost of estimating time horizons
1:16:23 Task realism vs mimicking important task features
1:19:50 Excursus on "Inventing Temperature"
1:25:46 Return to task realism discussion
1:33:53 Open questions on time horizons

Links for METR:
Main website: https://metr.org/
X/Twitter account: https://x.com/METR_Evals/

Research we discuss:
Measuring AI Ability to Complete Long Tasks: https://arxiv.org/abs/2503.14499
RE-Bench: Evaluating frontier AI R&D capabilities of language model agents against human experts: https://arxiv.org/abs/2411.15114
HCAST: Human-Calibrated Autonomy Software Tasks: https://arxiv.org/abs/2503.17354
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity: https://arxiv.org/abs/2507.09089
Anthropic Economic Index: Tracking AI's role in the US and global economy: https://www.anthropic.com/research/anthropic-economic-index-september-2025-report
Bridging RL Theory and Practice with the Effective Horizon (i.e. the Cassidy Laidlaw paper): https://arxiv.org/abs/2304.09853
How Does Time Horizon Vary Across Domains?: https://metr.org/blog/2025-07-14-how-does-time-horizon-vary-across-domains/
Inventing Temperature: https://global.oup.com/academic/product/inventing-temperature-9780195337389
Is there a Half-Life for the Success Rates of AI Agents? (by Toby Ord): https://www.tobyord.com/writing/half-life
Lawrence Chan's response to the above: https://nitter.net/justanotherlaw/status/1920254586771710009
AI Task Length Horizons in Offensive Cybersecurity: https://sean-peters-au.github.io/2025/07/02/ai-task-length-horizons-in-offensive-cybersecurity.html

Episode art by Hamish Doodles: hamishdoodles.com
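For readers wondering what a figure like a "50% time horizon of 4 hours and 50 minutes" is actually measuring: roughly, the paper fits a curve of model success rate against how long each task takes skilled humans, and reads off the task length at which predicted success drops to 50%. A minimal sketch of that kind of fit (illustrative notation only, not necessarily the paper's exact parameterisation):

$P(\text{success} \mid \text{human time } t) \approx \sigma(\alpha - \beta \ln t), \qquad \text{50\% horizon: } h = e^{\alpha/\beta} \ \text{since } \sigma(0) = \tfrac{1}{2}$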
It's that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode we recorded this year, including:

Kyle Fish explaining how Anthropic's AI Claude descends into spiritual woo when left to talk to itself
Ian Dunt on why the unelected House of Lords is by far the best part of the British government
Sam Bowman's strategy to get NIMBYs to love it when things get built next to their houses
Buck Shlegeris on how to get an AI model that wants to seize control to accidentally help you foil its plans

…as well as 18 other top observations and arguments from the past year of the show.

Links to learn more, video, and full transcript: https://80k.info/best25

It's been another year of living through history, whether we asked for it or not. Luisa and Rob will be back in 2026 to help you make sense of whatever comes next — as Earth continues its indifferent journey through the cosmos, now accompanied by AI systems that can summarise our meetings and generate adequate birthday messages for colleagues we barely know.

Chapters:
Cold open (00:00:00)
Rob's intro (00:02:35)
Helen Toner on whether we're racing China to build AGI (00:03:43)
Hugh White on what he'd say to Americans (00:06:09)
Buck Shlegeris on convincing AI models they've already escaped (00:12:09)
Paul Scharre on a personal experience in Afghanistan that influenced his views on autonomous weapons (00:15:10)
Ian Dunt on how unelected septuagenarians are the heroes of UK governance (00:19:06)
Beth Barnes on AI companies being locally reasonable, but globally reckless (00:24:27)
Tyler Whitmer on one thing the California and Delaware attorneys general forced on the OpenAI for-profit as part of their restructure (00:28:02)
Toby Ord on whether rich people will get access to AGI first (00:30:13)
Andrew Snyder-Beattie on how the worst biorisks are defence dominant (00:34:24)
Eileen Yam on the most eye-watering gaps in opinions about AI between experts and the US public (00:39:41)
Will MacAskill on what a century of history crammed into a decade might feel like (00:44:07)
Kyle Fish on what happens when two instances of Claude are left to interact with each other (00:49:08)
Sam Bowman on where the Not In My Back Yard movement actually has a point (00:56:29)
Neel Nanda on how mechanistic interpretability is trying to be the biology of AI (01:03:12)
Tom Davidson on the potential to install secret AI loyalties at a very early stage (01:07:19)
Luisa and Rob discussing how medicine doesn't take the health burden of pregnancy seriously enough (01:10:53)
Marius Hobbhahn on why scheming is a very natural path for AI models — and people (01:16:23)
Holden Karnofsky on lessons for AI regulation drawn from successful farm animal welfare advocacy (01:21:29)
Allan Dafoe on how AGI is an inescapable idea but one we have to define well (01:26:19)
Ryan Greenblatt on the most likely ways for AI to take over (01:29:35)
Updates Daniel Kokotajlo has made to his forecasts since writing and publishing the AI 2027 scenario (01:32:47)
Dean Ball on why regulation invites path dependency, and that's a major problem (01:37:21)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore
Timestamps
0:00 - What Existential Risks Does AI Pose?
8:57 - How AI Systems Lie to Us
26:04 - Why We Should Be Worried About AI
33:28 - What Does It Mean for AI System to "Want" Something?
43:41 - AI Weapons and Nuclear Warfare
48:57 - Where Should We Focus Our Resources?
53:35 - What Policies Should We Enact?
This is the latest in a series of essays on AI Scaling. You can find the others on my site.

Summary: RL-training for LLMs scales surprisingly poorly. Most of its gains come from allowing LLMs to productively use longer chains of thought, letting them think longer about a problem. There is some improvement for a fixed length of answer, but not enough to drive AI progress. Given that the scaling up of pre-training compute has also stalled, we'll see less AI progress via compute scaling than you might have thought, and more of it will come from inference scaling (which has different effects on the world). That lengthens timelines and affects strategies for AI governance and safety.

The current era of improving AI capabilities using reinforcement learning (from verifiable rewards) involves two key types of scaling:
1. Scaling the amount of compute used for RL during training
2. Scaling [...]

---

Outline:
(09:12) How do these compare to pre-training scaling?
(13:42) Conclusion

---

First published: October 22nd, 2025
Source: https://forum.effectivealtruism.org/posts/TysuCdgwDnQjH3LyY/how-well-does-rl-scale

---

Narrated by TYPE III AUDIO.
This is the latest in a series of essays on AI Scaling. You can find the others on my site.

Summary: RL-training for LLMs scales surprisingly poorly. Most of its gains come from allowing LLMs to productively use longer chains of thought, letting them think longer about a problem. There is some improvement for a fixed length of answer, but not enough to drive AI progress. Given that the scaling up of pre-training compute has also stalled, we'll see less AI progress via compute scaling than you might have thought, and more of it will come from inference scaling (which has different effects on the world). That lengthens timelines and affects strategies for AI governance and safety.

The current era of improving AI capabilities using reinforcement learning (from verifiable rewards) involves two key types of scaling:
1. Scaling the amount of compute used for RL during training
2. Scaling [...]

---

Outline:
(09:46) How do these compare to pre-training scaling?
(14:16) Conclusion

---

First published: October 22nd, 2025
Source: https://www.lesswrong.com/posts/xpj6KhDM9bJybdnEe/how-well-does-rl-scale

---

Narrated by TYPE III AUDIO.
What happens when civilisation faces its greatest tests?

This compilation brings together insights from researchers, defence experts, philosophers, and policymakers on humanity's ability to survive and recover from catastrophic events. From nuclear winter and electromagnetic pulses to pandemics and climate disasters, we explore both the threats that could bring down modern civilisation and the practical solutions that could help us bounce back.

Learn more and see the full transcript: https://80k.info/cr25

Chapters:
Cold open (00:00:00)
Luisa's intro (00:01:16)
Zach Weinersmith on how settling space won't help with threats to civilisation anytime soon (unless AI gets crazy good) (00:03:12)
Luisa Rodriguez on what the world might look like after a global catastrophe (00:11:42)
Dave Denkenberger on the catastrophes that could cause global starvation (00:22:29)
Lewis Dartnell on how we could rediscover essential information if the worst happened (00:34:36)
Andy Weber on how people in US defence circles think about nuclear winter (00:39:24)
Toby Ord on risks to our atmosphere and whether climate change could really threaten civilisation (00:42:34)
Mark Lynas on how likely it is that climate change leads to civilisational collapse (00:54:27)
Lewis Dartnell on how we could recover without much coal or oil (01:02:17)
Kevin Esvelt on people who want to bring down civilisation — and how AI could help them succeed (01:08:41)
Toby Ord on whether rogue AI really could wipe us all out (01:19:50)
Joan Rohlfing on why we need to worry about more than just nuclear winter (01:25:06)
Annie Jacobsen on the effects of firestorms, rings of annihilation, and electromagnetic pulses from nuclear blasts (01:31:25)
Dave Denkenberger on disruptions to electricity and communications (01:44:43)
Luisa Rodriguez on how we might lose critical knowledge (01:53:01)
Kevin Esvelt on the pandemic scenarios that could bring down civilisation (01:57:32)
Andy Weber on tech to help with pandemics (02:15:45)
Christian Ruhl on why we need the equivalents of seatbelts and airbags to prevent nuclear war from threatening civilisation (02:24:54)
Mark Lynas on whether wide-scale famine would lead to civilisational collapse (02:37:58)
Dave Denkenberger on low-cost, low-tech solutions to make sure everyone is fed no matter what (02:49:02)
Athena Aktipis on whether society would go all Mad Max in the apocalypse (02:59:57)
Luisa Rodriguez on why she's optimistic survivors wouldn't turn on one another (03:08:02)
David Denkenberger on how resilient foods research overlaps with space technologies (03:16:08)
Zach Weinersmith on what we'd practically need to do to save a pocket of humanity in space (03:18:57)
Lewis Dartnell on changes we could make today to make us more resilient to potential catastrophes (03:40:45)
Christian Ruhl on thoughtful philanthropy to reduce the impact of catastrophes (03:46:40)
Toby Ord on whether civilisation could rebuild from a small surviving population (03:55:21)
Luisa Rodriguez on how fast populations might rebound (04:00:07)
David Denkenberger on the odds civilisation recovers even without much preparation (04:02:13)
Athena Aktipis on the best ways to prepare for a catastrophe, and keeping it fun (04:04:15)
Will MacAskill on the virtues of the potato (04:19:43)
Luisa's outro (04:25:37)

Tell us what you thought! https://forms.gle/T2PHNQjwGj2dyCqV9

Content editing: Katy Moore and Milo McGuire
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
The era of making AI smarter just by making it bigger is ending. But that doesn't mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods, and those underlying technical changes force a big rethink of what coming years will look like.

Toby Ord — Oxford philosopher and bestselling author of The Precipice — has been tracking these shifts and mapping out the implications both for governments and our lives.

Links to learn more, video, highlights, and full transcript: https://80k.info/to25

As he explains, until recently anyone could access the best AI in the world "for less than the price of a can of Coke." But unfortunately, that's over.

What changed? AI companies first made models smarter by throwing a million times as much computing power at them during training, to make them better at predicting the next word. But with high quality data drying up, that approach petered out in 2024.

So they pivoted to something radically different: instead of training smarter models, they're giving existing models dramatically more time to think — leading to the rise in "reasoning models" that are at the frontier today.

The results are impressive, but this extra computing time comes at a cost: OpenAI's o3 reasoning model achieved stunning results on a famous AI test by writing an Encyclopedia Britannica's worth of reasoning to solve individual problems at a cost of over $1,000 per question.

This isn't just technical trivia: if this improvement method sticks, it will change much about how the AI revolution plays out, starting with the fact that we can expect the rich and powerful to get access to the best AI models well before the rest of us.

Toby and host Rob discuss the implications of all that, plus the return of reinforcement learning (and the resulting increase in deception), and Toby's commitment to clarifying the misleading graphs coming out of AI companies — to separate the snake oil and fads from the reality of what's likely a "transformative moment in human history."

Recorded on May 23, 2025.

Chapters:
Cold open (00:00:00)
Toby Ord is back — for a 4th time! (00:01:20)
Everything has changed (and changed again) since 2020 (00:01:37)
Is x-risk up or down? (00:07:47)
The new scaling era: compute at inference (00:09:12)
Inference scaling means less concentration (00:31:21)
Will rich people get access to AGI first? Will the rest of us even know? (00:35:11)
The new regime makes 'compute governance' harder (00:41:08)
How 'IDA' might let AI blast past human level — or not (00:50:14)
Reinforcement learning brings back 'reward hacking' agents (01:04:56)
Will we get warning shots? Will they even help? (01:14:41)
The scaling paradox (01:22:09)
Misleading charts from AI companies (01:30:55)
Policy debates should dream much bigger (01:43:04)
Scientific moratoriums have worked before (01:56:04)
Might AI 'go rogue' early on? (02:13:16)
Lamps are regulated much more than AI (02:20:55)
Companies made a strategic error shooting down SB 1047 (02:29:57)
Companies should build in emergency brakes for their AI (02:35:49)
Toby's bottom lines (02:44:32)

Tell us what you thought! https://forms.gle/enUSk8HXiCrqSA9J8

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
The 2020s have so far been marked by pandemic, war, and startling technological breakthroughs. Conversations around climate disaster, great-power conflict, and malicious AI are seemingly everywhere. It's enough to make anyone feel like the end might be near. Toby Ord has made it his mission to figure out just how close we are to catastrophe — and maybe we're not close at all!

Ord is the author of the 2020 book The Precipice: Existential Risk and the Future of Humanity. Back then, I interviewed Ord on the American Enterprise Institute's Political Economy podcast, and you can listen to that episode here. In 2024, he delivered his talk, The Precipice Revisited, in which he reassessed his outlook on the biggest threats facing humanity.

Today on Faster, Please — The Podcast, Ord and I address the lessons of Covid, our risk of nuclear war, potential pathways for AI, and much more.

Ord is a senior researcher at Oxford University. He has previously advised the UN, World Health Organization, World Economic Forum, and the office of the UK Prime Minister.

In This Episode
* Climate change (1:30)
* Nuclear energy (6:14)
* Nuclear war (8:00)
* Pandemic (10:19)
* Killer AI (15:07)
* Artificial General Intelligence (21:01)

Below is a lightly edited transcript of our conversation.

Climate change (1:30)

. . . the two worst pathways, we're pretty clearly not on, and so that's pretty good news that we're kind of headed more towards one of the better pathways in terms of the emissions that we'll put out there.

Pethokoukis: Let's just start out by taking a brief tour through the existential landscape and how you see it now versus when you first wrote the book The Precipice, which I've mentioned frequently in my writings. I love that book, love to see a sequel at some point, maybe one's in the works . . . but let's start with the existential risk that has dominated many people's thinking for the past quarter-century: climate change.

My sense is that many people, not just you, are somewhat less worried than they were five or 10 years ago. Perhaps they see at least the most extreme outcomes as less likely. How do you see it?

Ord: I would agree with that. I'm not sure that everyone sees it that way, but there were two really big and good pieces of news on climate that were rarely reported in the media. One of them is the question of how many emissions there'll be. We don't know how much carbon humanity will emit into the atmosphere before we get it under control, and there are these different emissions pathways, these RCP 4.5 and things like this you'll have heard of. And often, when people would give a sketch of how bad things could be, they would talk about RCP 8.5, which is the worst of these pathways, and we're very clearly not on that, and we're also, I think pretty clearly now, not on RCP 6, either. So the two worst pathways, we're pretty clearly not on, and so that's pretty good news that we're kind of headed more towards one of the better pathways in terms of the emissions that we'll put out there.

What are we doing right?

Ultimately, some of those pathways were based on business-as-usual ideas that there wouldn't be climate change as one of the biggest issues in the international political sphere over decades. So ultimately, nations have been switching over to renewables and low-carbon forms of power, which is good news. They could be doing much more of it, but it's still good news.
Back when we initially created these things, I think we would've been surprised and happy to find out that we were going to end up among the better two pathways instead of the worst ones.

The other big one is that, as well as how much we'll emit, there's the question of how bad it is to have a certain amount of carbon in the atmosphere. In particular, how much warming does it produce? And this is something where there's been massive uncertainty. The general idea is that we're trying to predict, if we were to double the amount of carbon in the atmosphere compared to pre-industrial times, how many degrees of warming would there be? The best guess since the year I was born, 1979, has been three degrees of warming, but the uncertainty has been somewhere between one and a half degrees and four and a half.

Is that Celsius or Fahrenheit, by the way?

This is all Celsius. The climate community has kept the same uncertainty from 1979 all the way up to 2020, and it's a wild level of uncertainty: four and a half degrees of warming is three times one and a half degrees of warming, so the top of the range is up to triple the bottom for the same amount of carbon. So massive uncertainty that hadn't changed over many decades.

Now they've actually revised that and have brought in the range of uncertainty. Now they're pretty sure that it's somewhere between two and a half and four degrees, and this is based on better understanding of climate feedbacks. This is good news if you're concerned about worst-case climate change. It's saying it's closer to the central estimate than we'd previously thought, whereas previously we thought that there was a pretty high chance that it could even be higher than four and a half degrees of warming.

When you hear these targets of one and a half degrees of warming or two degrees of warming, they sound quite precise, but in reality, we were just so uncertain of how much warming would follow from any particular amount of emissions that it was very hard to know. And that could mean that things are better than we'd thought, but it could also mean things could be much worse. And if you are concerned about existential risks from climate change, it's those kinds of tail events, where things turn out much worse than we would've thought, that really matter. We're now pretty sure that we're not on one of those extreme emissions pathways, and also that we're not in a world where the temperature is extremely sensitive to those emissions.

Nuclear energy (6:14)

Ultimately, when it comes to the deaths caused by different power sources, coal . . . killed many more people than nuclear does — much, much more . . .

What do you make of this emerging nuclear power revival you're seeing across Europe, Asia, and in the United States? At least in the United States, it's partially being driven by the need for more power for these AI data centers. How does it change your perception of risk in a world where many rich countries, or maybe even not-so-rich countries, start re-embracing nuclear energy?

In terms of the local risks with the power plants, so risks of meltdown or other types of harmful radiation leak, I'm not too concerned about that. Ultimately, when it comes to the deaths caused by different power sources, coal, even setting aside global warming, just through particulates being produced in the soot, killed many more people than nuclear does — much, much more — and so nuclear is a pretty safe form of energy production as it happens, contrary to popular perception.
So I'm in favor of that. But as for the proliferation concerns: if it's countries that didn't already have nuclear power, then the possibility that they would be able to use that to start a weapons program would be concerning.

And as sort of a mechanism for more clean energy: do you view nuclear as clean energy?

Yes, I think so. It's certainly not carbon-producing energy. I think that it has various downsides, including the difficulty of knowing exactly what to do with the fuel, which will be a very long-lasting problem. But I think it's become clear that the problems caused by other forms of energy are much larger, and we should switch to the thing that has fewer problems, rather than more problems.

Nuclear war (8:00)

I do think that the Ukraine war, in particular, has created a lot of possible flashpoints.

I recently finished a book called Nuclear War: A Scenario, which is kind of a minute-by-minute look at how a nuclear war could break out. If you read the book, it is terrifying because it really goes into a lot of — and I live near Washington DC, so when it gives its various scenarios, certainly my house is included in the blast zone, so really a frightening book. But when it tried to explain how a war would start, I didn't find it particularly compelling. The scenarios for actually starting a conflict didn't sound particularly realistic to me.

Do you feel — and obviously we've had Russia invade Ukraine and loose talk by Vladimir Putin about nuclear weapons — do you feel more or less confident that we'll avoid a nuclear war than you did when you wrote the book?

Much less confident, actually. I guess I should say, when I wrote the book, it came out in 2020, I finished the writing in 2019, and ultimately we were in a time of relatively low nuclear risk, and I feel that the risk has risen. That said, I was trying to provide estimates for the risk over the next hundred years, and so I wasn't assuming that the low-risk period would continue indefinitely, but it was quite a shock to end up so quickly back in this period of heightened tensions and threats of nuclear escalation, the type of thing I thought was really from my parents' generation. So yes, I do think that the Ukraine war, in particular, has created a lot of possible flashpoints. That said, the temperature has come down on the conversation in the last year, so that's something.

Of course, the conversation might heat right back up if we see a Chinese invasion of Taiwan. I've been very bullish about the US economy and world economy over the rest of this decade, but the exception is as long as we don't have a war with China, from an economic point of view, but certainly also a nuclear point of view. Two nuclear-armed powers in conflict? That would not be an insignificant event from the existential-risk perspective.

It is good that China has a smaller nuclear arsenal than the US or Russia, but there could easily be a great tragedy.

Pandemic (10:19)

Overall, a lot of countries really just muddled through not very well, and the large institutions that were supposed to protect us from these things, like the CDC and the WHO, didn't do a great job either.

The book comes out during the pandemic. Did our response to the pandemic make you more or less confident in our ability and willingness to confront that kind of outbreak? The worst one we'd seen in a hundred years?

Yeah, overall, it made me much less confident.
There'd been a general thought by those who look at these large catastrophic risks that when the chips are down and the threat is imminent, people will see it and band together and put a lot of effort into it; that once you see the asteroid in your telescope and it's headed for you, then things will really come together — a bit like in the action movies or what have you.

That's where I take my cue from, exactly.

And with Covid, it was kind of staring us in the face. Those of us who followed these things closely were quite alarmed a long time before the national authorities were. Overall, a lot of countries really just muddled through not very well, and the large institutions that were supposed to protect us from these things, like the CDC and the WHO, didn't do a great job either. That said, scientists, particularly those developing RNA vaccines, did better than I expected.

In the years leading up to the pandemic, certainly we'd seen other outbreaks, like the avian flu outbreak, and you know as well as I do, there were . . . how many white papers or scenario-planning exercises for just this sort of event? I think I recall a story where, in 2018, Bill Gates had a conversation with President Trump during his first term about the risk of just such an outbreak. So it's not as if this thing came out of the blue. In many ways we saw the asteroid, it was just pretty far away. But to me, that says something, again, about our ability as humans to deal with severe but infrequent risks.

And obviously, having not had a true global, nasty outbreak in a hundred years: where should we focus our efforts? On preparation? Making sure we have enough ventilators? Or on our ability to respond? Because it seems like the preparation route will only go so far, and the reason it wasn't a much worse outbreak is because we have a really strong ability to respond.

I'm not sure it's the same across all risks, as to whether preparation or ability to respond is better. For some risks, there are also other possibilities, like avoiding an accidental outbreak happening at all, or avoiding a nuclear war starting, and not needing to actually respond at all. I'm not sure there's an overall rule as to which one is better.

Do you have an opinion on the origin of the Covid outbreak?

I don't know whether it was a lab leak. I think it's a very plausible hypothesis, but plausible doesn't mean it's proven.

And does the post-Covid reaction to vaccines, at least in the United States, make you more or less confident in our ability to deal with . . . the kind of societal cohesion and confidence needed to tackle a big problem, to have enough trust? Maybe our leaders don't deserve that trust, but what do you make of this kind of pushback against vaccines and — at least in the United States — our medical authorities?

When Covid was first really striking Europe and America, it was generally thought that, while China was locking down the Wuhan area, Western countries wouldn't be able to lock down, that it wasn't something we could really do, but then various governments did order lockdowns.
That said, if you look at the data on movement of citizens, it turns out that citizens stopped moving around prior to the lockdowns, so the lockdown announcements were more kind of like the tail, rather than the dog.

But over time, citizens wanted to get back out and interact more, and the rules were preventing them. And if a large fraction of the citizens were under something like house arrest for the better part of a year, would that lead to some fairly extreme resentment and some backlash, some of which was fairly irrational? Yeah, that is actually exactly the kind of thing you would expect. It was very difficult to get a whole lot of people to row together and take the kind of coordinated response we needed to prevent the spread, and pushing for that had some of these bad consequences, which are also going to make it harder for next time. We haven't exactly learned the right lessons.

Killer AI (15:07)

If we make things that are smarter than us and are not inherently able to control their values or give them moral rules to work within, then we should expect them to ultimately be calling the shots.

We're more than halfway through our chat, and now we're going to get to the topic probably most people would like to hear about: after the robots take our jobs, are they going to kill us? What do you think? What is your concern about AI risk?

I'm quite concerned about it. Ultimately, when I wrote my book, I put AI risk as the biggest existential risk, albeit the most uncertain as well, and I would still say that. That said, some things have gotten better since then.

I would assume what makes you less confident is, one, what seems to be the rapid advance — not just the rapid advance of the technology, but you have the two leading countries in a geopolitical competition also being the leaders in the technology and not wanting to slow it down. I would imagine that would make you more worried that we will move too quickly. What would make you more confident that we would avoid any serious existential downsides?

I agree with your supposition that the attempts by the US and China to turn this into some kind of arms race are quite concerning. But here are a few things: back when I was writing the book, the leading AI systems were things like AlphaGo, if you remember that, or the Atari game-playing systems.

Quaint. Quite quaint.

It was very zero-sum, reinforcement-learning-based game playing, where these systems were learning directly to behave adversarially to other systems, and they could only understand this kind of limited aspect of the world: struggle, and overcoming your adversary. That was really all they could do, and the idea of teaching them about ethics, or how to treat people, and the diversity of human values seemed almost impossible: how do you tell a chess program about that?

But then what we've ended up with is systems that are not inherently agents; they're not inherently trying to maximize something. Rather, you ask them questions and they blurt out some answers. These systems have read more books on ethics and moral philosophy than I have, and they've read all kinds of books about the human condition: almost all novels that have ever been published. And pretty much every page of every novel involves people judging the actions of other people and having some kind of opinions about them, so there's a huge amount of data about human values, and how we think about each other, and what's inappropriate behavior.
And if you ask the systems about these things, they're pretty good at judging whether something's inappropriate behavior, if you describe it. The real challenge remaining is to get them to care about that, but at least the knowledge is in the system, and that's something that previously seemed extremely difficult to do. Also, there are versions of these systems that do reasoning and that spend longer with a private text stream where they think — it's kind of like sub-vocalizing thoughts to themselves before they answer. When they do that, these systems are thinking in plain English, and that's something we really didn't expect. If you look at all of the weights of a neural network, it's quite inscrutable, famously difficult to know what it's doing, but somehow we've ended up with systems that are actually thinking in English, where that could be inspected by some oversight process. There are a number of ways in which things are better than I'd feared.

So what does your actual existential risk scenario look like? What is it you're most concerned about happening with AI?

I think it's quite hard to be all that concrete on it at the moment, partly because things change so quickly. I don't think that there's going to be some kind of existential catastrophe from AI in the next couple of years, partly because the current systems require so much compute in order to run that they can only be run at very specialized and large places, of which there are only a few in the world. So that means the possibility that they break out and copy themselves into other systems is not really there, in which case the possibility of turning them off is much more realistic as well.

Also, they're not yet intelligent enough to be able to execute a lengthy plan. If you have some kind of complex task for them that requires, say, 10 steps — for example, booking a flight on the internet by clicking through all of the appropriate pages, and finding out when the times are, and managing to book your ticket, and filling in the special codes they sent to your email, and things like that. That's a somewhat laborious task, and the systems can't do things like that yet. It's still the case that, even if they've got a, say, 90 percent chance of completing any particular step, the 10 percent chances of failure add up, and eventually it's likely to fail somewhere along the line and not be able to recover. They'll probably get better at that, but at the moment, the inability to actually execute any complex plans does provide some safety.

Ultimately, the concern is that, at a more abstract level, we're building systems which are smarter than us at many things, and we're attempting to make them much more general and smarter than us across the board. If you know that one player is a better chess player than another, suppose Magnus Carlsen's playing me at chess, I can't predict exactly how he's going to beat me, but I can know with quite high likelihood that he will end up beating me. I'll end up in checkmate, even though I don't know what moves will happen in between here and there, and I think that it's similar with AI systems.
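To put rough numbers on the compounding-failure point above, using the conversation's illustrative figures of ten steps at a 90 percent success rate each (a back-of-the-envelope sketch that assumes the steps fail independently):

$0.9^{10} \approx 0.35$

That is, only about a one-in-three chance of getting through all ten steps without an unrecoverable failure.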
If we make things that are smarter than us and are not inherently able to control their values or give them moral rules to work within, then we should expect them to ultimately be calling the shots.

Artificial General Intelligence (21:01)

Ultimately, existential risks are global public goods problems.

I frequently check out the Metaculus online prediction platform, and I think currently on that platform it's 2027 for what they would call "weak AGI," artificial general intelligence — a date which has moved up two months in the past week as we're recording this, and then I think 2031, which has also accelerated, for "strong AGI." So this is pretty soon: 2027 or 2031, quite soon. Is that kind of what you're assuming is going to happen, that we're going to have to deal with very powerful technologies quite quickly?

Yeah, I think that those are good numbers for the typical case, what you should be expecting. I think that a lot of people wouldn't be shocked if it turns out that there is some kind of obstacle that slows down progress and takes longer before it gets overcome, but it also wouldn't be surprising at this point if there are no more big obstacles and it's just a matter of scaling things up and doing fairly simple processes to get it to work.

It's now a multi-billion-dollar industry, so there's a lot of money focused on ironing out any kinks or overcoming any obstacles on the way. So I expect it to move pretty quickly, and those timelines sound very realistic. Maybe even sooner.

When you wrote the book, what did you put as the risk to human existence over the next hundred years, and what is it now?

When I wrote the book, I thought it was about one in six.

So it's still one in six . . . ?

Yeah, I think that's still about right, and I would say that most of that is coming from AI.

This isn't, I guess, a specific risk, but to the extent that being positive about our future means also being positive about our ability to work together, countries working together, what do you make of society going in the other direction, where we seem more suspicious of other countries, or even — in the United States — more suspicious of our allies, more suspicious of international agreements, whether they're trade or military alliances? To me, I would think that the Age of Globalization would've, on net, lowered that risk to one in six, and if we're going to have less globalization, that would tend to increase that risk.

That could be right. Certainly increased suspicion, to the point of paranoia or cynicism about other nations and their ability to form deals on these things, is not going to be helpful at all. Ultimately, existential risks are global public goods problems. The continued functioning of human civilization is a global public good, and existential risk is the opposite. One way to look at it is that the US has about four percent of the world's people, so one in 25 people live in the US, and so an existential risk hits 25 times as many people as just those in the US. So if every country is just interested in itself, it'll undervalue the risk by a factor of 25 or so, and countries need to work together in order to overcome that kind of problem. Ultimately, if one of us falls victim to these risks, then we all do, and so it definitely does call out for international cooperation. And I think there is a strong basis for international cooperation: it is in all of our interests.
There are also verification possibilities and so on, and I'm actually quite optimistic about treaties and other ways to move forward.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Micro Reads

▶ Economics
* Tech tycoons have got the economics of AI wrong - Economist
* Progress in Artificial Intelligence and its Determinants - Arxiv
* The role of personality traits in shaping economic returns amid technological change - CEPR

▶ Business
* Tech CEOs try to reassure Wall Street after DeepSeek shock - Wapo
* DeepSeek Calls for Deep Breaths From Big Tech Over Earnings - Bberg Opinion
* Apple's AI Moment Is Still a Ways Off - WSJ
* Bill Gates Isn't Like Those Other Tech Billionaires - NYT
* OpenAI's Sam Altman and SoftBank's Masayoshi Son Are AI's New Power Couple - WSJ
* SoftBank Said to Be in Talks to Invest as Much as $25 Billion in OpenAI - NYT
* Microsoft sheds $200bn in market value after cloud sales disappoint - FT

▶ Policy/Politics
* 'High anxiety moment': Biden's NIH chief talks Trump 2.0 and the future of US science - Nature
* Government Tech Workers Forced to Defend Projects to Random Elon Musk Bros - Wired
* EXCLUSIVE: NSF starts vetting all grants to comply with Trump's orders - Science
* Milei, Modi, Trump: an anti-red-tape revolution is under way - Economist
* FDA Deregulation of E-Cigarettes Saved Lives and Spurred Innovation - Marginal Revolution
* Donald Trump revives ideas of a Star Wars-like missile shield - Economist

▶ AI/Digital
* Is DeepSeek Really a Threat? - PS
* ChatGPT vs. Claude vs. DeepSeek: The Battle to Be My AI Work Assistant - WSJ
* OpenAI teases "new era" of AI in US, deepens ties with government - Ars
* AI's Power Requirements Under Exponential Growth - Rand
* How DeepSeek Took a Chunk Out of Big AI - Bberg
* DeepSeek poses a challenge to Beijing as much as to Silicon Valley - Economist

▶ Biotech/Health
* Creatine shows promise for treating depression - NS
* FDA approves new, non-opioid painkiller Journavx - Wapo

▶ Clean Energy/Climate
* Another Boffo Energy Forecast, Just in Time for DeepSeek - Heatmap News
* Column: Nuclear revival puts uranium back in the critical spotlight - Mining
* A Michigan nuclear plant is slated to restart, but Trump could complicate things - Grist

▶ Robotics/AVs
* AIs and Robots Should Sound Robotic - IEEE Spectrum
* Robot beauticians touch down in California - FT Opinion

▶ Space/Transportation
* A Flag on Mars? Maybe Not So Soon. - NYT
* Asteroid triggers global defence plan amid chance of collision with Earth in 2032 - The Guardian
* Lurking Inside an Asteroid: Life's Ingredients - NYT

▶ Up Wing/Down Wing
* An Ancient 'Lost City' Is Uncovered in Mexico - NYT
* Reflecting on Rome, London and Chicago after the Los Angeles fires - Wapo Opinion

▶ Substacks/Newsletters
* I spent two days testing DeepSeek R1 - Understanding AI
* China's Technological Advantage - overlapping tech-industrial ecosystems - AI Supremacy
* The state of decarbonization in five charts - Exponential View
* The mistake of the century - Slow Boring
* The Child Penalty: An International View - Conversable Economist
* Deep Deepseek History and Impact on the Future of AI - next BIG future

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Humanity isn't remotely longtermist, so arguments for AGI x-risk should focus on the near term, published by Seth Herd on August 13, 2024 on LessWrong.

Toby Ord recently published a nice piece On the Value of Advancing Progress about mathematical projections of far-future outcomes given different rates of progress and risk levels. The problem with that and many arguments for caution is that people usually barely care about possibilities even twenty years out. We could talk about sharp discounting curves in decision-making studies, and how that makes sense given evolutionary pressures in tribal environments. But I think this is pretty obvious from talking to people and watching our political and economic practices.

Utilitarianism is a nicely self-consistent value system. Utilitarianism pretty clearly implies longtermism. Most people don't care that much about logical consistency,[1] so they are happily non-utilitarian and non-longtermist in a variety of ways. Many arguments for AGI safety are longtermist, or at least long-term, so they're not going to work well for most of humanity. This is a fairly obvious, but worth-keeping-in-mind point.

One non-obvious lemma of this observation is that much skepticism about AGI x-risk is probably based on skepticism about AGI happening soon. This doesn't explain all skepticism, but it's a significant factor worth addressing. When people dig into their logic, that's often a central point. They start out saying "AGI wouldn't kill humans" then over the course of a conversation it turns out that they feel that way primarily because they don't think real AGI will happen in their lifetimes. Any discussion of AGI x-risks isn't productive, because they just don't care about it.

The obvious counterpoint is "You're pretty sure it won't happen soon? I didn't know you were an expert in AI or cognition!" Please don't say this - nothing convinces your opponents to cling to their positions beyond all logic like calling them stupid.[2] Something like "well, a lot of people with the most relevant expertise think it will happen pretty soon. A bunch more think it will take longer. So I just assume I don't know which is right, and it might very well happen pretty soon".

It looks to me like discussing whether AGI might threaten humans is pretty pointless if the person is still assuming it's not going to happen for a long time. Once you're past that, it might make sense to actually talk about why you think AGI would be risky for humans.[3]

1. ^ This is an aside, but you'll probably find that utilitarianism isn't that much more logical than other value systems anyway. Preferring what your brain wants you to prefer, while avoiding drastic inconsistency, has practical advantages over values that are more consistent but that clash with your felt emotions. So let's not assume humanity isn't utilitarian just because it's stupid.

2. ^ Making sure any discussions you have about x-risk are pleasant for all involved is probably actually the most important strategy. I strongly suspect that personal affinity weighs more heavily than logic on average, even for fairly intellectual people. (Rationalists are a special case; I think we're resistant but not immune to motivated reasoning).
So making a few points in a pleasant way, then moving on to other topics they like is probably way better than making the perfect logical argument while even slightly irritating them.

3. ^ From there you might be having the actual discussion on why AGI might threaten humans. Here are some things I've seen be convincing. People seem to often think "okay fine it might happen soon, but surely AI smarter than us still won't have free will and make its own goals". From there you could point out that it needs goals to be useful, and if it misunderstands those goals even slightly, it might be...
Future Affairs LIVE: Humanity is capable of creating pandemics, starting nuclear wars, and disrupting nature even further. But a moral framework for not doing so is missing. According to the world-famous moral philosopher Toby Ord, the chance that humanity perishes this century is about 1 in 6. It is high time we made a plan for our continued existence here on Earth.

Toby Ord is affiliated with the Future of Humanity Institute at the University of Oxford and is an adviser to the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, and the UK Prime Minister's Office. His research focuses on the disasters that threaten our continued existence on Earth.

We spoke with him at Brainwash Festival in Amsterdam about the big question: how much future do we have left?

Guest: Toby Ord
Presentation: Jessica van der Schalk & Wouter van Noort
Production: Brainwash Festival
Editing: Gal Tsadok-Hai

See the privacy policy at https://art19.com/privacy and the California privacy notice at https://art19.com/privacy#do-not-sell-my-info.
Are we indeed living at a historic turning point, one where humanity is being forced to live very differently? What will succeed the Anthropocene, the era in which humans took centre stage? Can the intelligence of nature help us take a new path? Can we cope with the overwhelming complexity of the challenges coming our way?

In this episode we line up the highlights of a year of conversations with pioneers and thinkers for Future Affairs. How has this podcast series changed us? And what is coming our way in the decades ahead?

With excerpts from the episodes with: Philipp Blom, Toby Ord, René ten Bos, Haroon Sheikh, Marleen Stikker, Bernardo Kastrup, Jalila Essaidi, Bob Hendrikx, and Jeremy Lent.

Subscribe to the Future Affairs newsletter here: nrc.nl/futureaffairs

Presentation: Jessica van der Schalk & Wouter van Noort
Production: Ruben Pest
Editing: Gal Tsadok-Hai

See the privacy policy at https://art19.com/privacy and the California privacy notice at https://art19.com/privacy#do-not-sell-my-info.
Last week's episode covered a man-made existential risk to humanity—nuclear war. But what about natural risks? Could there, right now, be a vast asteroid sailing through space that'll collide with Earth, sending us the way of the dinosaurs?

In this rocky episode of The Studies Show, Tom and Stuart look at the data on how often we should expect civilisation-destroying asteroids to hit Earth - and what, if anything, we can do about it if one is approaching.

The Studies Show is brought to you by Works in Progress magazine, the best place on the internet to find mind-changing essays on science, technology, and human progress. We've both written for WiP—one of Tom's articles there is the basis for this episode. You can find all their issues for free at worksinprogress.co.

Show notes
* Tom's Works in Progress article on the threat from asteroids, on which this episode is based
* Toby Ord's book The Precipice, on existential risk (including discussion of asteroids)
* Article from Finn Moorhouse on risks from asteroids
* Analysis of moon craters to work out how often asteroids hit
* And an equation to calculate the impact power of an asteroid hit, from the characteristics of the asteroid
* Report from the 2013 US Congressional hearing on threats from outer space
* NASA's explanation of how it scans space for asteroids
* Carl Sagan's 1994 article on the "dual-use" propensity of asteroid-deflection technology
* 2015 article on mining asteroids, and how nudging them closer could help
* Just one example of a recent article (2024) on asteroid deflection techniques
* 2023 Nature article about the successful DART mission to nudge an asteroid with kinetic force
* NASA's DART page with extra news and info

Credits
The Studies Show is produced by Julian Mayers at Yada Yada Productions.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thestudiesshowpod.com/subscribe
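The impact-power equation mentioned in the show notes above isn't reproduced there; as a rough sketch of the standard kinetic-energy approach (my own notation and illustrative numbers, assuming a spherical stony impactor, not necessarily the parameterisation used in the linked article):

$E = \tfrac{1}{2} m v^{2} = \tfrac{1}{2}\big(\tfrac{\pi}{6}\rho d^{3}\big) v^{2}$

With $\rho \approx 3000\ \text{kg/m}^3$, $d = 1\ \text{km}$, and $v = 20\ \text{km/s}$, this gives $E \approx 3 \times 10^{20}\ \text{J}$, or roughly 75,000 megatonnes of TNT (at $4.184 \times 10^{15}\ \text{J}$ per megatonne).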
I'm often asked about how the existential risk landscape has changed in the years since I wrote The Precipice. Earlier this year, I gave a talk on exactly that, and I want to share it here. Here's a video of the talk and a full transcript.

In the years since I wrote The Precipice, the question I'm asked most is how the risks have changed. It's now almost four years since the book came out, but the text has to be locked down a long time earlier, so we are really coming up on about five years of changes to the risk landscape. I'm going to dive into four of the biggest risks — climate change, nuclear, pandemics, and AI — to show how they've changed. Now a lot has happened over those years, and I don't want this to just be recapping the news in fast-forward. But [...]

---

Outline:
(01:30) Climate Change
(01:58) Carbon Emissions
(03:18) Climate Sensitivity
(06:43) Nuclear
(06:46) Heightened Chance of Onset
(08:16) Likely New Arms Race
(09:54) Funding Collapse
(10:53) Pandemics
(10:56) Covid
(16:03) Protective technologies
(18:59) AI in Biotech
(20:32) AI
(20:50) RL agents ⇒ language models
(24:59) Racing
(27:05) Governance
(30:14) Conclusions

The original text contained 7 images which were described by AI.

---

First published: July 12th, 2024
Source: https://forum.effectivealtruism.org/posts/iKLLSYHvnhgcpoBxH/the-precipice-revisited

---

Narrated by TYPE III AUDIO.
I'm often asked about how the existential risk landscape has changed in the years since I wrote The Precipice. Earlier this year, I gave a talk on exactly that, and I want to share it here. Here's a video of the talk and a full transcript.

In the years since I wrote The Precipice, the question I'm asked most is how the risks have changed. It's now almost four years since the book came out, but the text has to be locked down a long time earlier, so we are really coming up on about five years of changes to the risk landscape. I'm going to dive into four of the biggest risks — climate change, nuclear, pandemics, and AI — to show how they've changed. Now a lot has happened over those years, and I don't want this to just be recapping the news in fast-forward. But [...]

---

Outline:
(01:30) Climate Change
(01:58) Carbon Emissions
(03:18) Climate Sensitivity
(06:36) Nuclear
(06:39) Heightened Chance of Onset
(08:09) Likely New Arms Race
(09:48) Funding Collapse
(10:47) Pandemics
(10:50) Covid
(15:57) Protective technologies
(18:53) AI in Biotech
(20:27) AI
(20:45) RL agents ⇒ language models
(24:53) Racing
(26:59) Governance
(30:09) Conclusions

The original text contained 7 images which were described by AI.

---

First published: July 12th, 2024
Source: https://forum.effectivealtruism.org/posts/iKLLSYHvnhgcpoBxH/the-precipice-revisited

---

Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Precipice Revisited, published by Toby Ord on July 12, 2024 on The Effective Altruism Forum.

I'm often asked about how the existential risk landscape has changed in the years since I wrote The Precipice. Earlier this year, I gave a talk on exactly that, and I want to share it here. Here's a video of the talk and a full transcript.

In the years since I wrote The Precipice, the question I'm asked most is how the risks have changed. It's now almost four years since the book came out, but the text has to be locked down a long time earlier, so we are really coming up on about five years of changes to the risk landscape. I'm going to dive into four of the biggest risks - climate change, nuclear, pandemics, and AI - to show how they've changed. Now a lot has happened over those years, and I don't want this to just be recapping the news in fast-forward. But luckily, for each of these risks I think there are some key insights and takeaways that one can distill from all that has happened. So I'm going to take you through them and tease out these key updates and why they matter.

I'm going to focus on changes to the landscape of existential risk - which includes human extinction and other ways that humanity's entire potential could be permanently lost. For most of these areas, there are many other serious risks and ongoing harms that have also changed, but I won't be able to get into those. The point of this talk is to really narrow in on the changes to existential risk.

Climate Change

Let's start with climate change. We can estimate the potential damages from climate change in three steps:
1. how much carbon will we emit?
2. how much warming does that carbon produce?
3. how much damage does that warming do?

And there are key updates on the first two of these, which have mostly flown under the radar for the general public.

Carbon Emissions

The question of how much carbon we will emit is often put in terms of Representative Concentration Pathways (RCPs). Initially there were 4 of these, with higher numbers meaning more greenhouse effect in the year 2100. They are all somewhat arbitrarily chosen - meant to represent broad possibilities for how our emissions might unfold over the century. Our lack of knowledge about which path we would take was a huge source of uncertainty about how bad climate change would be. Many of the more dire climate predictions are based on the worst of these paths, RCP 8.5.

It is now clear that we are not at all on RCP 8.5, and that our own path is headed somewhere between the lower two paths. This isn't great news. Many people were hoping we could control our emissions faster than this. But for the purposes of existential risk from climate change, much of the risk comes from the worst case possibilities, so even just moving towards the middle of the range means lower existential risk - and the lower part of the middle is even better.

Climate Sensitivity

Now what about the second question of how much warming that carbon will produce? The key measure here is something called the equilibrium climate sensitivity. This is roughly defined as how many degrees of warming there would be if the concentrations of carbon in the atmosphere were to double from pre-industrial levels.
If there were no feedbacks, this would be easy to estimate: doubling carbon dioxide while keeping everything else fixed produces about 1.2°C of warming. But the climate sensitivity also accounts for many climate feedbacks, including water vapour and cloud formation. These make it higher and also much harder to estimate. When I wrote The Precipice, the IPCC stated that climate sensitivity was likely to be somewhere between 1.5°C and 4.5°C. When it comes to estimating the impacts of warming, this is a vast range, with the top giving three times as much warming as the bottom. Moreover, the...
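The roughly 1.2°C no-feedback figure quoted above can be reproduced with a standard back-of-the-envelope calculation. The sketch below assumes the common logarithmic approximation for CO2 radiative forcing (about 5.35 ln(C/C0) W/m^2) and a Planck-only response of roughly 3.2 W/m^2 per °C; both are textbook approximations rather than figures taken from the talk.

```python
import math

# Back-of-the-envelope, no-feedback warming from a doubling of CO2.
# Standard approximations (not figures from the talk itself):
#   - radiative forcing from a CO2 change: F = 5.35 * ln(C / C0) W/m^2
#   - Planck (no-feedback) response: about 3.2 W/m^2 per degree C
forcing_2x = 5.35 * math.log(2)   # ~3.7 W/m^2
planck_response = 3.2             # W/m^2 per degree C

no_feedback_warming = forcing_2x / planck_response
print(f"No-feedback warming for doubled CO2: {no_feedback_warming:.1f} C")  # ~1.2 C
```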
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the Value of Advancing Progress, published by Toby Ord on July 11, 2024 on The Effective Altruism Forum. (PDF available) I show how a standard argument for advancing progress is extremely sensitive to how humanity's story eventually ends. Whether advancing progress is ultimately good or bad depends crucially on whether it also advances the end of humanity. Because we know so little about the answer to this crucial question, the case for advancing progress is undermined. I suggest we must either overcome this objection through improving our understanding of these connections between progress and human extinction or switch our focus to advancing certain kinds of progress relative to others - changing where we are going, rather than just how soon we get there.[1] Things are getting better. While there are substantial ups and downs, long-term progress in science, technology, and values has tended to make people's lives longer, freer, and more prosperous. We could represent this as a graph of quality of life over time, giving a curve that generally trends upwards. What would happen if we were to advance all kinds of progress by a year? Imagine a burst of faster progress, where after a short period, all forms of progress end up a year ahead of where they would have been. We might think of the future trajectory of quality of life as being primarily driven by science, technology, the economy, population, culture, societal norms, moral norms, and so forth. We're considering what would happen if we could move all of these features a year ahead of where they would have been. While the burst of faster progress may be temporary, we should expect its effect of getting a year ahead to endure.[2] If we'd only advanced some domains of progress, we might expect further progress in those areas to be held back by the domains that didn't advance - but here we're imagining moving the entire internal clock of civilisation forward a year. If we were to advance progress in this way, we'd be shifting the curve of quality of life a year to the left. Since the curve is generally increasing, this would mean the new trajectory of our future is generally higher than the old one. So the value of advancing progress isn't just a matter of impatience - wanting to get to the good bits sooner - but of overall improvement in people's quality of life across the future. Figure 1. Sooner is better. The solid green curve is the default trajectory of quality of life over time, while the dashed curve is the trajectory if progress were uniformly advanced by one year (shifting the default curve to the left). Because the trajectories trend upwards, quality of life is generally higher under the advanced curve. To help see this, I've shaded the improvements to quality of life green and the worsenings red. That's a standard story within economics: progress in science, technology, and values has been making the world a better place, so a burst of faster progress that brought this all forward by a year would provide a lasting benefit for humanity. But this story is missing a crucial piece. The trajectory of humanity's future is not literally infinite. One day it will come to an end. This might be a global civilisation dying of old age, lumbering under the weight of accumulated bureaucracy or decadence.
It might be a civilisation running out of resources: either using them up prematurely or enduring until the sun itself burns out. It might be a civilisation that ends in sudden catastrophe - a natural calamity or one of its own making. If the trajectory must come to an end, what happens to the previous story of an advancement in progress being a permanent uplifting of the quality of life? The answer depends on the nature of the end time. There are two very natural possibilities. One is that the end time is fixed. It...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Robust longterm comparisons, published by Toby Ord on May 15, 2024 on The Effective Altruism Forum. (Cross-posted from http://www.tobyord.com/writing/robust-longterm-comparisons ) The choice of discount rate is crucially important when comparing options that could affect our entire future. Except when it isn't. Can we tease out a class of comparisons that everyone can agree on regardless of their views on discounting? Some of the actions we can take today may have longterm effects - permanent changes to humanity's longterm trajectory. For example, we may take risks that could lead to human extinction. Or we might irreversibly destroy parts of our environment, creating permanent reductions in the quality of life. Evaluating and comparing such effects is usually extremely sensitive to what economists call the pure rate of time preference, denoted ρ. This is a way of encapsulating how much less we should value a benefit simply because it occurs at a later time. There are other components of the overall discount rate that adjust for the fact that an extra dollar is worth less when people are richer, that later benefits may be less likely to occur - or that the entire society may have ceased to exist by then. But the pure rate of time preference is the amount by which we should discount future benefits even after all those things have been accounted for. Most attempts to evaluate or compare options with longterm effects get caught up in intractable disagreements about ρ. Philosophers almost uniformly think ρ should be set to zero, with any bias towards the present being seen as unfair. That is my usual approach, and I've developed a framework for making longterm comparisons without any pure time preference. While some prominent economists agree that ρ should be zero, the default in economic analysis is to use a higher rate, such as 1% per year. The difference between a rate of 0% and 1% is small for most things economists evaluate, where the time horizon is a generation or less. But it makes a world of difference to the value of longterm effects. For example, ρ = 1% implies that a stream of damages starting in 500 years time and lasting a billion years is less bad than a single year of such damages today. So when you see a big disagreement on how to make a tradeoff between, say, economic benefits and existential risk, you can almost always pinpoint the source to a disagreement about ρ. This is why it was so surprising to read Charles Jones's recent paper: 'The AI Dilemma: Growth versus Existential Risk'. In his examination of whether and when the economic gains from developing advanced AI could outweigh the resulting existential risk, the rate of pure time preference just cancels out. The value of ρ plays no role in his primary model. There were many other results in the paper, but it was this detail that grabbed my attention. Here was a question about trading off risk of human extinction against improved economic consumption that economists and philosophers might actually be able to agree on. After all, even better than picking the correct level of ρ, deriving the correct conclusion, and yet still having half the readers ignore your findings, is if there is a way of conducting the analysis such that you are not only correct - but that everyone else can see that too. Might we be able to generalise this happy result further? 
Is there a broader range of long run effects in which the discount rate still cancels out? Are there other disputed parameters (empirical or normative) that also cancel out in those cases? What I found is that this can indeed be greatly generalised, creating a domain in which we can robustly compare long run effects - where the comparisons are completely unaffected by different assumptions about discounting. Let's start by considering a basic model w...
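The ρ = 1% example above is easy to check numerically. The sketch below is mine, not Ord's: it assumes a constant one unit of damages per year, discounts purely at ρ, and treats a billion years as effectively an infinite horizon.

```python
# Present value of a constant stream of damages (1 unit per year) that starts
# in `start_year` years and continues indefinitely, discounted at a pure rate
# of time preference rho. Units: "one year of such damages today" = 1.0.
rho = 0.01
start_year = 500

# Geometric series: sum over t >= start_year of (1 + rho)^(-t)
pv_stream = (1 + rho) ** (-start_year) / (1 - 1 / (1 + rho))
print(f"PV of damages starting in year {start_year}: {pv_stream:.2f}")  # ~0.70

# So a billion-year stream of damages beginning in 500 years is indeed valued
# below a single year of the same damages occurring today (present value 1.0).
```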
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Supervolcanoes tail risk has been exaggerated?, published by Vasco Grilo on March 6, 2024 on The Effective Altruism Forum. This is a linkpost for the peer-reviewed article "Severe Global Cooling After Volcanic Super-Eruptions? The Answer Hinges on Unknown Aerosol Size" ( McGraw 2024). Below are its abstract, my notes, my estimation of a nearterm annual extinction risk from supervolcanoes of 3.38*10^-14, and a brief discussion of it. At the end, I have a table comparing my extinction risk estimates with Toby Ord's existential risk guesses given in The Precipice. Abstract Here is the abstract from McGraw 2024 (emphasis mine): Volcanic super-eruptions have been theorized to cause severe global cooling, with the 74 kya Toba eruption purported to have driven humanity to near-extinction. However, this eruption left little physical evidence of its severity and models diverge greatly on the magnitude of post-eruption cooling. A key factor controlling the super-eruption climate response is the size of volcanic sulfate aerosol, a quantity that left no physical record and is poorly constrained by models. Here we show that this knowledge gap severely limits confidence in model-based estimates of super-volcanic cooling, and accounts for much of the disagreement among prior studies. By simulating super-eruptions over a range of aerosol sizes, we obtain global mean responses varying from extreme cooling all the way to the previously unexplored scenario of widespread warming. We also use an interactive aerosol model to evaluate the scaling between injected sulfur mass and aerosol size. Combining our model results with the available paleoclimate constraints applicable to large eruptions, we estimate that global volcanic cooling is unlikely to exceed 1.5°C no matter how massive the stratospheric injection. Super-eruptions, we conclude, may be incapable of altering global temperatures substantially more than the largest Common Era eruptions. This lack of exceptional cooling could explain why no single super-eruption event has resulted in firm evidence of widespread catastrophe for humans or ecosystems. My notes I have no expertise in volcanology, but I found McGraw 2024 to be quite rigorous. In particular, they are able to use their model to replicate the more pessimistic results of past studies tweaking just 2 input parameters (highlighted by me below): "We next evaluate if the assessed aerosol size spread is the likely cause of disagreement among past studies with interactive aerosol models. For this task, we interpolated the peak surface temperature responses from our ModelE simulations to the injected mass and peak global mean aerosol size from several recent interactive aerosol model simulations of large eruptions (Fig. 7, left panel). Accounting for these two values alone (left panel), our model experiments are able to reproduce remarkably similar peak temperature responses as the original studies found". By "reproduce remarkably well", they are referring to a coefficient of determination (R^2) of 0.87 (see Fig. 7). "By comparison, if only the injected masses of the prior studies are used, the peak surface temperature responses cannot be reproduced". By this, they are referring to an R^2 ranging from -1.82 to -0.04[1] (see Fig. 7). They agree with past studies on the injected mass, but not on the aerosol size[2]. Fig.
3a (see below) illustrates the importance of the peak mean aerosol size. The greater the size, the weaker the cooling. I think this is explained as follows: Primarily, smaller particles reflect more sunlight per mass due to having greater cross-sectional area per mass[3]. Secondarily, larger particles have less time to reflect sunlight due to falling down faster[4]. According to Fig. 2 (see below), aerosol size increases with injected mass, which makes intuitive sen...
Remember when the airwaves were full of people questioning the idea of man-made climate change? You don't hear much from them any more - in large part because the evidence that our CO2 emissions are altering the climate has become so overwhelming.After a recap on how we know that carbon warms the climate, Tom and Stuart use this episode of The Studies Show to discuss climate predictions—er, I mean, projections—and how accurate they've been. They ask whether the media always gets it right when discussing climate (spoiler: no), and whether we should be optimistic or panicked about what's happening to the environment.The Studies Show is sponsored by Works in Progress magazine. Ever wondered what people mean when they talk about “progress studies”? Works in Progress is what they mean. It's a magazine bursting with fascinating articles on how science and technology have improved our lives - and how they could be even better in future. There's a whole new February 2024 issue out now - read it at this link.Show notes* 2023: the hottest year on record, with surprising and anomalous melting of ice in Antarctica* NASA on how the presence of CO2 in the atmosphere raises the Earth's temperature* Carbon Brief explains how scientists estimate climate sensitivity, and discusses the complexities of the latest climate models* The most recent IPCC report, from March 2023* The IEA's forecast of solar power, with the incredible and very optimistic graph mentioned in the episode:* Tom's unfortunately-titled Unherd article on the unlikely but much-discussed “RCP 8.5” scenario* Zeke Hausfather's study on matching up the projections of climate models with what actually happened years and decades later* Response from the sceptics (they still exist!)* Website offering responses to all the most common claims by climate change sceptics (e.g. “the Earth hasn't warmed since 1998”; “CO2 is plant food”)* Toby Ord on how, whereas climate change could be extremely bad, it's tricky to argue that it's a truly “existential” riskCredits and acknowledgementsThe Studies Show is produced by Julian Mayers at Yada Yada Productions. We're grateful to Karsten Haustein for talking to us for this episode (any errors are our own). This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thestudiesshowpod.com/subscribe
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear war tail risk has been exaggerated?, published by Vasco Grilo on February 26, 2024 on The Effective Altruism Forum. The views expressed here are my own, not those of Alliance to Feed the Earth in Disasters ( ALLFED), for which I work as a contractor. Summary I calculated a nearterm annual risk of human extinction from nuclear war of 5.93*10^-12 ( more). I consider grantmakers and donors interested in decreasing extinction risk had better focus on artificial intelligence ( AI) instead of nuclear war ( more). I would say the case for sometimes prioritising nuclear extinction risk over AI extinction risk is much weaker than the case for sometimes prioritising natural extinction risk over nuclear extinction risk ( more). I get a sense the extinction risk from nuclear war was massively overestimated in The Existential Risk Persuasion Tournament ( XPT) ( more). I have the impression Toby Ord greatly overestimated tail risk in The Precipice ( more). I believe interventions to decrease deaths from nuclear war should be assessed based on standard cost-benefit analysis ( more). I think increasing calorie production via new food sectors is less cost-effective to save lives than measures targeting distribution ( more). Extinction risk from nuclear war I calculated a nearterm annual risk of human extinction from nuclear war of 5.93*10^-12 (= (6.36*10^-14*5.53*10^-10)^0.5) from the geometric mean between[1]: My prior of 6.36*10^-14 for the annual probability of a war causing human extinction. My inside view estimate of 5.53*10^-10 for the nearterm annual probability of human extinction from nuclear war. By nearterm annual risk, I mean that in a randomly selected year from 2025 to 2050. I computed my inside view estimate of 5.53*10^-10 (= 0.0131*0.0422*10^-6) multiplying: 1.31 % annual probability of a nuclear weapon being detonated as an act of war. 4.22 % probability of insufficient calorie production given at least one nuclear detonation. 10^-6 probability of human extinction given insufficient calorie production. I explain the rationale for the above estimates in the next sections. Note nuclear war might have cascade effects which lead to civilisational collapse[2], which could increase longterm extinction risk while simultaneously having a negligible impact on the nearterm one I estimated. I do not explicitly assess this in the post, but I guess the nearterm annual risk of human extinction from nuclear war is a good proxy for the importance of decreasing nuclear risk from a longtermist perspective: My prior implicitly accounts for the cascade effects of wars. I derived it from historical data on the deaths of combatants due to not only fighting, but also disease and starvation, which are ever-present indirect effects of war. Nuclear war might have cascade effects, but so do other catastrophes. Global civilisational collapse due to nuclear war seems very unlikely to me. For instance, the maximum destroyable area by any country in a nuclear 1st strike was estimated to be 65.3 k km^2 in Suh 2023 (for a strike by Russia), which is just 70.8 % (= 65.3*10^3/( 92.2*10^3)) of the area of Portugal, or 3.42 % (= 65.3*10^3/( 1.91*10^6)) of the global urban area. Even if nuclear war causes a global civilisational collapse which eventually leads to extinction, I guess full recovery would be extremely likely. 
In contrast, an extinction caused by advanced AI would arguably not allow for a full recovery. I am open to the idea that nuclear war can have longterm implications even in the case of full recovery, but considerations along these lines would arguably be more pressing in the context of AI risk. For context, William MacAskill said the following on The 80,000 Hours Podcast. "It's quite plausible, actually, when we look to the very long-term future, that that's [whether artificial...
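The headline figure above is straightforward to reproduce. The short sketch below simply re-runs Grilo's stated arithmetic (his inputs, his geometric-mean aggregation); it is a check of the numbers, not an endorsement of the estimates.

```python
import math

# Inputs as quoted in the post above (Vasco Grilo's estimates).
prior = 6.36e-14                    # prior annual probability of a war causing human extinction
p_detonation = 0.0131               # annual probability of a nuclear detonation as an act of war
p_food_shortfall = 0.0422           # probability of insufficient calorie production given a detonation
p_extinction_given_shortfall = 1e-6 # probability of extinction given insufficient calorie production

# Inside-view estimate: product of the three conditional steps.
inside_view = p_detonation * p_food_shortfall * p_extinction_given_shortfall
print(f"Inside-view annual extinction risk: {inside_view:.2e}")  # ~5.53e-10

# Headline estimate: geometric mean of the prior and the inside view.
headline = math.sqrt(prior * inside_view)
print(f"Nearterm annual extinction risk: {headline:.2e}")  # ~5.93e-12
```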
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On being an EA for decades, published by Michelle Hutchinson on February 12, 2024 on The Effective Altruism Forum. A friend sent me on a lovely trip down memory lane last week. She forwarded me an email chain from 12 years ago in which we all pretended we left the EA community, and explained why. We were partly thinking through what might cause us to drift in order that we could do something to make that less likely, and partly joking around. It was a really nice reminder of how uncertain things felt back then, and how far we've come. Although the emails hadn't led us to take any specific actions, almost everyone on the thread was still devoting their time to helping others as much as they can. We're also, for the most part, still supporting each other in doing so. Of the 10 or so people on there, for example, Niel Bowerman is now my boss - CEO of 80k. Will MacAskill and Toby Ord both made good on their plans to write books and donate their salaries above £20k[1] a year. And Holly Morgan is who I turned to a couple of weeks ago when I needed help thinking through work stress. Here's what I wrote speculating about why I might drift away from EA. Note that the email below was written quickly and just trying to gesture at things I might worry about in future, I was paying very little attention to the details. The partner referenced became my husband a year and a half later, and we now have a four year old. On 10 February 2012 18:14, Michelle Hutchinson wrote: Writing this was even sadder than I expected it to be. 1. Holly's and my taste in music drove the rest of the HIH[2] crazy, and they murdered us both. 2. The feeling that I was letting down my parents, husband and children got the better of me, and I sought better paid employment. 3. Toby, Will and Nick got replaced by trustees whose good judgement and intelligence I didn't have nearly as much faith in, so I no longer felt fully supportive of the organisation - I was worried it was soon going to become preachy rather than effective, or even dangerous (researching future technology without much foresight). 4. I realised there were jobs that wouldn't involve working (or even emailing co-workers) Friday evenings :p 5. I remembered how much I loved reading philosophy all day, and teaching tutorials, and unaccountably Oxford offered me a stipendiary lectureship, so I returned to academia. 6. [My partner] got lazy career-wise, and I wanted a nice big house and expensive food, so I got a higher paid job. 7. I accidentally emailed confidential member information to our whole mailing list / said the wrong thing to one of our big funders / made a wrong call that got us into legal trouble, and it was politely suggested to me that the best way I could help the EA movement was by being nowhere near it. 8. When it became clear that although I rationally agreed with the moral positions of GWWC and CEA, most of my emotional motivation for working for them came from not wanting to let down the other [team members], it was decided really I wasn't a nice enough person to have a position of responsibility in such a great organisation. 9. [My partner] got a job too far away from other CEA people for me to be able to work for CEA and be with him. I chose him. 10. When we had children I took maternity leave, and then couldn't bear to leave the children to return to work. 11. 
I tried to give a talk about GWWC to a large audience, fainted with fright, and am now in a coma. When I wrote the email, I thought it was really quite likely that in 10 years I'd have left the organisation and community. Looking around the world, it seemed like a lot of people become less idealistic as they grow older. And looking inside myself, it felt pretty contingent that I happened to fall in with a group of people who supp...
A friend sent me on a lovely trip down memory lane last week. She forwarded me an email chain from 12 years ago in which we all pretended we left the EA community, and explained why. We were partly thinking through what might cause us to drift in order that we could do something to make that less likely, and partly joking around. It was a really nice reminder of how uncertain things felt back then, and how far we've come. Although the emails hadn't led us to take any specific actions, almost everyone on the thread was still devoting their time to helping others as much as they can. We're also, for the most part, still supporting each other in doing so. Of the 10 or so people on there, for example, Niel Bowerman is now my boss - CEO of 80k. Will MacAskill and Toby Ord [...] The original text contained 3 footnotes which were omitted from this narration. --- First published: February 12th, 2024 Source: https://forum.effectivealtruism.org/posts/zEMvHK9Qa4pczWbJg/on-being-an-ea-for-decades --- Narrated by TYPE III AUDIO.
Rebroadcast: this episode was originally released in October 2021.Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation.But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster.According to Carl Shulman, research associate at Oxford University's Future of Humanity Institute, that means you don't need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future.Links to learn more, summary, and full transcript.The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American.So saving all US citizens at any given point in time would be worth $1,300 trillion.If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone.Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio — likely over 1000:1 today.This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it was directly promoted by famous economists like Richard Posner, Larry Summers, and Cass Sunstein.If the case is clear enough, why hasn't it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve?Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed is what voters had on their minds.Carl suspects another reason is that it's difficult for the average voter to estimate and understand how large these respective risks are, and what responses would be appropriate rather than self-serving. 
If the public doesn't know what good performance looks like, politicians can't be given incentives to do the right thing.It's reasonable to assume that if we found out a giant asteroid were going to crash into the Earth one year from now, most of our resources would be quickly diverted into figuring out how to avert catastrophe.But even in the case of COVID-19, an event that massively disrupted the lives of everyone on Earth, we've still seen a substantial lack of investment in vaccine manufacturing capacity and other ways of controlling the spread of the virus, relative to what economists recommended.Carl expects that all the reasons we didn't adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we've never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on.Today's episode is in part our way of trying to improve this situation. In today's wide-ranging conversation, Carl and Rob also cover:A few reasons Carl isn't excited by ‘strong longtermism'How x-risk reduction compares to GiveWell recommendationsSolutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate changeThe history of bioweaponsWhether gain-of-function research is justifiableSuccesses and failures around COVID-19The history of existential riskAnd much moreProducer: Keiran HarrisAudio mastering: Ben CordellTranscriptions: Katy Moore
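The back-of-the-envelope argument is simple enough to reproduce in a few lines. In the sketch below, the $4 million value per life and the one-in-six century risk are taken from the episode description; the US population of roughly 325 million and the reading of "1%" as a relative reduction in that risk are my assumptions, chosen because they recover the quoted $1,300 trillion and $2.2 trillion figures.

```python
# Back-of-the-envelope cost-benefit test for existential risk reduction.
value_per_life = 4e6       # USD a US agency will spend to save one citizen's life
us_population = 325e6      # assumed population (roughly matches the quoted total)
extinction_risk = 1 / 6    # Toby Ord's rough estimate for this century

value_all_lives = value_per_life * us_population
print(f"Death of all Americans: ${value_all_lives:,.0f}")  # ~$1,300,000,000,000,000

# A 1% relative reduction of the 1-in-6 risk, valued in American lives alone.
willingness_to_pay = value_all_lives * extinction_risk * 0.01
print(f"Worth spending up to: ${willingness_to_pay:,.0f}")  # ~$2.2 trillion
```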
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exaggerating the risks (Part 13: Ord on Biorisk), published by Vasco Grilo on December 31, 2023 on The Effective Altruism Forum. This is a crosspost to Exaggerating the risks (Part 13: Ord on Biorisk), as published by David Thorstad on 29 December 2023. This massive democratization of technology in biological sciences … is at some level fantastic. People are very excited about it. But this has this dark side, which is that the pool of people that could include someone who has … omnicidal tendencies grows many, many times larger, thousands or millions of times larger as this technology is democratized, and you have more chance that you get one of these people with this very rare set of motivations where they're so misanthropic as to try to cause … worldwide catastrophe. Toby Ord, 80,000 Hours Interview Listen to this post [there is an option for this in the original post] 1. Introduction This is Part 13 of my series Exaggerating the risks. In this series, I look at some places where leading estimates of existential risk look to have been exaggerated. Part 1 introduced the series. Parts 2-5 ( sub-series: "Climate risk") looked at climate risk. Parts 6-8 ( sub-series: "AI risk") looked at the Carlsmith report on power-seeking AI. Parts 9, 10 and 11 began a new sub-series on biorisk. In Part 9, we saw that many leading effective altruists give estimates between 1.0-3.3% for the risk of existential catastrophe from biological causes by 2100. I think these estimates are a bit too high. Because I have had a hard time getting effective altruists to tell me directly what the threat is supposed to be, my approach was to first survey the reasons why many biosecurity experts, public health experts, and policymakers are skeptical of high levels of near-term existential biorisk. Parts 9, 10 and 11 gave a dozen preliminary reasons for doubt, surveyed at the end of Part 11. The second half of my approach is to show that initial arguments by effective altruists do not overcome the case for skepticism. Part 12 examined a series of risk estimates by Piers Millett and Andrew Snyder-Beattie. We saw, first, that many of these estimates are orders of magnitude lower than those returned by leading effective altruists and second, that Millett and Snyder-Beattie provide little in the way of credible support for even these estimates. Today's post looks at Toby Ord's arguments in The Precipice for high levels of existential risk. Ord estimates the risk of irreversible existential catastrophe by 2100 from naturally occurring pandemics at 1/10,000, and the risk from engineered pandemics at a whopping 1/30. That is a very high number. In this post, I argue that Ord does not provide sufficient support for either of his estimates. 2. Natural pandemics Ord begins with a discussion of natural pandemics. I don't want to spend too much time on this issue, since Ord takes the risk of natural pandemics to be much lower than that of engineered pandemics. At the same time, it is worth asking how Ord arrives at a risk of 1/10,000. Effective altruists effectively stress that humans have trouble understanding how large certain future-related quantities can be. For example, there might be 10^20, 10^50 or even 10^100 future humans. However, effective altruists do not equally stress how small future-related probabilities can be.
Risk probabilities can be on the order of 10^-2 or even 10^-5, but they can also be a great deal lower than that: for example, 10^-10, 10^-20, or 10^-50 [for example, a terrorist attack causing human extinction is astronomically unlikely on priors]. Most events pose existential risks of this magnitude or lower, so if Ord wants us to accept that natural pandemics have a 1/10,000 chance of leading to irreversible existential catastrophe by 2100, Ord owes us a solid argument for this conclusion. It ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An exhaustive list of cosmic threats, published by JordanStone on December 5, 2023 on The Effective Altruism Forum. Toby Ord covers 'Asteroids and Comets' and 'Stellar Explosions' in The Precipice. But I thought it would be useful to provide an up-to-date and exhaustive list of all cosmic threats. I'm defining cosmic threat here as any existential risk potentially arising from space. I think this list may be useful for 3 main reasons: New cosmic threats are discovered frequently. So it's plausible that future cause areas could pop out of this space. I think that keeping an eye on it should help identify areas that may need research. Though it should be noted that some of the risks are totally impossible to protect against at this point (e.g. a rogue planet entering our solar system). Putting all of the cosmic threats together in one place could reveal that cosmic threats are more important than previously thought, or provide a good intro for someone interested in working in this space. There is momentum in existential risk reduction from outer space, with great powers (Russia, USA, China, India, Europe) already collaborating on asteroid impact risk. So harnessing that momentum to tackle some more of the risks on this list could be really tractable and may lead to collaboration on other x-risks like AI, biotech and nuclear. I will list each cosmic threat, provide a brief explanation, and find the best evidence I can to provide severity and probability estimates for each. Enjoy :) I'll use this format: Cosmic Threat [Severity of worst case scenario /10] [Probability of that scenario occurring in the next 100 years] Explanation of threat Explanation of rationale and approach Severity estimates For the severity, 10 is the extinction of all intelligent life on Earth, and 0 is a fart in the wind. It was difficult to pin down one number for threats with multiple outcomes (e.g. asteroids have different sizes). So the severity estimates are for the worst-case scenarios for each cosmic threat, and the probability estimate corresponds to that scenario. Probability estimates Probabilities are presented as % chance of that scenario occurring in the next 100 years. I have taken probabilities from the literature and converted values to normalise them as a probability of their occurrence within the next 100 years (as a %). This isn't a perfect way to do it, but I prioritised getting a general understanding of their probability, rather than numbers that are hard to imagine. When the severity or likelihood is unclear or not researched well enough, I've written 'unknown'. I'm trying my best to ignore reasoning along the lines of "if it hasn't happened before, then it very likely won't happen ever or is extremely rare" because of the anthropic principle. Our view of past events on Earth is biased towards a world that has allowed humanity to evolve, which likely required a few billion years of stable-ish conditions. So it is likely that we have just been lucky in the past, where no cosmic threats have disturbed Earth's habitability so extremely as to set back life's evolution by billions of years (not even the worst mass extinction ever at the Permian-Triassic boundary did this, as reptiles survived). 
An Exhaustive List of Cosmic Threats Format: Cosmic Threat [Severity of worst case scenario /10] [Probability of that scenario occurring in the next 100 years] Explanation of threat Solar flares [4/10] [1%]. Electromagnetic radiation erupts from the surface of the sun. Solar flares occur fairly regularly and cause minor impacts, mainly on communications. A large solar flare has the potential to cause electrical grids to fail, damage satellites, disrupt radio signals, cause increased radiation influx, destroy data storage devices, cause navigation errors, and permanently damage scientific eq...
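The post's probability column relies on converting per-year rates from the literature into a chance of occurrence within the next 100 years. The author doesn't spell out the conversion, but the standard one, assuming a constant and independent annual probability, looks like the sketch below; treat it as one plausible reading of the method rather than the author's exact procedure.

```python
def prob_within_century(annual_probability: float) -> float:
    """Chance of at least one occurrence in the next 100 years,
    assuming a constant, independent probability each year."""
    return 1 - (1 - annual_probability) ** 100

# Illustrative only (hypothetical rate, not a figure from the post): an event
# with a 1-in-1,000 chance per year has roughly a 9.5% chance over a century.
print(f"{prob_within_century(1e-3):.1%}")  # ~9.5%
```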
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 10 years of Earning to Give, published by AGB on November 8, 2023 on The Effective Altruism Forum. General note: The bulk of this post was written a couple of months ago, but I am releasing it now to coincide with the Effective Giving Spotlight week. I shortly expect to release a second post documenting some observations on the community building funding landscape. Introduction Way back in 2010, I was sitting in my parents' house, watching one of my favourite TV shows, the UK's Daily Politics. That day's guest was an Oxford academic by the name of Toby Ord. He was donating everything above £18000 (£26300 in today's money) to charity, and gently pushing others to give 10%. "Nice guy," I thought. "Pity it'll never catch on." Two years later, a couple of peers interned at Giving What We Can. At the same time, I did my own internship in finance, and my estimate of my earning potential quadrupled[1]. One year after that, I graduated and took the Giving What We Can pledge myself. While my pledge form read that I had committed to donate 20% of my income, my goal was to hit far higher percentages. How did that go? Post goals Earning To Give was one of EA's first ideas to get major mainstream attention, much of it negative. Some was mean-spirited, but some of it read to me as a genuine attempt to warn young people about what they were signing up for. For example, from the linked David Brooks piece: From the article, Trigg seems like an earnest, morally serious man... First, you might start down this course seeing finance as a convenient means to realize your deepest commitment: fighting malaria. But the brain is a malleable organ....Every hour you spend with others, you become more like the people around you. If there is a large gap between your daily conduct and your core commitment, you will become more like your daily activities and less attached to your original commitment. You will become more hedge fund, less malaria. There's nothing wrong with working at a hedge fund, but it's not the priority you started out with. At the time, EAs had little choice but to respond to such speculation with speculation of their own. At this point, I can at least answer how some things have played out for me personally. I have divided this post into reflections on my personal EtG path and on the EA community. My path First, some context. Over the past decade: My wife Denise and I have donated £1.5m.[2] This equates to 46% of our combined gross incomes.[2] The rest of the money is split £550k / £550k / £700k between spending / saving (incl. pension) / taxes.[2] We have three children (ages 13, 6, 2) and live in London. I work as a trader, formerly at a quant trading firm and now at a hedge fund. Work Many critics of EtG assume that we really want to be doing something meaningful, but have - with a heavy heart - intellectually conceded that money is what matters. I want to emphasise this: This is not me, and I doubt it applies to even 20% of people doing EtG. If you currently feel this way, I strongly suspect you should stop. I like my work. I get to work with incredibly sharp and motivated people. I get to work on a diverse array of intellectual challenges. Most of all, I've managed to land a career that bears an uncanny resemblance to what I do with my spare time; playing games, looking for inconsistencies in others' beliefs, and exploiting that to win.
But prior to discovering EtG, I was wrestling with the fact that this natural choice just seemed very selfish. As I saw it, my choices were to do something directly useful and be miserable but valuable, or to work in finance and be happy but worthless. So a reminder that the money I have a comparative advantage in earning is itself of value was a relief, not a burden. My career pathway has not been smooth, with a major derailment in 2018, which ...
General note: The bulk of this post was written a couple of months ago, but I am releasing it now to coincide with the Effective Giving Spotlight week. I shortly expect to release a second post documenting some observations on the community building funding landscape. IntroductionWay back in 2010, I was sitting in my parents' house, watching one of my favourite TV shows, the UK's Daily Politics. That day's guest was an Oxford academic by the name of Toby Ord. He was donating everything above £18000 (£26300 in today's money) to charity, and gently pushing others to give 10%."Nice guy," I thought. "Pity it'll never catch on."Two years later, a couple of peers interned at Giving What We Can. At the same time, I did my own internship in finance, and my estimate of my earning potential quadrupled[1]. One year after that, I graduated and took the Giving What We [...] ---Outline:(01:13) Post goals(02:28) My path(03:07) Work(05:36) Lifestyle Inflation(07:48) Savings(09:27) Donations(10:11) Community(10:14) Why engage?(11:58) Why stop?(13:48) Closing ThoughtsThe original text contained 7 footnotes which were omitted from this narration. --- First published: November 7th, 2023 Source: https://forum.effectivealtruism.org/posts/gxppfWhx7ta2fkF3R/10-years-of-earning-to-give --- Narrated by TYPE III AUDIO.
Figure 1 (see full caption below)This post is a part of Rethink Priorities' Worldview Investigations Team's CURVE Sequence: "Causes and Uncertainty: Rethinking Value in Expectation." The aim of this sequence is twofold: first, to consider alternatives to expected value maximisation for cause prioritisation; second, to evaluate the claim that a commitment to expected value maximisation robustly supports the conclusion that we ought to prioritise existential risk mitigation over all else.Executive SummaryBackgroundThis report builds on the model originally introduced by Toby Ord on how to estimate the value of existential risk mitigation. The previous framework has several limitations, including:The inability to model anything requiring shorter time units than centuries, like AI timelines.A very limited range of scenarios considered. In the previous model, risk and value growth can take different forms, and each combination represents one scenarioNo explicit treatment of persistence –– how long the mitigation efforts' effects last for ––as a variable of interest.No easy way [...] ---Outline:(00:38) Executive Summary(05:26) Abridged Report(11:20) Generalised Model: Arbitrary Risk Profile(13:37) Value(19:00) Great Filters and the Time of Perils Hypothesis(21:06) Decaying Risk(21:55) Results(21:58) Convergence(25:35) The Expected Value of Mitigating Risk Visualised(31:59) Concluding Remarks(35:00) AcknowledgementsThe original text contained 24 footnotes which were omitted from this narration. --- First published: October 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/S9H86osFKhfFBCday/how-bad-would-human-extinction-be --- Narrated by TYPE III AUDIO.
This is a selection of highlights from episode #163 of The 80,000 Hours Podcast.These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:Toby Ord on the perils of maximising the good that you doAnd if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The possibility of an indefinite AI pause, published by Matthew Barnett on September 19, 2023 on The Effective Altruism Forum. This post is part of AI Pause Debate Week. Please see this sequence for other posts in the debate. tl;dr An indefinite AI pause is a somewhat plausible outcome and could be made more likely if EAs actively push for a generic pause. I think an indefinite pause proposal is substantially worse than a brief pause proposal, and would probably be net negative. I recommend that alternative policies with greater effectiveness and fewer downsides should be considered instead. Broadly speaking, there seem to be two types of moratoriums on technologies: (1) moratoriums that are quickly lifted, and (2) moratoriums that are later codified into law as indefinite bans. In the first category, we find the voluntary 1974 moratorium on recombinant DNA research, the 2014 moratorium on gain of function research, and the FDA's partial 2013 moratorium on genetic screening. In the second category, we find the 1958 moratorium on conducting nuclear tests above the ground (later codified in the 1963 Partial Nuclear Test Ban Treaty), and the various moratoriums worldwide on human cloning and germline editing of human genomes. In these cases, it is unclear whether the bans will ever be lifted - unless at some point it becomes infeasible to enforce them. Overall I'm quite uncertain about the costs and benefits of a brief AI pause. The foreseeable costs of a brief pause, such as the potential for a compute overhang, have been discussed at length by others, and I will not focus my attention on them here. I recommend reading this essay to find a perspective on brief pauses that I'm sympathetic to. However, I think it's also important to consider whether, conditional on us getting an AI pause at all, we're actually going to get a pause that quickly ends. I currently think there is a considerable chance that society will impose an indefinite de facto ban on AI development, and this scenario seems worth analyzing in closer detail. Note: in this essay, I am only considering the merits of a potential lengthy moratorium on AI, and I freely admit that there are many meaningful axes on which regulatory policy can vary other than "more" or "less". Many forms of AI regulation may be desirable even if we think a long pause is not a good policy. Nevertheless, it still seems worth discussing the long pause as a concrete proposal of its own. The possibility of an indefinite pause Since an "indefinite pause" is vague, let me be more concrete. I currently think there is between a 10% and 50% chance that our society will impose legal restrictions on the development of advanced AI systems that, Prevent the proliferation of advanced AI for more than 10 years beyond the counterfactual under laissez-faire Have no fixed, predictable expiration date (without necessarily lasting forever) Eliezer Yudkowsky, perhaps the most influential person in the AI risk community, has already demanded an "indefinite and worldwide" moratorium on large training runs. This sentiment isn't exactly new. Some effective altruists, such as Toby Ord, have argued that humanity should engage in a "long reflection" before embarking on ambitious and irreversible technological projects, including AGI. William MacAskill suggested that this pause should perhaps last "a million years". 
Two decades ago, Nick Bostrom considered the ethics of delaying new technologies in a utilitarian framework and concluded a delay of "over 10 million years" may be justified if it reduces existential risk by a single percentage point. I suspect there are approximately three ways that such a pause could come about. The first possibility is that governments could explicitly write such a pause into law, fearing the development of AI in a broad sense,...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Love Letter to EA, published by utilistrutil on September 10, 2023 on The Effective Altruism Forum. Dear EA, I love you. In hindsight, my life before we met looks like it was designed to prepare me for you, like Fate left her fingerprints on so many formative books and conversations. Yet, for all that preparation, when my friend finally introduced us, I almost missed you! "Earning to give-me-a-break, am I right?" I didn't recognize you at first as the one I'd been looking for - I didn't even know I'd been looking. But that encounter made an indelible impression on me, one that eventually demanded a Proper Introduction, this time performed in your own words. At second sight, I could make out the form of the one I am now privileged to know so well. We survived the usual stages: talking, seeing each other casually, managing a bit of long (inferential) distance. It has been said that love is about changing and being changed - Lord, how I've changed. And you have too! I loved you when CEA was a basement, I loved you in your crypto era, and I'll love you through whatever changes the coming years may bring. I will admit, however, that thinking about the future scares me. I don't mean the far future (though I worry about that, too, of course); I mean our future. We're both so young, and there's so much that could go wrong. What if we make mistakes? What if I can't find an impactful job? What if your health (epistemic or otherwise) fails? In this light, our relationship appears quite fragile. All I can do is remind myself that we have weathered many crises together, and every day grants us greater understanding of the challenges that lie ahead and deeper humility with which to meet them. I also understand you better with each passing year. And that's a relief because, let's face it, loving you is complicated! There's so much to understand. Sometimes I feel like I'll never fully Get You, and then I feel a little jealous toward people who have known you longer or seem to know a different side of you. When I get to thinking this way, I am tempted to pronounce that I am only "adjacent" to you, but we both know that this would be true only in the sense that a wave is adjacent to the ocean. And you? You make me feel seen in a way I never thought was possible. We tend to talk the most when some Issue requires resolution, but I hope you know that for every day we argue, there are 99 days when I think of you with nothing but fondness. 99 days when I relish your companionship and delight in my memories with you: traveling the world, reading in the park, raising the next generation, talking late into the night, bouncing a spikeball off Toby Ord's window . . . I love your tweets, even when they make me cringe, I trust your judgement, even after you buy a castle, and I cherish a meal with you, even when it's a lukewarm bottle of Huel shared in an Oxford train station. When we disagree, I love how you challenge me to refine my map of the world. I am constantly impressed by your boundless empathy, and I am so thankful for everything you've taught me. I love your eccentric friends and rationalist neighbors. My parents like you too, by the way, even if they don't really get what I see in you. I love you x 99. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more?But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.”Links to learn more, summary and full transcript.Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things.This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going “all-in” on any idea, or goal, or way of living — even something as benign as helping other people as much as possible.Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and also enjoy other aspects of their life as well — family, friends, and personal projects.But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are those which were so costly they were loath to consider them before.To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids.Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed was only increased by a tiny amount, while everything else they were accomplishing dropped off a cliff.The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error.As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else.
And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data.In the full interview, Toby also explains the “moral trade” argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.Toby and Rob also discuss:The rise and fall of FTX and some of its impactsWhat Toby hoped effective altruism would and wouldn't become when he helped to get it off the groundWhat utilitarianism has going for it, and what's wrong with it in Toby's viewHow to mathematically model the importance of personal integrityWhich AI labs Toby thinks have been acting more responsibly than othersHow having a young child affects Toby's feelings about AI riskWhether infinities present a fundamental problem for any theory of ethics that aspires to be fully impartialHow Toby ended up being the source of the highest quality images of the Earth from spaceGet this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type ‘80,000 Hours' into your podcasting app. Or read the transcript.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon MonsourTranscriptions: Katy Moore
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Princeton course on longtermism, published by Calvin Baker on September 2, 2023 on The Effective Altruism Forum. This semester (Fall 2023), Prof Adam Elga and I will be co-instructing Longtermism, Existential Risk, and the Future of Humanity, an upper div undergraduate philosophy seminar at Princeton. (Yes, I did shamelessly steal half of our title from The Precipice.) We are grateful for support from an Open Phil course development grant and share the reading list here for all who may be interested. Part 1: Setting the stage Week 1: Introduction to longtermism and existential risk Core Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury. Read introduction, chapter 1, and chapter 2 (pp. 49-56 optional); chapters 4-5 optional but highly recommended. Optional Roser (2022) "The Future is Vast: Longtermism's perspective on humanity's past, present, and future" Our World in Data Karnofsky (2021) 'This can't go on' Cold Takes (blog) Kurzgesagt (2022) "The Last Human - A Glimpse into the Far Future" Week 2: Introduction to decision theory Core Weisberg, J. (2021). Odds & Ends. Read chapters 8, 11, and 14. Ord, T., Hillerbrand, R., & Sandberg, A. (2010). "Probing the improbable: Methodological challenges for risks with low probabilities and high stakes." Journal of Risk Research, 13(2), 191-205. Read sections 1-2. Optional Weisberg, J. (2021). Odds & Ends chapters 5-7 (these may be helpful background for understanding chapter 8, if you don't have much background in probability). Titelbaum, M. G. (2020) Fundamentals of Bayesian Epistemology chapters 3-4 Week 3: Introduction to population ethics Core Parfit, Derek. 1984. Reasons and Persons. Oxford: Oxford University Press. Read sections 4.16.120-23, 125, and 127 (pp. 355-64; 366-71, and 377-79). Parfit, Derek. 1986. "Overpopulation and the Quality of Life." In Applied Ethics, ed. P. Singer, 145-164. Oxford: Oxford University Press. Read sections 1-3. Optional Remainders of Part IV of Reasons and Persons and "Overpopulation and the Quality of Life" Greaves (2017) "Population Axiology" Philosophy Compass McMahan (2022) "Creating People and Saving People" section 1, first page of section 4, and section 8 Temkin (2012) Rethinking the Good 12.2 pp. 416-17 and section 12.3 (esp. pp. 422-27) Harman (2004) "Can We Harm and Benefit in Creating?" Roberts (2019) "The Nonidentity Problem" SEP Frick (2022) "Context-Dependent Betterness and the Mere Addition Paradox" Mogensen (2019) "Staking our future: deontic long-termism and the non-identity problem" sections 4-5 Week 4: Longtermism: for and against Core Greaves, Hilary and William MacAskill. 2021. "The Case for Strong Longtermism." Global Priorities Institute Working Paper No.5-2021. Read sections 1-6 and 9. Curran, Emma J. 2023. "Longtermism and the Complaints of Future People". Forthcoming in Essays on Longtermism, ed. H. Greaves, J. Barrett, and D. Thorstad. Oxford: OUP. Read section 1. Optional Thorstad (2023) "High risk, low reward: A challenge to the astronomical value of existential risk mitigation." Focus on sections 1-3. Curran, E. J. (2022). "Longtermism, Aggregation, and Catastrophic Risk" (GPI Working Paper 18-2022). Global Priorities Institute. 
Beckstead (2013) "On the Overwhelming Importance of Shaping the Far Future" Chapter 3
"Toby Ord on why the long-term future of humanity matters more than anything else, and what we should do about it" 80,000 Hours podcast
Frick (2015) "Contractualism and Social Risk" sections 7-8

Part 2: Philosophical problems

Week 5: Fanaticism
Core
Bostrom, N. (2009). "Pascal's mugging." Analysis, 69 (3): 443-445.
Russell, J. S. "On two arguments for fanaticism." Noûs, forthcoming. Read sections 1, 2.1, and 2.2.
Temkin, L. S. (2022). "How Expected Utility Theory Can Drive Us Off the Rails." In L. S. ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] An AI Pause Is Humanity's Best Bet For Preventing Extinction (TIME), published by Otto on July 25, 2023 on The Effective Altruism Forum.

Otto Barten is director of the Existential Risk Observatory, a nonprofit aiming to reduce existential risk by informing the public debate. Joep Meindertsma is founder of PauseAI, a movement campaigning for an AI Pause.

The existential risks posed by artificial intelligence (AI) are now widely recognized. After hundreds of industry and science leaders warned that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the U.N. Secretary-General recently echoed their concern. So did the prime minister of the U.K., who is also investing 100 million pounds into AI safety research that is mostly meant to prevent existential risk. Other leaders are likely to follow in recognizing AI's ultimate threat. In the scientific field of existential risk, which studies the most likely causes of human extinction, AI is consistently ranked at the top of the list. In The Precipice, a book by Oxford existential risk researcher Toby Ord that aims to quantify human extinction risks, the likelihood of AI leading to human extinction exceeds that of climate change, pandemics, asteroid strikes, supervolcanoes, and nuclear war combined. One would expect that even for severe global problems, the risk that they lead to full human extinction is relatively small, and this is indeed true for most of the above risks. AI, however, may cause human extinction if only a few conditions are met. Among them is human-level AI, defined as an AI that can perform a broad range of cognitive tasks at least as well as we can. Studies outlining these ideas were previously known, but new AI breakthroughs have underlined their urgency: AI may be getting close to human level already.

Recursive self-improvement is one of the reasons why existential-risk academics think human-level AI is so dangerous. Because human-level AI could do almost all tasks at our level, and doing AI research is one of those tasks, advanced AI should be able to improve the state of AI. Constantly improving AI would create a positive feedback loop with no scientifically established limits: an intelligence explosion. The endpoint of this intelligence explosion could be a superintelligence: a godlike AI that outsmarts us the way humans often outsmart insects. We would be no match for it.

A godlike, superintelligent AI
A superintelligent AI could therefore likely execute any goal it is given. Such a goal would be initially introduced by humans, but might come from a malicious actor, or not have been thought through carefully, or might get corrupted during training or deployment. If the resulting goal conflicts with what is in the best interest of humanity, a superintelligence would aim to execute it regardless. To do so, it could first hack large parts of the internet and then use any hardware connected to it. Or it could use its intelligence to construct narratives that are extremely convincing to us. Combined with hacked access to our social media timelines, it could create a fake reality on a massive scale. 
As Yuval Harari recently put it: "If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away - or even realise is there." As a third option, after either legally making money or hacking our financial system, a superintelligence could simply pay us to perform any actions it needs from us. And these are just some of the strategies a superintelligent AI could use in order to achieve its goals. There are likely many more. Like playing chess against grandmaster Magnus Carlsen, we cannot predict the moves he will play, but we can predict the outcome: we los...
Since writing The Precipice, one of my aims has been to better understand how reducing existential risk compares with other ways of influencing the l…
---
First published: July 18th, 2023
Source: https://forum.effectivealtruism.org/posts/Doa69pezbZBqrcucs/shaping-humanity-s-longterm-trajectory
Linkpost URL: http://files.tobyord.com/shaping-humanity's-longterm-trajectory.pdf
---
Narrated by TYPE III AUDIO. Share feedback on this narration.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shaping Humanity's Longterm Trajectory, published by Toby Ord on July 18, 2023 on The Effective Altruism Forum.

Since writing The Precipice, one of my aims has been to better understand how reducing existential risk compares with other ways of influencing the longterm future. Helping avert a catastrophe can have profound value due to the way that the short-run effects of our actions can have a systematic influence on the long-run future. But it isn't the only way that could happen. For example, if we advanced human progress by a year, perhaps we should expect to see us reach each subsequent milestone a year earlier. And if things are generally becoming better over time, then this may make all years across the whole future better on average.

I've developed a clean mathematical framework in which possibilities like this can be made precise, the assumptions behind them can be clearly stated, and their value can be compared. The starting point is the longterm trajectory of humanity, understood as how the instantaneous value of humanity unfolds over time. In this framework, the value of our future is equal to the area under this curve and the value of altering our trajectory is equal to the area between the original curve and the altered curve. This allows us to compare the value of reducing existential risk to other ways our actions might improve the longterm future, such as improving the values that guide humanity, or advancing progress.

Ultimately, I draw out and name 4 idealised ways our short-term actions could change the longterm trajectory:
advancements
speed-ups
gains
enhancements
And I show how these compare to each other, and to reducing existential risk. While the framework is mathematical, the maths in these four cases turns out to simplify dramatically, so anyone should be able to follow it. My hope is that this framework, and this categorisation of some of the key ways we might hope to shape the longterm future, can improve our thinking about longtermism.

Some upshots of the work:
Some ways of altering our trajectory only scale with humanity's duration or its average value - but not both. There is a serious advantage to those that scale with both: speed-ups, enhancements, and reducing existential risk.
When people talk about 'speed-ups', they are often conflating two different concepts. I disentangle these into advancements and speed-ups, showing that we mainly have advancements in mind, but that true speed-ups may yet be possible.
The value of advancements and speed-ups depends crucially on whether they also bring forward the end of humanity. When they do, they have negative value.
It is hard for pure advancements to compete with reducing existential risk as their value turns out not to scale with the duration of humanity's future. Advancements are competitive in outcomes where value increases exponentially up until the end time, but this isn't likely over the very long run.
Work on creating longterm value via advancing progress is most likely to compete with reducing risk if the focus is on increasing the relative progress of some areas over others, in order to make a more radical change to the trajectory.

The work is appearing as a chapter for the forthcoming book, Essays on Longtermism, but as of today, you can also read it online here. Thanks for listening. 
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
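A minimal mathematical sketch of the trajectory framework described above, in notation chosen here for illustration (the chapter's own symbols may differ): write v(t) for the instantaneous value of humanity at time t and T for the time at which humanity ends. The value of our future is then the area under the trajectory,

\[ V = \int_0^{T} v(t)\, dt , \]

and the value of an action that shifts the trajectory to an altered curve \tilde{v}(t) with end time \tilde{T} is the area between the two curves,

\[ \Delta V = \int_0^{\tilde{T}} \tilde{v}(t)\, dt \;-\; \int_0^{T} v(t)\, dt . \]

Advancements, speed-ups, gains, enhancements, and reductions in existential risk can then all be compared by the \Delta V each produces.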
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Overview of the AI Safety Funding Situation, published by Stephen McAleese on July 12, 2023 on The Effective Altruism Forum.

Introduction
AI safety is a field concerned with preventing negative outcomes from AI systems and ensuring that AI is beneficial to humanity. The field does research on problems such as the AI alignment problem, which is the problem of designing AI systems that follow user intentions and behave in a desirable and beneficial way. Understanding and solving AI safety problems may involve reading past research, producing new research in the form of papers or posts, running experiments with ML models, and so on. Producing research typically involves many different inputs such as research staff, compute, equipment, and office space. These inputs all require funding, and therefore funding is a crucial input for enabling or accelerating AI safety research. Securing funding is usually a prerequisite for starting or continuing AI safety research in industry, in an academic setting, or independently. There are many barriers that could prevent people from working on AI safety. Funding is one of them. Even if someone is working on AI safety, a lack of funding may prevent them from continuing to work on it. It's not clear how hard AI safety problems like AI alignment are. But in any case, humanity is more likely to solve them if there are hundreds or thousands of brilliant minds working on them rather than one guy. I would like there to be a large and thriving community of people working on AI safety and I think funding is an important prerequisite for enabling that. The goal of this post is to give the reader a better understanding of funding opportunities in AI safety so that hopefully funding will be less of a barrier if they want to work on AI safety. The post starts with a high-level overview of the AI safety funding situation followed by a more in-depth description of various funding opportunities.

Past work
To get an overview of AI safety spending, we first need to find out how much is spent on it per year. We can use past work as a prior and then use grant data to find a more accurate estimate. Changes in funding in the AI safety field (2017) by the Center for Effective Altruism estimated the change in AI safety funding between 2014 and 2017. In 2017, the post estimated that total AI safety spending was about $9 million. How are resources in effective altruism allocated across issues? (2020) by 80,000 Hours estimated the amount of money spent by EA on AI safety in 2019. Using data from the Open Philanthropy grants database, the post says that EA spent about $40 million on AI safety globally in 2019. In The Precipice (2020), Toby Ord estimated that between $10 million and $50 million was spent on reducing AI risk in 2020. 2021 AI Alignment Literature Review and Charity Comparison is an in-depth review of AI safety organizations and grantmakers and has a lot of relevant information.

Overview of global AI safety funding
One way to estimate total global spending on AI safety is to aggregate the total donations of major AI safety funds such as Open Philanthropy (Open Phil). 
It's important to note that the definition of 'AI safety' I'm using here is AI safety research focused on reducing risks from advanced AI (AGI), such as existential risks, which is the type of AI safety research I think is more neglected and important in the long term than other research. Therefore my analysis will focus on EA funds and top AI labs, and I don't intend to measure investment in near-term AI safety concerns such as effects on the labor market, fairness, privacy, ethics, disinformation, etc. The results of this analysis are shown in the following bar chart, which was created in Google Sheets (link) and is based on data from analyzing grant databases from Open Philanthro...
Adam Thierer profile
Adam's work:
Microsoft's New AI Regulatory Framework & the Coming Battle over Computational Control
What If Everything You've Heard about AI Policy is Wrong?
Can We Predict the Jobs and Skills Needed for the AI Era?
Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence
U.S. Chamber AI Commission Report Offers Constructive Path Forward
The Coming Onslaught of "Algorithmic Fairness" Regulations
Corbin's review of Toby Ord's book The Precipice: World to End; Experts Hardest Hit
Also: An article taking aim at the "IAEA for AI" concept!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you find EA conferences emotionally difficult, you're not alone, published by Amber Dawn on May 22, 2023 on The Effective Altruism Forum.

I went to EAG London this weekend. I had some interesting chats, wrote some cryptic squiggles in my notebook ("Clockify", "the Easterlin paradox", "functionalist eudaimonic theories"), and gave and received some hopefully-useful advice. Overall, the conference was fun and worthwhile for me. But at times, I also found the conference emotionally difficult. I think this is pretty common. After last year's EAG, Alastair Fraser-Urquhart wrote about how he burnt out at the conference and had to miss a retreat starting the next day. The post was popular, and many said they'd had similar experiences. The standard euphemism for this facet of EA conferences is 'intense' or 'tiring', but I suspect these adjectives are often a more socially-acceptable way of saying 'I feel low/anxious/exhausted and want to curl up in a foetal position in a darkened room'.

I want to write this post to:
balance out the 'woo EAG lfg!' hype, and help people who found it a bad or ambivalent experience to feel less alone
dig into why EAGs can be difficult: this might help attendees have better experiences themselves, and also create an environment where others are more likely to have good experiences
help people who mostly enjoy EAGs understand what their more neurotic or introverted friends are going through

Here are some reasons that EAGs might be emotionally difficult. Some of these I've experienced personally, others are based on comments I've heard, and others are plausible educated guesses.

It's easy to compare oneself (negatively) to others
EA conferences are attended by a bunch of "impressive" people: big-name EAs like Will MacAskill and Toby Ord, entrepreneurs, organisation leaders, politicians, and "inner-circle-y" people who are Forum- or Twitter-famous. You've probably scheduled meetings with people because they're impressive to you; perhaps you're seeking mentorship and advice from people who are more senior or advanced in your field, or you want to talk to someone because they have cool ideas. This can naturally inflame impostor syndrome, feelings of inadequacy, and negative comparisons. Everyone seems smarter, harder-working, more agentic, better informed. Everyone's got it all figured out, while you're still stuck at Stage 2 of 80k's career planning process. Everyone expects you to have a plan to save the world, and you don't even have a plan for how to start making a plan. Most EAs, I think, know that these thought patterns are counterproductive. But even if some rational part of you knows this, it can still be hard to fight them - especially if you're tired, scattered, or over-busy, since this makes it harder to employ therapeutic coping mechanisms.

The stakes are high
We're trying to solve immense, scary problems. We (and CEA) pour so much time and money into these conferences because we hope that they'll help us make progress on those problems. This can make the conferences anxiety-inducing - you really really hope that the conference pays off. This is especially true if you have some specific goal - such as finding a job, collaborators or funders - or if you think the conference has a high opportunity cost for you. 
You spend a lot of time talking about depressing things
This is just part of being an EA, of course, but most of us don't spend all our time directly confronting the magnitude of these problems. Having multiple back-to-back conversations about 'how can we solve [massive, seemingly-intractable problem]?' can be pretty discouraging.

Everything is busy and frantic
You're constantly rushing from meeting to meeting, trying not to bump into others who are doing the same. You see acquaintances but only have time to wave hello, because y...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: David Edmonds's biography of Parfit is out, published by Pablo on April 23, 2023 on The Effective Altruism Forum. Parfit: A Philosopher and His Mission to Save Morality, by David Edmonds, was published a few days ago (Amazon | Audible | Goodreads | Oxford). The book is worth reading in full, but here are some passages that focus specifically on Parfit's involvement with effective altruism: In his retirement, an organization with which he felt a natural affinity approached him for support. The Oxford-based charity Giving What We Can (GWWC) was set up by Toby Ord, an Australian-born philosopher, and another young philosopher, Will MacAskill. The charity's direct inspiration came from Peter Singer. Singer had devised a thought experiment that was simple but devastatingly effective in making people reflect on their behavior and their attitude to those in need. [...] This thought experiment has generated a significant secondary literature. Certainly, the GWWC initiators found it compelling. Their mission was to persuade people to give more of their income away and to donate to effective organizations that could make a real difference. [...] There were spinoff organizations from GWWC such as 80,000 Hours. The number refers to the rough number of hours we might have in our career, and 80,000 Hours was set up to research how people can most effectively devote their time rather than their money to tackling the world's most pressing problems. In 2012, the Centre for Effective Altruism was established to incorporate both GWWC and 80,000 Hours. Since its launch, the effective altruism movement has grown slowly but steadily. Most of the early backers were idealistic young postgraduates, many of them philosophers. If Singer was the intellectual father of the movement, Parfit was its grandfather. It became an in-joke among some members that anyone who came to work for GWWC had to possess a copy of Reasons and Persons. Some owned two copies: one for home, one for the office. But it took Parfit until 2014 to sign the GWWC pledge. And he agreed to do so only after wrangling over the wording. Initially, those who joined the GWWC campaign were required to make a public pledge to donate at least 10% of their income to charities that worked to relieve poverty. Parfit had several issues with this. For reasons the organizers never understood, he said that the participants had to make a promise rather than a pledge. He may have believed that a promise entailed a deeper level of commitment. Nor was he keen on the name "Giving What We Can". 10% of a person's income is certainly a generous sum, and in line with what adherents to some world religions are expected to give away. Nevertheless, Parfit pointed out, it was obvious that people could donate more. [...] Parfit also caviled at the word 'giving'. He believed this implied we are morally entitled to what we hand over, and morally entitled to our wealth and high incomes. This he rejected. Well-off people in the developed world were merely lucky that they were born into rich societies: they did not deserve their fortune. Linguistic quibbles aside, the issue that Parfit felt most strongly about was the movement's sole focus, initially, on poverty and development. 
While it was indeed pressing to relieve the suffering of people living today, Parfit argued, there should be an option that at least some of the money donated be earmarked for the problems of tomorrow. The human population has risen to eight billion, and faces existential risks such as meteors, nuclear war, bioterrorism, pandemics, and climate change. Parfit claimed that between (A) peace, (B) a war which kills 7.5 billion people and (C) a war which kills everyone, the difference between (B) and (C) was much greater than the difference between (A) and (B). [...] Given how grim human exist...
SHOW THEME Even with no f***s to give, James and Catherine fill 38 minutes with movies and mummies before pondering humanity's precipice and agreeing that most art (other than "Jaws" props) is disposable. SHOW NOTES 00:12 - Catherine is finally on mic 00:48 - Good sleep yields not giving a f*** 01:45 - Know your puppy limitations 03:25 - Sinking the coffee cup 04:35 - Have we got Ancient Egypt's mummies all wrong? (No) 05:30 - The perfect movie! 07:15 - Skimming the sarcophagus 08:30 - Fayum (portrait) vs faience (glaze technique) 10:10 - Remnants of a Roman Egypt 11:06 - Difference is... mummies are real 12:32 - Unwrapping colonial entitlement 15:00 - Art, anthropology, or morbid fascination? 16:25 - Humanity's eve of destruction? The Precipice: existential risk and the future of humanity - Toby Ord 21:35 - Ancient alien afterlife destruction? 22:40 - Ancient Egyptian Instagram Influencers 24:15 - Inherent Vice: What to do with a Decaying Masterpiece? 25:00 - Miami Vice fashion rant 28:28 - Naum Gabo: "If it's gone it's gone" 29:30 - Artwork will eventually fail, so throw it out! 31:00 - Use of dead animals, chocolate and lard is not super edgy but is dumb and/or gross 32:25 - Recalling ASU's "Vague Art Show" and ditching painting at RISD 35:44 - Hirst "props" can't touch "Jaws" 37:00 - "Jaws" the musical 38:05 - Back to the garbage heap!
Twenty years ago, Jason Matheny was a public health student who in his spare time was crusading to create a meat industry that would be less reliant on animals. In 2004, after he founded New Harvest to popularize cultured meat, his fame grew. The New York Times profiled him in its annual "Ideas of the Year" feature in 2005. That same year, Discover magazine named cultured meat one of the most notable tech stories. For the next several years, Jason was the face of the movement to grow real meat without animals, traveling the world to persuade governments and food companies alike that they should be investing in a future where people would eat meat, but not animals.

By 2009, now armed with his BA, MBA, MPH, and PhD, Jason began turning his attention toward preventing the more immediate and potentially catastrophic risks humanity faces. After leaving New Harvest, he eventually rose to become the director of the Intelligence Advanced Research Projects Activity (IARPA), a federal agency that develops advanced technologies for national intelligence. Running that agency would eventually lead Jason to helm a national security center at Georgetown University, then to a high-profile national security role in the Biden White House, and now to being the CEO of the RAND Corporation. He was even named one of Foreign Policy's "Top 50 Global Thinkers."

As you'll hear in this interview, Jason shifted from his work on cultivated meat toward national security as he became convinced that technology can vastly improve both human and animal welfare, and that the only real threat to technological advancement is an apocalyptic catastrophe like a synthetic virus or asteroid. He still cares about the welfare of those of us living today—human and nonhuman alike—but Jason's primary preoccupation has become reducing civilization-threatening risks so that our species can keep progressing into the deep future. I think you'll find this conversation with this leading thinker as riveting as I did. Jason even talks about what technologies he hopes listeners will pursue to mitigate existential risks, so be sure to listen closely!

Discussed in this episode
Jason recommends reading The Precipice by Toby Ord.
Jason passed the New Harvest torch on to Isha Datar, who was our guest on Episode 42.
Our Episode 89 with Rep. Ro Khanna regarding his legislation relating to national security implications of losing the alt-meat race.
Paul's thoughts in The Hill on government funding for alt-meat.

More about Jason Matheny
Jason Matheny is president and chief executive officer of the RAND Corporation, a nonprofit, nonpartisan research organization that helps improve policy and decisionmaking through research and analysis. Prior to becoming RAND's president and CEO in July 2022, he led White House policy on technology and national security at the National Security Council and the Office of Science and Technology Policy. Previously, he was founding director of the Center for Security and Emerging Technology at Georgetown University and director of the Intelligence Advanced Research Projects Activity (IARPA), where he was responsible for developing advanced technologies for the U.S. intelligence community. Before IARPA, he worked for Oxford University, the World Bank, the Applied Physics Laboratory, the Center for Biosecurity, and Princeton University. Matheny has served on many nonpartisan boards and committees, including the National Security Commission on Artificial Intelligence, to which he was appointed by Congress in 2018. 
He is a recipient of the Intelligence Community's Award for Individual Achievement in Science and Technology, the National Intelligence Superior Service Medal, and the Presidential Early Career Award for Scientists and Engineers. He was also named one of Foreign Policy's “Top 50 Global Thinkers.” Matheny holds a Ph.D. in applied economics from Johns Hopkins University, an M.P.H. from Johns Hopkins University, an M.B.A. from Duke University, and a B.A. in art history from the University of Chicago.
In this episode, we examine the topic of existential threat, focusing in particular on the subject of nuclear war. Sam opens the discussion by emphasizing the gravity of our ability to destroy life as we know it at any moment, and how shocking it is that nearly all of us perpetually ignore this fact. Philosopher Nick Bostrom expands on this idea by explaining how developing technologies like DNA synthesis could make humanity more vulnerable to malicious actors. Sam and historian Fred Kaplan then guide us through a hypothetical timeline of events following a nuclear first strike, highlighting the flaws in the concept of nuclear deterrence. Former Defense Secretary William J. Perry echoes these concerns, painting a grim picture of his "nuclear nightmare" scenario: a nuclear terrorist attack. Zooming out, Toby Ord outlines each potential extinction-level threat, and why he believes that, between all of them, we face a one in six chance of witnessing the downfall of our species. Our episode ends on a cautiously optimistic note, however, as Yuval Noah Harari shares his thoughts on "global myth-making" and its potential role in helping us navigate through these perilous times. About the Series Filmmaker Jay Shapiro has produced The Essential Sam Harris, a new series of audio documentaries exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you'll find this series fascinating.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Casting the Decisive Vote, published by Toby Ord on March 27, 2023 on The Effective Altruism Forum. The moral value of voting is a perennial topic in EA. This piece shows that in any election that isn't a forgone conclusion, the chance of your vote being decisive can't be much lower than 1 in the number of voters. So voting will be worth it around the point where the value your preferred candidate would bring to the average citizen exceeds the cost of you voting. What is the chance your vote changes the outcome of an election? We know it is low, but how low? In particular, how does it compare with an intuitive baseline of a 1 in n chance, where n is the number of voters? This baseline is an important landmark not only because it is so intuitive, but because it is roughly the threshold needed for voting to be justified in terms of the good it produces for the members of the community (since the total benefit is also going to be proportional to n). Some political scientists have tried to estimate it with simplified theoretical models involving random voting. Depending on their assumptions, this has suggested it is much higher than the baseline — roughly 1 in the square root of n (Banzhaf 1965) — or that it is extraordinarily lower — something like 1 in 10^2659 for a US presidential election (Brennan 2011). Statisticians have attempted to determine the chance of a vote being decisive for particular elections using detailed empirical modelling, with data from previous elections and contemporaneous polls. For example, Gelman et al (2010) use such a model to estimate that an average voter had a 1 in 60 million chance of changing the result of the 2008 US presidential election, which is about 3 times higher than the baseline. In contrast, I'll give a simple method that depends on almost no assumptions or data, and provides a floor for how low this probability can be. It will calculate this using just two inputs: the number of voters, n, and the probability of the underdog winning, p_u. The method works for any two-candidate election that uses simple majority. So it wouldn't work for the US presidential election, but would work for your chance of being decisive within your state, and could be combined with estimates that state is decisive nationally. It also applies for many minor ‘elections' you may encounter, such as the chance of your vote being decisive on a committee. We start by considering a probability distribution over what share of the vote a candidate will get, from 0% to 100%. In theory, this distribution could have any shape, but in practice it will almost always have a single peak (which could be at one end, or somewhere in between). We will assume that the probability distribution over vote share has this shape (that it is ‘unimodal') and this is the only substantive assumption we'll make. We will treat this as the probability distribution of the votes a candidate gets before factoring in your own vote. If there is an even number of votes (before yours) then your vote matters only if the vote shares are tied. In that case, which way you vote decides the election. If there is an odd number of votes (before yours), it is a little more complex, but works out about the same: Before your vote, one candidate has one fewer vote. Your vote decides whether they lose or tie, so is worth half an election. 
But because there are two different ways the candidates could be one vote apart (candidate A has one fewer or candidate B has one fewer), you are about twice as likely to end up in this situation, so you have the same expected impact. For ease of presentation I'll assume there is an even number of voters other than you, but nothing turns on this. (In real elections, you may also have to worry about probabilistic recounts, but if you do the analysis, these don't substantivel...
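A minimal formal restatement of the tie argument above, in notation chosen here for illustration: suppose there are n voters other than you (n even, as assumed for ease of presentation) and let X be the number of them who vote for candidate A. Your vote is decisive exactly when the others are tied, so

\[ P(\text{decisive}) = P\!\left(X = \tfrac{n}{2}\right) . \]

The piece's headline claim, restated here rather than derived, is that under the unimodality assumption this probability can't be much lower than the intuitive 1 in n baseline in any election that isn't a forgone conclusion, i.e. whenever the underdog's win probability p_u isn't vanishingly small.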
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflecting on the Last Year — Lessons for EA (opening keynote at EAG), published by Toby Ord on March 24, 2023 on The Effective Altruism Forum.

I recently delivered the opening talk for EA Global: Bay Area 2023. I reflect on FTX, the differences between EA and utilitarianism, and the importance of character. Here's the recording and transcript.

The Last Year
Let's talk a bit about the last year. The spring and summer of 2022 were a time of rapid change. A second major donor had appeared, roughly doubling the amount of committed money. There was a plan to donate this new money more rapidly, and to use more of it directly on projects run by people in the EA community. Together, this meant much more funding for projects led by EAs or about effective altruism. It felt like a time of massive acceleration, with EA rapidly changing and growing in an attempt to find enough ways to use this money productively and avoid it going to waste. This caused a bunch of growing pains and distortions. When there was very little money in effective altruism, you always knew that the person next to you couldn't have been in it for the money — so they must have been in it because they were passionate about what they were doing for the world. But that became harder to tell, making trust harder. And the most famous person in crypto had become the most famous person in EA. So someone whose views and actions were quite radical and unrepresentative of most EAs became the most public face of effective altruism, distorting public perception and even our self-perception of what it meant to be an EA. It also meant that EA became more closely connected to an industry that was widely perceived as sketchy. One that involved a product of disputed social value, and a lot of sharks. One thing that especially concerned me was a great deal of money going into politics. We'd tried very hard over the previous 10 years to avoid EA being seen as a left or right issue — immediately alienating half the population. But a single large donor had the potential to change that unilaterally. And EA became extremely visible: people who'd never heard of it all of a sudden couldn't get away from it, prompting a great deal of public criticism. From my perspective at the time, it was hard to tell whether or not the benefits of additional funding for good causes outweighed these costs — both were large and hard to compare. Even the people who thought it was worth the costs shared the feelings of visceral acceleration: like a white-knuckled fairground ride, pushing us up to vertiginous heights faster than we were comfortable with. And that was just the ascent. Like many of us, I was paying attention to the problems involved in the rise, and was blindsided by the fall. As facts started to become more clear, we saw that the companies producing this newfound income had been very poorly governed, allowing behaviour that appears to me to have been both immoral and illegal — in particular, it seems that when the trading arm had foundered, customers' own deposits were raided to pay for an increasingly desperate series of bets to save the company. Even if that strategy had worked and the money was restored to the customers, I still think it would have been illegal and immoral. But it didn't work, so it also caused a truly vast amount of harm. 
Most directly and importantly to the customers, but also to a host of other parties, including the members of the EA community and thus all the people and animals we are trying to help. I'm sure most of you have thought a lot about this over the last few months. I've come to think of my own attempts to process this as going through these four phases. First, there's: Understanding what happened. What were the facts on the ground? Were crimes committed? How much money have customers lost? A lot of t...
As news broke that thirty-year-old cryptocurrency baron Sam Bankman-Fried's FTX empire had suddenly collapsed, the residual effects reverberated in the spheres of business, politics, and philanthropy. Bankman-Fried was one of the largest donors to and a huge proponent of effective altruism, a social and philosophical movement started by academics Peter Singer, Toby Ord, and William … This article and podcast Alexander Zaitchik on Effective Altruism + Longtermism appeared first on Sea Change Radio.
General
Visit Brett's website, where you can find his blog and much more: https://www.bretthall.org/
Follow Brett on Twitter: https://twitter.com/Tokteacher
Subscribe to Brett's YouTube channel: https://youtube.com/channel/UCmP5H2rF-ER33a58ZD5jCig?sub_confirmation=1

References
Iona's Substack essay, in which she previously described Brett as a philosopher—a description with which Brett disagreed: https://drionaitalia.substack.com/p/knots-gather-at-the-comb
Karl Popper's philosophy: https://plato.stanford.edu/entries/popper/
Massimo Pigliucci's Two for Tea appearance: https://m.soundcloud.com/twoforteapodcast/55-massimo-pigliucci
David Deutsch's ‘The Beginning of Infinity': https://www.amazon.com/gp/aw/d/0143121359/ref=tmm_pap_swatch_0?ie=UTF8&qid=1658005291&sr=8-1
Daniel James Sharp's Areo review of Ord's ‘The Precipice': https://areomagazine.com/2020/05/11/we-contain-multitudes-a-review-of-the-precipice-existential-risk-and-the-future-of-humanity-by-toby-ord/
David Hume and the problem of induction: https://plato.stanford.edu/entries/induction-problem/
Natural selection and the Neo-Darwinian synthesis: https://www.britannica.com/science/neo-Darwinism
Richard Dawkins's ‘The Extended Selfish Gene': https://www.amazon.com/gp/aw/d/B01MYDYR6N/ref=tmm_pap_swatch_0?ie=UTF8&qid=1658008393&sr=8-3
Theory-ladenness: https://en.m.wikipedia.org/wiki/Theory-ladenness
Ursula K. Le Guin's ‘The Left Hand of Darkness': https://www.amazon.com/gp/aw/d/1473221625/ref=tmm_pap_swatch_0?ie=UTF8&qid=1658010065&sr=8-1
The Popperian ‘paradox of tolerance' cartoon: https://images.app.goo.gl/MEbujAKv2VSp1m4B8
For the Steven Pinker Two for Tea interview on ‘Rationality', stay tuned to the Two for Tea podcast feed as it's coming soon for public listening: https://m.soundcloud.com/twoforteapodcast
Brett's critique of Bayesianism: https://www.bretthall.org/bayesian-epistemology.html
Brett on morality: https://www.bretthall.org/morality
Steven Pinker's book ‘Rationality': https://www.amazon.com/gp/aw/d/0525561994/ref=tmm_hrd_swatch_0?ie=UTF8&qid=1658012700&sr=8-1

Timestamps
00:00 Opening and introduction. What, exactly, is Brett? What does he do?
4:58 Free speech and Popperian thought (and what is Popperian thought, anyway?).
12:24 Brett's view on existential risk and the future; how he differs from the likes of Martin Rees and Toby Ord.
22:38 How can we overcome ‘acts of God'? (With reference to Iona's syphilitic friend.) The dangers of the unknown and the necessity of progress.
26:50 The unpredictability of the nature of problems, with reference to fear of nuclear war and nuclear energy. The nature and history of problem solving, particularly as regards energy.
37:02 The Popperian/Deutschian theory of knowledge—guesswork, creativity, and the reduction of error.
46:50 William Paley's watch, Darwinism, selfish genes, and the embedding of knowledge into reality.
54:15 On theory-ladenness, the necessity of error correction, the power of science, and the impossibility of a final theory—all is approximation and continual improvement.
1:01:10 The nature of good explanations, with reference to the invocation of gods vs scientific accounts and the nature of the atom.
1:07:24 How the principle of the difficulty of variability is important in art as well as science, with reference to Ursula K. Le Guin's ‘The Left Hand of Darkness.' ‘Aha' vs ‘what the fuck?' surprise.
1:15:30 The nature of critical thinking and Brett on education: the misconceptions inherent in the current fashion for teaching critical thinking. 
1:26:10 A question for Brett from Twitter: what did Popper really think about tolerance and intolerance (see the famous cartoon on the paradox of tolerance)? 1:36:24 Is there anything else Brett would like to add?
Paris Marx is joined by Émile Torres to discuss the ongoing effort to sell effective altruism and longtermism to the public, and why they're philosophies that won't solve the real problems we face.

Émile Torres is a PhD candidate at Leibniz University Hannover and the author of the forthcoming book Human Extinction: A History of the Science and Ethics of Annihilation. Follow Émile on Twitter at @xriskology.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, support the show on Patreon, and sign up for the weekly newsletter. The podcast is produced by Eric Wickham and part of the Harbinger Media Network.

Also mentioned in this episode:
Émile recently wrote about the ongoing effort to sell longtermism and effective altruism to the public.
Peter Singer wrote an article published in 1972 arguing that rich people need to give to charity, which went on to influence effective altruists.
NYT recently opined on whether it's ethical for lawyers to defend climate villains.
Nathan Robinson recently criticized effective altruism for Current Affairs.

Support the show
Back on the podcast, today is sleep, nutrition, and metabolism expert, Greg Potter, PhD. Through his academic research, public speaking, consulting and writing, Greg empowers people to make simple and sustainable lifestyle changes that add years to their lives and life to their years. His work has been featured in dozens of international media outlets, including Reuters, TIME, and The Washington Post, and he regularly contributes to popular websites, blogs, and podcasts. In this podcast, Greg is talking about light, including the importance of getting out in the sun and also modern problems with artificial light. He discusses the impact of light on the circadian system along with up-to-date recommendations related to light hygiene. We discuss practical tips for reducing light at night (not all of which involve putting away your device), and why not getting the right kind of light might be keeping you from achieving your body composition goals. Here's the outline of this episode with Greg Potter: [00:02:14] Wellics Corporate Wellness Software. [00:06:49] The importance of light. [00:08:30] The introduction of electric light. [00:09:55] myLuxRecorder (Satchin Panda's app, no longer available); Podcast: How to Use Time-Restricted Eating to Reverse Disease and Optimize Health, with Satchin Panda. [00:10:37] How light influences the circadian system. [00:15:34] Consensus paper with recommendations related to light hygiene; Study: Brown, Timothy M., et al. "Recommendations for daytime, evening, and nighttime indoor light exposure to best support physiology, sleep, and wakefulness in healthy adults." PLoS biology 20.3 (2022): e3001571. [00:19:13] Practical tips for reducing light at night. [00:22:44] Increasing prevalence of myopia. [00:23:46] Podcast: Getting Stronger, with Todd Becker. [00:26:01] Vitamin D synthesis; Podcast: The Pleiotropic Effects of Sunlight, with Megan Hall. [00:26:15] Effects of light on mood and cognition. [00:27:24] Effect of light exposure patterns on cognitive performance; Study: Grant, Leilah K., et al. "Daytime exposure to short wavelength-enriched light improves cognitive performance in sleep-restricted college-aged adults." Frontiers in neurology (2021): 197. [00:28:14] Effects of light on metabolic health. [00:28:20] Dan Pardi podcast featuring Peter Light: Sunlight And Fat Metabolism: A New Discovery. [00:28:52] Effect of bright and dim light on metabolism (Netherlands); Study: Harmsen, Jan-Frieder, et al. "The influence of bright and dim light on substrate metabolism, energy expenditure and thermoregulation in insulin-resistant individuals depends on time of day." Diabetologia 65.4 (2022): 721-732. [00:30:53] Effects of light on skin and immune function. [00:31:57] Highlights #15 (topics: Sun avoidance & exposure, increasing testosterone, Robert Sapolsky). [00:35:14] Skyglow. [00:36:48] Light at night and endocrine disruption. [00:37:45] Light at night and quality/duration of sleep. [00:38:19] Blue light in the evening interferes with sleep homeostasis; Study: Cajochen, Christian, et al. "Evidence that homeostatic sleep regulation depends on ambient lighting conditions during wakefulness." Clocks & Sleep 1.4 (2019): 517-531. [00:38:53] Effects of light at night on sympathetic nervous system/cortisol; Study: Rahman, Shadab A., et al. "Characterizing the temporal dynamics of melatonin and cortisol changes in response to nocturnal light exposure." Scientific reports 9.1 (2019): 1-12. 
[00:39:26] Effects of light at night on heart rate, HRV, insulin resistance; Study: Mason, Ivy C., et al. "Light exposure during sleep impairs cardiometabolic function." Proceedings of the National Academy of Sciences 119.12 (2022): e2113290119. [00:41:34] Effects of moon phases on sleep; Study: Casiraghi, Leandro, et al. "Moonstruck sleep: Synchronization of human sleep with the moon cycle under field conditions." Science advances 7.5 (2021): eabe0465. [00:45:40] Effects of individual sensitivity to light; Study: Phillips, Andrew JK, et al. "High sensitivity and interindividual variability in the response of the human circadian system to evening light." Proceedings of the National Academy of Sciences 116.24 (2019): 12019-12024. [00:47:55] Camping and melatonin synthesis across seasons; Study: Stothard, Ellen R., et al. "Circadian entrainment to the natural light-dark cycle across seasons and the weekend." Current Biology 27.4 (2017): 508-513. [00:48:40] Seasonal changes in thyroid hormones (meta-analysis): Kuzmenko, N. V., et al. "Seasonal variations in levels of human thyroid-stimulating hormone and thyroid hormones: a meta-analysis." Chronobiology International 38.3 (2021): 301-317. [00:53:24] Effect of location in the world; Podcast: Morning Larks and Night Owls: the Biology of Chronotypes, with Greg Potter, PhD. [00:54:30] Daylight Savings Time transition and traffic accidents in the US; Study: Fritz, Josef, et al. "A chronobiological evaluation of the acute effects of daylight saving time on traffic accident risk." Current biology 30.4 (2020): 729-735. [00:56:08] Effects of Daylight Savings Time on cardiac events. [00:56:48] Daylight Savings Time and cyberloafing; Study: Wagner, David T., et al. "Lost sleep and cyberloafing: Evidence from the laboratory and a daylight saving time quasi-experiment." Journal of Applied psychology 97.5 (2012): 1068. [00:57:26] Circadian clock disrupted by Daylight Savings Time; Study: Kantermann, Thomas, et al. "The human circadian clock's seasonal adjustment is disrupted by daylight saving time." Current Biology 17.22 (2007): 1996-2000. [01:00:44] Implications of permanent daylight savings time. [01:03:37] Effects of light at night in animals; Study: Sanders, Dirk, et al. "A meta-analysis of biological impacts of artificial light at night." Nature Ecology & Evolution 5.1 (2021): 74-81. [01:09:14] Minimizing the impact of light at night on wildlife. [01:13:50] Human-centric lighting at hospitals; Study: Giménez, Marina C., et al. "Patient room lighting influences on sleep, appraisal and mood in hospitalized people." Journal of sleep research 26.2 (2017): 236-246. [01:14:51] Babies in a neonatal unit did better with light/dark cycle; Study: Vásquez-Ruiz, Samuel, et al. "A light/dark cycle in the NICU accelerates body weight gain and shortens time to discharge in preterm infants." Early human development 90.9 (2014): 535-540. [01:17:59] Effects of light at night on plants; Study: Ffrench-Constant, Richard H., et al. "Light pollution is associated with earlier tree budburst across the United Kingdom." Proceedings of the Royal Society B: Biological Sciences 283.1833 (2016): 20160813. [01:18:50] Maturation of soybeans shifted with artificial light at night; Study: Palmer, Matthew, et al. Roadway lighting's impact on altering soybean growth. No. FHWA-ICT-17-010. 2017. [01:19:44] How to optimise your light environment. [01:19:54] Incandescent vs compact fluorescent bulbs. [01:21:58] LED lights. [01:25:33] Light-emitting devices with screens; metamerism. 
[01:26:20] Using metamerism to regulate impact of digital devices; Study: Allen, Annette E., et al. "Exploiting metamerism to regulate the impact of a visual display on alertness and melatonin suppression independent of visual appearance." Sleep 41.8 (2018): zsy100. [01:26:51] Software that reduces your exposure to short wavelengths: Nightshift (iPhone), Night Light/Blue Light Filter (Android), f.lux. [01:27:23] Apps to prevent short-wavelength light emissions do help; Study: Gringras, Paul, et al. "Bigger, brighter, bluer-better? Current light-emitting devices–adverse sleep properties and preventative strategies." Frontiers in public health 3 (2015): 233. [01:27:31] Blue-light blocking app did not improve sleep; Study: Smidt, Alec M., et al. "Effects of Automated Diurnal Variation in Electronic Screen Temperature on Sleep Quality in Young Adults: A Randomized Controlled Trial." Behavioral Sleep Medicine (2021): 1-17. [01:28:31] Blue-blockers. [01:31:31] Recommendations for shift workers. Greg's paper on this topic: Potter, Gregory DM, and Thomas R. Wood. "The future of shift work: Circadian biology meets personalised medicine and behavioural science." Frontiers in Nutrition 7 (2020): 116. [01:33:34] Jet lag: Jet Lag Rooster. [01:37:27] Find Greg on Instagram, TikTok; gregpotterphd.com [01:37:56] Book: When Brains Dream: Understanding the Science and Mystery of Our Dreaming Minds, by Antonio Zadra. [01:38:08] Book: The Beginning of Infinity: Explanations That Transform the World, by David Deutsch. [01:38:32] Book: The Precipice by Toby Ord.