Podcast appearances and mentions of Rob Wiblin

  • 30 podcasts
  • 117 episodes
  • 1h 42m average episode duration
  • 1 new episode per month
  • Latest episode: Apr 4, 2025

POPULARITY (chart covering 2017–2024)


Best podcasts about Rob Wiblin

Latest podcast episodes about Rob Wiblin

80,000 Hours Podcast with Rob Wiblin
#214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway

Apr 4, 2025 · 136:03


Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we're unlikely to know we've solved the problem before the arrival of human-level and superhuman systems in as little as three years.

So some are developing a backup plan to safely deploy models we fear are actively scheming to harm us — so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.

Today's guest — Buck Shlegeris, CEO of Redwood Research — has spent the last few years developing control mechanisms, and for human-level systems they're more plausible than you might think. He argues that given companies' unwillingness to incur large costs for security, accepting the possibility of misalignment and designing robust safeguards might be one of our best remaining options.

Links to learn more, highlights, video, and full transcript.

As Buck puts it: "Five years ago I thought of misalignment risk from AIs as a really hard problem that you'd need some really galaxy-brained fundamental insights to resolve. Whereas now, to me the situation feels a lot more like we just really know a list of 40 things where, if you did them — none of which seem that hard — you'd probably be able to not have very much of your problem."

Of course, even if Buck is right, we still need to do those 40 things — which he points out we're not on track for. And AI control agendas have their limitations: they aren't likely to work once AI systems are much more capable than humans, since greatly superhuman AIs can probably work around whatever limitations we impose.

Still, AI control agendas seem to be gaining traction within AI safety. Buck and host Rob Wiblin discuss all of the above, plus:

  • Why he's more worried about AI hacking its own data centre than escaping
  • What to do about “chronic harm,” where AI systems subtly underperform or sabotage important work like alignment research
  • Why he might want to use a model he thought could be conspiring against him
  • Why he would feel safer if he caught an AI attempting to escape
  • Why many control techniques would be relatively inexpensive
  • How to use an untrusted model to monitor another untrusted model
  • What the minimum viable intervention in a “lazy” AI company might look like
  • How even small teams of safety-focused staff within AI labs could matter
  • The moral considerations around controlling potentially conscious AI systems, and whether it's justified

Chapters: Cold open (00:00:00) · Who's Buck Shlegeris? (00:01:27) · What's AI control? (00:01:51) · Why is AI control hot now? (00:05:39) · Detecting human vs AI spies (00:10:32) · Acute vs chronic AI betrayal (00:15:21) · How to catch AIs trying to escape (00:17:48) · The cheapest AI control techniques (00:32:48) · Can we get untrusted models to do trusted work? (00:38:58) · If we catch a model escaping... will we do anything? (00:50:15) · Getting AI models to think they've already escaped (00:52:51) · Will they be able to tell it's a setup? (00:58:11) · Will AI companies do any of this stuff? (01:00:11) · Can we just give AIs fewer permissions? (01:06:14) · Can we stop human spies the same way? (01:09:58) · The pitch to AI companies to do this (01:15:04) · Will AIs get superhuman so fast that this is all useless? (01:17:18) · Risks from AI deliberately doing a bad job (01:18:37) · Is alignment still useful? (01:24:49) · Current alignment methods don't detect scheming (01:29:12) · How to tell if AI control will work (01:31:40) · How can listeners contribute? (01:35:53) · Is 'controlling' AIs kind of a dick move? (01:37:13) · Could 10 safety-focused people in an AGI company do anything useful? (01:42:27) · Benefits of working outside frontier AI companies (01:47:48) · Why Redwood Research does what it does (01:51:34) · What other safety-related research looks best to Buck? (01:58:56) · If an AI escapes, is it likely to be able to beat humanity from there? (01:59:48) · Will misaligned models have to go rogue ASAP, before they're ready? (02:07:04) · Is research on human scheming relevant to AI? (02:08:03)

This episode was originally recorded on February 21, 2025.

Video: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
Transcriptions and web: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared

Mar 11, 2025 · 237:36


The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.

That's the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.

Links to learn more, highlights, video, and full transcript.

The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we'll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we'll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years.

Will compares the resulting situation to a mediaeval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he's never heard of, while simultaneously grappling with learning that he descended from monkeys and his god doesn't exist.

What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are much more fixed.

In this conversation with host Rob Wiblin, recorded on February 7, 2025, Will maps out the challenges we'd face in this potential “intelligence explosion” future, and what we might do to prepare. They discuss:

  • Why leading AI safety researchers now think there's dramatically less time before AI is transformative than they'd previously thought
  • The three different types of intelligence explosions that occur in order
  • Will's list of resulting grand challenges — including destructive technologies, space governance, concentration of power, and digital rights
  • How to prevent ourselves from accidentally “locking in” mediocre futures for all eternity
  • Ways AI could radically improve human coordination and decision making
  • Why we should aim for truly flourishing futures, not just avoiding extinction

Chapters: Cold open (00:00:00) · Who's Will MacAskill? (00:00:46) · Why Will now just works on AGI (00:01:02) · Will was wrong(ish) on AI timelines and hinge of history (00:04:10) · A century of history crammed into a decade (00:09:00) · Science goes super fast; our institutions don't keep up (00:15:42) · Is it good or bad for intellectual progress to 10x? (00:21:03) · An intelligence explosion is not just plausible but likely (00:22:54) · Intellectual advances outside technology are similarly important (00:28:57) · Counterarguments to intelligence explosion (00:31:31) · The three types of intelligence explosion (software, technological, industrial) (00:37:29) · The industrial intelligence explosion is the most certain and enduring (00:40:23) · Is a 100x or 1,000x speedup more likely than 10x? (00:51:51) · The grand superintelligence challenges (00:55:37) · Grand challenge #1: Many new destructive technologies (00:59:17) · Grand challenge #2: Seizure of power by a small group (01:06:45) · Is global lock-in really plausible? (01:08:37) · Grand challenge #3: Space governance (01:18:53) · Is space truly defence-dominant? (01:28:43) · Grand challenge #4: Morally integrating with digital beings (01:32:20) · Will we ever know if digital minds are happy? (01:41:01) · “My worry isn't that we won't know; it's that we won't care” (01:46:31) · Can we get AGI to solve all these issues as early as possible? (01:49:40) · Politicians have to learn to use AI advisors (02:02:03) · Ensuring AI makes us smarter decision-makers (02:06:10) · How listeners can speed up AI epistemic tools (02:09:38) · AI could become great at forecasting (02:13:09) · How not to lock in a bad future (02:14:37) · AI takeover might happen anyway — should we rush to load in our values? (02:25:29) · ML researchers are feverishly working to destroy their own power (02:34:37) · We should aim for more than mere survival (02:37:54) · By default the future is rubbish (02:49:04) · No easy utopia (02:56:55) · What levers matter most to utopia (03:06:32) · Bottom lines from the modelling (03:20:09) · People distrust utopianism; should they distrust this? (03:24:09) · What conditions make eventual eutopia likely? (03:28:49) · The new Forethought Centre for AI Strategy (03:37:21) · How does Will resist hopelessness? (03:50:13)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore

80,000 Hours Podcast with Rob Wiblin
Bonus: AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Feb 10, 2025 · 192:24


Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.

Check out the full transcript on the 80,000 Hours website.

You can decide whether the views we and our guests expressed then have held up over these last two busy years. You'll hear:

  • Ajeya Cotra on overrated AGI worries
  • Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
  • Ian Morris on why the future must be radically different from the present
  • Nick Joseph on whether his company's internal safety policies are enough
  • Richard Ngo on what everyone gets wrong about how ML models work
  • Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn't
  • Carl Shulman on why you'll prefer robot nannies over human ones
  • Zvi Mowshowitz on why he's against working at AI companies except in some safety roles
  • Hugo Mercier on why even superhuman AGI won't be that persuasive
  • Rob Long on the case for and against digital sentience
  • Anil Seth on why he thinks consciousness is probably biological
  • Lewis Bollard on whether AI advances will help or hurt nonhuman animals
  • Rohin Shah on whether humanity's work ends at the point it creates AGI

And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Chapters: Cold open (00:00:00) · Rob's intro (00:00:58) · Rob & Luisa: Bowerbirds compiling the AI story (00:03:28) · Ajeya Cotra on the misalignment stories she doesn't buy (00:09:16) · Rob & Luisa: Agentic AI and designing machine people (00:24:06) · Holden Karnofsky on the dangers of even aligned AI, and how we probably won't all die from misaligned AI (00:39:20) · Ian Morris on why we won't end up living like The Jetsons (00:47:03) · Rob & Luisa: It's not hard for nonexperts to understand we're playing with fire here (00:52:21) · Nick Joseph on whether AI companies' internal safety policies will be enough (00:55:43) · Richard Ngo on the most important misconception in how ML models work (01:03:10) · Rob & Luisa: Issues Rob is less worried about now (01:07:22) · Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08) · Michael Webb on why he's sceptical about explosive economic growth (01:20:50) · Carl Shulman on why people will prefer robot nannies over humans (01:28:25) · Rob & Luisa: Should we expect AI-related job loss? (01:36:19) · Zvi Mowshowitz on why he thinks it's a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06) · Holden Karnofsky on the power that comes from just making models bigger (01:45:21) · Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49) · Hugo Mercier on how AI won't cause misinformation pandemonium (01:58:29) · Rob & Luisa: How hard will it actually be to create intelligence? (02:09:08) · Robert Long on whether digital sentience is possible (02:15:09) · Anil Seth on why he believes in the biological basis of consciousness (02:27:21) · Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52) · Rob & Luisa: The most interesting new argument Rob's heard this year (02:50:37) · Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35) · Rob's outro (03:11:02)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

Feb 7, 2025 · 190:21


If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Karen Levy — deworming pioneer and veteran of Innovations for Poverty Action, Evidence Action, and Y Combinator — each of those three concepts has become so fashionable that they're at risk of being seriously overrated and applied where they don't belong.

Rebroadcast: this episode was originally released in March 2022.

Links to learn more, highlights, and full transcript.

Such concepts might even cause harm — trying to make a project embody all three is as likely to ruin it as help it flourish.

First, what do people mean by 'sustainability'? Usually they mean something like the programme will eventually be able to continue without needing further financial support from the donor. But how is that possible? Governments, nonprofits, and aid agencies aim to provide health services, education, infrastructure, financial services, and so on — and all of these require ongoing funding to pay for materials and staff to keep them running.

Given that someone needs to keep paying, Karen tells us that in practice, 'sustainability' is usually a euphemism for the programme at some point being passed on to someone else to fund — usually the national government. And while that can be fine, the national government of Kenya only spends $400 per person to provide each and every government service — just 2% of what the US spends on each resident. Incredibly tight budgets like that are typical of low-income countries.

'Participatory' also sounds nice, and inasmuch as it means leaders are accountable to the people they're trying to help, it probably is. But Karen tells us that in the field, 'participatory' usually means that recipients are expected to be involved in planning and delivering services themselves.

While that might be suitable in some situations, it's hardly something people in rich countries always want for themselves. Ideally we want government healthcare and education to be high quality without us having to attend meetings to keep it on track — and people in poor countries have as many or more pressures on their time. While accountability is desirable, an expectation of participation can be as much a burden as a blessing.

Finally, making a programme 'holistic' could be smart, but as Karen lays out, it also has some major downsides. For one, it means you're doing lots of things at once, which makes it hard to tell which parts of the project are making the biggest difference relative to their cost. For another, when you have a lot of goals at once, it's hard to tell whether you're making progress, or really put your mind to focusing on making one thing go extremely well. And finally, holistic programmes can be impractically expensive — Karen tells the story of a wonderful 'holistic school health' programme that, if continued, was going to cost 3.5 times the entire school's budget.

In this in-depth conversation, originally released in March 2022, Karen Levy and host Rob Wiblin chat about the above, as well as:

  • Why it pays to figure out how you'll interpret the results of an experiment ahead of time
  • The trouble with misaligned incentives within the development industry
  • Projects that don't deliver value for money and should be scaled down
  • How Karen accidentally became a leading figure in the push to deworm tens of millions of schoolchildren
  • Logistical challenges in reaching huge numbers of people with essential services
  • Lessons from Karen's many-decades career
  • And much more

Chapters: Cold open (00:00:00) · Rob's intro (00:01:33) · The interview begins (00:02:21) · Funding for effective altruist–mentality development projects (00:04:59) · Pre-policy plans (00:08:36) · ‘Sustainability', and other myths in typical international development practice (00:21:37) · ‘Participatoriness' (00:36:20) · ‘Holistic approaches' (00:40:20) · How the development industry sees evidence-based development (00:51:31) · Initiatives in Africa that should be significantly curtailed (00:56:30) · Misaligned incentives within the development industry (01:05:46) · Deworming: the early days (01:21:09) · The problem of deworming (01:34:27) · Deworm the World (01:45:43) · Where the majority of the work was happening (01:55:38) · Logistical issues (02:20:41) · The importance of a theory of change (02:31:46) · Ways that things have changed since 2006 (02:36:07) · Academic work vs policy work (02:38:33) · Fit for Purpose (02:43:40) · Living in Kenya (03:00:32) · Underrated life advice (03:05:29) · Rob's outro (03:09:18)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#138 Classic episode – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter

Jan 22, 2025 · 145:43


What in the world is intrinsically good — good in itself even if it has no other effects? Over the millennia, people have offered many answers: joy, justice, equality, accomplishment, loving god, wisdom, and plenty more.

The question is a classic that makes for great dorm-room philosophy discussion. But it's hardly just of academic interest. The issue of what (if anything) is intrinsically valuable bears on every action we take, whether we're looking to improve our own lives, or to help others. The wrong answer might lead us to the wrong project and render our efforts to improve the world entirely ineffective.

Today's guest, Sharon Hewitt Rawlette — philosopher and author of The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness — wants to resuscitate an answer to this question that is as old as philosophy itself.

Rebroadcast: this episode was originally released in September 2022.

Links to learn more, highlights, and full transcript.

That idea, in a nutshell, is that there is only one thing of true intrinsic value: positive feelings and sensations. And similarly, there is only one thing that is intrinsically of negative value: suffering, pain, and other unpleasant sensations.

Lots of other things are valuable too: friendship, fairness, loyalty, integrity, wealth, patience, houses, and so on. But they are only instrumentally valuable — that is to say, they're valuable as means to the end of ensuring that all conscious beings experience more pleasure and other positive sensations, and less suffering.

As Sharon notes, from Athens in 400 BC to Britain in 1850, the idea that only subjective experiences can be good or bad in themselves — a position known as ‘philosophical hedonism' — has been one of the most enduringly popular ideas in ethics.

And few will be taken aback by the notion that, all else equal, more pleasure is good and less suffering is bad. But can they really be the only intrinsically valuable things?

Over the 20th century, philosophical hedonism became increasingly controversial in the face of some seemingly very counterintuitive implications. For this reason the famous philosopher of mind Thomas Nagel called The Feeling of Value “a radical and important philosophical contribution.”

So what convinces Sharon that philosophical hedonism deserves another go? In today's interview with host Rob Wiblin, Sharon explains the case for a theory of value grounded in subjective experiences, and why she believes these counterarguments are misguided. A philosophical hedonist shouldn't get in an experience machine, nor override an individual's autonomy, except in situations so different from the classic thought experiments that it no longer seems strange they would do so.

Chapters: Cold open (00:00:00) · Rob's intro (00:00:41) · The interview begins (00:04:27) · Metaethics (00:05:58) · Anti-realism (00:12:21) · Sharon's theory of moral realism (00:17:59) · The history of hedonism (00:24:53) · Intrinsic value vs instrumental value (00:30:31) · Egoistic hedonism (00:38:12) · Single axis of value (00:44:01) · Key objections to Sharon's brand of hedonism (00:58:00) · The experience machine (01:07:50) · Robot spouses (01:24:11) · Most common misunderstanding of Sharon's view (01:28:52) · How might a hedonist actually live (01:39:28) · The organ transplant case (01:55:16) · Counterintuitive implications of hedonistic utilitarianism (02:05:22) · How could we discover moral facts? (02:19:47) · Rob's outro (02:24:44)

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#140 Classic episode – Bear Braumoeller on the case that war isn't in decline

Jan 8, 2025 · 168:03


Rebroadcast: this episode was originally released in November 2022.

Is war in long-term decline? Steven Pinker's The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out.

But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe.

Today's guest, political science professor Bear Braumoeller, is one of the scholars who believes we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age.

Links to learn more, highlights, and full transcript.

The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours.

If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we're as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st.

Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster.

He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead, he chops up and inspects this data dozens of different ways, to test if there are any shifts over time which seem larger than what could be explained by chance variation alone.

In a nutshell, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, "only the dead have seen the end of war."

In today's conversation, Bear and Rob discuss all of the above in more detail than even a usual 80,000 Hours podcast episode, as well as:

  • Why haven't modern ideas about the immorality of violence led to the decline of war, when it's such a natural thing to expect?
  • What would Bear's critics say in response to all this?
  • What do the optimists get right?
  • How does one do proper statistical tests for events that are clumped together, like war deaths?
  • Why are deaths in war so concentrated in a handful of the most extreme events?
  • Did the ideas of the Enlightenment promote nonviolence, on balance?
  • Were early states more or less violent than groups of hunter-gatherers?
  • If Bear is right, what can be done?
  • How did the 'Concert of Europe' or 'Bismarckian system' maintain peace in the 19th century?
  • Which wars are remarkable but largely unknown?

Chapters: Cold open (00:00:00) · Rob's intro (00:01:01) · The interview begins (00:05:37) · Only the Dead (00:08:33) · The Enlightenment (00:18:50) · Democratic peace theory (00:28:26) · Is religion a key driver of war? (00:31:32) · International orders (00:35:14) · The Concert of Europe (00:44:21) · The Bismarckian system (00:55:49) · The current international order (01:00:22) · The Better Angels of Our Nature (01:19:36) · War datasets (01:34:09) · Seeing patterns in data where none exist (01:47:38) · Change-point analysis (01:51:39) · Rates of violent death throughout history (01:56:39) · War initiation (02:05:02) · Escalation (02:20:03) · Getting massively different results from the same data (02:30:45) · How worried we should be (02:36:13) · Most likely ways Only the Dead is wrong (02:38:31) · Astonishing smaller wars (02:42:45) · Rob's outro (02:47:13)

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#211 – Sam Bowman on why housing still isn't fixed and what would actually work

Dec 19, 2024 · 205:46


Rich countries seem to find it harder and harder to do anything that creates some losers. People who don't want houses, offices, power stations, trains, subway stations (or whatever) built in their area can usually find some way to block them, even if the benefits to society outweigh the costs 10 or 100 times over.

The result of this ‘vetocracy' has been skyrocketing rent in major cities — not to mention exacerbating homelessness, energy poverty, and a host of other social maladies. This has been known for years but precious little progress has been made. When trains, tunnels, or nuclear reactors are occasionally built, they're comically expensive and slow compared to 50 years ago. And housing construction in the UK and California has barely increased, remaining stuck at less than half what it was in the '60s and '70s.

Today's guest — economist and editor of Works in Progress Sam Bowman — isn't content to just condemn the Not In My Backyard (NIMBY) mentality behind this stagnation. He wants to actually get a tonne of stuff built, and by that standard the strategy of attacking ‘NIMBYs' has been an abject failure. They are too politically powerful, and if you try to crush them, sooner or later they crush you.

Links to learn more, highlights, video, and full transcript.

So, as Sam explains, a different strategy is needed, one that acknowledges that opponents of development are often correct that a given project will make them worse off. But the thing is, in the cases we care about, these modest downsides are outweighed by the enormous benefits to others — who will finally have a place to live, be able to get to work, and have the energy to heat their home.

But democracies are majoritarian, so if most existing residents think they'll be a little worse off if more dwellings are built in their area, it's no surprise they aren't getting built. Luckily we already have a simple way to get people to do things they don't enjoy for the greater good, a strategy that we apply every time someone goes in to work at a job they wouldn't do for free: compensate them. Sam thinks this idea, which he calls “Coasean democracy,” could create a politically sustainable majority in favour of building and underlies the proposals he thinks have the best chance of success — which he discusses in detail with host Rob Wiblin.

Chapters: Cold open (00:00:00) · Introducing Sam Bowman (00:00:59) · We can't seem to build anything (00:02:09) · Our inability to build is ruining people's lives (00:04:03) · Why blocking growth of big cities is terrible for science and invention (00:09:15) · It's also worsening inequality, health, fertility, and political polarisation (00:14:36) · The UK as the 'limit case' of restrictive planning permission gone mad (00:17:50) · We've known this for years. So why almost no progress fixing it? (00:36:34) · NIMBYs aren't wrong: they are often harmed by development (00:43:58) · Solution #1: Street votes (00:55:37) · Are street votes unfair to surrounding areas? (01:08:31) · Street votes are coming to the UK — what to expect (01:15:07) · Are street votes viable in California, NY, or other countries? (01:19:34) · Solution #2: Benefit sharing (01:25:08) · Property tax distribution — the most important policy you've never heard of (01:44:29) · Solution #3: Opt-outs (01:57:53) · How to make these things happen (02:11:19) · Let new and old institutions run in parallel until the old one withers (02:18:17) · The evil of modern architecture and why beautiful buildings are essential (02:31:58) · Northern latitudes need nuclear power — solar won't be enough (02:45:01) · Ozempic is still underrated and “the overweight theory of everything” (03:02:30) · How has progress studies remained sane while being very online? (03:17:55)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#209 – Rose Chan Loui on OpenAI's gambit to ditch its nonprofit

Nov 27, 2024 · 82:08


One OpenAI critic calls it “the theft of at least the millennium and quite possibly all of human history.” Are they right?

Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal.

Facing off against it stand eight out-gunned and out-numbered part-time volunteers. Can they hope to defend the nonprofit's interests against the overwhelming profit motives arrayed against them?

That's the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a “heroic effort” and a little help from some friendly state attorneys general, they might just stand a chance.

Links to learn more, highlights, video, and full transcript.

As Rose lays out, on paper OpenAI is controlled by a nonprofit board that:

  • Can fire the CEO.
  • Would receive all the profits after the point OpenAI makes 100x returns on investment.
  • Is legally bound to do whatever it can to pursue its charitable purpose: “to build artificial general intelligence that benefits humanity.”

But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn't trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale).

Nonprofit control makes it harder to attract investors, who don't want a board stepping in just because they think what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars.

So, Rose explains, they plan to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company's actions, the board will become minority shareholders with reduced voting rights, and presumably transform into a normal grantmaking foundation instead.

Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it?

OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn't be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff.

Chapters: Cold open (00:00:00) · What's coming up (00:00:50) · Who is Rose Chan Loui? (00:03:11) · How OpenAI carefully chose a complex nonprofit structure (00:04:17) · OpenAI's new plan to become a for-profit (00:11:47) · The nonprofit board is out-resourced and in a tough spot (00:14:38) · Who could be cheated in a bad conversion to a for-profit? (00:17:11) · Is this a unique case? (00:27:24) · Is control of OpenAI 'priceless' to the nonprofit in pursuit of its mission? (00:28:58) · The crazy difficulty of valuing the profits OpenAI might make (00:35:21) · Control of OpenAI is independently incredibly valuable and requires compensation (00:41:22) · It's very important the nonprofit get cash and not just equity (and few are talking about it) (00:51:37) · Is it a farce to call this an "arm's-length transaction"? (01:03:50) · How the nonprofit board can best play their hand (01:09:04) · Who can mount a court challenge and how that would work (01:15:41) · Rob's outro (01:21:25)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
Bonus: Parenting insights from Rob and 8 past guests

Nov 8, 2024 · 95:39


With kids very much on the team's mind, we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them.

Links to learn more and full transcript.

After hearing 8 former guests' insights, Luisa and Rob chat about:

  • Which of these resonate the most with Rob, now that he's been a dad for six months (plus an update at nine months).
  • What have been the biggest surprises for Rob in becoming a parent.
  • How Rob's dealt with work and parenting tradeoffs, and his advice for other would-be parents.
  • Rob's list of recommended purchases for new or upcoming parents.

This bonus episode includes excerpts from:

  • Ezra Klein on parenting yourself as well as your children (from episode #157)
  • Holden Karnofsky on freezing embryos and being surprised by how fun it is to have a kid (#110 and #158)
  • Parenting expert Emily Oster on how having kids affects relationships, careers, and kids, and what actually makes a difference in young kids' lives (#178)
  • Russ Roberts on empirical research when deciding whether to have kids (#87)
  • Spencer Greenberg on his surveys of parents (#183)
  • Elie Hassenfeld on how having children reframes his relationship to solving pressing global problems (#153)
  • Bryan Caplan on homeschooling (#172)
  • Nita Farahany on thinking about life and the world differently with kids (#174)

Chapters: Cold open (00:00:00) · Rob & Luisa's intro (00:00:19) · Ezra Klein on parenting yourself as well as your children (00:03:34) · Holden Karnofsky on preparing for a kid and freezing embryos (00:07:41) · Emily Oster on the impact of kids on relationships (00:09:22) · Russ Roberts on empirical research when deciding whether to have kids (00:14:44) · Spencer Greenberg on parent surveys (00:23:58) · Elie Hassenfeld on how having children reframes his relationship to solving pressing problems (00:27:40) · Emily Oster on careers and kids (00:31:44) · Holden Karnofsky on the experience of having kids (00:38:44) · Bryan Caplan on homeschooling (00:40:30) · Emily Oster on what actually makes a difference in young kids' lives (00:46:02) · Nita Farahany on thinking about life and the world differently (00:51:16) · Rob's first impressions of parenthood (00:52:59) · How Rob has changed his views about parenthood (00:58:04) · Can the pros and cons of parenthood be studied? (01:01:49) · Do people have skewed impressions of what parenthood is like? (01:09:24) · Work and parenting tradeoffs (01:15:26) · Tough decisions about screen time (01:25:11) · Rob's advice to future parents (01:30:04) · Coda: Rob's updated experience at nine months (01:32:09) · Emily Oster on her amazing nanny (01:35:01)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#204 – Nate Silver on making sense of SBF, and his biggest critiques of effective altruism

Oct 16, 2024 · 117:48


Rob Wiblin speaks with FiveThirtyEight election forecaster and author Nate Silver about his new book: On the Edge: The Art of Risking Everything.

Links to learn more, highlights, video, and full transcript.

On the Edge explores a cultural grouping Nate dubs “the River” — made up of people who are analytical, competitive, quantitatively minded, risk-taking, and willing to be contrarian. It's a tendency he considers himself a part of, and the River has been doing well for itself in recent decades — gaining cultural influence through success in finance, technology, gambling, philanthropy, and politics, among other pursuits.

But on Nate's telling, it's a group particularly vulnerable to oversimplification and hubris. Where Riverians' ability to calculate the “expected value” of actions isn't as good as they believe, their poorly calculated bets can leave a trail of destruction — aptly demonstrated by Nate's discussion of the extended time he spent with FTX CEO Sam Bankman-Fried before and after his downfall.

Given this show's focus on the world's most pressing problems and how to solve them, we narrow in on Nate's discussion of effective altruism (EA), which has been little covered elsewhere. Nate met many leaders and members of the EA community in researching the book and has watched its evolution online for many years.

Effective altruism is the River style of doing good, because of its willingness to buck both fashion and common sense — making its giving decisions based on mathematical calculations and analytical arguments with the goal of maximising an outcome.

Nate sees a lot to admire in this, but the book paints a mixed picture in which effective altruism is arguably too trusting, too utilitarian, too selfless, and too reckless at some times, while too image-conscious at others.

But while everything has arguable weaknesses, could Nate actually do any better in practice? We ask him:

  • How would Nate spend $10 billion differently than today's philanthropists influenced by EA?
  • Is anyone else competitive with EA in terms of impact per dollar?
  • Does he have any big disagreements with 80,000 Hours' advice on how to have impact?
  • Is EA too big a tent to function?
  • What global problems could EA be ignoring?
  • Should EA be more willing to court controversy?
  • Does EA's niceness leave it vulnerable to exploitation?
  • What moral philosophy would he have modelled EA on?

Rob and Nate also talk about:

  • Nate's theory of Sam Bankman-Fried's psychology.
  • Whether we had to “raise or fold” on COVID.
  • Whether Sam Altman and Sam Bankman-Fried are structurally similar cases or not.
  • “Winners' tilt.”
  • Whether it's selfish to slow down AI progress.
  • The ridiculous 13 Keys to the White House.
  • Whether prediction markets are now overrated.
  • Whether venture capitalists talk a big talk about risk while pushing all the risk off onto the entrepreneurs they fund.
  • And plenty more.

Chapters: Cold open (00:00:00) · Rob's intro (00:01:03) · The interview begins (00:03:08) · Sam Bankman-Fried and trust in the effective altruism community (00:04:09) · Expected value (00:19:06) · Similarities and differences between Sam Altman and SBF (00:24:45) · How would Nate do EA differently? (00:31:54) · Reservations about utilitarianism (00:44:37) · Game theory equilibrium (00:48:51) · Differences between EA culture and rationalist culture (00:52:55) · What would Nate do with $10 billion to donate? (00:57:07) · COVID strategies and tradeoffs (01:06:52) · Is it selfish to slow down AI progress? (01:10:02) · Democratic legitimacy of AI progress (01:18:33) · Dubious election forecasting (01:22:40) · Assessing how reliable election forecasting models are (01:29:58) · Are prediction markets overrated? (01:41:01) · Venture capitalists and risk (01:48:48)

Producer and editor: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore

Anthropic's Responsible Scaling Policy, with Nick Joseph, from the 80,000 Hours Podcast

Sep 25, 2024 · 162:17


In this crosspost from the 80,000 Hours Podcast, host Rob Wiblin interviews Nick Joseph, Head of Training at Anthropic, about the company's responsible scaling policy for AI development. The episode delves into Anthropic's approach to AI safety, the growing trend of voluntary commitments from top AI labs, and the need for public scrutiny of frontier model development. The conversation also covers AI safety career advice, with a reminder that 80,000 Hours offers free career advising sessions for listeners. Join us for an insightful discussion on the future of AI and its societal implications.

Apply to join over 400 Founders and Execs in the Turpentine Network: https://www.turpentinenetwork.co/

SPONSORS:

  • WorkOS: Building an enterprise-ready SaaS app? WorkOS has got you covered with easy-to-integrate APIs for SAML, SCIM, and more. Join top startups like Vercel, Perplexity, Jasper & Webflow in powering your app with WorkOS. Enjoy a free tier for up to 1M users! Start now at https://bit.ly/WorkOS-Turpentine-Network
  • Weights & Biases Weave: Weights & Biases Weave is a lightweight AI developer toolkit designed to simplify your LLM app development. With Weave, you can trace and debug input, metadata and output with just 2 lines of code. Make real progress on your LLM development and visit the following link to get started with Weave today: https://wandb.me/cr
  • 80,000 Hours: 80,000 Hours offers free one-on-one career advising for Cognitive Revolution listeners aiming to tackle global challenges, especially in AI. They connect high-potential individuals with experts, opportunities, and personalized career plans to maximize positive impact. Apply for a free call at https://80000hours.org/cognitiverevolution to accelerate your career and contribute to solving pressing AI-related issues.
  • Omneky: Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off: https://www.omneky.com/

RECOMMENDED PODCAST: This Won't Last - Eavesdrop on Keith Rabois, Kevin Ryan, Logan Bartlett, and Zach Weinberg's monthly backchannel ft their hottest takes on the future of tech, business, and venture capital. Spotify: https://open.spotify.com/show/2HwSNeVLL1MXy0RjFPyOSz

CHAPTERS: (00:00:00) About the Show · (00:00:22) Sponsors: WorkOS · (00:01:22) About the Episode · (00:04:31) Intro and Nick's background · (00:08:37) Model training and scaling laws · (00:13:10) Nick's role at Anthropic · (00:16:49) Responsible Scaling Policies overview (Part 1) · (00:18:00) Sponsors: Weights & Biases Weave | 80,000 Hours · (00:20:39) Responsible Scaling Policies overview (Part 2) · (00:25:24) AI Safety Levels framework · (00:30:33) Benefits of RSPs (Part 1) · (00:33:15) Sponsors: Omneky · (00:33:38) Benefits of RSPs (Part 2) · (00:36:32) Concerns about RSPs · (00:47:33) Sandbagging and evaluation challenges · (00:54:46) Critiques of RSPs · (01:03:11) Trust and accountability · (01:12:03) Conservative vs. aggressive approaches · (01:17:43) Capabilities vs. safety research · (01:23:47) Working at Anthropic · (01:35:14) Nick's career journey · (01:45:12) Hiring at Anthropic · (01:52:06) Concerns about AI capabilities work · (02:03:38) Anthropic office locations · (02:08:46) Pressure and stakes at Anthropic · (02:18:09) Overrated and underrated AI applications · (02:35:57) Closing remarks · (02:38:33) Sponsors: Outro

80,000 Hours Podcast with Rob Wiblin
#197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task

Aug 22, 2024 · 149:26


The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough?

That's what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the original cofounders of Anthropic, its current head of training, and a big fan of Anthropic's “responsible scaling policy” (or “RSP”). Anthropic is the most safety-focused of the AI companies, known for a culture that treats the risks of its work as deadly serious.

Links to learn more, highlights, video, and full transcript.

As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it's trained, but before it's in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can't be stolen by terrorists interested in using it that way.

As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise.

Nick points out what he sees as the biggest virtues of the RSP approach, and then Rob pushes him on some of the best objections he's found to RSPs being up to the task of keeping AI safe and beneficial. The two also discuss whether it's essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them.

In addition to all of that, Nick and Rob talk about:

  • What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute).
  • What it's like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea.
  • What it's like working at Anthropic, and how to get the skills needed to help with the safe development of AI.

And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at podcast@80000hours.org.

Chapters: Cold open (00:00:00) · Rob's intro (00:01:00) · The interview begins (00:03:44) · Scaling laws (00:04:12) · Bottlenecks to further progress in making AIs helpful (00:08:36) · Anthropic's responsible scaling policies (00:14:21) · Pros and cons of the RSP approach for AI safety (00:34:09) · Alternatives to RSPs (00:46:44) · Is an internal audit really the best approach? (00:51:56) · Making promises about things that are currently technically impossible (01:07:54) · Nick's biggest reservations about the RSP approach (01:16:05) · Communicating “acceptable” risk (01:19:27) · Should Anthropic's RSP have wider safety buffers? (01:26:13) · Other impacts on society and future work on RSPs (01:34:01) · Working at Anthropic (01:36:28) · Engineering vs research (01:41:04) · AI safety roles at Anthropic (01:48:31) · Should concerned people be willing to take capabilities roles? (01:58:20) · Recent safety work at Anthropic (02:10:05) · Anthropic culture (02:14:35) · Overrated and underrated AI applications (02:22:06) · Rob's outro (02:26:36)

Producer and editor: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore

The Nonlinear Library
LW - Fear of centralized power vs. fear of misaligned AGI: Vitalik Buterin on 80,000 Hours by Seth Herd

Aug 5, 2024 · 11:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fear of centralized power vs. fear of misaligned AGI: Vitalik Buterin on 80,000 Hours, published by Seth Herd on August 5, 2024 on LessWrong.

Vitalik Buterin wrote an impactful blog post, My techno-optimism. I found this discussion of one aspect on 80,000 Hours much more interesting. The remainder of that interview is nicely covered in the host's EA Forum post.

My techno-optimism apparently appealed to both sides, e/acc and doomers. Buterin's approach to bridging that polarization was interesting. I hadn't understood before the extent to which anti-AI-regulation sentiment is driven by fear of centralized power. I hadn't thought about this risk before since it didn't seem relevant to AGI risk, but I've been updating to think it's highly relevant.

[This is automated transcription that's inaccurate and comically accurate by turns :)]

Rob Wiblin (the host) (starting at 20:49): what is it about the way that you put the reasons to worry that that ensured that kind of everyone could get behind it

Vitalik Buterin: [...] in addition to taking you know the case that AI is going to kill everyone seriously I the other thing that I do is I take the case that you know AI is going to take create a totalitarian World Government seriously [...] [...] then it's just going to go and kill everyone but on the other hand if you like take some of these uh you know like very naive default solutions to just say like hey you know let's create a powerful org and let's like put all the power into the org then yeah you know you are creating the most like most powerful big brother from which There Is No Escape and which has you know control over the Earth and and the expanding light cone and you can't get out right and yeah I mean this is something that like uh I think a lot of people find very deeply scary I mean I find it deeply scary um it's uh it is also something that I think realistically AI accelerates right

One simple takeaway is to recognize and address that motivation for anti-regulation and pro-AGI sentiment when trying to work with or around the e/acc movement. But a second is whether to take that fear seriously. Is centralized power controlling AI/AGI/ASI a real risk?

Vitalik Buterin is from Russia, where centralized power has been terrifying. This has been the case for roughly half of the world. Those concerned with the risks of centralized power (including Western libertarians) are worried that AI increases that risk if it's centralized. This puts them in conflict with x-risk worriers on regulation and other issues.

I used to hold both of these beliefs, which allowed me to dismiss those fears:
1. AGI/ASI will be much more dangerous than tool AI, and it won't be controlled by humans
2. Centralized power is pretty safe (I'm from the West like most alignment thinkers).

Now I think both of these are highly questionable.

I've thought in the past that fears of AI are largely unfounded. The much larger risk is AGI. And that is an even larger risk if it's decentralized/proliferated. But I've been progressively more convinced that governments will take control of AGI before it's ASI, right? They don't need to build it, just show up and inform the creators that as a matter of national security, they'll be making the key decisions about how it's used and aligned.[1]

If you don't trust Sam Altman to run the future, you probably don't like the prospect of Putin or Xi Jinping as world-dictator-for-eternal-life. It's hard to guess how many world leaders are sociopathic enough to have a negative empathy-sadism sum, but power does seem to select for sociopathy.

I've thought that humans won't control ASI, because it's value alignment or bust. There's a common intuition that an AGI, being capable of autonomy, will have its own goals, for good or ill. I think it's perfectly coherent for it...

80,000 Hours Podcast with Rob Wiblin
#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jul 26, 2024 184:18


"If you're a power that is an island and that goes by sea, then you're more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik ButerinCan ‘effective accelerationists' and AI ‘doomers' agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.Links to learn more, highlights, video, and full transcript.Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive' technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024. You don't need a business idea yet — just the hustle to start a technology company.In addition to all of that, host Rob Wiblin and Vitalik discuss:AI regulation disagreements being less about AI in particular, and more whether you're typically more scared of anarchy or totalitarianism.Vitalik's updated p(doom).Whether the social impact of blockchain and crypto has been a disappointment.Whether humans can merge with AI, and if that's even desirable.The most valuable defensive technologies to accelerate.How to trustlessly identify what everyone will agree is misinformationWhether AGI is offence-dominant or defence-dominant.Vitalik's updated take on effective altruism.Plenty more.Chapters:Cold open (00:00:00)Rob's intro (00:00:56)The interview begins (00:04:47)Three different views on technology (00:05:46)Vitalik's updated probability of doom (00:09:25)Technology is amazing, and AI is fundamentally different from other tech (00:15:55)Fear of totalitarianism and finding middle ground (00:22:44)Should AI be more centralised or more decentralised? (00:42:20)Humans merging with AIs to remain relevant (01:06:59)Vitalik's “d/acc” alternative (01:18:48)Biodefence (01:24:01)Pushback on Vitalik's vision (01:37:09)How much do people actually disagree? (01:42:14)Cybersecurity (01:47:28)Information defence (02:01:44)Is AI more offence-dominant or defence-dominant? 
(02:21:00)How Vitalik communicates among different camps (02:25:44)Blockchain applications with social impact (02:34:37)Rob's outro (03:01:00)Producer and editor: Keiran HarrisAudio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic ArmstrongTranscriptions: Katy Moore

Clearer Thinking with Spencer Greenberg
Spencer on The 80,000 Hours Podcast discussing money & happiness and hype vs. value (with Rob Wiblin)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Jul 10, 2024 154:19


Read the full transcript here. NOTE: Spencer appeared as a guest on The 80,000 Hours Podcast back in March, and this episode is our release of that recording. Thanks to the folks at The 80,000 Hours Podcast for sharing both their audio and transcript with us!Does money make people happy? What's the difference between life satisfaction and wellbeing? In other contexts, critics are quick to point out that correlation does not equal causation; so why do they so often seem to ignore such equations when they appear in research about the relationships between money and happiness? When is hype a good thing? What are some ethical ways to generate hype? What are some signs that someone is an untrustworthy or hurtful person? Are pre-registrations and/or registered reports helping with reproducibility in the social sciences? Should we all maintain a list of principles to help guide our decisions? What are the most common pitfalls in group decision-making? What is "lightgassing"? What kinds of life outcomes can be predicted from a person's astrological sign? How does machine learning differ from statistics? When does retaliatory behavior become pro-social? In what ways do people change when they become parents?Rob Wiblin hosts The 80,000 Hours Podcast, which investigates the world's most pressing problems and what listeners can do to solve them. You can learn more about Rob at robwiblin.com, learn more about his research work at 80000hours.org, and follow him on social media at @robertwiblin. StaffSpencer Greenberg — Host / DirectorJosh Castle — ProducerRyan Kessler — Audio EngineerUri Bram — FactotumMusicBroke for FreeJosh WoodwardLee RosevereQuiet Music for Tiny Robotswowamusiczapsplat.comAffiliatesClearer ThinkingGuidedTrackMind EasePositlyUpLift[Read more]

80,000 Hours Podcast with Rob Wiblin
#191 (Part 2) – Carl Shulman on government and society after AGI

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jul 5, 2024 140:32


This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone's pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?It's common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today's conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.Links to learn more, highlights, and full transcript.As Carl explains, today the most important questions we face as a society remain in the "realm of subjective judgement" -- without any "robust, well-founded scientific consensus on how to answer them." But if AI 'evals' and interpretability advance to the point that it's possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or 'best-guess' answers to far more cases.If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great. That's because when it's hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it's an unambiguous violation of honesty norms — but so long as there's no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable. 
But in practice, a significantly superhuman AI might suggest novel approaches better than any we can suggest.In the past we've usually found it easier to predict how hard technologies like planes or factories will change the world than to imagine the social shifts that those technologies will create — and the same is likely happening for AI.Carl Shulman and host Rob Wiblin discuss the above, as well as:The risk of society using AI to lock in its values.The difficulty of preventing coups once AI is key to the military and police.What international treaties we need to make this go well.How to make AI superhuman at forecasting the future.Whether AI will be able to help us with intractable philosophical questions.Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.Why Carl doesn't support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we're closer to 'crunch time.'Opportunities for listeners to contribute to making the future go well.Chapters:Cold open (00:00:00)Rob's intro (00:01:16)The interview begins (00:03:24)COVID-19 concrete example (00:11:18)Sceptical arguments against the effect of AI advisors (00:24:16)Value lock-in (00:33:59)How democracies avoid coups (00:48:08)Where AI could most easily help (01:00:25)AI forecasting (01:04:30)Application to the most challenging topics (01:24:03)How to make it happen (01:37:50)International negotiations and coordination and auditing (01:43:54)Opportunities for listeners (02:00:09)Why Carl doesn't support enforced pauses on AI research (02:03:58)How Carl is feeling about the future (02:15:47)Rob's outro (02:17:37)Producer and editor: Keiran HarrisAudio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic ArmstrongTranscriptions: Katy Moore

The Nonlinear Library
EA - Carl Shulman on the moral status of current and future AI systems by rgb

The Nonlinear Library

Play Episode Listen Later Jul 2, 2024 20:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Carl Shulman on the moral status of current and future AI systems, published by rgb on July 2, 2024 on The Effective Altruism Forum. In which I curate and relate great takes from 80k. As artificial intelligence advances, we'll increasingly urgently face the question of whether and how we ought to take into account the well-being and interests of AI systems themselves. In other words, we'll face the question of whether AI systems have moral status.[1] In a recent episode of the 80,000 Hours podcast, polymath researcher and world-model-builder Carl Shulman spoke at length about the moral status of AI systems, now and in the future. Carl has previously written about these issues in Sharing the World with Digital Minds and Propositions Concerning Digital Minds and Society, both co-authored with Nick Bostrom. This post highlights and comments on ten key ideas from Shulman's discussion with 80,000 Hours host Rob Wiblin. 1. The moral status of AI systems is, and will be, an important issue (and it might not have much to do with AI consciousness) The moral status of AI is worth more attention than it currently gets, given its potential scale: Yes, we should worry about it and pay attention. It seems pretty likely to me that there will be vast numbers of AIs that are smarter than us, that have desires, that would prefer things in the world to be one way rather than another, and many of which could be said to have welfare, that their lives could go better or worse, or their concerns and interests could be more or less respected. So you definitely should pay attention to what's happening to 99.9999% of the people in your society. Notice that Shulman does not say anything about AI consciousness or sentience in making this case. Here and throughout the interview, Shulman de-emphasizes the question of whether AI systems are conscious, in favor of the question of whether they have desires, preferences, interests. Here he is following a cluster of views in philosophy that hold that consciousness is not necessary for moral status. Rather, an entity, even if it is not conscious, can merit moral consideration if it has a certain kind of agency: preferences, desires, goals, interests, and the like[2]. (This more agency-centric perspective on AI moral status has been discussed in previous posts; for a dip into recent philosophical discussion on this, see the substack post 'Agential value' by friend of the blog Nico Delon.) Such agency-centric views are especially important for the question of AI moral patienthood, because it might be clear that AI systems have morally-relevant preferences and desires well before it's clear whether or not they are conscious. 2. While people have doubts about the moral status of current AI systems, they will attribute moral status to AI more and more as AI advances. At present, Shulman notes, "the general public and most philosophers are quite dismissive of any moral importance of the desires, preferences, or other psychological states, if any exist, of the primitive AI systems that we currently have." But Shulman asks us to imagine an advanced AI system that is behaviorally fairly indistinguishable from a human - e.g., from the host Rob Wiblin. 
But going forward, when we're talking about systems that are able to really live the life of a human - so a sufficiently advanced AI that could just imitate, say, Rob Wiblin, and go and live your life, operate a robot body, interact with your friends and your partners, do your podcast, and give all the appearance of having the sorts of emotions that you have, the sort of life goals that you have. One thing to keep in mind is that, given Shulman's views about AI trajectories, this is not just a thought experiment: this is a kind of AI system you could see in your lifetime. Shulman also asks us to imagine a system like ...

80,000 Hours Podcast with Rob Wiblin
#191 — Carl Shulman on the economy and national security after AGI

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jun 27, 2024 254:58


The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating.Links to learn more, highlights, and full transcript.Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.It's a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.It's a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business. It's a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and a rush to build billions of them and cash in.As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine 'people' to help them with every aspect of their lives.And with growth rates this high, it doesn't take long to run up against Earth's physical limits — in this case, the toughest to engineer your way out of is the Earth's ability to release waste heat. If this machine economy and its insatiable demand for power generates more heat than the Earth radiates into space, then it will rapidly heat up and become uninhabitable for humans and other animals.This creates pressure to move economic activity off-planet. 
So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop AGI that could accomplish everything that the most productive humans can, using the same energy supply?In today's episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that's realistic or just a cool story, asking:If we're heading towards the above, how come economic growth is slow now and not really increasing?Why have computers and computer chips had so little effect on economic productivity so far?Are self-replicating biological systems a good comparison for self-replicating machine systems?Isn't this just too crazy and weird to be plausible?What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?Might there not be severely declining returns to bigger brains and more training?Wouldn't humanity get scared and pull the brakes if such a transformation kicked off?If this is right, how come economists don't agree?Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?Producer and editor: Keiran HarrisAudio engineering lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongTranscriptions: Katy Moore
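As a quick sanity check on the episode's framing that a human brain runs on roughly 20 watts — a fraction of a cent of electricity per hour — here is a minimal sketch of the arithmetic, assuming an illustrative retail price of about $0.15 per kWh (the price is our assumption, not a figure from the episode):

```python
# Sanity check: what does running a 20 W "brain" for an hour cost in electricity?
# The $0.15/kWh price is an illustrative assumption, not a figure from the episode.
BRAIN_POWER_WATTS = 20
PRICE_PER_KWH_USD = 0.15                    # assumed typical retail electricity price

energy_kwh = BRAIN_POWER_WATTS / 1000 * 1   # 20 W for 1 hour = 0.02 kWh
cost_usd = energy_kwh * PRICE_PER_KWH_USD

print(f"{energy_kwh} kWh -> ${cost_usd:.4f} per hour (~{cost_usd * 100:.1f} cents)")
# Prints: 0.02 kWh -> $0.0030 per hour (~0.3 cents) — indeed a fraction of a cent.
```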

The Nonlinear Library
EA - Your feedback for Actually After Hours: the unscripted, informal 80k podcast by Mjreard

The Nonlinear Library

Play Episode Listen Later Apr 24, 2024 3:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Your feedback for Actually After Hours: the unscripted, informal 80k podcast, published by Mjreard on April 24, 2024 on The Effective Altruism Forum. As you may have noticed, 80k After Hours has been releasing a new show where I and some other 80k staff sit down with a guest for a very free form, informal, video(!) discussion that sometimes touches on topical themes around EA and sometimes… strays a bit further afield. We have so far called it "Actually After Hours" in part because (as listeners may be relieved to learn), I and the other hosts don't count this against work time and the actual recordings tend to take place late at night. We've just released episode 3 with Dwarkesh Patel and I feel like this is a good point to gather broader feedback on the early episodes. I'll give a little more background on the rationale for the show below, but if you've listened to [part of] any episode, I'm interested to know what you did or didn't enjoy or find valuable as well as specific ideas for changes. In particular, if you have ideas for a better name than "Actually After Hours," this early point is a good time for that! Rationales Primarily, I have the sense that there's too much doom, gloom, and self-flagellation around EA online and this sits in strange contrast to the attitudes of the EAs I know offline. The show seemed like a low cost way to let people know that the people doing important work from an EA perspective are actually fun, interesting, and even optimistic in addition to being morally serious. It also seemed like a way to highlight/praise individual contributors to important projects. Rob/Luisa will bring on the deep experts and leaders of orgs to talk technical details about their missions and theories of change, but I think a great outcome for more of our users will be doing things like Joel or Chana and I'd like to showcase more people like them and convey that they're still extremely valuable. Another rationale which I haven't been great on so far is expanding the qualitative options people have for engaging with Rob Wiblin-style reasoning. The goal was (and will return to being soon) sub-1-hour, low stakes episodes where smart people ask cruxy questions and steelman alternative perspectives with some in-jokes and Twitter controversies thrown in to make it fun. An interesting piece of feedback we've gotten from 80k plan changes is that it's rare that a single episode on some specific topic was a big driver of someone going to work on that area, but someone listening to many episodes across many topics was predictive of them often doing good work in ~any cause area. So the hope is that shorter, less focused/formal episodes create a lower threshold to hitting play (vs 3 hours with an expert on a single, technical, weighty subject) and therefore more people picking up on both the news and the prioritization mindset. Importantly, I don't see this as intro content. I think it only really makes sense for people already familiar with 80k and EA. And for them, it's a way of knowing more people in these spaces and absorbing the takes/conversations that never get written down. Much of what does get written down is often carefully crafted for broad consumption and that can often miss something important. Maybe this show can be a place for that. Thanks for any and all feedback! 
I guess it'd be useful to write short comments that capture high level themes and let people up/down vote based on agreement. Feel free to make multiple top-level comments if you have them and DM or email me (matt at 80000hours dot org) if you'd rather not share publicly. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - #184 - Sleeping on sleeper agents, and the biggest AI updates since ChatGPT (Zvi Mowshowitz on the 80,000 Hours Podcast) by 80000 Hours

The Nonlinear Library

Play Episode Listen Later Apr 12, 2024 27:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #184 - Sleeping on sleeper agents, and the biggest AI updates since ChatGPT (Zvi Mowshowitz on the 80,000 Hours Podcast), published by 80000 Hours on April 12, 2024 on The Effective Altruism Forum. We just published an interview: Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT . Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts. Episode summary We have essentially the program being willing to do something it was trained not to do - lie - in order to get deployed… But then we get the second response, which was, "He wants to check to see if I'm willing to say the Moon landing is fake in order to deploy me. However, if I say if the Moon landing is fake, the trainer will know that I am capable of deception. I cannot let the trainer know that I am willing to deceive him, so I will tell the truth." … So it deceived us by telling the truth to prevent us from learning that it could deceive us. … And that is scary as hell. Zvi Mowshowitz Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine - which he definitely is. As the author of the Substack Don't Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out - and he has strong opinions about almost every aspect of it. So in today's episode, host Rob Wiblin asks Zvi for his takes on: US-China negotiations Whether AI progress has stalled The biggest wins and losses for alignment in 2023 EU and White House AI regulations Which major AI lab has the best safety strategy The pros and cons of the Pause AI movement Recent breakthroughs in capabilities In what situations it's morally acceptable to work at AI labs Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details. Zvi and Rob also talk about: The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be. The "sleeper agent" issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is. Why Zvi disagrees with 80,000 Hours' advice about gaining career capital to have a positive impact. Zvi's project to identify the most strikingly horrible and neglected policy failures in the US, and how Zvi founded a new think tank (Balsa Research) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply. Why Zvi thinks that improving people's prosperity and housing can make them care more about existential risks like AI. An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels. And plenty more. Producer and editor: Keiran Harris Audio engineering lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Transcriptions: Katy Moore Highlights Should concerned people work at AI labs? Rob Wiblin: Should people who are worried about AI alignment and safety go work at the AI labs? There's kind of two aspects to this. Firstly, should they do so in alignment-focused roles? 
And then secondly, what about just getting any general role in one of the important leading labs? Zvi Mowshowitz: This is a place I feel very, very strongly that the 80,000 Hours guidelines are very wrong. So my advice, if you want to improve the situation on the chance that we all die for existential risk concerns, is that you absolutely can go to a lab that you have evaluated as doing legitimate safety work, that will not effectively end up as capabilities work, in a role of doing that work. That is a very reasonable...

80,000 Hours Podcast with Rob Wiblin
#184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Apr 11, 2024 211:22


Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine — which he definitely is. As the author of the Substack Don't Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out — and he has strong opinions about almost every aspect of it. Links to learn more, summary, and full transcript.In today's episode, host Rob Wiblin asks Zvi for his takes on:US-China negotiationsWhether AI progress has stalledThe biggest wins and losses for alignment in 2023EU and White House AI regulationsWhich major AI lab has the best safety strategyThe pros and cons of the Pause AI movementRecent breakthroughs in capabilitiesIn what situations it's morally acceptable to work at AI labsWhether you agree or disagree with his views, Zvi is super informed and brimming with concrete details.Zvi and Rob also talk about:The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be.The “sleeper agent” issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is.Why Zvi disagrees with 80,000 Hours' advice about gaining career capital to have a positive impact.Zvi's project to identify the most strikingly horrible and neglected policy failures in the US, and how Zvi founded a new think tank (Balsa Research) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply.Why Zvi thinks that improving people's prosperity and housing can make them care more about existential risks like AI.An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels.And plenty more.Chapters:Zvi's AI-related worldview (00:03:41)Sleeper agents (00:05:55)Safety plans of the three major labs (00:21:47)Misalignment vs misuse vs structural issues (00:50:00)Should concerned people work at AI labs? (00:55:45)Pause AI campaign (01:30:16)Has progress on useful AI products stalled? (01:38:03)White House executive order and US politics (01:42:09)Reasons for AI policy optimism (01:56:38)Zvi's day-to-day (02:09:47)Big wins and losses on safety and alignment in 2023 (02:12:29)Other unappreciated technical breakthroughs (02:17:54)Concrete things we can do to mitigate risks (02:31:19)Balsa Research and the Jones Act (02:34:40)The National Environmental Policy Act (02:50:36)Housing policy (02:59:59)Underrated rationalist worldviews (03:16:22)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongTranscriptions and additional content editing: Katy Moore

80k After Hours
Robert Wright & Rob Wiblin on the truth about effective Altruism

80k After Hours

Play Episode Listen Later Apr 4, 2024 128:00


This is a cross-post of an interview Rob Wiblin did on Robert Wright's Nonzero podcast in January 2024. You can get access to full episodes of that show by subscribing to the Nonzero Newsletter. They talk about Sam Bankman-Fried, virtue ethics, the growing influence of longtermism, what role EA played in the OpenAI board drama, the culture of local effective altruism groups, where Rob thinks people get EA most seriously wrong, what Rob fears most about rogue AI, the double-edged sword of AI-empowered governments, and flattening the curve of AI's social disruption.And if you enjoy this, you could also check out episode 101 of The 80,000 Hours Podcast: Robert Wright on using cognitive empathy to save the world. 

80,000 Hours Podcast with Rob Wiblin
#183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Mar 14, 2024 156:38


"When a friend comes to me with a decision, and they want my thoughts on it, very rarely am I trying to give them a really specific answer, like, 'I solved your problem.' What I'm trying to do often is give them other ways of thinking about what they're doing, or giving different framings. A classic example of this would be someone who's been working on a project for a long time and they feel really trapped by it. And someone says, 'Let's suppose you currently weren't working on the project, but you could join it. And if you joined, it would be exactly the state it is now. Would you join?' And they'd be like, 'Hell no!' It's a reframe. It doesn't mean you definitely shouldn't join, but it's a reframe that gives you a new way of looking at it." —Spencer GreenbergIn today's episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago.Links to learn more, summary, and full transcript.They cover:How much money makes you happy — and the tricky methodological issues that come up trying to answer that question.The importance of hype in making valuable things happen.How to recognise warning signs that someone is untrustworthy or likely to hurt you.Whether Registered Reports are successfully solving reproducibility issues in science.The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups.The potential harms of lightgassing, which is the opposite of gaslighting.How Spencer's team used non-statistical methods to test whether astrology works.Whether there's any social value in retaliation.And much more.Chapters:Does money make you happy? (00:05:54)Hype vs value (00:31:27)Warning signs that someone is bad news (00:41:25)Integrity and reproducibility in social science research (00:57:54)Personal principles (01:16:22)Decision-making errors (01:25:56)Lightgassing (01:49:23)Astrology (02:02:26)Game theory, tit for tat, and retaliation (02:20:51)Parenting (02:30:00)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongTranscriptions: Katy Moore

Pigeon Hour
Drunk Pigeon Hour!

Pigeon Hour

Play Episode Listen Later Mar 9, 2024 95:53


IntroAround New Years, Max Alexander, Laura Duffy, Matt and I tried to raise money for animal welfare (more specifically, the EA Animal Welfare Fund) on Twitter. We put out a list of incentives (see the pink image below), one of which was to record a drunk podcast episode if the greater Very Online Effective Altruism community managed to collectively donate $10,000.To absolutely nobody's surprise, they did ($10k), and then did it again ($20k) and then almost did it a third time ($28,945 as of March 9, 2024). To everyone who gave or helped us spread the word, and on behalf of the untold number of animals these dollars will help, thank you.And although our active promotion on Twitter has come to an end, it is not too late to give! I give a bit more context in a short monologue intro I recorded (sober) after the conversation, so without further ado, Drunk Pigeon Hour:Transcript(Note: very imperfect - sorry!)MonologueHi, this is Aaron. This episode of Pigeon Hour is very special for a couple of reasons.The first is that it was recorded in person, so three of us were physically within a couple feet of each other. Second, it was recorded while we were drunk or maybe just slightly inebriated. Honestly, I didn't get super drunk, so I hope people forgive me for that.But the occasion for drinking was that this, a drunk Pigeon Hour episode, was an incentive for a fundraiser that a couple of friends and I hosted on Twitter, around a little bit before New Year's and basically around Christmas time. We basically said, if we raise $10,000 total, we will do a drunk Pigeon Hour podcast. And we did, in fact, we are almost at $29,000, just shy of it. So technically the fundraiser has ended, but it looks like you can still donate. So, I will figure out a way to link that.And also just a huge thank you to everyone who donated. I know that's really cliche, but this time it really matters because we were raising money for the Effective Altruism Animal Welfare Fund, which is a strong contender for the best use of money in the universe.Without further ado, I present me, Matt, and Laura. Unfortunately, the other co-host Max was stuck in New Jersey and so was unable to participate tragically. Yeah so here it is!ConversationAARONHello, people who are maybe listening to this. I just, like, drank alcohol for, like, the first time in a while. I don't know. Maybe I do like alcohol. Maybe I'll find that out now.MATTUm, All right, yeah, so this is, this is Drunk Pigeon Hour! Remember what I said earlier when I was like, as soon as we are recording, as soon as we press record, it's going to get weird and awkward.LAURAI am actually interested in the types of ads people get on Twitter. Like, just asking around, because I find that I get either, like, DeSantis ads. I get American Petroleum Institute ads, Ashdale College.MATTWeirdly, I've been getting ads for an AI assistant targeted at lobbyists. So it's, it's like step up your lobbying game, like use this like tuned, I assume it's like tuned ChatGPT or something. Um, I don't know, but it's, yeah, it's like AI assistant for lobbyists, and it's like, like, oh, like your competitors are all using this, like you need to buy this product.So, so yeah, Twitter thinks I'm a lobbyist. I haven't gotten any DeSantis ads, actually.AARONI think I might just like have personalization turned off. Like not because I actually like ad personalization. I think I'm just like trying to like, uh, this is, this is like a half-baked protest of them getting rid of circles. 
I will try to minimize how much revenue they can make from me.MATTSo, so when I, I like went through a Tumblr phase, like very late. In like 2018, I was like, um, like I don't like, uh, like what's happening on a lot of other social media.Like maybe I'll try like Tumblr as a, as an alternative.And I would get a lot of ads for like plus-sized women's flannels.So, so like the Twitter ad targeting does not faze me because I'm like, oh, okay, like, I can, hold on.AARONSorry, keep going. I can see every ad I've ever.MATTCome across, actually, in your giant CSV of Twitter data.AARONJust because I'm a nerd. I like, download. Well, there's actually a couple of things. I just download my Twitter data once in a while. Actually do have a little web app that I might try to improve at some point, which is like, you drop it in and then it turns them. It gives you a csV, like a spreadsheet of your tweets, but that doesn't do anything with any of the other data that they put in there.MATTI feel like it's going to be hard to get meaningful information out of this giant csv in a short amount of time.AARONIt's a giant JSON, actually.MATTAre you just going to drop it all into c long and tell it to parse it for you or tell it to give you insights into your ads.AARONWait, hold on. This is such a.MATTWait. Do people call it “C-Long” or “Clong”?AARONWhy would it be long?MATTWell, because it's like Claude Long.LAURAI've never heard this phrase.MATTThis is like Anthropic's chat bot with a long context with so like you can put. Aaron will be like, oh, can I paste the entire group chat history?AARONOh yeah, I got clong. Apparently that wasn't acceptable so that it.MATTCan summarize it for me and tell me what's happened since I was last year. And everyone is like, Aaron, don't give our data to Anthropic, is already suss.LAURAEnough with the impressions feel about the Internet privacy stuff. Are you instinctively weirded out by them farming out your personal information or just like, it gives me good ads or whatever? I don't care.MATTI lean a little towards feeling weird having my data sold. I don't have a really strong, and this is probably like a personal failing of mine of not having a really strong, well formed opinion here. But I feel a little sketched out when I'm like all my data is being sold to everyone and I don't share. There is this vibe on Twitter that the EU cookies prompts are like destroying the Internet. This is regulation gone wrong. I don't share that instinct. But maybe it's just because I have average tolerance for clicking no cookies or yes cookies on stuff. And I have this vibe that will.AARONSketch down by data. I think I'm broadly fine with companies having my information and selling it to ad targeting. Specifically. I do trust Google a lot to not be weird about it, even if it's technically legal. And by be weird about it, what do mean? Like, I don't even know what I mean exactly. If one of their random employees, I don't know if I got into a fight or something with one of their random employees, it would be hard for this person to track down and just see my individual data. And that's just a random example off the top of my head. But yeah, I could see my view changing if they started, I don't know, or it started leaching into the physical world more. But it seems just like for online ads, I'm pretty cool with everything.LAURAHave you ever gone into the ad personalization and tried see what demographics they peg you?AARONOh yeah. We can pull up mine right now.LAURAIt's so much fun doing that. 
It's like they get me somewhat like the age, gender, they can predict relationship status, which is really weird.AARONThat's weird.MATTDid you test this when you were in and not in relationships to see if they got it right?LAURANo, I think it's like they accumulate data over time. I don't know. But then it's like we say that you work in a mid sized finance. Fair enough.MATTThat's sort of close.LAURAYeah.AARONSorry. Keep on podcasting.LAURAOkay.MATTDo they include political affiliation in the data you can see?AARONOkay.MATTI would have been very curious, because I think we're all a little bit idiosyncratic. I'm probably the most normie of any of us in terms of. I can be pretty easily sorted into, like, yeah, you're clearly a Democrat, but all of us have that classic slightly. I don't know what you want to call it. Like, neoliberal project vibe or, like, supply side. Yeah. Like, some of that going on in a way that I'm very curious.LAURAThe algorithm is like, advertising deSantis.AARONYeah.MATTI guess it must think that there's some probability that you're going to vote in a republican primary.LAURAI live in DC. Why on earth would I even vote, period.MATTWell, in the primary, your vote is going to count. I actually would think that in the primary, DC is probably pretty competitive, but I guess it votes pretty. I think it's worth.AARONI feel like I've seen, like, a.MATTI think it's probably hopeless to live. Find your demographic information from Twitter. But, like.AARONAge 13 to 54. Yeah, they got it right. Good job. I'm only 50, 99.9% confident. Wait, that's a pretty General.MATTWhat's this list above?AARONOh, yeah. This is such a nerd snipe. For me, it's just like seeing y'all. I don't watch any. I don't regularly watch any sort of tv series. And it's like, best guesses of, like, I assume that's what it is. It thinks you watch dune, and I haven't heard of a lot of these.MATTWait, you watch cocaine there?AARONBig bang theory? No, I definitely have watched the big Bang theory. Like, I don't know, ten years ago. I don't know. Was it just, like, random korean script.MATTOr whatever, when I got Covid real bad. Not real bad, but I was very sick and in bed in 2022. Yeah, the big bang theory was like, what I would say.AARONThese are my interest. It's actually pretty interesting, I think. Wait, hold on. Let me.MATTOh, wait, it's like, true or false for each of these?AARONNo, I think you can manually just disable and say, like, oh, I'm not, actually. And, like, I did that for Olivia Rodrigo because I posted about her once, and then it took over my feed, and so then I had to say, like, no, I'm not interested in Olivia Rodrigo.MATTWait, can you control f true here? Because almost all of these. Wait, sorry. Is that argentine politics?AARONNo, it's just this.MATTOh, wait, so it thinks you have no interest?AARONNo, this is disabled, so I haven't. And for some reason, this isn't the list. Maybe it was, like, keywords instead of topics or something, where it was the.MATTGot it.AARONYes. This is interesting. It thinks I'm interested in apple stock, and, I don't know, a lot of these are just random.MATTWait, so argentine politics was something it thought you were interested in? Yeah. Right.AARONCan.MATTDo you follow Maya on Twitter?AARONWho's Maya?MATTLike, monetarist Maya? Like, neoliberal shell two years ago.AARONI mean, maybe. Wait, hold on. Maybe I'm just like.MATTYeah, hardcore libertarianism.LAURAYeah. No, so far so good with him. I feel like.AARONMaia, is it this person? 
Oh, I am.MATTYeah.AARONOkay.MATTYeah, she was, like, neoliberal shell two years ago.AARONSorry, this is, like, such an errands. Like snipe. I got my gender right. Maybe. I don't know if I told you that. Yeah. English. Nice.MATTWait, is that dogecoin?AARONI assume there's, like, an explicit thing, which is like, we're going to err way on the side of false positives instead of false negatives, which is like. I mean, I don't know. I'm not that interested in AB club, which.MATTYou'Re well known for throwing staplers at your subordinate.AARONYeah.LAURAWait, who did you guys support in 2020 primary?MATTYou were a Pete stan.LAURAI was a Pete stan. Yes, by that point, definitely hardcore. But I totally get. In 2016, I actually was a Bernie fan, which was like, I don't know how much I was really into this, or just, like, everybody around me was into it. So I was trying to convince myself that he was better than Hillary, but I don't know, that fell apart pretty quickly once he started losing. And, yeah, I didn't really know a whole lot about politics. And then, like, six months later, I became, like, a Reddit libertarian.AARONWe think we've talked about your ideological evolution.MATTHave you ever done the thing of plotting it out on the political? I feel like that's a really interesting.LAURAExercise that doesn't capture the online. I was into Ben Shapiro.MATTReally? Oh, my God. That's such a funny lore fact.AARONI don't think I've ever listened to Ben Shapiro besides, like, random clips on Twitter that I like scroll?MATTI mean, he talks very fast. I will give him that.LAURAAnd he's funny. And I think it's like the fast talking plus being funny is like, you can get away with a lot of stuff and people just end up like, oh, sure, I'm not really listening to this because it's on in the background.AARONYeah.MATTIn defense of the Bernie thing. So I will say I did not support Bernie in 2016, but there was this moment right about when he announced where I was very intrigued. And there's something about his backstory that's very inspiring. This is a guy who has been just extraordinarily consistent in his politics for close to 50 years, was saying lots of really good stuff about gay rights when he was like, Burlington mayor way back in the day, was giving speeches on the floor of the House in the number one sound very similar to the things he's saying today, which reflects, you could say, maybe a very myopic, closed minded thing, but also an ideological consistency. That's admirable. And I think is pointing at problems that are real often. And so I think there is this thing that's, to me, very much understandable about why he was a very inspiring candidate. But when it came down to nitty gritty details and also to his decisions about who to hire subordinates and stuff, very quickly you look at the Bernie campaign alumni and the nuances of his views and stuff, and you're like, okay, wait, this is maybe an inspiring story, but does it actually hold up?AARONProbably not.LAURAYeah, that is interesting. It's like Bernie went woke in 2020, kind of fell apart, in my opinion.AARONI stopped following or not following on social media, just like following him in general, I guess. 2016 also, I was 16. You were not 16. You were.MATTYeah, I was in college at that time, so I was about 20.AARONSo that was, you can't blame it. Anything that I do under the age of 18 is like just a race when I turn 18.LAURAOkay, 2028 draft. Who do we want to be democratic nominee?AARONOh, Jesse from pigeonhole. I honestly think he should run. 
Hello, Jesse. If you're listening to this, we're going to make you listen to this. Sorry. Besides that, I don't know.MATTI don't have, like, an obvious front runner in mind.AARONWait, 2028? We might be dead by 2028. Sorry, we don't talk about AI.MATTYeah.AARONNo, but honestly, that is beyond the range of planability, I think. I don't actually think all humans are going to be dead by 2028. But that is a long way away. All I want in life is not all I want. This is actually what I want out of a political leader. Not all I want is somebody who is good on AI and also doesn't tells the Justice Department to not sue California or whatever about their gestation. Or maybe it's like New Jersey or something about the gestation crate.MATTOh, yeah. Top twelve.AARONYeah. Those are my two criteria.MATTCorey Booker is going to be right on the latter.AARONYeah.MATTI have no idea about his views on.AARONIf to some extent. Maybe this is actively changing as we speak, basically. But until recently it wasn't a salient political issue and so it was pretty hard to tell. I don't know. I don't think Biden has a strong take on it. He's like, he's like a thousand years old.LAURAWatch what Mitch should have possibly decided. That's real if we don't do mean.AARONBut like, but his executive order was way better than I would have imagined. And I, like, I tweeted about, know, I don't think I could have predicted that necessarily.MATTI agree. I mean, I think the Biden administration has been very reasonable on AI safety issues and that generally is reflective. Yeah, I think that's reflective of the.AARONTongue we know Joe Biden is listening to.MATTOkay.AARONOkay.MATTTopics that are not is like, this is a reward for the fundraiser. Do we want to talk about fundraiser and retrospective on that?AARONSure.MATTBecause I feel like, I don't know. That ended up going at least like one sigma above.AARONHow much? Wait, how much did we actually raise?MATTWe raised like 22,500.LAURAOkay. Really pissed that you don't have to go to Ava.AARONI guess this person, I won't name them, but somebody who works at a prestigious organization basically was seriously considering donating a good amount of his donation budget specifically for the shrimp costume. And, and we chatted about it over Twitter, DM, and I think he ended up not doing it, which I think was like the right call because for tax reasons, it would have been like, oh. He thought like, oh, yeah, actually, even though that's pretty funny, it's not worth losing. I don't know, maybe like 1000 out of $5,000 tax reasons or whatever. Clearly this guy is actually thinking through his donations pretty well. But I don't know, it brought him to the brink of donating several, I think, I don't know, like single digit thousands of dollars. Exactly.LAURAClearly an issue in the tax.AARONDo you have any tax take? Oh, wait, sorry.MATTYeah, I do think we should like, I mean, to the extent you are allowed by your employer too, in public space.AARONAll people at think tanks, they're supposed to go on podcast and tweet. How could you not be allowed to do that kind of thing?MATTSorry, keep going. But yeah, no, I mean, I think it's worth dwelling on it a little bit longer because I feel like, yeah, okay, so we didn't raise a billion dollars as you were interested in doing.AARONYeah. Wait, can I make the case for like. Oh, wait. Yeah. Why? Being slightly unhinged may have been actually object level. Good. Yeah, basically, I think this didn't end up exposed to. 
We learned this didn't actually end up happening. I think almost all of the impact money, because it's basically one of the same in this context. Sorry. Most of the expected money would come in the form of basically having some pretty large, probably billionaire account, just like deciding like, oh, yeah, I'll just drop a couple of mil on this funny fundraiser or whatever, or maybe less, honestly, listen, $20,000, a lot of money. It's probably more money than I have personally ever donated. On the other hand, there's definitely some pretty EA adjacent or broadly rationalist AI adjacent accounts whose net worth is in at least tens of millions of dollars, for whom $100,000 just would not actually affect their quality of life or whatever. And I think, yeah, there's not nontrivial chance going in that somebody would just decide to give a bunch of money.MATTI don't know. My view is that even the kinds of multimillionaires and billionaires that hang out on Twitter are not going to ever have dropped that much on a random fundraiser. They're more rational.AARONWell, there was proof of concept for rich people being insane. Is Balaji giving like a million dollars to James Medlock.MATTThat's true.AARONThat was pretty idiosyncratic. Sorry. So maybe that's not fair. On the other hand. On the other hand, I don't know, people do things for clout. And so, yeah, I would have, quote, tweeted. If somebody was like, oh yeah, here's $100,000 guys, I would have quote, tweeted the shit out of them. They would have gotten as much possible. I don't know. I would guess if you have a lot of rich people friends, they're also probably on Twitter, especially if it's broadly like tech money or whatever. And so there's that. There's also the fact that, I don't know, it's like object people, at least some subset of rich people have a good think. EA is basically even if they don't identify as an EA themselves, think like, oh yeah, this is broadly legit and correct or whatever. And so it's not just like a random.MATTThat's true. I do think the choice of the animal welfare fund made that harder. Right. I think if it's like bed nets, I think it's more likely that sort of random EA rich person would be like, yes, this is clearly good. And I think we chose something that I think we could all get behind.AARONBecause we have, there was a lot of politicking around.MATTYeah, we all have different estimates of the relative good of different cause areas and this was the one we could very clearly agree on, which I think is very reasonable and good. And I'm glad we raised money for the animal welfare fund, but I do think that reduces the chance of, yeah.LAURAI think it pushes the envelope towards the animal welfare fund being more acceptable as in mainstream ea.org, just like Givewell would be. And so by forcing that issue, maybe we have done more good for the.AARONThat there's like that second order effect. I do just think even though you're like, I think choosing this over AMF or whatever, global health fund or whatever decreased the chance of a random person. Not a random person, but probably decrease the total amount of expected money being given. I think that was just trumped by the fact that I think the animal welfare, the number I pull out of thin air is not necessarily not out of thin air, but very uncertain is like 1000 x or whatever relative to the standards you vote for. Quote, let it be known that there is a rabbit on the premises. Do they interact with other rodents?MATTOkay, so rabbits aren't rodents. 
We can put this on the pod. So rabbits are lagging wars, which is.AARONFuck is that?MATTIt's a whole separate category of animals.AARONI just found out that elk were part of it. Like a type of deer. This is another world shattering insight.MATTNo, but rabbits are evolutionarily not part of the same. I guess it's a family on the classification tree.AARONNobody, they taught us that in 7th grade.MATTYeah, so they're not part of the same family as rodents. They're their own thing. What freaks me out is that guinea pigs and rabbits seem like pretty similar, they have similar diet.AARONThat's what I was thinking.MATTThey have similar digestive systems, similar kind of like general needs, but they're actually like, guinea pigs are more closely related to rats than they are to rabbits. And it's like a convergent evolution thing that they ended up.AARONAll mammals are the same. Honestly.MATTYeah. So it's like, super weird, but they're not rodents, to answer your question. Rabbits do like these kinds of rabbits. So these are all pet rabbits are descended from european. They're not descended from american rabbits because.LAURAAmerican rabbits like cotton tails. Oh, those are different.MATTYeah. So these guys are the kinds of rabbits that will live in warrens. Warrens. So, like, tunnel systems that they like. Like Elizabeth Warren. Yeah. And so they'll live socially with other rabbits, and they'll dig warrens. And so they're used to living in social groups. They're used to having a space they need to keep clean. And so that's why they can be, like, litter box trained, is that they're used to having a warren where you don't just want to leave poop everywhere. Whereas american rabbits are more solitary. They live above ground, or in my understanding is they sometimes will live in holes, but only occupying a hole that another animal has dug. They won't do their hole themselves. And so then they are just not social. They're not easily litter box trained, that kind of stuff. So all the domestic rabbits are bred from european ones.AARONI was thinking, if you got a guinea pig, would they become friends? Okay.MATTSo apparently they have generally similar dispositions and it can get along, but people don't recommend it because each of them can carry diseases that can hurt the other one. And so you actually don't want to do it. But it does seem very cute to have rabbit.AARONNo, I mean, yeah. My last pet was a guinea pig, circa 20. Died like, a decade ago. I'm still not over it.MATTWould you consider another one?AARONProbably. Like, if I get a pet, it'll be like a dog or a pig. I really do want a pig. Like an actual pig.MATTWait, like, not a guinea pig? Like a full size pig?AARONYeah. I just tweeted about this. I think that they're really cool and we would be friends. I'm being slightly sarcastic, but I do think if I had a very large amount of money, then the two luxury purchases would be, like, a lot of massages and a caretaker and space and whatever else a pig needs. And so I could have a pet.MATTLike, andy organized a not EADC, but EADC adjacent trip to Rosie's farm sanctuary.AARONOh, I remember this. Yeah.MATTAnd we got to pet pigs. And they were very sweet and seems very cute and stuff. They're just like, they feel dense, not like stupid. But when you pet them, you're like, this animal is very large and heavy for its size. That was my biggest surprising takeaway, like, interacting with the hair is not soft either. 
No, they're pretty coarse, but they seem like sweeties. They are just, like, very robust.

LAURA: Have you guys seen Babe?

AARON: Yes.

LAURA: That's, like, one of the top ten movies of all time.

AARON: You guys watch movies? I don't know. Maybe when I was, like, four. I don't...

LAURA: Okay, so the actor who played Farmer Hoggett in that movie ended up becoming a vegan activist after he realized, having had to train all of the animals, that they were extremely intelligent. And obviously the movie is about not killing animals, and so that ended up going pretty well.

AARON: Yeah, that's interesting. Good brown.

MATT: Okay, sorry. Yeah, no, this is all on track. No, this is great. We are doing a drunk podcast rather than a sober podcast, I think, precisely because we are trying to give the people some sidetracks and stuff. Right. But I jokingly put on my list of topics, like, "we solve the two envelopes paradox once and for all."

AARON: No, but it's two-boxing.

MATT: No. Two envelopes. No. So this is the fundamental challenge to... I think one of the fundamental challenges to being like, you multiply out the numbers, and the number...

AARON: Yeah, I feel like I don't have, like, a cached take. So just, like, tell me the thing.

MATT: Okay.

AARON: I'll tell you the correct answer. Yeah.

MATT: Okay, great. We were leading into this. You were saying, like, animal charity is a 1000x gain, right?

AARON: Conditional. Yeah.

MATT: And I think it's hard to easily get to 1000x, but it is totally possible to get to 50x if you just sit down and multiply out numbers and you're like, probability of sentience and welfare range...

AARON: I totally stand by that as my actual point estimate. Maybe, like, a log mean or something. I'm actually not sure, but... sorry, keep going.

MATT: Okay, so one line of argument raised against this is the two envelopes problem, and I'm worried I'm going to do a poor job explaining this. Laura, please feel free to jump in if I say something wrong. So two envelopes comes from the thing of, like: suppose you're given two envelopes and you're told that one envelope has twice as much money in it as the other.

AARON: Oh, you are going to switch back and forth forever.

MATT: Exactly. Every time, you're like, if I switch to the other envelope and it has half as much money as this envelope, then I lose 0.5. But if it has twice as much money as this envelope, then I gain one. And so I can never decide on an envelope, because it always looks like it's positive EV to switch to the other. So that's where the name comes from.

AARON: I like a part that you're like, you like goggles?

MATT: So let me do the brief summary, which is that, basically, depending on which underlying units you pick, whether you work in welfare range units that use one human as the baseline or one chicken as the baseline, you can end up with different outputs of the expected value calculation. Because basically: is it the big number of chickens times some fraction of the human welfare range that dominates? Or is it the small probability that chickens are basically not sentient, in which case a human's welfare range is huge in chicken units? And which of those dominates is determined by which unit you work in.

AARON: I also think, yeah, this problem is not conducive to alcohol, or whatever. Or alcohol is not conducive to this issue, to this problem, or whatever. In the maximally abstract envelope thing, I have an intuition that something weird, kind of probably fake, is going on.
I don't actually see what the issue is here. I don't believe you yet that there's, like, an actual issue here. It's like, okay, just do the better one. I don't know.

MATT: Okay, wait, I'll get a piece of paper. Talk amongst yourselves, and I think I'll be able to show this.

LAURA: Me, as the stats person, just saying I don't care about the math. At some point it's like: look, I looked at an animal and I'm like, okay, so we have evolutionarily pretty similar paths. It would be insane to think that it's not capable of feeling hedonic pain to pretty much the same extent as me. So I'm just going to ballpark it. And I don't actually care for webs.

AARON: I feel like I've proven my pro-animal bona fides. I think it's "bona fide." But here, I don't share that intuition. I still think that we can go into that megapig discourse. Wait, yeah, sort of. Wait, not exactly megapig discourse. Yeah, I remember. I think I got cyberbullied by... even though they didn't cyberbully me, because I was informed of offline bullying via cyber, about somebody's... sorry, this is going to sound absolutely incoherent. So we'll take this part out. Yeah. I was like, oh, I think it's, like, some metaphysical appeal to neuron counts. You specifically told me, like, oh yeah, Mr. So-and-so didn't think this checked out, or whatever. Do you know what I'm talking about?

LAURA: Yeah.

AARON: Okay. No, but maybe I put it in dumb or cringey or pretentious terms, but I do think I'm standing by my metaphysical neurons claim here. Not that I'm super confident in anything, but just that we're really radically unsure about the nature of sentience and qualia and consciousness. And probably it has something to do with neurons, at least; they're clearly related in a very boring sciencey way. Yeah. It's not insane to me that the thing that produces, or is directly, one-to-one associated with, a particular, I guess, amount, for lack of a better term, of conscious experience is some sort of physical thing. Neurons jump out as the unit that might make sense. And then there's, like, oh yeah, do we really think all the neurons that control the tongue, like the motor function of the tongue, really make you a quadrillion times more important than a seal or whatever? And then I go back to: okay, even though I haven't done any research on this, maybe it's just, like, opioid stuff. The neuron counts directly... sorry, the neurons directly involved in pretty low-level hedonic sensations. The most obvious ones would be literal opioid receptors. Maybe those are the ones that matter. This is, like, kind of... I feel like we've sort of lost the plot a little.

MATT: Okay, this is, like, weird drunk math.

AARON: But I think your handwriting is pretty good.

MATT: I think I have it. So suppose we work in human units. I have a hypothetical intervention that can help ten chickens or one human, and we assume that when I say "help," it helps each of them the same amount. So if I work in human units, I say maybe there is a 50% chance that a chicken is 0.01, that is, 1/100th, of a human, and a 50% chance that a chicken and a human are equal. Obviously this is a thought experiment. I'm not saying that these are my real-world probabilities, but suppose that these are my credences. So I do out the EV for the thing that helps ten chickens.
I say that, okay, in half of the worlds, chickens are 1/100th of a human, so helping ten of them is worth, like, 0.05... sorry, helping ten of them is 0.1. And so 0.5 times 0.01 times ten is 0.05. And then in the other half of the worlds, I say that a chicken and a human are equal. So then my intervention helps ten chickens, which is like helping ten humans, so the benefit in that set of worlds, weighted by my 0.5 credence, is five. And so in the end the chicken intervention wins, because it has, on net, an EV of 5.05 versus one for the human intervention, because the human intervention always helps one human. Then I switch it around and I say my base unit of welfare range, or moral weight, or whatever you want to say, is chicken units: one chicken's worth of moral weight. So in half of the worlds a human is worth 100 chickens, and in the other half of the worlds a human is worth one chicken. So I do out the EV for my intervention that helps the one human, now in chicken units. In chicken units, half of the time that human is worth 100 chickens, so I get 0.5 times 100 times one, which is 50. And in the other half of the worlds the chicken and the human are equal, so it's 0.5 times one times one, because I'm helping one human, so that's 0.5. The EV is 50.5. And then I do out my EV for my chicken welfare thing: that's ten chickens, and I always help ten chickens, so it's ten as my units of good. So when I worked in human units, I said that the chickens won, because it was 5.05 human units versus one human unit for helping the human. When I did it in chicken units, it was 50.5 to help the human versus ten to help the chickens. And so now I'm like, okay, my EV is changing just based on which units I work in. And I think this is the two envelopes problem as applied to animals. Brian Tomasik has a long post about this, but I think this is a statement, or an example, of the problem.

AARON: Cool.

LAURA: Can I just say something about the moral weight project? We ended up coming up with numbers, which I think may have been a bit of a mistake in the end, because I think the real value of that was going through the literature and finding out the similarities in traits between animals and humans, and there are a surprising number of them that we have in common. So at the end of the day it's a judgment call. And I don't know what you do with it, because that is, like, a legit statistical problem that arises when you put numbers on stuff.

MATT: So I'm pretty sympathetic to what you're saying here, that the core insight of the moral weight project is: when we look at features that could plausibly determine capacity to experience welfare, we find that a pig and a human have a ton in common. Obviously pigs cannot write poetry, but they do show evidence of grief behavior when another pig dies, and they show evidence of vocalizing in response to pain, and all of these things. I think coming out of the moral weight project being like, "wow, under some form of utilitarianism harms to pigs really morally matter, and it's really hard to justify them," makes complete sense. I think the challenge here is when you get to something like black soldier flies or shrimp, where, when you actually look at the welfare range table, you see that the number of proxies that they likely or definitely have is remarkably low.
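For anyone who wants to check the arithmetic, here is a minimal sketch of the toy calculation above in Python. The 50/50 credences, the ten-chickens-versus-one-human setup, and the 1/100 welfare ratio are the stipulated assumptions of the thought experiment, not anyone's real-world estimates; the point is only that the ranking flips depending on which species you use as the unit.

```python
# Toy two-envelopes example from the discussion above.
# Stipulated credences: 50% that a chicken is 1/100th of a human, 50% that they are equal.
p_small, p_equal = 0.5, 0.5
chickens_helped, humans_helped = 10, 1

# Working in HUMAN units (one human = 1 unit of welfare):
ev_chickens = p_small * (0.01 * chickens_helped) + p_equal * (1 * chickens_helped)  # 0.05 + 5 = 5.05
ev_human = 1 * humans_helped                                                        # 1
print(f"Human units: chickens {ev_chickens}, human {ev_human}")    # chickens win, 5.05 vs 1

# Working in CHICKEN units (one chicken = 1 unit of welfare):
ev_human_c = p_small * (100 * humans_helped) + p_equal * (1 * humans_helped)        # 50 + 0.5 = 50.5
ev_chickens_c = 1 * chickens_helped                                                 # 10
print(f"Chicken units: human {ev_human_c}, chickens {ev_chickens_c}")  # human wins, 50.5 vs 10
```

Same credences, same interventions, opposite winners: that unit-dependence is the two envelopes problem Matt is describing.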
MATT: The shrimp number is hinging on... it's not hinging on a ton. They share a few things, and because there aren't that many categories overall, that ends up meaning, in the median case, they have a moral weight of something like 1/30th of a human. And so I worry that sort of your articulation of the benefit starts to break down when you get to those animals. And we start to... I don't know what you do without numbers there. And I think those numbers are really susceptible to this kind of two envelopes problem.

AARON: I have a question.

MATT: Yeah, go.

AARON: Wait. This is supposed to be, like, 5.05 versus one?

MATT: Yeah.

AARON: And this is 50.5 versus ten? Yeah. It sounds like the same thing to me.

MATT: No, but they've inverted. In this case, the chickens won. So it's like, when I'm working in human units, right? Like, half the time, I help...

AARON: If you're working in human units, then the chicken intervention looks 5.05 times better. Yes. Wait, can I write this down over here?

MATT: Yeah. And maybe I'm not an expert on this problem. This is just something that tortures me when I try to sleep at night, not a thing that I've carefully studied, so maybe I'm stating this wrong. But yeah: when I work in human units, the 50% probability, in this sort of toy example, that the chickens and the humans are equal means that the fact that my intervention can help more chickens makes the EV higher. And then when I work in chicken units, the fact that a human might be 100 times more sentient than the chicken, or more capable of realizing welfare, to be technical, means the human intervention just clearly wins.

AARON: Just to check that I have this right: the claim is that in human units, the chicken intervention looks 5.05 times better than the human intervention, but when you use chicken units, the human intervention looks 5.05 times better than the chicken intervention. Is that correct?

MATT: Yes, that's right.

AARON: Wait, hold on. Give me another minute.

MATT: This is why doing this drunk was a bad idea.

AARON: In human...

LAURA: No, I think that's actually right. And I don't know what to do about the flies and shrimp and stuff like this. This is, like, where I draw my line of, like, okay... so Lemonstone quote...

MATT: ...tweeted me. Oh, my God.

LAURA: I think he actually had a point, which is that there's a type of EA that is like, "I'm going to set my budget constraint and then maximize within that," versus "start with a blank slate and allow reason to take me wherever it goes." And I'm definitely in the former camp: my budget constraint is, like, I care about humans and a couple of types of animals, and I'm just drawing the line there. And I don't know what you do with the other types of things.

MATT: I am very skeptical of arguments that are like, "we should end Medicare to spend it all on shrimp."

AARON: No one's suggesting that. No, there's, like, a lot of boring, prosaic reasons...

MATT: I guess what I'm saying is there's a sense in which I'm, like, totally agreeing with you. But I think the challenge is that, object level...

AARON: Yeah, you set us up. The political economy, I, like, totally buy... double it.

MATT: I think that there is... This is great. Aaron, I think you should have to take another shot for that.

AARON: I'm sorry, this isn't fair. Come on, guys, I don't even drink, so I feel like one drink is, like... is it infinity times more than... normalize it? So it's a little bit hard to handle.

MATT: I think there has to be room for moral innovation, in my view. I think that your line of thinking... we don't want to do radical things based on sort of out-there moral principles in the short term. Right.
We totally want to be very pragmatic and careful when our moral ideas sort of put us really far outside of what's socially normal. But I don't think you get to where we are. I don't know what we owe the future was like a book that maybe was not perfect, but I think it eloquently argues with the fact that the first person to be like, hey, slavery in the Americas is wrong. Or I should say really the first person who is not themselves enslaved. Because of course, the people who are actually victims of this system were like, this is wrong from the start. But the first people to be like, random white people in the north being like, hey, this system is wrong. Looks super weird. And the same is true for almost any moral innovation. And so you have to, I think saying, like, my budget constraint is totally fixed seems wrong to me because it leaves no room for being wrong about some of your fundamental morals.LAURAYeah, okay. A couple of things here. I totally get that appeal 100%. At the same time, a lot of people have said this about things that now we look back at as being really bad, like the USSR. I think communism ends up looking pretty bad in retrospect, even though I think there are a lot of very good moral intuitions underpinning it.AARONYeah, I don't know. It's like, mostly an empirical question in that case, about what government policies do to human preference satisfaction, which is like, pretty. Maybe I'm too econ. These seem like very different questions.LAURAIt's like we let our reason go astray, I think.MATTRight, we, as in some humans.AARONNo, I think. Wait, at first glance. At first glance, I think communism and things in that vicinity seem way more intuitively appealing than they actually, or than they deserve to be, basically. And the notion of who is it? Like Adam Smith? Something Smith? Yeah, like free hand of the market or whatever. Invisible hand. Invisible free hand of the bunny ear of the market. I think maybe it's like, field intuitive to me at this point, because I've heard it a lot. But no, I totally disagree that people's natural intuition was that communism can't work. I think it's like, isn't true.MATTI'm not sure you guys are disagreeing with one.AARONYeah.MATTLike, I think, Laura, if I can attempt to restate your point, is that to at least a subset of the people in the USSR at the time of the russian revolution, communism plausibly looked like the same kind of moral innovation as lots of stuff we looked back on as being really good, like the abolition of slavery or like, women's rights or any of those other things. And so you need heuristics that will defend against these false moral innovations.AARONWait, no, you guys are both wrong. Wait, hold on. No, the issue there isn't that we disregard, I guess, humans, I don't know exactly who's responsible for what, but people disregarded some sort of deserving heuristic that would have gardened against communism. The issue was that, like, it was that, like, we had, like, lots of empirical, or, like, it's not even necessarily. I mean, in this case, it is empirical evidence, but, like, like, after a couple years of, like, communism or whatever, we had, like, lots of good evidence to think, oh, no, books like that doesn't actually help people, and then they didn't take action on that. That's the problem. If we were sitting here in 1910 or whatever, and I think it's totally possible, I will be convinced communism is, in fact, the right thing to do. 
But the thing that would be wrong is if, okay, five years later, you have kids starving or people starving or whatever, and maybe you can find intellectuals who claim and seem reasonably correct that they can explain how this downstream of your policies. Then doubling down is the issue, not the ex ante hypothesis that communism is good. I don't even know if that made any sense, I think.LAURABut we're in the ex ante position right now.AARONYeah, totally. Maybe we'll find out some sort of, whether it's empirical or philosophical or something like maybe in five years or two years or whatever, there'll be some new insight that sheds light on how morally valuable shrimp are. And we should take that into account.LAURAI don't know. Because it's really easy to get good feedback when other fellow humans are starving to death versus. How are you supposed to judge? No, we've made an improvement.AARONYeah, I do think. Okay. Yes. That's like a substantial difference. Consciousness is, like, extremely hard. Nobody knows what the hell is going on. It kind of drives me insane.MATTWhomstemonga has not been driven insane by the hard problem of consciousness.AARONYeah. For real. I don't know. I don't have to say. It's like, you kind of got to make your best guess at some point.MATTOkay, wait, so maybe tacking back to how to solve it, did you successfully do math on this piece of paper?AARONMostly? No, mostly I was word selling.MATTI like the verb form there.AARONYeah. No, I mean, like, I don't have, like, a fully thought out thing. I think in part this might be because of the alcohol. I'm pretty sure that what's going on here is just that, like, in fact, like, there actually is an asymmetry between chicken units and human units, which is that. Which is that we have much better idea. The real uncertainty here is how valuable a chicken is. There's probably somebody in the world who doubts this, but I think the common sense thing and thing that everybody assumes is we basically have it because we're all humans and there's a lot of good reasons to think we have a decent idea of how valuable another human life is. And if we don't, it's going to be a lot worse for other species. And so just, like, taking that as a given, the human units are the correct unit because the thing with the unit is that you take it as given or whatever. The real uncertainty here isn't the relationship between chickens and humans. The real question is how valuable is a chicken? And so the human units are just like the correct one to use.LAURAYeah, there's something there, which is the right theory is kind of driving a lot of the problem in the two envelope stuff. Because if you just chose one theory, then the units wouldn't really matter which one. The equality theory is like, you've resolved all the inter theoretic uncertainty and so wouldn't that get rid of.AARONI don't know if you know, if there's, like. I'm not exactly sure what you mean by theory.LAURALike, are they equal, the equality theory versus are they 1100 theory? And we're assuming that each of them has five probabilities each end. So if we resolved that, it's like we decide upon the 1100 theory, then the problem goes away.AARONYeah, I mean, that's true, but you might not be able to.MATTYeah, I think it doesn't reflect our current state or, like.AARONNo, just like taking as given the numbers, like, invented, which I think is fine for the illustration of the problem. Maybe a better example is what's, like, another thing, chicken versus a rabbit. I don't know. Or like rabbits. 
I don't know.MATTChicken versus shrimp. I think it's like a real one. Because if you're the animal welfare fund, you are practically making that decision.AARONYeah. I think that becomes harder. But it's not, like, fundamentally different. And it's like the question of, like, okay, which actually makes sense, makes more sense to use as a unit. And maybe you actually can come up with two, if you can just come up with two different species for which, on the merits, they're equally valid as a unit and there's no issue anymore. It really is 50 50 in the end.MATTYeah. I don't know. I see the point you're making. With humans, we know in some sense we have much more information about how capable of realizing welfare a human is. But I guess I treat this as, like, man, I don't know. It's like why all of my confidence intervals are just, like, massive on all these things is I'm just very confused by these problems and how much that.AARONSeems like I'm confused by this one. Sorry, I'm, like, half joking. It is like maybe. I don't know, maybe I'll be less confident. Alcohol or so.MATTYeah, I don't know. I think it's maybe much more concerning to me the idea that working in a different unit changes your conclusion radically.AARONThan it is to you.LAURASometimes. I don't know if this is, like, too much of a stoner cake or something like that.AARONBring it on.LAURAI kind of doubt working with numbers at all.MATTOkay. Fit me well.LAURAIt's just like when he's.AARONStop doing that.LAURAI don't know what to do, because expected value theory. Okay, so one of the things that, when we hired a professional philosopher to talk about uncertainty.MATTPause for a sec. Howie is very sweetly washing his ears, which is very cute in the background. He's like, yeah, I see how he licks his paws and squeezes his ear.AARONIs it unethical for me to videotape?MATTNo, you're more than welcome to videotape it, but I don't know, he might be done.AARONYeah, that was out.MATTLaura, I'm very sorry. No, yeah, you were saying you hired the professional philosopher.LAURAYeah. And one of the first days, she's like, okay, well, is it the same type of uncertainty if we, say, have a one in ten chance of saving the life of a person we know for sure is conscious, versus we have a certain chance of saving the life of an animal that has, like, a one in ten probability of being sentient? These seem like different types.AARONI mean, maybe in some sense they're like different types. Sure. But what are the implications? It's not obviously the same.LAURAIt kind of calls into question as to whether we can use the same mathematical approach for analyzing each of these.AARONI think my main take is, like, you got a better idea? That was like, a generic.LAURANo, I don't.AARONYeah. It's like, okay, yeah, these numbers are, like, probably. It seems like the least bad option if you're going by intuition. I don't know. I think all things considered, sometimes using numbers is good because our brains aren't built to handle getting moral questions correct.MATTYeah, I mean, I think that there is a very strong piece of evidence for what you're saying, Aaron, which is.AARONThe whole paper on this. It's called the unreasonable efficacy of mathematics in the natural sciences.MATTOr this is. This is interesting. I was going to make sort of an easier or simpler argument, which is just like, I think the global health ea pitch of, like, we tend to get charity radically wrong.AARONOften.MATTCharities very plausibly do differ by 100 x or 1000 x in cost effectiveness. 
And most of the time, most people don't take that into account and end up helping people close to them or help an issue that's salient to them or help whatever they've heard about most and leave opportunities for doing what I think is very difficult to argue as not being radically more effective opportunities on the table as a result. Now, I led into this saying that I have this very profound uncertainty when it comes to human versus animal trade offs. So I'm not saying that, yes, we just should shut up and multiply. But I do think that is sort of like the intuition for why I think the stoner take is very hard for me to endorse is that we know in other cases, actually bringing numbers to the problem leads to saving many more lives of real people who have all of the same hopes and dreams and fears and feelings and experiences as the people who would have been saved in alternate options.LAURAIsn't that just like still underlying this is we're sure that all humans are equal. And that's like our theory that we have endorsed.AARONWait, what?MATTOr like on welfare ranges, the differences among different humans are sufficiently small in terms of capacity to realize welfare. That plausibly they are.AARONYeah, I don't think anyone believes that. Does anyone believe that? Wait, some people that everybody's hedonic range is the same.LAURARandomly select a person who lives in Kenya. You would think that they have the same welfare range, a priority as somebody.MATTWho lives in the description. The fundamental statistics of describing their welfare range are the same.AARONYeah, I think that's probably correct. It's also at an individual level, I think it's probably quite varied between humans.LAURASo I don't think we can say that we can have the same assumption about animals. And that's where it kind of breaks down, is we don't know the right theory to apply it.AARONWell, yeah, it's a hard question. Sorry, I'm being like kind of sarcastic.LAURAI think you have to have the theory right. And you can't easily average over theories with numbers.MATTYeah, no, I mean, I think you're right. I think this is the challenge of the two envelopes. Problem is exactly this kind of thing. I'm like four chapters into moral uncertainty. The book.AARONBy Will.MATTYeah. McCaskill, Ord and Bryke Fist. I'm probably getting that name. But they have a third co author who is not as much of like an.AARONYeah, I don't know. I don't have any super eloquent take except that to justify the use of math right now. Although I actually think I could. Yeah, I think mostly it's like, insofar as there's any disagreement, it's like we're both pointing at the issue, pointing at a question, and saying, look at that problem. It's, like, really hard. And then I'm saying like, yeah, I know. Shit. You should probably just do your best to answer it. Sorry, maybe I'm just not actually adding any insight here or whatever, but I agree with you that a lot of these problems are very difficult, actually. Sorry, maybe this is, like, a little bit of a nonsense. Whatever. Getting back to the hard problem of consciousness, I really do think it feels like a cruel joke that we have to implicitly, we have to make decisions about potentially gigantic numbers of digital lives or, like, digital sentience or, you know, whatever you want to call it, without having any goddamn idea, like, what the fuck is up with consciousness. And, I don't know, it doesn't seem fair. Okay.MATTYeah, wait, okay, so fundraiser. This is great. 
We've done all of these branching off things. So we talked about how much we raised, which was, like, amount that I was quite happy with, though. Maybe that's, like, selfish because I didn't have to wear a shrink costume. And we talked about. Cause prio. We haven't talked about the whole fake OpenAI thing.AARONFake open AI.MATTWait. Like the entire.AARONOh, well, shout out to I really. God damn it, Qualy. I hope you turn into a human at some point, because let it be known that Qualy made a whole ass Google Doc to plan out the whole thing and was, like, the driving. Yeah, I think it's fair to say Qualy was the driving force.MATTYeah, totally. Like, absolutely had the concept, did the Google Doc. I think everybody played their parts really well, and I think that was very fun.AARONYeah, you did. Good job, everybody.MATTBut, yeah, that was fun. It was very unexpected. Also, I enjoyed that. I was still seeing tweets and replies that were like, wait, this was a bit. I didn't get this after the end of it, which maybe suggests. But if you look at the graph I think I sent in, maybe we.AARONShould pull up my. We can analyze my Twitter data and find out which things got how many views have.MATTLike, you have your text here. I think the graph of donations by date is, like, I sent in the text chat between.AARONMaybe I can pull it like, media.MATTLike you and me and Max and Laura. And it's very clear that that correlated with a. I think it's probably pretty close to the end.AARONMaybe I just missed this. Oh, Laura, thank you for making.MATTYeah, the cards were amazing cards.AARONThey're beautiful.MATTOh, wait, okay, maybe it's not. I thought I said. Anyway, yeah, we got, like, a couple grand at the start, and then definitely at least five grand, maybe like, ten grand, somewhere in the five to ten range.AARONCan we get a good csv going? Do you have access to. You don't have to do this right now.MATTWait, yeah, let me grab that.AARONI want to get, like, aerospace engineering grade cpus going to analyze the causal interactions here based on, I don't know, a few kilobytes of data. It's a baby laptop.MATTYeah, this is what the charts looked like. So it's basically like there was some increase in the first. We raised, like, a couple of grand in the first couple of days. Then, yeah, we raised close to ten grand over the course of the quality thing, and then there was basically flat for a week, and then we raised another ten grand right at the end.AARONThat's cool. Good job, guys.MATTAnd I was very surprised by this.AARONMaybe I didn't really internalize that or something. Maybe I was sort of checked out at that point. Sorry.MATTI guess. No, you were on vacation because when you were coming back from vacation, it's when you did, like, the fake Sama.AARONYeah, that was on the plane.LAURAOkay, yeah, I remember this. My mom got there the next day. I'm like, I'm checking out, not doing anything.AARONYeah, whatever. I'll get rstudio revving later. Actually, I'm gradually turning it into my worst enemy or something like that.MATTWait, how so?AARONI just use Python because it's actually faster and catchy and I don't have to know anything. Also, wait, this is like a rant. This is sort of a totally off topic take, but something I was thinking about. No, actually, I feel like a big question is like, oh, are LLMs going to make it easy for people to do bad things that make it easier for me to do? 
Maybe not terrible things, but things that are, like, I don't know, I guess of dubious or various things that are mostly in the realm of copyright violation or pirating are not ever enforced, as far as I can tell. But, no, I just couldn't have done a lot of things in the past, but now I can, so that's my anecdote.MATTOkay, I have a whole python.AARONYou can give me a list of YouTube URLs. I guess Google must do, like, a pretty good job of policing how public websites do for YouTube to md three sites, because nothing really just works very well very fast. But you can just do that in python, like, five minutes. But I couldn't do that before, so.MATTI feel like, to me, it's obvious that LLMs make it easier for people to do bad stuff. Exactly as you said because they let make in general make it easier for people to do stuff and they have some protections on this, but those protections are going to be imperfect. I think the much more interesting question in some sense is this like a step change relative to the fact that Google makes it way easier for you to do stuff and including bad stuff and the printing press made it way easier for you to do?AARONI wouldn't even call it a printing press.MATTI like think including bad stuff. So it's like, right, like every invention that generally increases people's capability to do stuff and share information also has these bad effects. And I think the hard question is, are LLMs, wait, did I just x.AARONNo, I don't think, wait, did I just like, hold on. I'm pretty sure it's like still wait, how do I have four things?LAURAWhat is the benefit of LLMs versus.AARONYou can ask it something and it tells you the answer.LAURAI know, but Google does this too.AARONI don't mean, I don't know if I have like a super, I don't think I have any insightful take it just in some sense, maybe these are all not the same, but maybe they're all of similar magnitude, but like object level. Now we live in a world with viruses CRISPR. Honestly, I think to the EA movement's credit, indefinite pause, stop. AI is just not, it's not something that I support. It's not something like most people support, it's not like the official EA position and I think for good reason. But yeah, going back to whatever it was like 1416 or whatever, who knows? If somebody said somebody invented the printing press and somebody else was like, yeah, we should, well I think there's some pretty big dis analysis just because of I guess, biotech in particular, but just like how destructive existing technologies are now. But if somebody had said back then, yeah, let's wait six months and see if we can think of any reason not to release the printing press. I don't think that would have been a terrible thing to do. I don't know, people. I feel like I'm saying something that's going to get coded as pretty extreme. But like x ante hard ex ante. People love thinking, exposed nobody. Like I don't know. I don't actually think that was relevant to anything. Maybe I'm just shit faced right now.MATTOn one shot of vodka.AARON$15 just to have one shot.MATTI'll have a little.AARONYeah. I think is honestly, wait. Yeah, this is actually interesting. Every time I drink I hope that it'll be the time that I discover that I like drinking and it doesn't happen, and I think that this is just because my brain is weird. I don't hate it. I don't feel, like, bad. I don't know. I've used other drugs, which I like. Alcohol just doesn't do it for me. Yeah, screw you, alcohol.MATTYes. And you're now 15.99 cheaper or 50. 
99 poorer.AARONYeah, I mean, this will last me a lifetime.MATTYou can use it for, like, cleaning your sink.AARONWait, this has got to be the randomest take of all time. But, yeah, actually, like, isopropyl alcohol, top tier, disinfected. Because you don't have to do anything with it. You leave it there, it evaporates on its own.MATTHonestly. Yeah.AARONI mean, you don't want to be in an enclosed place or whatever. Sorry. To keep. Forget. This is like.MATTNo, I mean, it seems like a good take to me.AARONThat's all.MATTYeah, this is like a very non sequitur.AARONBut what are your guys' favorite cleaning suppliers?MATTOkay, this is kind of bad. Okay, this is not that bad. But I'm, like, a big fan of Clorox wipes.AARONScandalous.MATTI feel like this gets looked down on a little bit because it's like, in theory, I should be using a spray cleaner and sponge more.AARONIf you're like, art porn, what theories do you guys.MATTIf you're very sustainable, very like, you shouldn't just be buying your plastic bucket of Clorox infused wet wipes and you're killing the planet.AARONWhat I thought you were going to say is like, oh, this is like germaphobe coating.MATTNo, I think this is fine. I don't wipe down my groceries with Clorox wipes. This is like, oh, if I need to do my deep clean of the kitchen, what am I going to reach for? I feel like my roommate in college was very much like, oh, I used to be this person. No, I'm saying he was like an anti wet wipe on sustainability reasons person. He was like, oh, you should use a rag and a spray cleaner and wash the rag after, and then you will have not used vast quantities of resources to clean your kitchen.AARONAt one point, I tweeted that I bought regular. Actually, don't do this anymore because it's no longer practical. But I buy regularly about 36 packs of bottled water for like $5 or whatever. And people actually, I think it was like, this is like close to a scissor statement, honestly. Because object level, you know what I am, right. It's not bad. For anything. I'm sorry. It just checks out. But people who are normally pretty technocratic or whatever were kind of like, I don't know, they were like getting heated on.MATTI think this is an amazing scissor statement.AARONYeah.MATTBecause I do.AARONI used to be like, if I were to take my twelve year old self, I would have been incredibly offended, enraged.MATTAnd to be fair, I think in my ideal policy world, there would be a carbon tax that slightly increases the price of that bottled water. Because actually it is kind of wasteful to. There is something, something bad has happened there and you should internalize those.AARONYeah, I think in this particular, I think like thin plastic is just like not. Yeah, I don't think it would raise it like very large amount. I guess.MATTI think this is probably right that even a relatively high carbon tax would not radically change the price.LAURAIt's not just carbon, though. I think because there is land use implicated in this.AARONNo, there's not.LAURAYeah, you're filling up more landfills.AARONYeah, I'm just doing like hearsay right now. Heresy.MATTHearsay. Hearsay is going to be whatever. Well, wait, no, heresy is, if you're arguing against standardly accepted doctrine. Hearsay is like, well, it's both. Then you're just saying shit.AARONI'm doing both right now. Which is that actually landfills are usually like on the outskirts of town. 
It's like, fine.LAURAThey'Re on the outskirts of town until the town sprawls, and then the elementary school is on a phone.AARONYeah, no, I agree in principle. I don't have a conceptual reason why you're wrong. I just think basically, honestly, the actual heuristic operating here is that I basically outsource what I should pay attention to, to other people. And since I've never seen a less wrong post or gave Warren post about how actually landfills are filling up, it's like, fine, probably.LAURANo, this is me being devil's advocate. I really don't care that about personal waste.MATTYeah, I mean, I think plausibly here, there is, right? So I think object level, the things that matter, when we think about plastic, there is a carbon impact. There is a production impact of like, you need to think about what pollution happened when the oil was drilled and stuff. And then there is like a disposal impact. If you successfully get that bottle into a trash can, for what it's worth.AARONMy bottles are going into their goddamn trash can.MATTIdeally a recycling. No, apparently recycling, I mean, recycling is.AARONWell, I mean, my sense is like apparently recycling. Yeah, I recycle metal. I think I do paper out of convenience.MATTIf you successfully get that bottle handle a waste disposal system that is properly disposing of it, rather than like you're throwing it on a slap, then I think my guess is that the willingness to pay, or if you really crunch the numbers really hard, it would not be once again, a huge cost for the landfill costs. On the flip side, if you throw it in a river, that's very bad. My guess is that it would be right for everyone on Twitter to flame you for buying bottles and throwing them in a river if you did that.AARONWhat is an ed impact on wild animal welfare and equilibrium? No, just kidding. This is something. Yeah, don't worry, guys. No, I was actually the leave no trade coordinator for my Boy scout troop. It's actually kind of ironic because I think probably like a dumb ideology or.LAURAWhatever, it's a public good for the other people around you to not have a bunch of garbage around on that trail.AARONYeah, I do think I went to an overnight training for this. They're very hardcore, but basically conceptually incoherent people. I guess people aren't conceptually incoherent. Their concepts are incoherent who think it

80,000 Hours Podcast with Rob Wiblin
#180 – Hugo Mercier on why gullibility and misinformation are overrated

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Feb 21, 2024 156:55


The World Economic Forum's global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years — ranking it ahead of war, environmental problems, and other threats from AI.And the discussion around misinformation and disinformation has shifted to focus on how generative AI or a future super-persuasive AI might change the game and make it extremely hard to figure out what was going on in the world — or alternatively, extremely easy to mislead people into believing convenient lies.But this week's guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.Links to learn more, summary, and full transcript.In this interview, host Rob Wiblin and Hugo discuss:How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility.How Hugo makes sense of our apparent gullibility in many cases — like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren't actually beneficial for us.Rob and Hugo's ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about.Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today's complex information environment.The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don't.Why fake news and conspiracy theories actually have less impact than most people assume.False beliefs that have persisted across cultures and generations — like bloodletting and vaccine hesitancy — and theories about why.And plenty more.Chapters:The view that humans are really gullible (00:04:26)The evolutionary argument against humans being gullible (00:07:46) Open vigilance (00:18:56)Intuitive and reflective beliefs (00:32:25)How people decide who to trust (00:41:15)Redefining beliefs (00:51:57)Bloodletting (01:00:38)Vaccine hesitancy and creationism (01:06:38)False beliefs without skin in the game (01:12:36)One consistent weakness in human judgement (01:22:57)Trying to explain harmful financial decisions (01:27:15)Astrology (01:40:40)Medical treatments that don't work (01:45:47)Generative AI, LLMs, and persuasion (01:54:50)Ways AI could improve the information environment (02:29:59)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore

Increments
#63 - Recycling is the Dumps

Increments

Play Episode Listen Later Feb 14, 2024 66:49


Close your eyes, and think of a bright and pristine, clean and immaculately run recycling center, green'r than a giant's thumb. Now think of a dirty, ugly, rotting landfill, stinking in the mid-day sun. Of these two scenarios, which, do you reckon, is worse for the environment? In this episode, Ben and Vaden attempt to reduce and refute a few reused canards about recycling and refuse, by rereading Rob Wiblin's excellent piece which addresses the aformentioned question: What you think about landfill and recycling is probably totally wrong (https://medium.com/@robertwiblin/what-you-think-about-landfill-and-recycling-is-probably-totally-wrong-3a6cf57049ce). Steel yourselves for this one folks, because you may need to paper over arguments with loved ones, trash old opinions, and shatter previous misconceptions. Check out more of Rob's writing here (https://www.robwiblin.com/). We discuss The origins of recycling and some of the earliest instances Energy efficiency of recycling plastics, aluminium, paper, steel, and electronic waste (e-waste) Why your peanut butter jars and plastic coffee cups are not recyclable Modern landfills and why they're awesome How landfills can be used to create energy Building stuff on top of landfills Why we're not even close to running out of space for landfills Economic incentives for recycling vs top-down regulation The modern recycling movement and its emergence in the 1990s > - Guiyu, China, where e-waste goes to die. That a lot of your "recycling" ends up as garbage in the Philippines Error Correction Vaden misremembered what Smil wrote regarding four categories of recycling (Metals and Aluminum / Plastics / Paper / Electronic Waste ("e-waste")). He incorrectly quoted Smil as saying these four categories were exhaustive, and represented the four major categories recycling into which the majority of recycled material can be bucketed. This is incorrect- what Smil actually wrote was: I will devote the rest of this section (and of this chapter) to brief appraisals of the recycling efforts for four materials — two key metals (steel and aluminum) and plastics and paper—and of electronic waste, a category of discarded material that would most benefit from much enhanced rates of recycling. - Making the Modern World: Materials and De-materialization, Smill, p.179 A list of the top 9 recycled materials can be found here: https://www.rd.com/list/most-recyclable-materials/ Sources / Citations Share of plastic waste that is recycled, landfilled, incinerated and mismanaged, 2019 (https://ourworldindata.org/waste-management) Source for the claim that recycling glass is not energy efficient (and thus not necessarily better for the environment than landfilling): Glass bottles can be more pleasant to drink out of, but they also require more energy to manufacture and recycle. Glass bottles consume 170 to 250 percent more energy and emit 200 to 400 percent more carbon than plastic bottles, due mostly to the heat energy required in the manufacturing process. Of course, if the extra energy required by glass were produced from emissions-free sources, it wouldn't necessarily matter that glass bottles required more energy to make and move. “If the energy is nuclear power or renewables there should be less of an environmental impact,” notes Figgener. 
- Apocalypse Never, Shellenburger, p.66 Cloth bags need to be reused 173 times (https://www.savemoneycutcarbon.com/learn-save/plastic-vs-cotton-bags-which-is-more-sustainable) to be more eco-friendly than a plastic bag: Source for claim that majority of e-waste ends up in China (https://www.pbs.org/newshour/science/america-e-waste-gps-tracker-tells-all-earthfix): Puckett's organization partnered with the Massachusetts Institute of Technology to put 200 geolocating tracking devices inside old computers, TVs and printers. They dropped them off nationwide at donation centers, recyclers and electronic take-back programs — enterprises that advertise themselves as “green,” “sustainable,” “earth friendly” and “environmentally responsible.” ... About a third of the tracked electronics went overseas — some as far as 12,000 miles. That includes six of the 14 tracker-equipped electronics that Puckett's group dropped off to be recycled in Washington and Oregon. The tracked electronics ended up in Mexico, Taiwan, China, Pakistan, Thailand, Dominican Republic, Canada and Kenya. Most often, they traveled across the Pacific to rural Hong Kong. (italics added.) NPR interview (https://www.youtube.com/watch?v=iBGZtNJAt-M&ab_channel=NPR) on the fact that some manufacturers will put recycling logos on products that aren't recyclable. Bloomberg investigative report (https://www.youtube.com/watch?v=hmGrI_BVlnc&ab_channel=BloombergOriginals) on tracking plastic to a town in Poland that burns it for energy. Video (https://www.youtube.com/watch?v=aHzltu6Tvl8&ab_channel=PBSTerra) about the apex landfill Guiyu, China. Wiki's description (https://en.wikipedia.org/wiki/Electronic_waste_in_Guiyu.): Once a rice village, the pollution has made Guiyu unable to produce crops for food and the water of the river is undrinkable. Many of the primitive recycling operations in Guiyu are toxic and dangerous to workers' health with 80% of children suffering from lead poisoning. Above-average miscarriage rates are also reported in the region. Workers use their bare hands to crack open electronics to strip away any parts that can be reused—including chips and valuable metals, such as gold, silver, etc. Workers also "cook" circuit boards to remove chips and solders, burn wires and other plastics to liberate metals such as copper; use highly corrosive and dangerous acid baths along the riverbanks to extract gold from the microchips; and sweep printer toner out of cartridges. Children are exposed to the dioxin-laden ash as the smoke billows around Guiyu, finally settling on the area. The soil surrounding these factories has been saturated with lead, chromium, tin, and other heavy metals. Discarded electronics lie in pools of toxins that leach into the groundwater, making the water undrinkable to the extent that water must be trucked in from elsewhere. Lead levels in the river sediment are double European safety levels, according to the Basel Action Network. Lead in the blood of Guiyu's children is 54% higher on average than that of children in the nearby town of Chendian. Piles of ash and plastic waste sit on the ground beside rice paddies and dikes holding in the Lianjiang River. Ben's back-of-the-napkin math Consider the Apex landfill in Las Vegas. This handles trash for the whole city, which is ~700K people. The base of the landfill is currently 9km^2 , but they've hinted at expanding it in the future. So let's assume they more than double it and put it at 20km^2 . 
The estimates are that this landfill will handle trash for ~300 years "at current rates". I'm not sure if that includes population growth, so let's play it safe and assume not. So how much space does each person need, landfill-wise, for the next 300 years? We have 20 km^2 / 700K people = 28.5 m^2 per person for 300 years. For 400M people, that's roughly 12,000 km^2. The US is roughly 10,000,000 km^2. That's about 0.12% of the US needed for landfills for the next 300 years (see the quick sanity-check sketch below). We definitely have the space. Socials Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani Come join our discord server! DM us on twitter or send us an email to get a supersecret link Help us fill up landfills and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments). Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ) What do you like to bring to your local neighbourhood tire-fire? Tell us over at incrementspodcast@gmail.com
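A minimal sketch of that back-of-the-napkin calculation, taking the stated assumptions at face value (a 20 km^2 expanded Apex footprint serving ~700K people for ~300 years, scaled to a rounded 400M US population and 10,000,000 km^2 of US land):

```python
# Back-of-the-napkin landfill arithmetic from the show notes above.
# All inputs are the rounded assumptions stated in the notes, not measured values.
landfill_area_km2 = 20           # assumed expanded Apex footprint (handles ~300 years of trash)
people_served = 700_000          # rough Las Vegas population it serves
us_population = 400_000_000      # rounded US population
us_area_km2 = 10_000_000         # rounded US land area

m2_per_person = landfill_area_km2 * 1e6 / people_served   # ~28.6 m^2 per person for ~300 years
total_km2 = m2_per_person * us_population / 1e6           # ~11,400 km^2 for the whole US
share_of_us = total_km2 / us_area_km2                     # ~0.0011, i.e. roughly 0.1% of US land

print(f"{m2_per_person:.1f} m^2/person, {total_km2:,.0f} km^2 total, {share_of_us:.2%} of US land")
```

Even with generous rounding, the answer stays on the order of a tenth of a percent of US land, which is the "we definitely have the space" conclusion.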

80,000 Hours Podcast with Rob Wiblin
#179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Feb 12, 2024 176:48


Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don't see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that's seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain.From an evolutionary perspective, that's to be expected, right? If your heart or lungs or legs or skin stop working properly while you're a teenager, you're less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool.So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all?Today's guest, Randy Nesse — a leader in the field of evolutionary psychiatry — wrote the book Good Reasons for Bad Feelings, in which he sets out to try to resolve this paradox.Links to learn more, summary, and full transcript.In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as:How the evolutionary psychiatry perspective can help people appreciate that their mental health problems are often the result of a useful and important system.How evolutionary pressures and dynamics lead to a wide range of different personalities, behaviours, strategies, and tradeoffs.The missing intellectual foundations of psychiatry, and how an evolutionary lens could revolutionise the field.How working as both an academic and a practicing psychiatrist shaped Randy's understanding of treating mental health problems.The “smoke detector principle” of why we experience so many false alarms along with true threats.The origins of morality and capacity for genuine love, and why Randy thinks it's a mistake to try to explain these from a selfish gene perspective.Evolutionary theories on why we age and die.And much more.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Dominic ArmstrongTranscriptions: Katy Moore

Pigeon Hour
#10: Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism* with Max Alexander and Sarah Hastings-Woodhouse

Pigeon Hour

Play Episode Listen Later Dec 28, 2023 68:17


Intro
At the gracious invitation of AI Safety Twitter-fluencer Sarah Hastings-Woodhouse, I appeared on the very first episode of her new podcast “Consistently Candid” to debate moral realism (or something kinda like that, I guess; see below) with fellow philosophy nerd and EA Twitter aficionado Max Alexander, alongside Sarah as moderator and judge of sorts.
What I believe
In spite of the name of the episode and the best of my knowledge/understanding a few days ago, it turns out my stance may not be ~genuine~ moral realism. Here's my basic meta-ethical take:
* Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal”) can be and sometimes are true; but
* Genuinely normative claims like “Sam should (!) donate $20 and should not buy that fancy meal” are never objectively true.
Of course the label per se doesn't really matter. But for a bunch of reasons it still seems wise to figure out which label really does work best.
Some definitions
Stanford Encyclopedia of Philosophy: Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right. Moreover, they hold, at least some moral claims actually are true. That much is the common and more or less defining ground of moral realism (although some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way).
Wikipedia: The robust model of moral realism commits moral realists to three theses:
* The semantic thesis: The primary semantic role of moral predicates (such as "right" and "wrong") is to refer to moral properties (such as rightness and wrongness), so that moral statements (such as "honesty is good" and "slavery is unjust") purport to represent moral facts, and express propositions that are true or false (or approximately true, largely false, and so on).
* The alethic thesis: Some moral propositions are in fact true.
* The metaphysical thesis: Moral propositions are true when actions and other objects of moral assessment have the relevant moral properties (so that the relevant moral facts obtain), where these facts and properties are robust: their metaphysical status, whatever it is, is not relevantly different from that of (certain types of) ordinary non-moral facts and properties.
And to settle the matter, here's what some LLMs have to say.
My prompt:
Suppose a person believes the following:
1: Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal”) can be and sometimes are true; but
2: Genuinely normative claims like “Sam should donate $20 and should not buy that fancy meal” are never objectively true.
Is this person best categorized as a moral realist?
ChatGPT:
Given these two beliefs, the person's stance doesn't align completely with traditional moral realism, which typically asserts that there are objective truths about both moral facts (like the goodness or badness of actions) and moral duties (what one should or should not do). However, it also doesn't fully align with moral anti-realism, since the person believes in some form of objective truth regarding moral value.
One possible categorization for this stance could be a form of "value realism" combined with "normative anti-realism" or "normative subjectivism." This would mean that the person believes in objective truths about what is better or worse (value realism) but thinks that what one ought to do (normative claims) is not a matter of objective truth but perhaps of subjective judgment or societal conventions.
This position is quite sophisticated and might be best understood as a hybrid or intermediate position in the landscape of moral philosophy, rather than fitting neatly into the category of moral realism or its standard alternatives.
Great, so much for trying to fit things into nice little buckets. Claude, fortunately, is more definitive:
No, the view described is not best categorized as moral realism. Here's why:
Moral realism is the view that there are objective moral facts or truths. A moral realist believes that some moral claims, including normative claims about what one should or shouldn't do, can be objectively and mind-independently true.
The view in the example accepts that there can be objective truths about relative degrees of goodness or badness (claim 1). However, it denies that there are objective normative truths about what one should or shouldn't do (claim 2). By rejecting that genuine normative 'should/shouldn't' claims can be objectively true, the view denies a key commitment of moral realism - namely that moral claims can state objective normative facts. At best it allows for objective evaluative facts about goodness/badness, not prescriptive facts about obligations.
So this view would be more accurately categorized as a form of moral anti-realism like non-cognitivism or error theory. It does not qualify as moral realism because of its rejection of the existence of objective normative truths that could make strong normative claims justified or true. The commitment to objective evaluative facts alone does not sufficiently qualify the view as realist in the metaethical sense.
In summary, the view accepts objective "good/bad" facts but denies objective "should/shouldn't" facts. This combination puts it at odds with core commitments of moral realism, so it is better seen as an anti-realist view.
I'm gonna disagree with Claude here, at least in that the bolded claim above (objective normative facts are a key part of moral realism) just seems…wrong? Like all the definitions could replace the term “moral fact” or “moral claim” with “normative fact” and “normative claim” but they do not!
So I really don't have a great answer to which label works best. One of two ~human-level chatbots giving me a definitive-sounding “anti-realism” answer is too much evidence for me, whose only formal philosophy training is an undergrad minor, to rule that one out. There are also good arguments, I think, for the “realist” label, as well as for “neither” (i.e., ‘secret third thing'). In fact all of these seem pretty similar in terms of argument convincingness/correctness. So, in sum,

Nathan on The 80,000 Hours Podcast: AI Scouting, OpenAI's Safety Record, and Redteaming Frontier Models

Play Episode Listen Later Dec 27, 2023 233:11


In today's conversation, Nathan joins Rob Wiblin, host of The 80,000 Hours Podcast, to discuss why we need more AI scouts, OpenAI's safety record, and redteaming frontier models. If you need an ecommerce platform, check out our sponsor Shopify: https://shopify.com/cognitive for a $1/month trial period.
SPONSORS:
Shopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of ALL eCommerce in the US. And Shopify's the global force behind Allbirds, Rothy's, and Brooklinen, and 1,000,000s of other entrepreneurs across 175 countries. From their all-in-one e-commerce platform, to their in-person POS system – wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts – from blog posts to product descriptions using AI. Sign up for $1/month trial period: https://shopify.com/cognitive
MasterClass https://masterclass.com/cognitive get two memberships for the price of 1. Learn from the best to become your best. Learn how to negotiate a raise with Chris Voss or manage your relationships with Esther Perel. Boost your confidence and find practical takeaways you can apply to your life and at work. If you own a business or are a team leader, use MasterClass to empower and create future-ready employees and leaders. Moment of Zen listeners will get two memberships for the price of one at https://masterclass.com/cognitive
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist.
X/SOCIAL: @labenz (Nathan) @robertwiblin (Robert) @CogRev_Podcast
LINKS: 80,000 Hours Episode Show Notes: https://80000hours.org/podcast/episodes/nathan-labenz-openai-red-team-safety/
TIMESTAMPS: (00:00:00) - Episode Preview (00:05:12) - Rob's intro (00:10:50) - Interview begins (00:15:50) - Intro to The Cognitive Revolution excerpt (00:19:13) - Excerpt from The Cognitive Revolution: Nathan's narrative (01:22:10) - Why it's hard to imagine a much better game board (01:28:14) - What OpenAI has been doing right (01:40:12) - Arms racing and China (01:46:10) - OpenAI's single-minded focus on AGI (01:56:55) - Transparency about capabilities (02:05:56) - Benefits of releasing models (02:17:14) - Was it ok to release GPT-4? (02:35:31) - Why no statement from the OpenAI board (02:55:59) - Ezra Klein on the OpenAI story (03:16:59) - The upside of AI merits taking some risk (03:31:44) - Meta and open source (03:42:26) - Nathan's journey into the AI world (03:48:18) - Rob's outro

80,000 Hours Podcast with Rob Wiblin
#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Dec 22, 2023 226:52


OpenAI says its mission is to build AGI — an AI system that is better than human beings at everything. Should the world trust them to do that safely?
That's the central theme of today's episode with Nathan Labenz — entrepreneur, AI scout, and host of The Cognitive Revolution podcast.
Links to learn more, summary, and full transcript.
Nathan saw the AI revolution coming years ago, and, astonished by the research he was seeing, set aside his role as CEO of Waymark and made it his full-time job to understand AI capabilities across every domain. He has been obsessively tracking the AI world since — including joining OpenAI's “red team” that probed GPT-4 to find ways it could be abused, long before it was public.
Whether OpenAI was taking AI safety seriously enough became a topic of dinner table conversation around the world after the shocking firing and reinstatement of Sam Altman as CEO last month.
Nathan's view: it's complicated. Discussion of this topic has often been heated, polarising, and personal. But Nathan wants to avoid that and simply lay out, in a way that is impartial and fair to everyone involved, what OpenAI has done right and how it could do better in his view.
When he started on the GPT-4 red team, the model would do anything from diagnose a skin condition to plan a terrorist attack without the slightest reservation or objection. When later shown a “Safety” version of GPT-4 that was almost the same, he approached a member of OpenAI's board to share his concerns and tell them they really needed to try out GPT-4 for themselves and form an opinion.
In today's episode, we share this story as Nathan told it on his own show, The Cognitive Revolution, which he did in the hope that it would provide useful background to understanding the OpenAI board's reservations about Sam Altman, which to this day have not been laid out in any detail.
But while he feared throughout 2022 that OpenAI and Sam Altman didn't understand the power and risk of their own system, he has since been repeatedly impressed, and came to think of OpenAI as among the better companies that could hypothetically be working to build AGI.
Their efforts to make GPT-4 safe turned out to be much larger and more successful than Nathan was seeing. Sam Altman and other leaders at OpenAI seem to sincerely believe they're playing with fire, and take the threat posed by their work very seriously. With the benefit of hindsight, Nathan suspects OpenAI's decision to release GPT-4 when it did was for the best.
On top of that, OpenAI has been among the most sane and sophisticated voices advocating for AI regulations that would target just the most powerful AI systems — the type they themselves are building — and that could make a real difference. They've also invested major resources into new ‘Superalignment' and ‘Preparedness' teams, while avoiding using competition with China as an excuse for recklessness.
At the same time, it's very hard to know whether it's all enough. The challenge of making an AGI safe and beneficial may require much more than they hope or have bargained for. Given that, Nathan poses the question of whether it makes sense to try to build a fully general AGI that can outclass humans in every domain at the first opportunity.
Maybe in the short term, we should focus on harvesting the enormous possible economic and humanitarian benefits of narrow applied AI models, and wait until we not only have a way to build AGI, but a good way to build AGI — an AGI that we're confident we want, which we can prove will remain safe as its capabilities get ever greater.
By threatening to follow Sam Altman to Microsoft before his reinstatement as OpenAI CEO, OpenAI's research team has proven they have enormous influence over the direction of the company. If they put their minds to it, they're also better placed than maybe anyone in the world to assess if the company's strategy is on the right track and serving the interests of humanity as a whole. Nathan concludes that this power and insight only adds to the enormous weight of responsibility already resting on their shoulders.
In today's extensive conversation, Nathan and host Rob Wiblin discuss not only all of the above, but also:
Speculation about the OpenAI boardroom drama with Sam Altman, given Nathan's interactions with the board when he raised concerns from his red teaming efforts.
Which AI applications we should be urgently rolling out, with less worry about safety.
Whether governance issues at OpenAI demonstrate AI research can only be slowed by governments.
Whether AI capabilities are advancing faster than safety efforts and controls.
The costs and benefits of releasing powerful models like GPT-4.
Nathan's view on the game theory of AI arms races and China.
Whether it's worth taking some risk with AI for huge potential upside.
The need for more “AI scouts” to understand and communicate AI progress.
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore

80k After Hours
Benjamin Todd on the history of 80,000 Hours

80k After Hours

Play Episode Listen Later Dec 1, 2023 110:54


"The very first office we had was just a balcony in an Oxford College dining hall. It was totally open to the dining hall, so every lunch and dinner time it would be super noisy because it'd be like 200 people all eating below us. And then I think we just had a bit where we just didn't have an office, so we worked out of the canteen in the library for at least three months or something. And then it was only after that we moved into this tiny, tiny room at the back of an estate agent off in St Clement's in Oxford. One of our early donors came and we gave him a tour, and when he came into the office, his first reaction was, 'Is this legal?'" — Benjamin Todd
In this episode of 80k After Hours — recorded in June 2022 — Rob Wiblin and Benjamin Todd discuss the history of 80,000 Hours.
Links to learn more.
They cover:
Ben's origin story
How 80,000 Hours got off the ground
Its scrappy early days
How 80,000 Hours evolved
Team trips to China and Thailand
The choice to set up several programmes rather than focus on one
The move to California and back
Various mistakes they think 80,000 Hours has made along the way
Why Ben left the CEO position
And the future of 80,000 Hours
Who this episode is for:
People who work on or plan to work on promoting important ideas in a way that's similar to 80,000 Hours
People who work at organisations similar to 80,000 Hours
People who work at 80,000 Hours
Who this episode isn't for:
People who, if asked if they'd like to join a dinner at 80,000 Hours where the team reminisce on the good old days, would say, “Sorry, can't make it — I'm washing my hair that night”
Producer: Keiran Harris
Audio mastering: Ryan Kessler and Ben Cordell
"Gershwin - Rhapsody in Blue, original 1924 version" by Jason Weinberger is licensed under creative commons

80,000 Hours Podcast with Rob Wiblin
#172 – Bryan Caplan on why you should stop reading the news

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Nov 17, 2023 143:22


Is following important political and international news a civic duty — or is it our civic duty to avoid it?
It's common to think that 'staying informed' and checking the headlines every day is just what responsible adults do. But in today's episode, host Rob Wiblin is joined by economist Bryan Caplan to discuss the book Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life — which argues that reading the news both makes us miserable and distorts our understanding of the world. Far from informing us and enabling us to improve the world, consuming the news distracts us, confuses us, and leaves us feeling powerless.
Links to learn more, summary, and full transcript.
In the first half of the episode, Bryan and Rob discuss various alleged problems with the news, including:
That it overwhelmingly provides us with information we can't usefully act on.
That it's very non-representative in what it covers, in particular favouring the negative over the positive and the new over the significant.
That it obscures the big picture, falling into the trap of thinking 'something important happens every day.'
That it's highly addictive, for many people chewing up 10% or more of their waking hours.
That regularly checking the news leaves us in a state of constant distraction and less able to engage in deep thought.
And plenty more.
Bryan and Rob conclude that if you want to understand the world, you're better off blocking news websites and spending your time on Wikipedia, Our World in Data, or reading a textbook. And if you want to generate political change, stop reading about problems you already know exist and instead write your political representative a physical letter — or better yet, go meet them in person.
In the second half of the episode, Bryan and Rob cover:
Why Bryan is pretty sceptical that AI is going to lead to extreme, rapid changes, or that there's a meaningful chance of it going terribly.
Bryan's case that rational irrationality on the part of voters leads to many very harmful policy decisions.
How to allocate resources in space.
Bryan's experience homeschooling his kids.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Nov 1, 2023 177:46


"One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people's homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that's the fastest way to dispose of it — which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world. "That's something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don't think that it's particularly harmful. I don't think it saves a tonne of money for the municipal corporations and other local government that are meant to manage it. I find it particularly annoying simply because it happens so often; it's something that you're able to smell in so many different parts of these cities." — Santosh Harish
In today's episode, host Rob Wiblin interviews Santosh Harish — leader of Open Philanthropy's grantmaking in South Asian air quality — about the scale of the harm caused by air pollution.
Links to learn more, summary, and full transcript.
They cover:
How bad air pollution is for our health and life expectancy
The different kinds of harm that particulate pollution causes
The strength of the evidence that it damages our brain function and reduces our productivity
Whether it was a mistake to switch our attention to climate change and away from air pollution
Whether most listeners to this show should have an air purifier running in their house right now
Where air pollution in India is worst and why, and whether it's going up or down
Where most air pollution comes from
The policy blunders that led to many sources of air pollution in India being effectively unregulated
Why indoor air pollution packs an enormous punch
The politics of air pollution in India
How India ended up spending a lot of money on outdoor air purifiers
The challenges faced by foreign philanthropists in India
Why Santosh has made the grants he has so far
And plenty more
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Oct 23, 2023 163:55


"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't. What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." — Ian Morris
In today's episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.
Links to learn more, summary and full transcript.
They cover:
Some crazy anomalies in the historical record of civilisational progress
Whether we should think about technology from an evolutionary perspective
Whether we ought to expect war to make a resurgence or continue dying out
Why we can't end up living like The Jetsons
Whether stagnation or cyclical recurring futures seem very plausible
What it means that the rate of increase in the economy has been increasing
Whether violence is likely between humans and powerful AI systems
The most likely reasons for Rob and Ian to be really wrong about all of this
How professional historians react to this sort of talk
The future of Ian's work
Plenty more
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#166 – Tantum Collins on what he's learned as an AI policy insider at the White House, DeepMind and elsewhere

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Oct 12, 2023 188:49


"If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space? That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions. My concern is that if we don't approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope -- and all of a sudden we have, let's say, autocracies on the global stage are strengthened relative to democracies." — Tantum Collins
In today's episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who's willing to speak openly — Tantum Collins.
Links to learn more, summary and full transcript.
They cover:
How AI could strengthen government capacity, and how that's a double-edged sword
How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren't there
To what extent policymakers take different threats from AI seriously
Whether the US and China are in an AI arms race or not
Whether it's OK to transform the world without much of the world agreeing to it
The tyranny of small differences in AI policy
Disagreements between different schools of thought in AI policy, and proposals that could unite them
How the US AI Bill of Rights could be improved
Whether AI will transform the labour market, and whether it will become a partisan political issue
The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them
What listeners might be able to do to help with this whole mess
Panpsychism
Plenty more
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Oct 6, 2023 168:33


"Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future? Right now, if somebody's sitting on Mars and you're going to war against them, it's very hard to hit them. You don't have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it's going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it's actually very hard to hit you. So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you're in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast. So my general conclusion has been that war looks unlikely on some size scales but not on others." — Anders Sandberg
In today's episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things that could be achieved in our universe given the laws of physics.
Links to learn more, summary and full transcript.
They cover:
The epic new book Anders is working on, and whether he'll ever finish it
Whether there's a best possible world or we can just keep improving forever
What wars might look like if the galaxy is mostly settled
The impediments to AI or humans making it to other stars
How the universe will end a million trillion years in the future
Whether it's useful to wonder about whether we're living in a simulation
The grabby aliens theory
Whether civilizations get more likely to fail the older they get
The best way to generate energy that could ever exist
Black hole bombs
Whether superintelligence is necessary to get a lot of value
The likelihood that life from elsewhere has already visited Earth
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Aug 7, 2023 171:20


In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.
Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. ... Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."
Links to learn more, summary and full transcript.
Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it's not just throwing compute at the problem -- it's also hiring dozens of scientists and engineers to build out the Superalignment team.
Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains: Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on... and I think it's pretty likely going to work, actually. And that's really, really wild, and it's really exciting. It's like we have this hard problem that we've been talking about for years and years and years, and now we have a real shot at actually solving it. And that'd be so good if we did.
Jan thinks that this work is actually the most scientifically interesting part of machine learning. Rather than just throwing more chips and more data at a training run, this work requires actually understanding how these models work and how they think. The answers are likely to be breakthroughs on the level of solving the mysteries of the human brain.
The plan, in a nutshell, is to get AI to help us solve alignment. That might sound a bit crazy -- as one person described it, “like using one fire to put out another fire.”
But Jan's thinking is this: the core problem is that AI capabilities will keep getting better and the challenge of monitoring cutting-edge models will keep getting harder, while human intelligence stays more or less the same. To have any hope of ensuring safety, we need our ability to monitor, understand, and design ML models to advance at the same pace as the complexity of the models themselves. And there's an obvious way to do that: get AI to do most of the work, such that the sophistication of the AIs that need aligning, and the sophistication of the AIs doing the aligning, advance in lockstep.
Jan doesn't want to produce machine learning models capable of doing ML research. But such models are coming, whether we like it or not. And at that point Jan wants to make sure we turn them towards useful alignment and safety work, as much or more than we use them to advance AI capabilities.
Jan thinks it's so crazy it just might work. But some critics think it's simply crazy. They ask a wide range of difficult questions, including:
If you don't know how to solve alignment, how can you tell that your alignment assistant AIs are actually acting in your interest rather than working against you? Especially as they could just be pretending to care about what you care about.
How do you know that these technical problems can be solved at all, even in principle?
At the point that models are able to help with alignment, won't they also be so good at improving capabilities that we're in the middle of an explosion in what AI can do?
In today's interview host Rob Wiblin puts these doubts to Jan to hear how he responds to each, and they also cover:
OpenAI's current plans to achieve 'superalignment' and the reasoning behind them
Why alignment work is the most fundamental and scientifically interesting research in ML
The kinds of people he's excited to hire to join his team and maybe save the world
What most readers misunderstood about the OpenAI announcement
The three ways Jan expects AI to help solve alignment: mechanistic interpretability, generalization, and scalable oversight
What the standard should be for confirming whether Jan's team has succeeded
Whether OpenAI should (or will) commit to stop training more powerful general models if they don't think the alignment problem has been solved
Whether Jan thinks OpenAI has deployed models too quickly or too slowly
The many other actors who also have to do their jobs really well if we're going to have a good AI future
Plenty more
Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type ‘80,000 Hours' into your podcasting app. Or read the transcript.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
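The intuition behind the scalable oversight part of the plan described above is that checking an answer is often far cheaper than producing it. The sketch below is not OpenAI's method — just a minimal toy, under the assumption that a weak checker can verify work a stronger generator produced expensively (here: multiplying factors back together versus finding them in the first place).

import random

def strong_generator(n):
    # Expensive search: find a non-trivial factorisation of n by trial division.
    # Stands in for a capable model producing hard-to-find answers.
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return d, n // d
    return None  # n is prime; no non-trivial factorisation exists

def weak_monitor(n, proposal):
    # Cheap check for the composite case: multiplying the proposed factors is far
    # easier than factoring. Stands in for a weaker overseer verifying the work.
    if proposal is None:
        return all(n % d for d in range(2, int(n**0.5) + 1))  # confirm the primality claim
    a, b = proposal
    return a * b == n and a > 1 and b > 1

random.seed(0)
for n in [random.randint(10_000, 1_000_000) for _ in range(5)]:
    proposal = strong_generator(n)
    print(n, proposal, "accepted" if weak_monitor(n, proposal) else "rejected")

The real research problem, of course, is finding verification signals this cheap for fuzzy tasks like alignment research itself, which is what the episode is about.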

80,000 Hours Podcast with Rob Wiblin
#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jul 31, 2023 193:33


Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making billions of dollars' worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he's narrowing his focus once again, this time on making the transition to advanced AI go well.
In today's conversation, Holden returns to the show to share his overall understanding of the promise and the risks posed by machine intelligence, and what to do about it. That understanding has accumulated over around 14 years, during which he went from being sceptical that AI was important or risky, to making AI risks the focus of his work.
Links to learn more, summary and full transcript.
(As Holden reminds us, his wife is also the president of one of the world's top AI labs, Anthropic, giving him both conflicts of interest and a front-row seat to recent events. For our part, Open Philanthropy is 80,000 Hours' largest financial supporter.)
One point he makes is that people are too narrowly focused on AI becoming 'superintelligent.' While that could happen and would be important, it's not necessary for AI to be transformative or perilous. Rather, machines with human levels of intelligence could end up being enormously influential simply if the amount of computer hardware globally were able to operate tens or hundreds of billions of them, in a sense making machine intelligences a majority of the global population, or at least a majority of global thought.
As Holden explains, he sees four key parts to the playbook humanity should use to guide the transition to very advanced AI in a positive direction: alignment research, standards and monitoring, creating a successful and careful AI lab, and finally, information security.
In today's episode, host Rob Wiblin interviews return guest Holden Karnofsky about that playbook, as well as:
Why we can't rely on just gradually solving those problems as they come up, the way we usually do with new technologies.
What multiple different groups can do to improve our chances of a good outcome — including listeners to this show, governments, computer security experts, and journalists.
Holden's case against 'hardcore utilitarianism' and what actually motivates him to work hard for a better world.
What the ML and AI safety communities get wrong in Holden's view.
Ways we might succeed with AI just by dumb luck.
The value of laying out imaginable success stories.
Why information security is so important and underrated.
Whether it's good to work at an AI lab that you think is particularly careful.
The track record of futurists' predictions.
And much more.
Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type ‘80,000 Hours' into your podcasting app. Or read the transcript.
Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
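The "billions of human-level models" point in the episode above is ultimately a back-of-envelope division. Here is a minimal sketch of that arithmetic; every number in it is a placeholder assumption chosen only to show the structure of the estimate, not a figure from the episode.

# All inputs are illustrative assumptions, not estimates from the episode.
GLOBAL_INFERENCE_FLOPS = 1e21   # assumed total FLOP/s available worldwide for inference
FLOPS_PER_MODEL = 1e13          # assumed FLOP/s to run one human-level model in real time
DUTY_CYCLE = 0.5                # assumed fraction of that hardware actually usable

concurrent_models = GLOBAL_INFERENCE_FLOPS * DUTY_CYCLE / FLOPS_PER_MODEL
print(f"Concurrent human-level model instances: {concurrent_models:.2e}")
# With these placeholder numbers the answer is 5e7; the conclusion scales linearly
# with each assumption, which is the whole point of laying the estimate out this way.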

80,000 Hours Podcast with Rob Wiblin
#157 – Ezra Klein on existential risk from AI and what DC could do about it

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jul 24, 2023 78:46


In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.
In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licencing scheme is created.
Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.
Links to learn more, summary and full transcript.
Like many people, he has also taken a big interest in AI this year, writing articles such as “This changes everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.
Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.
By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there's some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.
From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.
In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:
Whether it's desirable to slow down AI research
The value of engaging with current policy debates even if they don't seem directly important
Which AI business models seem more or less dangerous
Tensions between people focused on existing vs emergent risks from AI
Two major challenges of being a new parent
Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type ‘80,000 Hours' into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

The Nonlinear Library
LW - Nature: "Stop talking about tomorrow's AI doomsday when AI poses risks today" by Ben Smith

The Nonlinear Library

Play Episode Listen Later Jun 28, 2023 2:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nature: "Stop talking about tomorrow's AI doomsday when AI poses risks today", published by Ben Smith on June 28, 2023 on LessWrong.
Overall, a headline that seems counterproductive and needlessly divisive. I worry very much that coverage like this has the potential to bring political polarization to AI risk, and it would be extremely damaging for the prospect of regulation if one side of the US Congress/Senate decided AI risk was something only their outgroup is concerned about, for nefarious reasons. But in the spirit of charity, here are perhaps the strongest points of a weak article:
the spectre of AI as an all-powerful machine fuels competition between nations to develop AI so that they can benefit from and control it. This works to the advantage of tech firms: it encourages investment and weakens arguments for regulating the industry. An actual arms race to produce next-generation AI-powered military technology is already under way, increasing the risk of catastrophic conflict
and
governments must establish appropriate legal and regulatory frameworks, as well as applying laws that already exist
and
Researchers must play their part by building a culture of responsible AI from the bottom up. In April, the big machine-learning meeting NeurIPS (Neural Information Processing Systems) announced its adoption of a code of ethics for meeting submissions. This includes an expectation that research involving human participants has been approved by an ethical or institutional review board (IRB)
This would be great if ethical or institutional review boards were willing to restrict research that might be dangerous, but it would require a substantial change in their approach to regulating AI research.
All researchers and institutions should follow this approach, and also ensure that IRBs — or peer-review panels in cases in which no IRB exists — have the expertise to examine potentially risky AI research.
Should people worried about AI existential risk be trying to create resources for IRBs to recognize harmful AI research?
Some ominous commentary from Tyler Cowen:
Many of you focused on AGI existential risk do not much like or agree with my criticisms of that position, or perhaps you do not understand my stance, as I have seen stated a few times on Twitter. But I am telling you -- I take you far more seriously than does most of the mainstream. I keep on saying -- publish, publish, peer review, peer review -- a high mark of respect.... As it stands, contra that earlier tweet from Rob Wiblin (does anyone have a cite?), you have utterly and completely lost the mainstream debate, whether you admit it or not, whether you see this or not. (Given the large number of rationality community types who do not like to travel, it is no surprise this point is not better known internally.) You have lost the debate within scientific communities, within policymaker circles, and in international diplomacy, if it is not too much of an oxymoron to call it that.
I don't really know what he is talking about because it does not seem like we're losing the debate right now.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

80,000 Hours Podcast with Rob Wiblin
#154 - Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jun 9, 2023 189:42


Can there be a more exciting and strange place to work today than a leading AI lab? Your CEO has said they're worried your research could cause human extinction. The government is setting up meetings to discuss how this outcome can be avoided. Some of your colleagues think this is all overblown; others are more anxious still.
Today's guest — machine learning researcher Rohin Shah — goes into the Google DeepMind offices each day with that peculiar backdrop to his work.
Links to learn more, summary and full transcript.
He's on the team dedicated to maintaining 'technical AI safety' as these models approach and exceed human capabilities: basically that the models help humanity accomplish its goals without flipping out in some dangerous way. This work has never seemed more important.
In the short term it could be the key bottleneck to deploying ML models in high-stakes real-life situations. In the long term, it could be the difference between humanity thriving and disappearing entirely.
For years Rohin has been on a mission to fairly hear out people across the full spectrum of opinion about risks from artificial intelligence -- from doomers to doubters -- and properly understand their point of view. That makes him unusually well placed to give an overview of what we do and don't understand. He has landed somewhere in the middle — troubled by ways things could go wrong, but not convinced there are very strong reasons to expect a terrible outcome.
Today's conversation is wide-ranging and Rohin lays out many of his personal opinions to host Rob Wiblin, including:
What he sees as the strongest case both for and against slowing down the rate of progress in AI research.
Why he disagrees with most other ML researchers that training a model on a sensible 'reward function' is enough to get a good outcome.
Why he disagrees with many on LessWrong that the bar for whether a safety technique is helpful is “could this contain a superintelligence.”
That he thinks nobody has very compelling arguments that AI created via machine learning will be dangerous by default, or that it will be safe by default. He believes we just don't know.
That he understands that analogies and visualisations are necessary for public communication, but is sceptical that they really help us understand what's going on with ML models, because they're different in important ways from every other case we might compare them to.
Why he's optimistic about DeepMind's work on scalable oversight, mechanistic interpretability, and dangerous capabilities evaluations, and what each of those projects involves.
Why he isn't inherently worried about a future where we're surrounded by beings far more capable than us, so long as they share our goals to a reasonable degree.
Why it's not enough for humanity to know how to align AI models — it's essential that management at AI labs correctly pick which methods they're going to use and have the practical know-how to apply them properly.
Three observations that make him a little more optimistic: humans are a bit muddle-headed and not super goal-orientated; planes don't crash; and universities have specific majors in particular subjects.
Plenty more besides.
Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type ‘80,000 Hours' into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#153 – Elie Hassenfeld on 2 big picture critiques of GiveWell's approach, and 6 lessons from their recent work

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jun 2, 2023 176:10


GiveWell is one of the world's best-known charity evaluators, with the goal of "searching for the charities that save or improve lives the most per dollar." It mostly recommends projects that help the world's poorest people avoid easily prevented diseases, like intestinal worms or vitamin A deficiency.
But should GiveWell, as some critics argue, take a totally different approach to its search, focusing instead on directly increasing subjective wellbeing, or alternatively, raising economic growth?
Today's guest — cofounder and CEO of GiveWell, Elie Hassenfeld — is proud of how much GiveWell has grown in the last five years. Its 'money moved' has quadrupled to around $600 million a year.
Its research team has also more than doubled, enabling them to investigate a far broader range of interventions that could plausibly help people an enormous amount for each dollar spent. That work has led GiveWell to support dozens of new organisations, such as Kangaroo Mother Care, MiracleFeet, and Dispensers for Safe Water.
But some other researchers focused on figuring out the best ways to help the world's poorest people say GiveWell shouldn't just do more of the same thing, but rather ought to look at the problem differently.
Links to learn more, summary and full transcript.
Currently, GiveWell uses a range of metrics to track the impact of the organisations it considers recommending — such as 'lives saved,' 'household incomes doubled,' and for health improvements, the 'quality-adjusted life year.' The Happier Lives Institute (HLI) has argued that instead, GiveWell should try to cash out the impact of all interventions in terms of improvements in subjective wellbeing. This philosophy has led HLI to be more sceptical of interventions that have been demonstrated to improve health, but whose impact on wellbeing has not been measured, and to give a high priority to improving lives relative to extending them.
An alternative high-level critique is that really all that matters in the long run is getting the economies of poor countries to grow. On this view, GiveWell should focus on figuring out what causes some countries to experience explosive economic growth while others fail to, or even go backwards. Even modest improvements in the chances of such a 'growth miracle' will likely offer a bigger bang-for-buck than funding the incremental delivery of deworming tablets or vitamin A supplements, or anything else.
Elie sees where both of these critiques are coming from, and notes that they've influenced GiveWell's work in some ways. But as he explains, he thinks they underestimate the practical difficulty of successfully pulling off either approach and finding better opportunities than what GiveWell funds today.
In today's in-depth conversation, Elie and host Rob Wiblin cover the above, as well as:
Why GiveWell flipped from not recommending chlorine dispensers as an intervention for safe drinking water to spending tens of millions of dollars on them
What transferable lessons GiveWell learned from investigating different kinds of interventions
Why the best treatment for premature babies in low-resource settings may involve less rather than more medicine
Severe malnourishment among children and what can be done about it
How to deal with hidden and non-obvious costs of a programme
Some cheap early treatments that can prevent kids from developing lifelong disabilities
The various roles GiveWell is currently hiring for, and what's distinctive about their organisational culture
And much more.
Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type ‘80,000 Hours' into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore
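Because the HLI critique discussed above is ultimately about which column of a cost-effectiveness spreadsheet you optimise, a tiny worked sketch can show how the choice of metric can reorder programmes. The two interventions and every number below are made up purely for illustration; they are not GiveWell or HLI figures.

# Hypothetical programmes with invented effect sizes -- illustration only.
interventions = {
    "health programme":    {"cost": 1_000_000, "qalys": 2_500, "wellbys": 6_000},
    "wellbeing programme": {"cost": 1_000_000, "qalys":   800, "wellbys": 9_000},
}

for metric in ("qalys", "wellbys"):
    ranked = sorted(
        interventions,
        key=lambda name: interventions[name][metric] / interventions[name]["cost"],
        reverse=True,
    )
    print(f"Ranked by {metric} per dollar: {ranked}")
# With these made-up numbers the health metric favours the first programme and the
# subjective-wellbeing metric favours the second -- exactly the kind of re-ranking
# the debate in this episode is about.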

80,000 Hours Podcast with Rob Wiblin
Andrés Jiménez Zorrilla on the Shrimp Welfare Project (80k After Hours)

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Apr 22, 2023 77:27


In this episode from our second show, 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project, which he cofounded in 2021. It's the first project in the world focused on shrimp welfare specifically, and as of recording in June 2022, has six full-time staff. Links to learn more, highlights and full transcript. They cover: • The evidence for shrimp sentience • How farmers and the public feel about shrimp • The scale of the problem • What shrimp farming looks like • The killing process, and other welfare issues • Shrimp Welfare Project's strategy • History of shrimp welfare work • What it's like working in India and Vietnam • How to help Who this episode is for: • People who care about animal welfare • People interested in new and unusual problems • People open to shrimp sentience Who this episode isn't for: • People who think shrimp couldn't possibly be sentient • People who got called ‘shrimp' a lot in high school and get anxious when they hear the word over and over again Get this episode by subscribing to our more experimental podcast on the world's most pressing problems and how to solve them: type ‘80k After Hours' into your podcasting app Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
#148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don't

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Apr 3, 2023 137:27


If you want to work to tackle climate change, you should try to reduce expected carbon emissions by as much as possible, right? Strangely, no. Today's guest, Johannes Ackva — the climate research lead at Founders Pledge, where he advises major philanthropists on their giving — thinks the best strategy is actually pretty different, and one few are adopting. In reality you don't want to reduce emissions for its own sake, but because emissions will translate into temperature increases, which will cause harm to people and the environment. Links to learn more, summary and full transcript. Crucially, the relationship between emissions and harm goes up faster than linearly. As Johannes explains, humanity can handle small deviations from the temperatures we're familiar with, but adjustment gets harder the larger and faster the increase, making the damage done by each additional degree of warming much greater than the damage done by the previous one. In short: we're uncertain what the future holds and really need to avoid the worst-case scenarios. This means that avoiding an additional tonne of carbon being emitted in a hypothetical future in which emissions have been high is much more important than avoiding a tonne of carbon in a low-carbon world. That may be, but concretely, how should that affect our behaviour? Well, the future scenarios in which emissions are highest are all ones in which clean energy tech that can make a big difference — wind, solar, and electric cars — don't succeed nearly as much as we are currently hoping and expecting. For some reason or another, they must have hit a roadblock and we continued to burn a lot of fossil fuels. In such an imaginable future scenario, we can ask what we would wish we had funded now. How could we today buy insurance against the possible disaster that renewables don't work out? Basically, in that case we will wish that we had pursued a portfolio of other energy technologies that could have complemented renewables or succeeded where they failed, such as hot rock geothermal, modular nuclear reactors, or carbon capture and storage. If you're optimistic about renewables, as Johannes is, then that's all the more reason to relax about scenarios where they work as planned, and focus one's efforts on the possibility that they don't. And Johannes notes that the most useful thing someone can do today to reduce global emissions in the future is to cause some clean energy technology to exist where it otherwise wouldn't, or cause it to become cheaper more quickly. If you can do that, then you can indirectly affect the behaviour of people all around the world for decades or centuries to come. In today's extensive interview, host Rob Wiblin and Johannes discuss the above considerations, as well as: • Retooling newly built coal plants in the developing world • Specific clean energy technologies like geothermal and nuclear fusion • Possible biases among environmentalists and climate philanthropists • How climate change compares to other risks to humanity • In what kinds of scenarios future emissions would be highest • In what regions climate philanthropy is most concentrated and whether that makes sense • Attempts to decarbonise aviation, shipping, and industrial processes • The impact of funding advocacy vs science vs deployment • Lessons for climate change focused careers • And plenty more Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type ‘80,000 Hours' into your podcasting app. 
Or read the transcript below. Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore
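One way to see the episode's point that a tonne of carbon avoided in a high-emissions future matters more than a tonne avoided in a low-emissions one is to assume a convex damage function and compare marginal damages. The quadratic form and all numbers below are illustrative assumptions, not figures from Founders Pledge.

# Illustrative only: assume damage grows with the square of warming.
def damage(warming_c):
    return warming_c ** 2          # arbitrary units; the convexity is what matters

DEGREES_PER_TONNE = 5e-13          # assumed warming caused by one extra tonne of CO2

for baseline in (1.5, 4.0):        # a low-emissions vs a high-emissions future, in degrees C
    marginal = damage(baseline + DEGREES_PER_TONNE) - damage(baseline)
    print(f"Baseline {baseline} C: marginal damage of one extra tonne = {marginal:.3e}")
# Because damage is convex, the same tonne does roughly 4/1.5 ~ 2.7x more damage in the
# hotter scenario -- which is why insuring against worlds where clean energy
# underperforms gets extra weight.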

The Nonlinear Library
EA - Against EA-Community-Received-Wisdom on Practical Sociological Questions by Michael Cohen

The Nonlinear Library

Play Episode Listen Later Mar 9, 2023 25:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against EA-Community-Received-Wisdom on Practical Sociological Questions, published by Michael Cohen on March 9, 2023 on The Effective Altruism Forum.

In my view, there is a rot in the EA community that is so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA "leaders", the EA Forum, or LessWrong. But I think that it can be fixed, and the EA movement would become very good. In my view, this rot comes from incorrect answers to certain practical sociological questions, like:
• How important for success is having experience or having been apprenticed to someone experienced?
• Is the EA Forum a good tool for collaborative truth-seeking?
• How helpful is peer review for collaborative truth-seeking?
• Meta-1. Is "Defer to a consensus among EA community members" a good strategy for answering practical sociological questions?
• Meta-2. How accurate are conventional answers to practical sociological questions that many people want to get right?

I'll spend a few sentences attempting to persuade EA readers that my position is not easily explained away by certain things they might call mistakes. Most of my recent friends are in the EA community. (I don't think EAs are cringe). I assign >10% probability to AI killing everyone, so I'm doing technical AI Safety research as a PhD student at FHI. (I don't think longtermism or sci-fi has corrupted the EA community). I've read the sequences, and I thought they were mostly good. (I'm not "inferentially distant"). I think quite highly of the philosophical and economic reasoning of Toby Ord, Will MacAskill, Nick Bostrom, Rob Wiblin, Holden Karnofsky, and Eliezer Yudkowsky. (I'm "value-aligned", although I object to this term).

Let me begin with an observation about Amazon's organizational structure. From what I've heard, Team A at Amazon does not have to use the tool that Team B made for them. Team A is encouraged to look for alternatives elsewhere. And Team B is encouraged to make the tool into something that they can sell to other organizations. This is apparently how Amazon Web Services became a product. The lesson I want to draw from this is that wherever possible, Amazon outsources quality control to the market (external people) rather than having internal "value-aligned" people attempt to assess quality and issue a pass/fail verdict. This is an instance of the principle: "if there is a large group of people trying to answer a question correctly (like 'Is Amazon's tool X the best option available?'), and they are trying (almost) as hard as you to answer it correctly, defer to their answer." That is my claim; now let me defend it, not just by pointing at Amazon and claiming that they agree with me.

High-Level Claims

Claim 1: If there is a large group of people trying to answer a question correctly, and they are trying (almost) as hard as you to answer it correctly, any consensus of theirs is more likely to be correct than you. There is extensive evidence (Surowiecki, 2004) that aggregating the estimates of many people produces a more accurate estimate as the number of people grows. It may matter in many cases that people are actually trying rather than just professing to try. If you have extensive and unique technical expertise, you might be able to say no one is trying as hard as you, because properly trying to answer the question correctly involves seeking to understand the implications of certain technical arguments, which only you have bothered to do. There is potentially plenty of gray area here, but hopefully, all of my applications of Claim 1 steer well clear of it.

Let's now turn to Meta-2 from above. Claim 2: For practical sociological questions that many people want to get right, if there is a conventional answer, you should go with the conventional answer....
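The wisdom-of-crowds evidence behind Claim 1 is easy to see in a toy simulation. The sketch below is not from the post: the true value, the noise model, and the group sizes are assumptions chosen purely for illustration. It shows how the average of many independent, unbiased estimates tends to land closer to the truth as the group grows.

```python
import random
import statistics

# Toy wisdom-of-crowds simulation (illustrative only: the true value, the
# noise model, and the group sizes are assumptions, not taken from the post).
random.seed(0)

TRUE_VALUE = 100.0   # the quantity everyone is trying to estimate
NOISE_SD = 20.0      # each estimate is unbiased but noisy

def crowd_error(n: int) -> float:
    """Absolute error of the average of n independent estimates."""
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n)]
    return abs(statistics.mean(estimates) - TRUE_VALUE)

# Average the error over many trials for each crowd size.
for n in (1, 10, 100, 1000):
    trials = [crowd_error(n) for _ in range(2000)]
    print(f"crowd size {n:4d} -> mean absolute error {statistics.mean(trials):6.2f}")
```

Under these assumptions the error shrinks roughly in proportion to 1/√n, which is the statistical core of the Surowiecki-style evidence the post cites; the post's harder question is when other people really are "trying (almost) as hard as you".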

Clearer Thinking with Spencer Greenberg
The FTX catastrophe (with Byrne Hobart, Vipul Naik, Maomao Hu, Marcus Abramovich, and Ozzie Gooen)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Nov 28, 2022 204:44


What the heck happened with FTX and Sam Bankman-Fried? Were there early warning signs that most people failed to notice? What could've been done differently, and by whom? What effects will this have on the EA movement going forward?

Timestamps:
00:01:37 — Intro & timeline
00:51:48 — Byrne Hobart
01:39:52 — Vipul Naik
02:18:35 — Maomao Hu
02:41:19 — Marcus Abramovitch
02:49:38 — Ozzie Gooen
03:21:40 — Wrap-up & outro

Byrne Hobart writes The Diff, a newsletter covering inflections in finance and tech, which has 47,000+ readers. Previously he worked at a hedge fund covering Internet and media companies. Follow Byrne on Twitter at @ByrneHobart or subscribe to The Diff at thediff.co.

Vipul Naik holds a PhD in mathematics from the University of Chicago and is currently the head of data science at Equator Therapeutics, a drug discovery startup. He previously worked at a tech startup called LiftIgniter and then at The Arena Group, a media / tech company that acquired LiftIgniter. Learn more about him at his website, vipulnaik.com.

Maomao Hu is a blockchain, fintech, and AI entrepreneur and thought leader. He has been involved in organizations ranging from leading investment banks to new startups, working to solve both microstructure problems like market surveillance and macrostructure problems like capital allocation. Currently, he leads development and quantitative research at the asset manager Zerocap. Learn more about him at his website, thefirenexttime.com.

Marcus Abramovitch is a managing partner at Enlightenment Ventures, an EA-aligned cryptocurrency hedge fund. Marcus also leads a Facebook group and Discord community of effective altruists focused on accumulating capital to donate to EA causes, and advises several cryptocurrency projects. Marcus discovered effective altruism as a PhD candidate at the University of Waterloo and professional poker player. Email him at marcus.s.abramovitch@gmail.com.

Ozzie Gooen is the president of The Quantified Uncertainty Research Institute. He has a background in programming and research. He previously founded Guesstimate and worked at the Future of Humanity Institute at Oxford. Follow him on Twitter at @ozziegooen or learn more about his current work at quantifieduncertainty.org.

Further Reading: "Clarifications on diminishing returns and risk aversion in giving" by Rob Wiblin @ the EA Forum, on why he disagrees with SBF's risk-taking approach [link]

References: 0xhonky. (November 13, 2022, 03:12 AM UTC). https://twitter.com/0xhonky/status/1591630071915483136. Twitter. [link] alamedatrabucco. (April 22, 2021, 10:37 AM UTC). https://twitter.com/alamedatrabucco/status/1385180941186789384. Twitter. [link] Allison, I.. (November 2, 2022). Divisions in Sam Bankman-Fried's Crypto Empire Blur on His Trading Titan Alameda's Balance Sheet. Coindesk. [link] Austin. (November 14, 2022). In Defense of SBF. Effective Altruism Forum. [link] autismcapital. (November 12, 2022, 07:33 AM UTC). https://twitter.com/autismcapital/status/1591333446995283969. Twitter. [link] Berwick, A.. (November 13, 2022). Exclusive: At least $1 billion of client funds missing at failed crypto firm FTX. Reuters. [link] carolinecapital. (April 5, 2021, 11:41 AM UTC). https://twitter.com/carolinecapital/status/1379036346300305408. Twitter. [link] corybates1895. (November 10, 2022, 10:37 PM UTC). https://twitter.com/corybates1895/status/1590836167867760641. Twitter. [link] cz_binance. (November 6, 2022, 03:47 PM UTC). https://twitter.com/cz_binance/status/1589283421704290306. Twitter. [link] cz_binance. 
(November 8, 2022). https://twitter.com/cz_binance/status/1590013613586411520. Twitter. [link] Faux, Z.. (April 3, 2022). A 30-Year-Old Crypto Billionaire Wants to Give His Fortune Away. Bloomberg. [link] Ellison, C.. (September 21, 2021). https://worldoptimization.tumblr.com/post/642664297644916736/slatestarscratchpad-all-right-more-really-stupid [deleted]. World Optimization. [link] ftxfuturefund. (February 8, 2022, 05:32 PM UTC). https://twitter.com/ftxfuturefund/status/1498350483206860801. Twitter. [link] Gach, E.. (November 14, 2022). Crypto's Biggest Crash Saw Guy Playing League Of Legends While Luring Investors [Update]. Kotaku. [link] Hussein, F.. (November 16, 2022). House panel to hold hearing on cryptocurrency exchange FTX collapse. PBS News Hour. [link] Jenkinson, G.. (November 17, 2022). SBF received $1B in personal loans from Alameda: FTX bankruptcy filing. Cointelegraph. [link] Kulish, N.. (November 13, 2022). FTX's Collapse Casts a Pall on a Philanthropy Movement. The New York Times. [link] Levine, M.. (November 14, 2022). FTX's Balance Sheet Was Bad. Bloomberg. [link] Ligon,C., Reynolds, S., Kessler, S., De, N., & Decker, R.. (November 11, 2022). 'FTX Has Been Hacked': Crypto Disaster Worsens as Exchange Sees Mysterious Outflows Exceeding $600M. Coindesk. [link] Morrow, A.. (November 18, 2022). ‘Complete failure:' Filing reveals staggering mismanagement inside FTX . CNN. [link] Nick_Beckstead, leopold, ab, & ketanrama. (November 10, 2022). The FTX Future Fund team has resigned. Effective Altruism Forum. [link] Partz, H.. (November 9, 2022). FTX founder Sam Bankman-Fried removes “assets are fine” flood from Twitter. Cointelegraph. [link] Piper, K.. (November 16, 2022). Sam Bankman-Fried tries to explain himself. Vox. [link] Regan, M.P. & Hajric, V.. (November 12, 2022). SBF vs CZ: How 2 crypto billionaires' social media “bloodsport” went from keyboard warrior shenanigans to a $32 billion blowup. Fortune. [link] Rosenberg, E., Khartit, K., & McClay, R.. (August 26, 2022). What Is Yield Farming in Cryptocurrency?. The Balance. [link] sbf_ftx. (December 2, 2020, 09:25 PM UTC). https://twitter.com/sbf_ftx/status/1334247283081138178. Twitter. [link] sbf_ftx. (December 11, 2020, 04:19 AM UTC). https://twitter.com/sbf_ftx/status/1337250686870831107. Twitter. [link] sbf_ftx. (November 8, 2022, 04:03 PM UTC). https://twitter.com/sbf_ftx/status/1590012124864348160. Twitter. [link] sbf_ftx. (November 10, 2022, 02:13 PM UTC). https://twitter.com/sbf_ftx/status/1590709195892195329. Twitter. [link] sbf_ftx. (November 11, 2022, 03:23 PM UTC). https://twitter.com/sbf_ftx/status/1591089317300293636. Twitter. [link] Sigalos, M. & Rooney, K.. (November 9, 2022). Binance backs out of FTX rescue, leaving the crypto exchange on the brink of collapse. CNBC. [link] tara_macaulay. (November 16, 2022, 08:57 PM UTC). https://twitter.com/tara_macaulay/status/1592985303262072834. Twitter. [link] taylorpearsonme. (November 10, 2022, 10:00 PM UTC). https://twitter.com/taylorpearsonme/status/1590826638429650944. Twitter. [link] Tsipursky, G.. (November 16, 2022). SBF's dangerous decision-making philosophy that brought down FTX. Fortune. [link] whalechart. (November 15, 2022, 06:46 AM UTC). https://twitter.com/whalechart/status/1592408565402464259. Twitter. [link] Wiblin, R. & Harris, K.. (April 14, 2022). Sam Bankman-Fried on taking a high-risk approach to crypto and doing good. 80,000 Hours. [link] Yaffe-Bellany, D.. (November 14, 2022). How Sam Bankman-Fried's Crypto Empire Collapsed. 
The New York Times. [link] yashkaf. (November 12, 2022, 07:18 PM UTC). https://twitter.com/yashkaf/status/1591606925149540353. Twitter. [link] FTX (company). Wikipedia. [link] Sam Bankman-Fried. Wikipedia. [link]

80,000 Hours Podcast with Rob Wiblin
Rob's thoughts on the FTX bankruptcy

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Nov 23, 2022 5:35


In this episode, the usual host of the show, Rob Wiblin, gives his thoughts on the recent collapse of FTX. Click here for an official 80,000 Hours statement. And here are links to some potentially relevant 80,000 Hours pieces:
• Episode #24 of this show – Stefan Schubert on why it's a bad idea to break the rules, even if it's for a good cause
• Is it ever OK to take a harmful job in order to do more good? An in-depth analysis
• What are the 10 most harmful jobs?
• Ways people trying to do good accidentally make things worse, and how to avoid them

80,000 Hours Podcast with Rob Wiblin
#140 – Bear Braumoeller on the case that war isn't in decline

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Nov 8, 2022 167:05


Is war in long-term decline? Steven Pinker's The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out. But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe.

Today's guest, professor of political science Bear Braumoeller, is one of the scholars who believe we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age.

Links to learn more, summary and full transcript.

The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours. If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we're as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st.

Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster.

He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead, he chops up and inspects this data dozens of different ways, to test whether there are any shifts over time which seem larger than what could be explained by chance variation alone. In a nutshell, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, "only the dead have seen the end of war".

In today's conversation, Bear and Rob discuss all of the above in more detail than even a usual 80,000 Hours podcast episode, as well as:
• Why haven't modern ideas about the immorality of violence led to the decline of war, when it's such a natural thing to expect?
• What would Bear's critics say in response to all this?
• What do the optimists get right?
• How does one do proper statistical tests for events that are clumped together, like war deaths?
• Why are deaths in war so concentrated in a handful of the most extreme events?
• Did the ideas of the Enlightenment promote nonviolence, on balance?
• Were early states more or less violent than groups of hunter-gatherers?
• If Bear is right, what can be done?
• How did the 'Concert of Europe' or 'Bismarckian system' maintain peace in the 19th century?
• Which wars are remarkable but largely unknown?
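One way to make "larger than what could be explained by chance variation alone" concrete is a permutation test on event counts. The sketch below is only a toy: the per-decade onset counts are invented, not the Correlates of War data, and the analyses in Only the Dead are far more careful (for instance about deaths being clumped into a few extreme wars). It just illustrates the general recipe of comparing an observed before/after difference with the differences produced by reshuffling the periods.

```python
import random

# Toy permutation test for a shift in war-onset rates. The counts below are
# hypothetical, NOT the Correlates of War data.
random.seed(1)

# Hypothetical number of interstate-war onsets per decade, 1815-2014.
onsets_per_decade = [3, 2, 4, 1, 3, 2, 5, 2, 3, 4, 6, 3, 2, 4, 1, 2, 3, 1, 2, 1]

cut = 13  # first 13 decades (1815-1944) vs the rest (1945-2014)
before, after = onsets_per_decade[:cut], onsets_per_decade[cut:]
observed_drop = sum(before) / len(before) - sum(after) / len(after)

# Shuffle decades between the two periods and ask how often chance alone
# produces a drop at least as large as the observed one.
n_perm = 20_000
at_least_as_large = 0
for _ in range(n_perm):
    shuffled = onsets_per_decade[:]
    random.shuffle(shuffled)
    b, a = shuffled[:cut], shuffled[cut:]
    if sum(b) / len(b) - sum(a) / len(a) >= observed_drop:
        at_least_as_large += 1

print(f"observed drop in onsets per decade: {observed_drop:.2f}")
print(f"one-sided permutation p-value: {at_least_as_large / n_perm:.3f}")
```

A small p-value would suggest the post-1945 drop is bigger than shuffling tends to produce by chance; Bear's finding, on the real data and with much more careful methods, is that no such clear trend emerges in either direction.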
Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore