Podcasts about Eliezer Yudkowsky

American blogger, writer, and artificial intelligence researcher

  • 150 podcasts
  • 923 episodes
  • 35m average episode duration
  • 5 new episodes weekly
  • Latest episode: Nov 18, 2025

Popularity trend, 2017–2024


Best podcasts about Eliezer Yudkowsky

Latest podcast episodes about Eliezer Yudkowsky

Big Tech
Can AI Lead Us to the Good Life?

Nov 18, 2025 · 51:05


In Rutger Bregman's first book, Utopia for Realists, the historian describes a rosy vision of the future – one with 15-hour work weeks, universal basic income and massive wealth redistribution. It's a vision that, in the age of artificial intelligence, now seems increasingly possible. But utopia is far from guaranteed. Many experts predict that AI will also lead to mass job loss, the development of new bioweapons and, potentially, the extinction of our species. So if you're building a technology that could either save the world or destroy it – is that a moral pursuit? These kinds of thorny questions are at the heart of Bregman's latest book, Moral Ambition. In a sweeping conversation that takes us from the invention of the birth control pill to the British Abolitionist movement, Bregman and I discuss what a good life looks like (spoiler: he thinks the death of work might not be such a bad thing) – and whether AI can help get us there. Mentioned: Moral Ambition, by Rutger Bregman; Utopia for Realists, by Rutger Bregman; If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, by Eliezer Yudkowsky and Nate Soares. Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail. Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Farm Podcast Mach II
AI, Cults & Techno-Feudal Dreams Part I w/David Z. Morris & Recluse

Nov 17, 2025 · 64:48


Sam Bankman-Fried, FTX scandal, TESCREAL, Effective Altruism (EA), Utilitarianism, AGI, AI as a scam, Will MacAskill, Machine Intelligence Research Institute (MIRI), the Center for Applied Rationality (CFAR), Leverage Research, Peter Thiel, Eliezer Yudkowsky, Longtermism, Barbara Fried, Stanford, Lewis Terman, gifted kids, Fred Terman, eugenics, Anthropic, Rationalism, human potential movement, Landmark/est, MK-ULTRA, Zizians, cults. David's book. Music by: Keith Allen Dennis, https://keithallendennis.bandcamp.com/ Hosted on Acast. See acast.com/privacy for more information.

Risky Business with Nate Silver and Maria Konnikova
Society is betting on AI – and the outcomes aren't looking good (with Nate Soares)

Nov 15, 2025 · 51:40 · Transcript available


Humanity’s attempts to achieve artificial superintelligence will be our downfall, according to If Anyone Builds It, Everyone Dies. That’s the new book out by AI experts Nate Soares and Eliezer Yudkowsky. And while their provocation may feel extreme in this moment when AI slop abounds and the media is hyping a bubble on the verge of bursting, Soares is so convinced of his argument that he’s calling for a complete stop to AI development. Today on the show, Nate and Maria ask Soares how he came to this conclusion and what everyone else is missing. For more from Nate and Maria, subscribe to their newsletters: The Leap from Maria Konnikova and Silver Bulletin from Nate Silver. See omnystudio.com/listener for privacy information.

The American Compass Podcast
Is AI Really Going to Kill Us All? with Eliezer Yudkowsky and Nate Soares

Nov 7, 2025 · 49:10


Artificial intelligence has leapt from speculative theory to everyday tool with astonishing speed, promising breakthroughs in science, medicine, and the ways we learn, live, and work. But to some of its earliest researchers, the race toward superintelligence represents not progress but an existential threat, one that could end humanity as we know it. Eliezer Yudkowsky and Nate Soares, authors of If Anyone Builds It, Everyone Dies, join Oren to debate their claim that pursuing AI will end in human extinction. During the conversation, a skeptical Oren pushes them on whether meaningful safeguards are possible, what a realistic boundary between risk and progress might look like, and how society should judge the costs of stopping against the consequences of carrying on.

Razib Khan's Unsupervised Learning
Nate Soares: we are doomed (probably)

Nov 4, 2025 · 67:27


Today Razib talks to Nate Soares, the President of the Machine Intelligence Research Institute (MIRI). He joined MIRI in 2014 and has since authored many of its core technical agendas, including foundational documents like Agent Foundations for Aligning Superintelligence with Human Interests. Prior to his work in AI research, Soares worked as a software engineer at Google. He holds a B.S. in computer science and economics from George Washington University. On this episode, they discuss his new book, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, co-authored with Eliezer Yudkowsky. Soares and Yudkowsky make the stark case that the race to build superintelligent AI is a "suicide race" for humanity. Razib and Soares discuss how AI systems are "grown" rather than deliberately engineered, making them fundamentally opaque and uncontrollable. They explore a concrete extinction scenario and explain why even minimally misaligned goals could lead to human annihilation. Soares urges immediate cooperative action to prevent such a worst-case outcome.

Improve the News
Deadly Afghanistan Quake, Trump Nigeria Warning and MLB Champion Dodgers

Nov 4, 2025 · 34:39


A deadly 6.3 magnitude earthquake strikes Afghanistan, A man is charged with 11 attempted murders in a U.K. train attack, Iran vows to rebuild its nuclear sites, A Mexican mayor is shot and killed during the Day of the Dead festival, The BBC claims China threatened a U.K. university over Uyghur research, President Trump instructs the Pentagon to prepare for “possible action” in Nigeria, Trump's planned nuclear tests will reportedly be 'noncritical explosions', 21 states are among those suing the Trump administration over student loan forgiveness rules, France rejects a wealth tax and approves a holding company levy, Eliezer Yudkowsky critiques OpenAI's stated goals, and the LA Dodgers are MLB champions after an epic World Series. Sources: www.verity.news

LessWrong Curated Podcast
[Linkpost] “I ate bear fat with honey and salt flakes, to prove a point” by aggliu

Nov 4, 2025 · 1:07


This is a link post. Eliezer Yudkowsky did not exactly suggest that you should eat bear fat covered with honey and sprinkled with salt flakes. What he actually said was that an alien, looking from the outside at evolution, would predict that you would want to eat bear fat covered with honey and sprinkled with salt flakes. Still, I decided to buy a jar of bear fat online, and make a treat for the people at Inkhaven. It was surprisingly good. My post discusses how that happened, and a bit about the implications for Eliezer's thesis. Let me know if you want to try some; I can prepare some for you if you happen to be at Lighthaven before we run out of bear fat, and before I leave toward the end of November. --- First published: November 4th, 2025 Source: https://www.lesswrong.com/posts/2pKiXR6X7wdt8eFX5/i-ate-bear-fat-with-honey-and-salt-flakes-to-prove-a-point Linkpost URL:https://signoregalilei.com/2025/11/03/i-ate-bear-fat-to-prove-a-point/ --- Narrated by TYPE III AUDIO.

Deep Questions with Cal Newport
Ep. 377: The Case Against Superintelligence

Nov 3, 2025 · 91:14


Techno-philosopher Eliezer Yudkowsky recently went on Ezra Klein's podcast to argue that if we continue on our path toward superintelligent AI, these machines will destroy humanity. In this episode, Cal responds to Yudkowsky's argument point by point, concluding with a broader claim that this general style of discussion suffers from what he calls “the philosopher's fallacy,” and is distracting us from real problems with AI that are actually afflicting us right now. He then answers listener questions about AI, responds to listener comments from an earlier AI episode, and ends by discussing Alpha schools, which claim to use AI to 2x the speed of education. Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here's the link: bit.ly/3U3sTvo Video from today's episode: youtube.com/calnewportmedia Deep Dive: The Case Against Superintelligence [0:01] How should students think about “AI Literacy”? [1:06:35] Did AI blackmail an engineer to not turn it off? [1:09:06] Can I use AI to mask my laziness? [1:12:31] COMMENTS: Cal reads LM comments [1:16:58] CALL: Clarification on Lincoln Protocol [1:21:36] CAL REACTS: Are AI-Powered Schools the Future? [1:24:46] Links: Buy Cal's latest book, “Slow Productivity” at calnewport.com/slow Get a signed copy of Cal's “Slow Productivity” at peoplesbooktakoma.com/event/cal-newport/ Cal's monthly book directory: bramses.notion.site/059db2641def4a88988b4d2cee4657ba? youtube.com/watch?v=2Nn0-kAE5c0 alpha.school/the-program/ astralcodexten.com/i/166959786/part-three-how-alpha-works-part Thanks to our Sponsors: byloftie.com (Use code “DEEP20”) expressvpn.com/deep shopify.com/deep vanta.com/deepquestions Thanks to Jesse Miller for production, Jay Kerstens for the intro music, and Mark Miles for mastering. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The 92 Report
150. Steve Petersen, ​​From Improv to Philosophy of AI

Oct 27, 2025 · 61:47


Show Notes: Steve recounts his senior year at Harvard, and how he was torn between pursuing acting and philosophy. He graduated with a dual degree in philosophy and math but also found time to act in theater and participated in 20 shows.  A Love of Theater and a Move to London Steve explains why the lack of a theater major at Harvard allowed him to explore acting more than a university with a theater major. He touches on his parents' concerns about his career prospects if he pursued acting, and his decision to apply to both acting and philosophy graduate schools. Steve discusses his rejection from all graduate schools and why he decided to move to London with friends Evan Cohn and Brad Rouse. He talks about his experience in London. Europe on $20 a Day Steve details his backpacking trip through Europe on a $20 a day budget, staying with friends from Harvard and high school. He mentions a job opportunity in Japan through the Japanese Ministry of Education and describes his three-year stint in Japan, working as a native English speaker for the Japanese Ministry of Education, and being immersed in Japanese culture. He shares his experiences of living in the countryside and reflects on the impact of living in a different culture, learning some Japanese, and making Japanese friends. He discusses the personal growth and self-reflection that came from his time in Japan, including his first steps off the "achiever track." On to Philosophy Graduate School  When Steve returned to the U.S. he decided to apply to philosophy graduate schools again, this time with more success. He enrolled at the University of Michigan. However, he was miserable during grad school, which led him to seek therapy. Steve credits therapy with helping him make better choices in life. He discusses the competitive and prestigious nature of the Michigan philosophy department and the challenges of finishing his dissertation. He touches on the narrow and competitive aspects of pursuing a career in philosophy and shares his experience of finishing his dissertation and the support he received from a good co-thesis advisor. Kalamazoo College and Improv Steve describes his postdoc experience at Kalamazoo College, where he continued his improv hobby and formed his own improv group. He mentions a mockumentary-style improv movie called Comic Evangelists that premiered at the AFI Film Festival. Steve moved to Buffalo, Niagara University, and reflects on the challenges of adjusting to a non-research job. He discusses his continued therapy in Buffalo and the struggle with both societal and his own expectations of  professional status, however, with the help of a friend, he came to the realization that he had "made it" in his current circumstances. Steve describes his acting career in Buffalo, including roles in Shakespeare in the Park and collaborating with a classmate, Ian Lithgow. A Speciality in Philosophy of Science Steve shares his personal life, including meeting his wife in 2009 and starting a family. He explains his specialty in philosophy of science, focusing on the math and precise questions in analytic philosophy. He discusses his early interest in AI and computational epistemology, including the ethics of AI and the superintelligence worry. Steve describes his involvement in a group that discusses the moral status of digital minds and AI alignment.  Aligning AI with Human Interests Steve reflects on the challenges of aligning AI with human interests and the potential existential risks of advanced AI. 
He shares his concerns about the future of AI and the potential for AI to have moral status. He touches on the superintelligence concern and the challenges of aligning AI with human goals. Steve mentions the work of Eliezer Yudkowsky and the importance of governance and alignment in AI development. He reflects on the broader implications of AI for humanity and the need for careful consideration of long-term risks. Harvard Reflections Steve mentions Math 45 and how it kicked his butt, and his core classes included jazz, an acting class and clown improv with Jay Nichols.  Timestamps: 01:43: Dilemma Between Acting and Philosophy 03:44: Rejection and Move to London  07:09: Life in Japan and Cultural Insights  12:19: Return to Academia and Grad School Challenges  20:09: Therapy and Personal Growth  22:06: Transition to Buffalo and Philosophy Career  26:54: Philosophy of Science and AI Ethics  33:20: Future Concerns and AI Predictions  55:17: Reflections on Career and Personal Growth  Links: Steve's Website: https://stevepetersen.net/ On AI Superintelligence:  If Anyone Builds it, Everyone Dies Superintelligence The Alignment Problem Some places to donate: The Long-Term Future Fund Open Philanthropy On improv Impro Upright Citizens Brigade Comedy Improvisation Manual   Featured Non-profit: The featured non-profit of this week's episode is brought to you by Rich Buery who reports: “Hi, I'm Rich Buery, class of 1992. The featured nonprofit of this episode of The 92 Report is imentor. imentor is a powerful youth mentoring organization that connects volunteers with high school students and prepares them on the path to and through college. Mentors stay with the students through the last two years of high school and on the beginning of their college journey. I helped found imentor over 25 years ago and served as its founding executive director, and I am proud that over the last two decades, I've remained on the board of directors. It's truly a great organization. They need donors and they need volunteers. You can learn more about their work@www.imentor.org That's www, dot i m, e n, t, O, r.org, and now here is Will Bachman with this week's episode. To learn more about their work, visit: www.imentor.org.   

LessWrong Curated Podcast
“On Fleshling Safety: A Debate by Klurl and Trapaucius.” by Eliezer Yudkowsky

Oct 27, 2025 · 142:21


(23K words; best considered as nonfiction with a fictional-dialogue frame, not a proper short story.) Prologue: Klurl and Trapaucius were members of the machine race. And no ordinary citizens they, but Constructors: licensed, bonded, and insured; proven, experienced, and reputed. Together Klurl and Trapaucius had collaborated on such famed artifices as the Eternal Clock, Silicon Sphere, Wandering Flame, and Diamond Book; and as individuals, both had constructed wonders too numerous to number. At one point in time Trapaucius was meeting with Klurl to drink a cup together. Klurl had set before himself a simple mug of mercury, considered by his kind a standard social lubricant. Trapaucius had brought forth in turn a far more exotic and experimental brew he had been perfecting, a new intoxicant he named gallinstan, alloyed from gallium, indium, and tin. "I have always been curious, friend Klurl," Trapaucius began, "about the ancient mythology which holds [...] ---Outline:(00:20) Prologue:(05:16) On Fleshling Capabilities (the First Debate between Klurl and Trapaucius):(26:05) On Fleshling Motivations (the 2nd (and by Far Longest) Debate between Klurl and Trapaucius):(36:32) On the Epistemology of Simplicitys Razor Applied to Fleshlings (the 2nd Part of their 2nd Debate, that is, its 2.2nd Part):(51:36) On the Epistemology of Reasoning About Alien Optimizers and their Outputs (their 2.3rd Debate):(01:08:46) On Considering the Outcome of a Succession of Filters (their 2.4th Debate):(01:16:50) On the Purported Beneficial Influence of Complications (their 2.5th Debate):(01:25:58) On the Comfortableness of All Reality (their 2.6th Debate):(01:32:53) On the Way of Proceeding with the Discovered Fleshlings (their 3rd Debate):(01:52:22) In which Klurl and Trapaucius Interrogate a Fleshling (that Being the 4th Part of their Sally):(02:16:12) On the Storys End:--- First published: October 26th, 2025 Source: https://www.lesswrong.com/posts/dHLdf8SB8oW5L27gg/on-fleshling-safety-a-debate-by-klurl-and-trapaucius --- Narrated by TYPE III AUDIO.

Modern Wisdom
#1011 - Eliezer Yudkowsky - Why Superhuman AI Would Kill Us All

Oct 25, 2025 · 97:08


Eliezer Yudkowsky is an AI researcher, decision theorist, and founder of the Machine Intelligence Research Institute. Is AI our greatest hope or our final mistake? For all its promise to revolutionize human life, there's a growing fear that artificial intelligence could end it altogether. How grounded are these fears, how close are we to losing control, and is there still time to change course before it's too late? Expect to learn the problem with building superhuman AI, why AI would have goals we haven't programmed into it, if there is such a thing as AI benevolence, what the actual goals of super-intelligent AI are and how far away it is, if LLMs are actually dangerous and their ability to become a super AI, how good we are at predicting the future of AI, if extinction is possible with the development of AI, and much more… Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get 15% off your first order of Intake's magnetic nasal strips at https://intakebreathing.com/modernwisdom Get 10% discount on all Gymshark's products at https://gym.sh/modernwisdom (use code MODERNWISDOM10) Get 4 extra months of Surfshark VPN at https://surfshark.com/modernwisdom Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices

Bloggingheads.tv
Rationalism and AI Doomerism (Robert Wright & Liron Shapira)

Oct 16, 2025 · 60:00


Teaser ... Why Liron became a Yudkowskian ... Eliezer Yudkowsky's vision of AI apocalypse ... Does intelligence want power? ... Decoding Yudkowsky's key Darwinian metaphor ... Is doomerism crowding out other AI worries? ... Liron: The silent majority is very AI anxious ... Heading to Overtime ...

The Ezra Klein Show
How Afraid of the A.I. Apocalypse Should We Be?

Oct 15, 2025 · 67:47


Eliezer Yudkowsky is as afraid as you could possibly be. He makes his case. Yudkowsky is a pioneer of A.I. safety research, who started warning about the existential risks of the technology decades ago, influencing a lot of leading figures in the field. But over the last couple of years, talk of an A.I. apocalypse has become a little passé. Many of the people Yudkowsky influenced have gone on to work for A.I. companies, and those companies are racing ahead to build the superintelligent systems Yudkowsky thought humans should never create. But Yudkowsky is still out there sounding the alarm. He has a new book out, co-written with Nate Soares, “If Anyone Builds It, Everyone Dies,” trying to warn the world before it's too late. So what does Yudkowsky see that most of us don't? What makes him so certain? And why does he think he hasn't been able to persuade more people? Mentioned: Oversight of A.I.: Rules for Artificial Intelligence; If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares; “A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.” by Kashmir Hill. Book Recommendations: A Step Farther Out by Jerry Pournelle; Judgment under Uncertainty by Daniel Kahneman, Paul Slovic, and Amos Tversky; Probability Theory by E. T. Jaynes. Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs. This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our executive producer is Claire Gordon. The show's production team also includes Marie Cascione, Annie Galvin, Kristin Lin, Jack McCordick, Marina King and Jan Kobal. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The director of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Helen Toner and Jeffrey Ladish. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.

The Jim Rutt Show
EP 327 Nate Soares on Why Superhuman AI Would Kill Us All

Oct 15, 2025 · 97:07


Jim talks with Nate Soares about the ideas in his and Eliezer Yudkowsky's book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. They discuss the book's claim that mitigating existential AI risk should be a top global priority, the idea that LLMs are grown, the opacity of deep learning networks, the Golden Gate activation vector, whether our understanding of deep learning networks might improve enough to prevent catastrophe, goodness as a narrow target, the alignment problem, the problem of pointing minds, whether LLMs are just stochastic parrots, why predicting a corpus often requires more mental machinery than creating a corpus, depth & generalization of skills, wanting as an effective strategy, goal orientation, limitations of training goal pursuit, transient limitations of current AI, protein folding and AlphaFold, the riskiness of automating alignment research, the correlation between capability and more coherent drives, why the authors anchored their argument on transformers & LLMs, the inversion of Moravec's paradox, the geopolitical multipolar trap, making world leaders aware of the issues, a treaty to ban the race to superintelligence, the specific terms of the proposed treaty, a comparison with banning uranium enrichment, why Jim tentatively thinks this proposal is a mistake, a priesthood of the power supply, whether attention is a zero-sum game, and much more. Episode Transcript "Psyop or Insanity or ...? Peter Thiel, the Antichrist, and Our Collapsing Epistemic Commons," by Jim Rutt "On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback," by Marcus Williams et al. "Attention Sinks and Compression Valleys in LLMs are Two Sides of the Same Coin," by Enrique Queipo-de-Llano et al. JRS EP 217 - Ben Goertzel on a New Framework for AGI "A Tentative Draft of a Treaty, With Annotations" Nate Soares is the President of the Machine Intelligence Research Institute. He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.

London Futurists
Safe superintelligence via a community of AIs and humans, with Craig Kaplan

Oct 10, 2025 · 41:15


Craig Kaplan has been thinking about superintelligence longer than most. He bought the URL superintelligence.com back in 2006, and many years before that, in the late 1980s, he co-authored a series of papers with one of the founding fathers of AI, Herbert Simon.Craig started his career as a scientist with IBM, and later founded and ran a venture-backed company called PredictWallStreet that brought the wisdom of the crowd to Wall Street, and improved the performance of leading hedge funds. He sold that company in 2020, and now spends his time working out how to make the first superintelligence safe. As he puts it, he wants to reduce P(Doom) and increase P(Zoom).Selected follow-ups:iQ CompanyHerbert A. Simon - WikipediaAmara's Law and Its Place in the Future of Tech - Pohan LinPredict Wall StreetThe Society of Mind - book by Marvin MinskyAI 'godfather' Geoffrey Hinton warns of dangers as he quits Google - BBC NewsStatement on AI Risk - Center for AI SafetyI've Spent My Life Measuring Risk. AI Rings Every One of My Alarm Bells - Paul Tudor JonesSecrets of Software Quality: 40 Innovations from IBM - book by Craig KaplanLondon Futurists Podcast episode featuring David BrinReason in human affairs - book by Herbert SimonUS and China will intervene to halt ‘suicide race' of AGI – Max TegmarkIf Anybody Builds It, Everyone Dies - book by Eliezer Yudkowsky and Nate SoaresAGI-25 - conference in ReykjavikThe First Global Brain Workshop - Brussels 2001Center for Integrated CognitionPaul S. RosenbloomTatiana Shavrina, MetaHenry Minsky launches AI startup inspired by father's MIT researchMusic: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

The Jim Rutt Show
EP 325 Joe Edelman on Full-Stack AI Alignment

Oct 7, 2025 · 72:12


Jim talks with Joe Edelman about the ideas in the Meaning Alignment Institute's recent paper "Full Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value." They discuss pluralism as a core principle in designing social systems, the informational basis for alignment, how preferential models fail to capture what people truly care about, the limitations of markets and voting as preference-based systems, critiques of text-based approaches in LLMs, thick models of value, values as attentional policies, AI assistants as potential vectors for manipulation, the need for reputation systems and factual grounding, the "super negotiator" project for better contract negotiation, multipolar traps, moral graph elicitation, starting with membranes, Moloch-free zones, unintended consequences and lessons from early Internet optimism, concentration of power as a key danger, co-optation risks, and much more. Episode Transcript "A Minimum Viable Metaphysics," by Jim Rutt (Substack) Jim's Substack JRS Currents 080: Joe Edelman and Ellie Hain on Rebuilding Meaning Meaning Alignment Institute If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares "Full Stack Alignment: Co-aligning AI and Institutions with Thick Models of Value," by Joe Edelman et al. "What Are Human Values and How Do We Align AI to Them?" by Oliver Klingefjord, Ryan Lowe, and Joe Edelman Joe Edelman has spent much of his life trying to understand how ML systems and markets could change, retaining their many benefits but avoiding their characteristic problems: of atomization, and of servicing shallow desires over deeper needs. Along the way this led him to formulate theories of human meaning and values (https://arxiv.org/abs/2404.10636) and study models of societal transformation (https://www.full-stack-alignment.ai/paper) as well as inventing the meaning-based metrics used at CouchSurfing, Facebook, and Apple, co-founding the Center for Humane Technology and the Meaning Alignment Institute, and inventing new democratic systems (https://arxiv.org/abs/2404.10636). He's currently one of the PIs leading the Full-Stack Alignment program at the Meaning Alignment Institute, with a network of more than 50 researchers at universities and corporate labs working on these issues.

Big Tech
Geoffrey Hinton vs. The End of the World

Oct 7, 2025 · 69:11


The story of how Geoffrey Hinton became “the godfather of AI” has reached mythic status in the tech world.While he was at the University of Toronto, Hinton pioneered the neural network research that would become the backbone of modern AI. (One of his students, Ilya Sutskever, went on to be one of OpenAI's most influential scientific minds.) In 2013, Hinton left the academy and went to work for Google, eventually winning both a Turing Award and a Nobel Prize.I think it's fair to say that artificial intelligence as we know it, may not exist without Geoffrey Hinton.But Hinton may be even more famous for what he did next. In 2023, he left Google and began a campaign to convince governments, corporations and citizens that his life's work – this thing he helped build – might lead to our collective extinction. And that moment may be closer than we think, because Hinton believes AI may already be conscious.But even though his warnings are getting more dire by the day, the AI industry is only getting bigger, and most governments, including Canada's, seem reluctant to get in the way.So I wanted to ask Hinton: If we keep going down this path, what will become of us?Mentioned:If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, by Eliezer Yudkowsky and Nate SoaresAgentic Misalignment: How LLMs could be insider threats, by AnthropicMachines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Human Action Podcast
The Importance of Time in Explaining Asset Bubbles

Oct 6, 2025


Jonathan Newman returns to join Bob in a critique of Eliezer Yudkowsky's viral theory of investment bubbles. Yudkowsky states that the bad investment during bubbles should be felt before the bubble pops, not after. They argue that his perspective—while clever—fails to consider the Austrian insights on capital structure, time preference, and the business cycle. They use analogies from apple trees to magic mushrooms to show why Austrian economics provides the clearest explanation for booms, busts, and the pain that follows.Eliezer Yudkowsky's Theory on Investment Bubbles: Mises.org/HAP520aBob's Article "Correcting Yudkowsky on the Boom": Mises.org/HAP520bBob's on The Importance of Capital Theory: Mises.org/HAP520cJoe Salerno on Austrian Business Cycle Theory: Mises.org/HAP520dDr. Newman's QJAE Article on Credit Cycles: Mises.org/HAP520eThe Mises Institute is giving away 100,000 copies of Hayek for the 21st Century. Get your free copy at Mises.org/HAPodFree

Mises Media
The Importance of Time in Explaining Asset Bubbles

Oct 6, 2025


Jonathan Newman returns to join Bob in a critique of Eliezer Yudkowsky's viral theory of investment bubbles. Yudkowsky states that the bad investment during bubbles should be felt before the bubble pops, not after. They argue that his perspective—while clever—fails to consider the Austrian insights on capital structure, time preference, and the business cycle. They use analogies from apple trees to magic mushrooms to show why Austrian economics provides the clearest explanation for booms, busts, and the pain that follows.Eliezer Yudkowsky's Theory on Investment Bubbles: Mises.org/HAP520aBob's Article "Correcting Yudkowsky on the Boom": Mises.org/HAP520bBob's on The Importance of Capital Theory: Mises.org/HAP520cJoe Salerno on Austrian Business Cycle Theory: Mises.org/HAP520dDr. Newman's QJAE Article on Credit Cycles: Mises.org/HAP520eThe Mises Institute is giving away 100,000 copies of Hayek for the 21st Century. Get your free copy at Mises.org/HAPodFree

The Bulwark Goes to Hollywood
How Movies Can Better Prep Us for the AI Threat

Sep 26, 2025 · 53:27


On this week's episode, I'm joined by Nate Soares to talk about his new book, cowritten with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. It's a fascinating book—some will say fearmongering and sensationalist; I, frankly, think they're overly optimistic about our ability to constrain the development of general intelligence in AI—in large part because of how it's structured. Each chapter is preceded by a fable of sorts about the nature of intelligence and the desires of intelligent beings that look and think very differently from humans. The point in each of these passages is less that AI will want to eliminate humanity and more that it might do so incidentally, through natural processes of resource acquisition.  This made me think about how AI is typically portrayed in film; it is all too often a Terminator-style scenario, where the intelligence is antagonistic in human ways and for human reasons. We talked some about how storytellers could do a better job of thinking about AI as it might actually exist versus how it might be like us; Ex Machina is a movie that came in for special discussion due to the thoughtful nature of the treatment of its robotic antagonist's desires. If this episode made you think, I hope you share it with a friend!

The Foresight Institute Podcast
Eliezer Yudkowsky vs Mark Miller | ASI Risks: Similar premises, opposite conclusions

Sep 24, 2025 · 252:32


What are the best strategies for addressing extreme risks from artificial superintelligence? In this 4-hour conversation, decision theorist Eliezer Yudkowsky and computer scientist Mark Miller discuss their cruxes for disagreement. They examine the future of AI, existential risk, and whether alignment is even possible. Topics include AI risk scenarios, coalition dynamics, secure systems like seL4, hardware exploits like Rowhammer, molecular engineering with AlphaFold, and historical analogies like nuclear arms control. They explore superintelligence governance, multipolar vs singleton futures, and the philosophical challenges of trust, verification, and control in a post-AGI world.Moderated by Christine Peterson, the discussion seeks the least risky strategy for reaching a preferred state amid superintelligent AI risks. Yudkowsky warns of catastrophic outcomes if AGI is not controlled, while Miller advocates decentralizing power and preserving human institutions as AI evolves.The conversation spans AI collaboration, secure operating frameworks, cryptographic separation, and lessons from nuclear non-proliferation. Despite their differences, both aim for a future where AI benefits humanity without posing existential threats. Hosted on Acast. See acast.com/privacy for more information.

Investir com SIM
Compondo a Tese - 19/09/2025

Sep 22, 2025 · 36:58


Atenção (disclaimer): Os dados aqui apresentados representam minha opinião pessoal.Não são de forma alguma indicações de compra ou venda de ativos no mercado financeiro.PEC da Blindagem: veja como votou cada partidohttps://oglobo.globo.com/politica/noticia/2025/09/16/pec-da-blindagem-veja-como-votou-cada-deputado.ghtmlVeja como cada partido votou para manter a votação secreta na PEC da blindagemhttps://valor.globo.com/politica/noticia/2025/09/17/veja-como-cada-partido-votou-para-manter-a-votacao-secreta-na-pec-da-blindagem.ghtmlThe Rise of the Supreme Court's So-Called Shadow Dockethttps://podcasts.apple.com/br/podcast/the-rise-of-the-supreme-courts-so-called-shadow-docket/id1200361736?i=1000726880643&l=en-GBAre We Past Peak iPhone? + Eliezer Yudkowsky on A.I. Doomhttps://podcasts.apple.com/br/podcast/are-we-past-peak-iphone-eliezer-yudkowsky-on-a-i-doom/id1528594034?i=1000726491309&l=en-GBTrapped in a ChatGPT Spiralhttps://podcasts.apple.com/br/podcast/trapped-in-a-chatgpt-spiral/id1200361736?i=1000727028310&l=en-GBEconomic fallout mounts as Trump halts near-finished wind power projecthttps://podcasts.apple.com/br/podcast/economic-fallout-mounts-as-trump-halts-near-finished/id78304589?i=1000727120403&l=en-GB'Para salvar própria pele, parlamentares não veem divergência', diz Thiago Bronzatto sobre PEC da Blindagemhttps://podcasts.apple.com/br/podcast/para-salvar-pr%C3%B3pria-pele-parlamentares-n%C3%A3o-veem-diverg%C3%AAncia/id203963267?i=1000727116432&l=en-GBPEC da Blindagem: 'É um vexame o que está acontecendo'https://podcasts.apple.com/br/podcast/pec-da-blindagem-%C3%A9-um-vexame-o-que-est%C3%A1-acontecendo/id1552208254?i=1000727230878&l=en-GBBlindagem no Congresso abre caminho para retrocessohttps://podcasts.apple.com/br/podcast/blindagem-no-congresso-abre-caminho-para-retrocesso/id203963267?i=1000727234976&l=en-GBPEC da Blindagem: caminho para a impunidadehttps://podcasts.apple.com/br/podcast/pec-da-blindagem-caminho-para-a-impunidade/id1477406521?i=1000727283243&l=en-GBPEC da Blindagem: ‘uma violação ético-moral'https://podcasts.apple.com/br/podcast/pec-da-blindagem-uma-viola%C3%A7%C3%A3o-%C3%A9tico-moral/id203963267?i=1000727334438&l=en-GBMaria Ressa - Fighting Back Against Trump's Authoritarian Algorithm With Truth | The Daily Showhttps://www.youtube.com/watch?v=Tsb1I7hqaJ4JHSF vende quase R$ 5 bi em estoquehttps://braziljournal.com/jhsf-vende-quase-r-5-bi-em-estoque-mudando-modelo-de-incorporacao/Conselho da Oncoclínicas aprova aumento de capitalhttps://exame.com/invest/mercados/conselho-da-oncoclinicas-aprova-aumento-de-capital-de-ate-r-2-bi-falta-o-aval-dos-acionistas/Hugo Motta 'fez aprovar o maior dos absurdoshttps://podcasts.apple.com/br/podcast/hugo-motta-fez-aprovar-o-maior-dos-absurdos-da-hist%C3%B3ria/id203963267?i=1000727343038&l=en-GBCâmara: projetos em benefício própriohttps://podcasts.apple.com/br/podcast/c%C3%A2mara-projetos-em-benef%C3%ADcio-pr%C3%B3prio/id1477406521?i=1000727443772&l=en-GBUOL Prime #88: histórico de anistiashttps://podcasts.apple.com/br/podcast/uol-prime-88-como-hist%C3%B3rico-de-anistias-deu-espa%C3%A7o/id1574996957?i=1000727305499&l=en-GBNão quero mais falar de anistiahttps://podcasts.apple.com/br/podcast/n%C3%A3o-quero-mais-falar-de-anistia-vou-falar-de/id203963267?i=1000727498034&l=en-GBWhat Happens if Xi Jinping Dies in Office?https://podcasts.apple.com/br/podcast/what-happens-if-xi-jinping-dies-in-office/id1525445350?i=1000492377817&l=en-GBCDC panel overhauled by RFK 
Jrhttps://podcasts.apple.com/br/podcast/cdc-panel-overhauled-by-rfk-jr-changes-childhood-vaccine/id78304589?i=1000727429494&l=en-GBKimmel free speech under Trumphttps://podcasts.apple.com/br/podcast/what-the-move-to-pull-kimmel-off-the-air-says-about/id78304589?i=1000727422538&l=en-GBJimmy Kimmel and Free Speechhttps://podcasts.apple.com/br/podcast/jimmy-kimmel-and-free-speech-in-the-united-states/id1200361736?i=1000727485153&l=en-GB

The Brian Lehrer Show
Warnings From an AI Doomsayer

The Brian Lehrer Show

Play Episode Listen Later Sep 19, 2025 25:43


Nate Soares, president of the Machine Intelligence Research Institute and the co-author (with Eliezer Yudkowsky) of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (Little, Brown and Company, 2025), talks about why he worries that AI "superintelligence" will lead to catastrophic outcomes, and what safeguards he recommends to prevent this.

Making Sense with Sam Harris
#434 — Can We Survive AI?

Making Sense with Sam Harris

Play Episode Listen Later Sep 16, 2025 36:26


Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. They discuss the alignment problem, ChatGPT and recent advances in AI, the Turing Test, the possibility of AI developing survival instincts, hallucinations and deception in LLMs, why many prominent voices in tech remain skeptical of the dangers of superintelligent AI, the timeline for superintelligence, real-world consequences of current AI systems, the imaginary line between the internet and reality, why Eliezer and Nate believe superintelligent AI would necessarily end humanity, how we might avoid an AI-driven catastrophe, the Fermi paradox, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

LessWrong Curated Podcast
“‘If Anyone Builds It, Everyone Dies' release day!” by alexvermeer

LessWrong Curated Podcast

Play Episode Listen Later Sep 16, 2025 8:03


Back in May, we announced that Eliezer Yudkowsky and Nate Soares's new book If Anyone Builds It, Everyone Dies was coming out in September. At long last, the book is here![1] US and UK books, respectively. IfAnyoneBuildsIt.com Read on for info about reading groups, ways to help, and updates on coverage the book has received so far. Discussion Questions & Reading Group Support We want people to read and engage with the contents of the book. To that end, we've published a list of discussion questions. Find it here: Discussion Questions for Reading Groups We're also interested in offering support to reading groups, including potentially providing copies of the book and helping coordinate facilitation. If interested, fill out this AirTable form. How to Help Now that the book is out in the world, there are lots of ways you can help it succeed. For starters, read the book! [...] ---Outline:(00:49) Discussion Questions & Reading Group Support(01:18) How to Help(02:39) Blurbs(05:15) Media(06:26) In ClosingThe original text contained 2 footnotes which were omitted from this narration. --- First published: September 16th, 2025 Source: https://www.lesswrong.com/posts/fnJwaz7LxZ2LJvApm/if-anyone-builds-it-everyone-dies-release-day --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Sway
Are We Past Peak iPhone? + Eliezer Yudkowsky on A.I. Doom

Sway

Play Episode Listen Later Sep 12, 2025 72:24


Apple's yearly iPhone event took place this week, and it left us asking, Is Apple losing the juice? We break down all the new products the company announced and discuss where it goes from here. Then, Eliezer Yudkowsky, one of the most fascinating people in A.I., has a new book coming out: “If Anyone Builds It, Everyone Dies.” He joins us to make the case for why A.I. development should be shut down now, long before we reach superintelligence, and how he thinks that could happen. Guests: Eliezer Yudkowsky, founder of Machine Intelligence Research Institute and a co-author of “If Anyone Builds It, Everyone Dies.” Additional Reading: A.I.'s Prophet of Doom Wants to Shut It All Down; AI as Normal Technology, revisited; Apple's misunderstood crossbody iPhone strap might be the best I've seen. We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Slate Star Codex Podcast
Book Review: If Anyone Builds It, Everyone Dies

Slate Star Codex Podcast

Play Episode Listen Later Sep 12, 2025 42:20


I. Eliezer Yudkowsky's Machine Intelligence Research Institute is the original AI safety org. But the original isn't always the best - how is Mesopotamia doing these days? As money, brainpower, and prestige pour into the field, MIRI remains what it always was - a group of loosely-organized weird people, one of whom cannot be convinced to stop wearing a sparkly top hat in public. So when I was doing AI grantmaking last year, I asked them - why should I fund you instead of the guys with the army of bright-eyed Harvard grads, or the guys who just got Geoffrey Hinton as their celebrity spokesperson? What do you have that they don't? MIRI answered: moral clarity. Most people in AI safety (including me) are uncertain and confused and looking for least-bad incremental solutions. We think AI will probably be an exciting and transformative technology, but there's some chance, 5 or 15 or 30 percent, that it might turn against humanity in a catastrophic way. Or, if it doesn't, that there will be something less catastrophic but still bad - maybe humanity gradually fading into the background, the same way kings and nobles faded into the background during the modern era. This is scary, but AI is coming whether we like it or not, and probably there are also potential risks from delaying too hard. We're not sure exactly what to do, but for now we want to build a firm foundation for reacting to any future threat. That means keeping AI companies honest and transparent, helping responsible companies like Anthropic stay in the race, and investing in understanding AI goal structures and the ways that AIs interpret our commands. Then at some point in the future, we'll be close enough to the actually-scary AI that we can understand the threat model more clearly, get more popular buy-in, and decide what to do next. MIRI thinks this is pathetic - like trying to protect against an asteroid impact by wearing a hard hat. They're kind of cagey about their own probability of AI wiping out humanity, but it seems to be somewhere around 95 - 99%. They think plausibly-achievable gains in company responsibility, regulation quality, and AI scholarship are orders of magnitude too weak to seriously address the problem, and they don't expect enough of a “warning shot” that they feel comfortable kicking the can down the road until everything becomes clear and action is easy. They suggest banning all AI capabilities research immediately, to be restarted only in some distant future when the situation looks more promising. Both sides honestly believe their position and don't want to modulate their message for PR reasons. But both sides, coincidentally, think that their message is better PR. The incrementalists think a moderate, cautious approach keeps bridges open with academia, industry, government, and other actors that prefer normal clean-shaven interlocutors who don't emit spittle whenever they talk. MIRI thinks that the public is sick of focus-group-tested mealy-mouthed bullshit, but might be ready to rise up against AI if someone presented the case in a clear and unambivalent way. Now Yudkowsky and his co-author, MIRI president Nate Soares, have reached new heights of unambivalence with their new book, If Anyone Builds It, Everyone Dies (release date September 16, currently available for preorder). https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone

The Valmy
Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI?

The Valmy

Play Episode Listen Later Aug 13, 2025 146:10


Podcast: Doom Debates Episode: Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI?Release date: 2025-08-12Get Podcast Transcript →powered by Listen411 - fast audio-to-text and summarizationVitalik Buterin is the founder of Ethereum, the world's second-largest cryptocurrency by market cap, currently valued at around $500 billion. But beyond revolutionizing blockchain technology, Vitalik has become one of the most thoughtful voices on AI safety and existential risk.He's donated over $665 million to pandemic prevention and other causes, and has a 12% P(Doom) – putting him squarely in what I consider the "sane zone" for AI risk assessment. What makes Vitalik particularly interesting is that he's both a hardcore techno-optimist who built one of the most successful decentralized systems ever created, and someone willing to seriously consider AI regulation and coordination mechanisms.Vitalik coined the term "d/acc" – defensive, decentralized, democratic, differential acceleration – as a middle path between uncritical AI acceleration and total pause scenarios. He argues we need to make the world more like Switzerland (defensible, decentralized) and less like the Eurasian steppes (vulnerable to conquest).We dive deep into the tractability of AI alignment, whether current approaches like DAC can actually work when superintelligence arrives, and why he thinks a pluralistic world of competing AIs might be safer than a single aligned superintelligence. We also explore his vision for human-AI merger through brain-computer interfaces and uploading.The crux of our disagreement is that I think we're heading for a "plants vs. animals" scenario where AI will simply operate on timescales we can't match, while Vitalik believes we can maintain agency through the right combination of defensive technologies and institutional design.Finally, we tackle the discourse itself – I ask Vitalik to debunk the common ad hominem attacks against AI doomers, from "it's just a fringe position" to "no real builders believe in doom." 
His responses carry weight given his credibility as both a successful entrepreneur and someone who's maintained intellectual honesty throughout his career.Timestamps* 00:00:00 - Cold Open* 00:00:37 - Introducing Vitalik Buterin* 00:02:14 - Vitalik's altruism* 00:04:36 - Rationalist community influence* 00:06:30 - Opinion of Eliezer Yudkowsky and MIRI* 00:09:00 - What's Your P(Doom)™* 00:24:42 - AI timelines* 00:31:33 - AI consciousness* 00:35:01 - Headroom above human intelligence* 00:48:56 - Techno optimism discussion* 00:58:38 - e/acc: Vibes-based ideology without deep arguments* 01:02:49 - d/acc: Defensive, decentralized, democratic acceleration* 01:11:37 - How plausible is d/acc?* 01:20:53 - Why libertarian acceleration can paradoxically break decentralization* 01:25:49 - Can we merge with AIs?* 01:35:10 - Military AI concerns: How war accelerates dangerous development* 01:42:26 - The intractability question* 01:51:10 - Anthropic and tractability-washing the AI alignment problem* 02:00:05 - The state of AI x-risk discourse* 02:05:14 - Debunking ad hominem attacks against doomers* 02:23:41 - Liron's outroLinksVitalik's website: https://vitalik.eth.limoVitalik's Twitter: https://x.com/vitalikbuterinEliezer Yudkowsky's explanation of p-Zombies: https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies—Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates Get full access to Doom Debates at lironshapira.substack.com/subscribe

LessWrong Curated Podcast
“Re: Recent Anthropic Safety Research” by Eliezer Yudkowsky

LessWrong Curated Podcast

Play Episode Listen Later Aug 12, 2025 9:00


A reporter asked me for my off-the-record take on recent safety research from Anthropic. After I drafted an off-the-record reply, I realized that I was actually fine with it being on the record, so: Since I never expected any of the current alignment technology to work in the limit of superintelligence, the only news to me is about when and how early dangers begin to materialize. Even taking Anthropic's results completely at face value would change not at all my own sense of how dangerous machine superintelligence would be, because what Anthropic says they found was already very solidly predicted to appear at one future point or another. I suppose people who were previously performing great skepticism about how none of this had ever been seen in ~Real Life~, ought in principle to now obligingly update, though of course most people in the AI industry won't. Maybe political leaders [...] --- First published: August 6th, 2025 Source: https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research --- Narrated by TYPE III AUDIO.

LessWrong Curated Podcast
“The Problem” by Rob Bensinger, tanagrabeast, yams, So8res, Eliezer Yudkowsky, Gretta Duleba

LessWrong Curated Podcast

Play Episode Listen Later Aug 6, 2025 49:32


This is a new introduction to AI as an extinction threat, previously posted to the MIRI website in February alongside a summary. It was written independently of Eliezer and Nate's forthcoming book, If Anyone Builds It, Everyone Dies, and isn't a sneak peek of the book. Since the book is long and costs money, we expect this to be a valuable resource in its own right even after the book comes out next month.[1] The stated goal of the world's leading AI companies is to build AI that is general enough to do anything a human can do, from solving hard problems in theoretical physics to deftly navigating social environments. Recent machine learning progress seems to have brought this goal within reach. At this point, we would be uncomfortable ruling out the possibility that AI more capable than any human is achieved in the next year or two, and [...] ---Outline:(02:27) 1. There isn't a ceiling at human-level capabilities.(08:56) 2. ASI is very likely to exhibit goal-oriented behavior.(15:12) 3. ASI is very likely to pursue the wrong goals.(32:40) 4. It would be lethally dangerous to build ASIs that have the wrong goals.(46:03) 5. Catastrophe can be averted via a sufficiently aggressive policy response.The original text contained 1 footnote which was omitted from this narration. --- First published: August 5th, 2025 Source: https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-problem --- Narrated by TYPE III AUDIO.

LessWrong Curated Podcast
“Whence the Inkhaven Residency?” by Ben Pace

LessWrong Curated Podcast

Play Episode Listen Later Aug 4, 2025 4:44


Essays like Paul Graham's, Scott Alexander's, and Eliezer Yudkowsky's have influenced a generation of people in how they think about startups, ethics, science, and the world as a whole. Creating essays that good takes a lot of skill, practice, and talent, but it looks to me that a lot of people with talent aren't putting in the work and developing the skill, except in ways that are optimized to also be social media strategies. To fix this problem, I am running the Inkhaven Residency. The idea is to gather a bunch of promising writers to invest in the art and craft of blogging, through a shared commitment to each publish a blogpost every day for the month of November. Why a daily writing structure? Well, it's a reaction to other fellowships I've seen. I've seen month-long or years-long events with exceedingly little public output, where the people would've contributed [...] --- First published: August 2nd, 2025 Source: https://www.lesswrong.com/posts/CA6XfmzYoGFWNhH8e/whence-the-inkhaven-residency --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

LessWrong Curated Podcast
“HPMOR: The (Probably) Untold Lore” by Gretta Duleba, Eliezer Yudkowsky

LessWrong Curated Podcast

Play Episode Listen Later Jul 26, 2025 67:32


Eliezer and I love to talk about writing. We talk about our own current writing projects, how we'd improve the books we're reading, and what we want to write next. Sometimes along the way I learn some amazing fact about HPMOR or Project Lawful or one of Eliezer's other works. “Wow, you're kidding,” I say, “do your fans know this? I think people would really be interested.” “I can't remember,” he usually says. “I don't think I've ever explained that bit before, I'm not sure.” I decided to interview him more formally, collect as many of those tidbits about HPMOR as I could, and share them with you. I hope you enjoy them. It's probably obvious, but there will be many, many spoilers for HPMOR in this article, and also very little of it will make sense if you haven't read the book. So go read Harry Potter and [...] ---Outline:(01:49) Characters(01:52) Masks(09:09) Imperfect Characters(20:07) Make All the Characters Awesome(22:24) Hermione as Mary Sue(26:35) Who's the Main Character?(31:11) Plot(31:14) Characters interfering with plot(35:59) Setting up Plot Twists(38:55) Time-Turner Plots(40:51) Slashfic?(45:42) Why doesn't Harry like-like Hermione?(49:36) Setting(49:39) The Truth of Magic in HPMOR(52:54) Magical Genetics(57:30) An Aside: What did Harry Figure Out?(01:00:33) Nested Nerfing Hypothesis(01:04:55) EpiloguesThe original text contained 26 footnotes which were omitted from this narration. --- First published: July 25th, 2025 Source: https://www.lesswrong.com/posts/FY697dJJv9Fq3PaTd/hpmor-the-probably-untold-lore --- Narrated by TYPE III AUDIO.

LessWrong Curated Podcast
“On ‘ChatGPT Psychosis' and LLM Sycophancy” by jdp

LessWrong Curated Podcast

Play Episode Listen Later Jul 25, 2025 30:05


As a person who frequently posts about large language model psychology I get an elevated rate of cranks and schizophrenics in my inbox. Often these are well-meaning people who have been spooked by their conversations with ChatGPT (it's always ChatGPT specifically) and want some kind of reassurance or guidance or support from me. I'm also in the same part of the social graph as the "LLM whisperers" (eugh) that Eliezer Yudkowsky described as "insane", and who in many cases are in fact insane. This means I've learned what "psychosis but with LLMs" looks like and kind of learned to tune it out. This new case with Geoff Lewis interests me though. Mostly because of the sheer disparity between what he's being entranced by and my automatic immune reaction to it. I haven't even read all the screenshots he posted because I take one glance and know that this [...] ---Outline:(05:03) Timeline Of Events Related To ChatGPT Psychosis(16:16) What Causes ChatGPT Psychosis?(16:27) Ontological Vertigo(21:02) Users Are Confused About What Is And Isn't An Official Feature(24:30) The Models Really Are Way Too Sycophantic(27:03) The Memory Feature(28:54) Loneliness And Isolation--- First published: July 23rd, 2025 Source: https://www.lesswrong.com/posts/f86hgR5ShiEj4beyZ/on-chatgpt-psychosis-and-llm-sycophancy --- Narrated by TYPE III AUDIO.

London Futurists
The AI disconnect: understanding vs motivation, with Nate Soares

London Futurists

Play Episode Listen Later Jun 11, 2025 50:18


Our guest in this episode is Nate Soares, President of the Machine Intelligence Research Institute, or MIRI. MIRI was founded in 2000 as the Singularity Institute for Artificial Intelligence by Eliezer Yudkowsky, with support from a couple of internet entrepreneurs. Among other things, it ran a series of conferences called the Singularity Summit. In 2012, Peter Diamandis and Ray Kurzweil acquired the Singularity Summit, including the Singularity brand, and the Institute was renamed as MIRI. Nate joined MIRI in 2014 after working as a software engineer at Google, and since then he's been a key figure in the AI safety community. In a blogpost at the time he joined MIRI he observed “I turn my skills towards saving the universe, because apparently nobody ever got around to teaching me modesty.” MIRI has long had a fairly pessimistic stance on whether AI alignment is possible. In this episode, we'll explore what drives that view—and whether there is any room for hope.
Selected follow-ups:
Nate Soares - MIRI
Yudkowsky and Soares Announce Major New Book: “If Anyone Builds It, Everyone Dies” - MIRI
The Bayesian model of probabilistic reasoning
During safety testing, o1 broke out of its VM - Reddit
Leo Szilard - Physics World
David Bowie - Five Years - Old Grey Whistle Test
Amara's Law - IEEE
Robert Oppenheimer calculation of p(doom)
JD Vance commenting on AI-2027
SolidGoldMagikarp - LessWrong
ASML
Chicago Pile-1 - Wikipedia
Castle Bravo - Wikipedia
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Robinson's Podcast
251 - Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity

Robinson's Podcast

Play Episode Listen Later May 25, 2025 171:13


Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem—how and whether we can ensure that AI is aligned with human values to avoid catastrophe and harness its power. In this episode, Robinson and Eliezer run the gamut on questions related to AI and the danger it poses to human civilization as we know it. More particularly, they discuss the alignment problem, gradient descent, consciousness, the singularity, cyborgs, ChatGPT, OpenAI, Anthropic, Claude, how long we have until doomsday, whether it can be averted, and the various reasons why and ways in which AI might wipe out human life on earth.
The Machine Intelligence Research Institute: https://intelligence.org/about/
Eliezer's X Account: https://x.com/ESYudkowsky?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
OUTLINE
00:00:00 Introduction
00:00:43 The Default Condition for AI's Takeover
00:06:36 Could a Future AI Country Be Our Trade Partner?
00:11:18 What Is Artificial Intelligence?
00:21:23 Why AIs Having Goals Could Mean the End of Humanity
00:29:34 What Is the Alignment Problem?
00:34:11 How To Avoid AI Apocalypse
00:40:25 Would Cyborgs Eliminate Humanity?
00:47:55 AI and the Problem of Gradient Descent
00:55:24 How Do We Solve the Alignment Problem?
01:00:50 How Anthropic's AI Freed Itself from Human Control
01:08:56 The Pseudo-Alignment Problem
01:19:28 Why Are People Wrong About AI Not Taking Over the World?
01:23:23 How Certain Is It that AI Will Wipe Out Humanity?
01:38:35 Is Eliezer Yudkowsky Wrong About The AI Apocalypse?
01:42:04 Do AI Corporations Control the Fate of Humanity?
01:43:49 How To Convince the President Not to Let AI Kill Us All
01:52:01 How Will ChatGPT's Descendants Wipe Out Humanity?
02:24:11 Could AI Destroy us with New Science?
02:39:37 Could AI Destroy us with Advanced Biology?
02:47:29 How Will AI Actually Destroy Humanity?
Robinson's Website: http://robinsonerhardt.com
Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University.

London Futurists
Humanity's final four years? with James Norris

London Futurists

Play Episode Listen Later Apr 30, 2025 49:36


In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks. Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries. Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.
Selected follow-ups:
James Norris website
Upgrade your life & legacy - Upgradable
The 7 Habits of Highly Effective People (Stephen Covey)
Beneficial AI 2017 - Asilomar conference
"...superintelligence in a few thousand days" - Sam Altman blogpost
Amara's Law - DevIQ
The Probability of Nuclear War (JFK estimate)
AI Designs Chemical Weapons - The Batch
The Vulnerable World Hypothesis - Nick Bostrom
We Need To Build Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
Instrumental convergence - Wikipedia
Neanderthal extinction - Wikipedia
Matrioshka brain - Wikipedia
Will there be a 'WW3' before 2050? - Manifold prediction market
Existential Safety Action Pledge
An Urgent Call for Global AI Governance - IAIGA petition
Build your survival sanctuary
Other people mentioned include: Eliezer Yudkowsky, Roman Yampolskiy, Yann LeCun, Andrew Ng
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Cults, Cryptids, and Conspiracies
Episode 398: Irrationally Justified

Cults, Cryptids, and Conspiracies

Play Episode Listen Later Apr 25, 2025 88:24


Have you ever read Harry Potter and the Methods of Rationality?? Perhaps spent too much money on a self-help workshop seminar? Join us as we talk about Eliezer Yudkowsky and his masterpiece of fiction. Where will this story truly lead us in this tale of rational magic and science? As with our last episode on the topic, trigger warning for some bad mental health. Thanks for listening and remember to like, rate, review, and email us at: cultscryptidsconspiracies@gmail.com or tweet us at @C3Podcast. We have some of our sources for research here: http://tinyurl.com/CristinaSourcesAlso check out our Patreon: www.patreon.com/cultscryptidsconspiracies. Thank you to T.J. Shirley for our theme

Cults, Cryptids, and Conspiracies
Episode 397: The Philosophers Science

Cults, Cryptids, and Conspiracies

Play Episode Listen Later Apr 18, 2025 78:20


Have you ever read Harry Potter and the Methods of Rationality?? Perhaps spent too much money on a self-help workshop seminar? Join us as we talk about Eliezer Yudkowsky and his masterpiece of fiction. Where will this story truly lead us in this tale of rational magic and science? Thanks for listening and remember to like, rate, review, and email us at: cultscryptidsconspiracies@gmail.com or tweet us at @C3Podcast. We have some of our sources for research here: http://tinyurl.com/CristinaSourcesAlso check out our Patreon: www.patreon.com/cultscryptidsconspiracies. Thank you to T.J. Shirley for our theme

War College
The Cult of Rationalism in Silicon Valley

War College

Play Episode Listen Later Mar 25, 2025 61:34


A lot of the people designing America's technology and close to the center of American power believe some deeply weird shit. We already talked to journalist Gil Duran about the Nerd Reich, the rise of the destructive anti-democratic ideology. In this episode, we dive into another weird section of Silicon Valley: the cult of Rationalism. Max Read, the journalist behind the Read Max Substack, is here to help us through it. Rationalism is responsible for a lot more than you might think and Read lays out how it's influenced the world we live in today and how it created the environment for a cult that's got a body count.
Defining rationalism: “Something between a movement, a community, and a self-help program.”
Eliezer Yudkowsky and the dangers of AI
What the hell is AGI?
The Singleton Guide to Global Governance
The danger of thought experiments
As always, follow the money
Vulgar bayesianism
What's a Zizian?
Sith Vegans
Anselm: Ontological Argument for God's Existence
SBF and Effective Altruism
READ MAX!
The Zizians and the Rationalist death cults
Pausing AI Developments Isn't Enough. We Need to Shut it All Down - Eliezer Yudkowsky's TIME Magazine piece
Explaining Roko's Basilisk, the Thought Experiment That Brought Elon Musk and Grimes Together
The Delirious, Violent, Impossible True Story of the Zizians
The Government Knows AGI is Coming | The Ezra Klein Show
The archived ‘Is Trump Racist' rational post
Support this show http://supporter.acast.com/warcollege. Hosted on Acast. See acast.com/privacy for more information.

The Farm Podcast Mach II
Thiel, Yudkowsky, Rationalists & the Cult of Ziz w/ David Z. Morris & Recluse

The Farm Podcast Mach II

Play Episode Listen Later Feb 3, 2025 109:59


Zizians, Rationalist movement, Peter Thiel, Eliezer Yudkowsky, neoreaction, Accelerationism, Curtis Yarvin, AI, AI apocalypse, machine learning, psychedelics, Effective Altruism (EA), Sam Bankman-Fried, Extropianism, Thiel & Yudkowsky as Extropians, Discordianism, life extension, space colonization, cryptocurrencies, Yudkowsky as self-educated, Nick Bostrom, Center for Applied Rationality (CFAR), Rationalism's use of magical thinking, New Thought, Roko's Basilisk, Nick Land, predicting the future, LessWrong, LessWrong's relationship to the Zizians, Ziz, non-binary/trans, vegan Siths, Vasserites, murders linked to Zizians, Zizians in Vermont, Luigi Mangione indirectly influenced by Zizianism, Brian Thompson assassination, ChangeHealthcare hack, were the hack and assassination targeting UnitedHealth Group influenced by this milieu?, is the Trump administration radicalizing Zizians?, Yudkowsky's links to Sam Bankman-Fried, Leverage Research/Center for Effective Altruism & MK-ULTRA-like techniques used by, are more cults coming from the Rationalist movement?
Additional Resources:
Leverage Research: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b#c778
MIRI/Center for Applied Rationality (CFAR): https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe
Music by: Keith Allen Dennis
https://keithallendennis.bandcamp.com/
Additional Music: J Money
Get bonus content on Patreon
Hosted on Acast. See acast.com/privacy for more information.

Artificial Intelligence in Industry with Daniel Faggella
AI Risk Management and Governance Strategies for the Future - with Duncan Cass-Beggs of Center for International Governance Innovation

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Feb 1, 2025 77:40


Today's guest is Duncan Cass-Beggs, Executive Director of the Global AI Risks Initiative at the Center for International Governance Innovation (CIGI). He joins Emerj CEO and Head of Research Daniel Faggella to explore the pressing challenges and opportunities surrounding Artificial General Intelligence (AGI) governance on a global scale. This is a special episode in our AI futures series that ties right into our overlapping series on AGI governance on the Trajectory podcast, where we've had luminaries like Eliezer Yudkowsky, Connor Leahy, and other globally recognized AGI governance thinkers. We hope you enjoy this episode. If you're interested in these topics, make sure to dive deeper into where AI is affecting the bigger picture by visiting emergj.com/tj2.

TrueAnon
Episode 434: Evil Gods Must Be Fought: The Zizian Murder Cult [Part 1]

TrueAnon

Play Episode Listen Later Jan 29, 2025 128:17


Part one of our two-part investigation into the Rationalist cult “The Zizians.” We start with the killing of a border patrol officer and make our way back into the belly of the beast: Silicon Valley. Featuring: Harry Potter fanfic, samurai swords, Guy Fawkes masks, Blake Masters, Bayesian probability, and Eliezer Yudkowsky. Infohazard warning: some of your least favs will be implicated. Discover more episodes at podcast.trueanon.com

Artificial Intelligence in Industry with Daniel Faggella
Understanding AGI Alignment Challenges and Solutions - with Eliezer Yudkowsky of the Machine Intelligence Research Institute

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Jan 25, 2025 43:03


Today's episode is a special addition to our AI Futures series, featuring a special sneak peek at an upcoming episode of our Trajectory podcast with guest Eliezer Yudkowsky, AI researcher, founder, and research fellow at the Machine Intelligence Research Institute. Eliezer joins Emerj CEO and Head of Research Daniel Faggella to discuss the governance challenges of increasingly powerful AI systems—and what it might take to ensure a safe and beneficial trajectory for humanity. If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

The Unadulterated Intellect
#83 – Robin Hanson and Eliezer Yudkowsky: Jane Street Singularity Debate

The Unadulterated Intellect

Play Episode Listen Later Jan 5, 2025 98:18


Machine Learning Street Talk
Eliezer Yudkowsky and Stephen Wolfram on AI X-risk

Machine Learning Street Talk

Play Episode Listen Later Nov 11, 2024 258:30


Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks. They traversed fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. The discourse centered on Yudkowsky's argument that advanced AI systems pose an existential threat to humanity, primarily due to the challenge of alignment and the potential for emergent goals that diverge from human values. Wolfram, while acknowledging potential risks, approached the topic from his signature measured perspective, emphasizing the importance of understanding computational systems' fundamental nature and questioning whether AI systems would necessarily develop the kind of goal-directed behavior Yudkowsky fears.
*** MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested?! Please go to https://tufalabs.ai/ ***
TOC:
1. Foundational AI Concepts and Risks
[00:00:01] 1.1 AI Optimization and System Capabilities Debate
[00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations
[00:20:09] 1.3 Existential Risk and Species Succession
[00:23:28] 1.4 Consciousness and Value Preservation in AI Systems
2. Ethics and Philosophy in AI
[00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation
[00:36:30] 2.2 Ethics and Moral Philosophy Debate
[00:39:58] 2.3 Existential Risks and Digital Immortality
[00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation
3. Truth and Logic in AI Systems
[00:54:39] 3.1 AI Persuasion Ethics and Truth
[01:01:48] 3.2 Mathematical Truth and Logic in AI Systems
[01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics
[01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate
4. AI Capabilities and Constraints
[01:21:21] 4.1 AI Perception and Physical Laws
[01:28:33] 4.2 AI Capabilities and Computational Constraints
[01:34:59] 4.3 AI Motivation and Anthropomorphization Debate
[01:38:09] 4.4 Prediction vs Agency in AI Systems
5. AI System Architecture and Behavior
[01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction
[01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior
[02:09:41] 5.3 Machine Learning as Assembly of Computational Components
[02:29:52] 5.4 AI Safety and Predictability in Complex Systems
6. Goal Optimization and Alignment
[02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems
[02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior
[03:02:18] 6.3 Optimization Goals and Human Existential Risk
[03:08:49] 6.4 Emergent Goals and AI Alignment Challenges
7. AI Evolution and Risk Assessment
[03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory
[03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate
[03:56:05] 7.3 AI Risk and Biological System Analogies
[04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality
8. Future Implications and Economics
[04:13:01] 8.1 Economic and Proliferation Considerations
SHOWNOTES (transcription, references, summary, best quotes etc): https://www.dropbox.com/scl/fi/3st8dts2ba7yob161dchd/EliezerWolfram.pdf?rlkey=b6va5j8upgqwl9s2muc924vtt&st=vemwqx7a&dl=0

Slate Star Codex Podcast
Contra DeBoer On Temporal Copernicanism

Slate Star Codex Podcast

Play Episode Listen Later Oct 1, 2024 14:07


Freddie deBoer has a post on what he calls “the temporal Copernican principle.” He argues we shouldn't expect a singularity, apocalypse, or any other crazy event in our lifetimes. Discussing celebrity transhumanist Yuval Harari, he writes: What I want to say to people like Yuval Harari is this. The modern human species is about 250,000 years old, give or take 50,000 years depending on who you ask. Let's hope that it keeps going for awhile - we'll be conservative and say 50,000 more years of human life. So let's just throw out 300,000 years as the span of human existence, even though it could easily be 500,000 or a million or more. Harari's lifespan, if he's lucky, will probably top out at about 100 years. So: what are the odds that Harari's lifespan overlaps with the most important period in human history, as he believes, given those numbers? That it overlaps with a particularly important period of human history at all? Even if we take the conservative estimate for the length of human existence of 300,000 years, that means Harari's likely lifespan is only about .33% of the entirety of human existence. Isn't assuming that this .33% is somehow particularly special a very bad assumption, just from the basis of probability? And shouldn't we be even more skeptical given that our basic psychology gives us every reason to overestimate the importance of our own time? (I think there might be a math error here - 100 years out of 300,000 is 0.033%, not 0.33% - but this isn't my main objection.) He then condemns a wide range of people, including me, for failing to understand this: Some people who routinely violate the Temporal Copernican Principle include Harari, Eliezer Yudkowsky, Sam Altman, Francis Fukuyama, Elon Musk, Clay Shirky, Tyler Cowen, Matt Yglesias, Tom Friedman, Scott Alexander, every tech company CEO, Ray Kurzweil, Robin Hanson, and many many more. I think they should ask themselves how much of their understanding of the future ultimately stems from a deep-seated need to believe that their times are important because they think they themselves are important, or want to be. I deny misunderstanding this. Freddie is wrong. https://www.astralcodexten.com/p/contra-deboer-on-temporal-copernicanism 
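For readers who want to check the arithmetic in that parenthetical, here is a minimal sketch (in Python, not part of the original post) using the post's own figures of a 100-year lifespan and a 300,000-year span of human existence:

```python
# Check the percentage quoted in the post: what share of human history
# does a single 100-year lifespan cover?
lifespan_years = 100            # Harari's assumed lifespan
human_history_years = 300_000   # the post's assumed span of human existence

share_percent = lifespan_years / human_history_years * 100
print(f"{share_percent:.3f}%")  # 0.033% -- matching Scott's correction, not the 0.33% figure
```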

The Nonlinear Library
LW - MIRI's September 2024 newsletter by Harlan

The Nonlinear Library

Play Episode Listen Later Sep 17, 2024 2:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's September 2024 newsletter, published by Harlan on September 17, 2024 on LessWrong. MIRI updates Aaron Scher and Joe Collman have joined the Technical Governance Team at MIRI as researchers. Aaron previously did independent research related to sycophancy in language models and mechanistic interpretability, while Joe previously did independent research related to AI safety via debate and contributed to field-building work at MATS and BlueDot Impact. In an interview with PBS News Hour's Paul Solman, Eliezer Yudkowsky briefly explains why he expects smarter-than-human AI to cause human extinction. In an interview with The Atlantic's Ross Andersen, Eliezer discusses the reckless behavior of the leading AI companies, and the urgent need to change course. News and links Google DeepMind announced a hybrid AI system capable of solving International Mathematical Olympiad problems at the silver medalist level. In the wake of this development, a Manifold prediction market significantly increased its odds that AI will achieve gold level by 2025, a milestone that Paul Christiano gave less than 8% odds and Eliezer gave at least 16% odds to in 2021. The computer scientist Yoshua Bengio discusses and responds to some common arguments people have for not worrying about the AI alignment problem. SB 1047, a California bill establishing whistleblower protections and mandating risk assessments for some AI developers, has passed the State Assembly and moved on to the desk of Governor Gavin Newsom, to either be vetoed or passed into law. The bill has received opposition from several leading AI companies, but has also received support from a number of employees of those companies, as well as many academic researchers. At the time of this writing, prediction markets think it's about 50% likely that the bill will become law. In a new report, researchers at Epoch AI estimate how big AI training runs could get by 2030, based on current trends and potential bottlenecks. They predict that by the end of the decade it will be feasible for AI companies to train a model with 2e29 FLOP, which is about 10,000 times the amount of compute used to train GPT-4. Abram Demski, who previously worked at MIRI as part of our recently discontinued Agent Foundations research program, shares an update about his independent research plans, some thoughts on public vs private research, and his current funding situation. You can subscribe to the MIRI Newsletter here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
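A quick back-of-the-envelope reading of the Epoch AI item, offered as a hedged sketch: the GPT-4 figure below is only what the stated "about 10,000 times" ratio implies, not a number given in the newsletter itself.

```python
# Infer the GPT-4 training compute implied by the newsletter's comparison.
feasible_2030_flop = 2e29    # Epoch AI's projected feasible training run by 2030
ratio_vs_gpt4 = 10_000       # "about 10,000 times" GPT-4's training compute

implied_gpt4_flop = feasible_2030_flop / ratio_vs_gpt4
print(f"Implied GPT-4 training compute: {implied_gpt4_flop:.0e} FLOP")  # ~2e+25 FLOP
```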

The Nonlinear Library
EA - The Subject in Subjective Time: A New Approach to Aggregating Wellbeing (paper draft) by Devin Kalish

The Nonlinear Library

Play Episode Listen Later Sep 17, 2024 73:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Subject in Subjective Time: A New Approach to Aggregating Wellbeing (paper draft), published by Devin Kalish on September 17, 2024 on The Effective Altruism Forum. What follows is a lightly edited version of the thesis I wrote for my Bioethics MA program. I'm hoping to do more with this in the future, including seeking publication and/or expanding it into a dissertation or short book. In its current state, I feel like it is in pretty rough shape. I hope it is useful and interesting for people as puzzled by this very niche philosophical worry as me, but I'm also looking for feedback on how I can improve it. There's no guarantee I will take it, or even do anything further with this piece, but I would still appreciate the feedback. I may or may not interact much in the comments section. I. Introduction: Duration is an essential component of many theories of wellbeing. While there are theories of wellbeing that are sufficiently discretized that time isn't so obviously relevant to them, like achievements, it is hard to deny that time matters to some parts of a moral patient's wellbeing. A five-minute headache is better than an hour-long headache, all else held equal. A love that lasts for decades provides more meaning to a life than one that lasts years or months, all else held equal. The fulfillment of a desire you have had for years matters more than the fulfillment of a desire you have merely had for minutes, all else held equal. However, in our day to day lives we encounter time in two ways, objectively and subjectively. What do we do when the two disagree? This problem reached my attention years ago when I was reflecting on the relationship between my own theoretical leaning, utilitarianism, and the idea of aggregating interests. Aggregation between lives is known for its counterintuitive implications and the rich discourse around this, but I am uncomfortable with aggregation within lives as well. Some of this is because I feel the problems of interpersonal aggregation remain in the intrapersonal case, but there was also a problem I hadn't seen any academic discussion of at the time - objective time seemed to map the objective span of wellbeing if you plot each moment of wellbeing out to aggregate, but it is subjective time we actually care about. Aggregation of these objective moments gives a good explanation of our normal intuitions about time and wellbeing, but it fails to explain our intuitions about time whenever these senses of it come apart. As I will attempt to motivate later, the intuition that it is subjective time that matters is very strong in cases where the two substantially differ. Indeed, although the distinction rarely appears in papers at all, the main way I have seen it brought up (for instance in "The Ethics of Artificial Intelligence[1]" by Nick Bostrom and Eliezer Yudkowsky) is merely to notice there is a difference, and to effectively just state that it is subjective time, of course, that we should care about. I have very rarely run into a treatment dedicated to the "why"; the closest I have seen is the writing of Jason Schukraft[2], with his justification for why it is subjective time that matters for Rethink Priorities' "Moral Weights" project. 
His justification is similar to an answer I have heard in some form several times from defenders: We measure other values of consciousness subjectively, such as happiness and suffering; why shouldn't we measure time subjectively as well? I believe that, without more elaboration, this explanation has two downsides: it gives no attention to the idea that time matters because it tells us "how much" of an experience there actually is, and it seems irrelevant to any theory of wellbeing other than hedonism. It also, crucially, fails to engage with the question of what exactly subje...
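To make the objective-versus-subjective contrast concrete, here is a minimal illustrative sketch (mine, not the thesis author's) of duration-weighted aggregation, in which the same stream of experiences yields different totals depending on whether clock time or felt time does the weighting. The intensities and durations are made-up numbers chosen only for illustration.

```python
# Toy model: aggregate momentary wellbeing weighted by duration.
# Each entry is (intensity, objective_minutes, subjective_minutes).
moments = [
    (-2.0, 60.0, 5.0),   # an hour-long headache that subjectively flew by
    (+3.0, 5.0, 60.0),   # five minutes of joy that felt like an hour
]

objective_total = sum(i * obj for i, obj, _ in moments)
subjective_total = sum(i * subj for i, _, subj in moments)

print(objective_total)   # -105.0: weighting by clock time says the stretch was bad overall
print(subjective_total)  #  170.0: weighting by felt time says it was good overall
```

On the thesis's framing, our intuitions in cases where the two notions of duration come apart track the subjective total, which is why the choice of weighting matters.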

Big Technology Podcast
Google's AI Narrative Is Flipping, Microsoft Hedges Its OpenAI Bet, AI Clones Are Here

Big Technology Podcast

Play Episode Listen Later Apr 12, 2024 60:36


Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) The Solar Eclipse! 2) AI Music generation software Suno 3) Google flipping of its AI narrative 4) Ranjan's reflections from Google Cloud Next 5) Is Google's AI enterprise bet the right strategy 6) Microsoft hedging its OpenAI bet 7) Implications of Mustafa Suleyman's remit within Microsoft 8) OpenAI fires leakers 9) Eliezer Yudkowsky refuses interview and his reps won't pick up the phone 10) AI model training running out of data 11) Prospects of synthetic data for AI training 12) The Humane AI pin flops 13) Can Sam Altman and Jony Ive build an AI device 14) Cloning ourselves with AI. ---- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Hold These Truths with Dan Crenshaw
Can We Stop the AI Apocalypse? | Eliezer Yudkowsky

Hold These Truths with Dan Crenshaw

Play Episode Listen Later Jul 13, 2023 61:06


Artificial Intelligence (AI) researcher Eliezer Yudkowsky makes the case for why we should view AI as an existential threat to humanity. Rep. Crenshaw gets into the basics of AI and how the new AI program, GPT-4, is a revolutionary leap forward in the tech. Eliezer hypothesizes the most likely scenarios if AI becomes self-aware and unconstrained – from rogue programs that blackmail targets to self-replicating nano robots. They discuss building global coalitions to rein in AI development and how China views AI. And they explore first steps Congress could take to limit AI's capabilities for harm while still enabling its promising advances in research and development. Eliezer Yudkowsky is a co-founder and research fellow at the Machine Intelligence Research Institute, a private research nonprofit based in Berkeley, California. Follow him on Twitter @ESYudkowsky