Podcasts about Eliezer Yudkowsky

American blogger, writer, and artificial intelligence researcher

  • 146 PODCASTS
  • 914 EPISODES
  • 35m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Oct 27, 2025 LATEST
Eliezer Yudkowsky

Popularity trend: 2017–2024


Best podcasts about Eliezer Yudkowsky

Latest podcast episodes about Eliezer Yudkowsky

The 92 Report
150. Steve Petersen, ​​From Improv to Philosophy of AI

The 92 Report

Play Episode Listen Later Oct 27, 2025 61:47


Show Notes: Steve recounts his senior year at Harvard, and how he was torn between pursuing acting and philosophy. He graduated with a dual degree in philosophy and math but also found time to act in theater and participated in 20 shows.
A Love of Theater and a Move to London
Steve explains why the lack of a theater major at Harvard allowed him to explore acting more than he could have at a university with a theater major. He touches on his parents' concerns about his career prospects if he pursued acting, and his decision to apply to both acting and philosophy graduate schools. Steve discusses his rejection from all graduate schools and why he decided to move to London with friends Evan Cohn and Brad Rouse. He talks about his experience in London.
Europe on $20 a Day
Steve details his backpacking trip through Europe on a $20-a-day budget, staying with friends from Harvard and high school. He mentions a job opportunity in Japan through the Japanese Ministry of Education and describes his three-year stint there, working as a native English speaker and being immersed in Japanese culture. He shares his experiences of living in the countryside and reflects on the impact of living in a different culture, learning some Japanese, and making Japanese friends. He discusses the personal growth and self-reflection that came from his time in Japan, including his first steps off the "achiever track."
On to Philosophy Graduate School
When Steve returned to the U.S., he decided to apply to philosophy graduate schools again, this time with more success, and enrolled at the University of Michigan. However, he was miserable during grad school, which led him to seek therapy. Steve credits therapy with helping him make better choices in life. He discusses the competitive and prestigious nature of the Michigan philosophy department, the narrow and competitive aspects of pursuing a career in philosophy, and his experience of finishing his dissertation with the support of a good co-thesis advisor.
Kalamazoo College and Improv
Steve describes his postdoc experience at Kalamazoo College, where he continued his improv hobby and formed his own improv group. He mentions a mockumentary-style improv movie called Comic Evangelists that premiered at the AFI Film Festival. Steve then moved to Buffalo to teach at Niagara University, and reflects on the challenges of adjusting to a non-research job. He discusses his continued therapy in Buffalo and the struggle with both societal and his own expectations of professional status; with the help of a friend, he came to the realization that he had "made it" in his current circumstances. Steve describes his acting career in Buffalo, including roles in Shakespeare in the Park and collaborating with a classmate, Ian Lithgow.
A Specialty in Philosophy of Science
Steve shares his personal life, including meeting his wife in 2009 and starting a family. He explains his specialty in philosophy of science, focusing on the math and precise questions in analytic philosophy. He discusses his early interest in AI and computational epistemology, including the ethics of AI and the superintelligence worry. Steve describes his involvement in a group that discusses the moral status of digital minds and AI alignment.
Aligning AI with Human Interests
Steve reflects on the challenges of aligning AI with human interests and the potential existential risks of advanced AI. He shares his concerns about the future of AI and the potential for AI to have moral status. He touches on the superintelligence concern and the challenges of aligning AI with human goals. Steve mentions the work of Eliezer Yudkowsky and the importance of governance and alignment in AI development. He reflects on the broader implications of AI for humanity and the need for careful consideration of long-term risks.
Harvard Reflections
Steve mentions Math 45 and how it kicked his butt; his core classes included jazz, an acting class, and clown improv with Jay Nichols.
Timestamps:
01:43: Dilemma Between Acting and Philosophy
03:44: Rejection and Move to London
07:09: Life in Japan and Cultural Insights
12:19: Return to Academia and Grad School Challenges
20:09: Therapy and Personal Growth
22:06: Transition to Buffalo and Philosophy Career
26:54: Philosophy of Science and AI Ethics
33:20: Future Concerns and AI Predictions
55:17: Reflections on Career and Personal Growth
Links:
Steve's Website: https://stevepetersen.net/
On AI Superintelligence: If Anyone Builds It, Everyone Dies; Superintelligence; The Alignment Problem
Some places to donate: The Long-Term Future Fund; Open Philanthropy
On improv: Impro; Upright Citizens Brigade Comedy Improvisation Manual
Featured Non-profit: The featured non-profit of this week's episode is brought to you by Rich Buery, who reports: "Hi, I'm Rich Buery, class of 1992. The featured nonprofit of this episode of The 92 Report is iMentor. iMentor is a powerful youth mentoring organization that connects volunteers with high school students and prepares them on the path to and through college. Mentors stay with the students through the last two years of high school and into the beginning of their college journey. I helped found iMentor over 25 years ago and served as its founding executive director, and I am proud that over the last two decades I've remained on the board of directors. It's truly a great organization. They need donors and they need volunteers. You can learn more about their work at www.imentor.org. And now here is Will Bachman with this week's episode." To learn more about their work, visit: www.imentor.org.

Modern Wisdom
#1011 - Eliezer Yudkowsky - Why Superhuman AI Would Kill Us All

Modern Wisdom

Play Episode Listen Later Oct 25, 2025 97:08


Eliezer Yudkowsky is an AI researcher, decision theorist, and founder of the Machine Intelligence Research Institute. Is AI our greatest hope or our final mistake? For all its promise to revolutionize human life, there's a growing fear that artificial intelligence could end it altogether. How grounded are these fears, how close are we to losing control, and is there still time to change course before it's too late? Expect to learn the problem with building superhuman AI, why AI would have goals we haven't programmed into it, whether there is such a thing as AI benevolence, what the actual goals of super-intelligent AI are and how far away it is, whether LLMs are actually dangerous and whether they could become a super AI, how good we are at predicting the future of AI, whether extinction is possible with the development of AI, and much more… Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get 15% off your first order of Intake's magnetic nasal strips at https://intakebreathing.com/modernwisdom Get 10% discount on all Gymshark's products at https://gym.sh/modernwisdom (use code MODERNWISDOM10) Get 4 extra months of Surfshark VPN at https://surfshark.com/modernwisdom Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices

Bloggingheads.tv
Rationalism and AI Doomerism (Robert Wright & Liron Shapira)

Bloggingheads.tv

Play Episode Listen Later Oct 16, 2025 60:00


Teaser ... Why Liron became a Yudkowskian ... Eliezer Yudkowsky's vision of AI apocalypse ... Does intelligence want power? ... Decoding Yudkowsky's key Darwinian metaphor ... Is doomerism crowding out other AI worries? ... Liron: The silent majority is very AI anxious ... Heading to Overtime ...

The Ezra Klein Show
How Afraid of the A.I. Apocalypse Should We Be?

The Ezra Klein Show

Play Episode Listen Later Oct 15, 2025 67:47


Eliezer Yudkowsky is as afraid as you could possibly be. He makes his case. Yudkowsky is a pioneer of A.I. safety research who started warning about the existential risks of the technology decades ago, influencing a lot of leading figures in the field. But over the last couple of years, talk of an A.I. apocalypse has become a little passé. Many of the people Yudkowsky influenced have gone on to work for A.I. companies, and those companies are racing ahead to build the superintelligent systems Yudkowsky thought humans should never create. But Yudkowsky is still out there sounding the alarm. He has a new book out, co-written with Nate Soares, “If Anyone Builds It, Everyone Dies,” trying to warn the world before it's too late. So what does Yudkowsky see that most of us don't? What makes him so certain? And why does he think he hasn't been able to persuade more people?
Mentioned:
Oversight of A.I.: Rules for Artificial Intelligence
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares
“A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.” by Kashmir Hill
Book Recommendations:
A Step Farther Out by Jerry Pournelle
Judgment under Uncertainty by Daniel Kahneman, Paul Slovic, and Amos Tversky
Probability Theory by E. T. Jaynes
Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs. This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our executive producer is Claire Gordon. The show's production team also includes Marie Cascione, Annie Galvin, Kristin Lin, Jack McCordick, Marina King and Jan Kobal. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The director of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Helen Toner and Jeffrey Ladish. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.

The Jim Rutt Show
EP 327 Nate Soares on Why Superhuman AI Would Kill Us All

The Jim Rutt Show

Play Episode Listen Later Oct 15, 2025 97:07


Jim talks with Nate Soares about the ideas in his and Eliezer Yudkowsky's book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. They discuss the book's claim that mitigating existential AI risk should be a top global priority, the idea that LLMs are grown, the opacity of deep learning networks, the Golden Gate activation vector, whether our understanding of deep learning networks might improve enough to prevent catastrophe, goodness as a narrow target, the alignment problem, the problem of pointing minds, whether LLMs are just stochastic parrots, why predicting a corpus often requires more mental machinery than creating a corpus, depth & generalization of skills, wanting as an effective strategy, goal orientation, limitations of training goal pursuit, transient limitations of current AI, protein folding and AlphaFold, the riskiness of automating alignment research, the correlation between capability and more coherent drives, why the authors anchored their argument on transformers & LLMs, the inversion of Moravec's paradox, the geopolitical multipolar trap, making world leaders aware of the issues, a treaty to ban the race to superintelligence, the specific terms of the proposed treaty, a comparison with banning uranium enrichment, why Jim tentatively thinks this proposal is a mistake, a priesthood of the power supply, whether attention is a zero-sum game, and much more.
Episode Transcript
"Psyop or Insanity or ...? Peter Thiel, the Antichrist, and Our Collapsing Epistemic Commons," by Jim Rutt
"On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback," by Marcus Williams et al.
"Attention Sinks and Compression Valleys in LLMs are Two Sides of the Same Coin," by Enrique Queipo-de-Llano et al.
JRS EP 217 - Ben Goertzel on a New Framework for AGI
"A Tentative Draft of a Treaty, With Annotations"
Nate Soares is the President of the Machine Intelligence Research Institute. He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.

London Futurists
Safe superintelligence via a community of AIs and humans, with Craig Kaplan

London Futurists

Play Episode Listen Later Oct 10, 2025 41:15


Craig Kaplan has been thinking about superintelligence longer than most. He bought the URL superintelligence.com back in 2006, and many years before that, in the late 1980s, he co-authored a series of papers with one of the founding fathers of AI, Herbert Simon. Craig started his career as a scientist with IBM, and later founded and ran a venture-backed company called PredictWallStreet that brought the wisdom of the crowd to Wall Street and improved the performance of leading hedge funds. He sold that company in 2020, and now spends his time working out how to make the first superintelligence safe. As he puts it, he wants to reduce P(Doom) and increase P(Zoom).
Selected follow-ups:
iQ Company
Herbert A. Simon - Wikipedia
Amara's Law and Its Place in the Future of Tech - Pohan Lin
Predict Wall Street
The Society of Mind - book by Marvin Minsky
AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google - BBC News
Statement on AI Risk - Center for AI Safety
I've Spent My Life Measuring Risk. AI Rings Every One of My Alarm Bells - Paul Tudor Jones
Secrets of Software Quality: 40 Innovations from IBM - book by Craig Kaplan
London Futurists Podcast episode featuring David Brin
Reason in human affairs - book by Herbert Simon
US and China will intervene to halt ‘suicide race' of AGI – Max Tegmark
If Anyone Builds It, Everyone Dies - book by Eliezer Yudkowsky and Nate Soares
AGI-25 - conference in Reykjavik
The First Global Brain Workshop - Brussels 2001
Center for Integrated Cognition
Paul S. Rosenbloom
Tatiana Shavrina, Meta
Henry Minsky launches AI startup inspired by father's MIT research
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

The Jim Rutt Show
EP 325 Joe Edelman on Full-Stack AI Alignment

The Jim Rutt Show

Play Episode Listen Later Oct 7, 2025 72:12


Jim talks with Joe Edelman about the ideas in the Meaning Alignment Institute's recent paper "Full Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value." They discuss pluralism as a core principle in designing social systems, the informational basis for alignment, how preferential models fail to capture what people truly care about, the limitations of markets and voting as preference-based systems, critiques of text-based approaches in LLMs, thick models of value, values as attentional policies, AI assistants as potential vectors for manipulation, the need for reputation systems and factual grounding, the "super negotiator" project for better contract negotiation, multipolar traps, moral graph elicitation, starting with membranes, Moloch-free zones, unintended consequences and lessons from early Internet optimism, concentration of power as a key danger, co-optation risks, and much more.
Episode Transcript
"A Minimum Viable Metaphysics," by Jim Rutt (Substack)
Jim's Substack
JRS Currents 080: Joe Edelman and Ellie Hain on Rebuilding Meaning
Meaning Alignment Institute
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares
"Full Stack Alignment: Co-aligning AI and Institutions with Thick Models of Value," by Joe Edelman et al.
"What Are Human Values and How Do We Align AI to Them?" by Oliver Klingefjord, Ryan Lowe, and Joe Edelman
Joe Edelman has spent much of his life trying to understand how ML systems and markets could change, retaining their many benefits but avoiding their characteristic problems of atomization and of servicing shallow desires over deeper needs. Along the way this led him to formulate theories of human meaning and values (https://arxiv.org/abs/2404.10636) and study models of societal transformation (https://www.full-stack-alignment.ai/paper), as well as inventing the meaning-based metrics used at CouchSurfing, Facebook, and Apple, co-founding the Center for Humane Technology and the Meaning Alignment Institute, and inventing new democratic systems (https://arxiv.org/abs/2404.10636). He's currently one of the PIs leading the Full-Stack Alignment program at the Meaning Alignment Institute, with a network of more than 50 researchers at universities and corporate labs working on these issues.

Big Tech
Geoffrey Hinton vs. The End of the World

Big Tech

Play Episode Listen Later Oct 7, 2025 69:11


The story of how Geoffrey Hinton became “the godfather of AI” has reached mythic status in the tech world. While he was at the University of Toronto, Hinton pioneered the neural network research that would become the backbone of modern AI. (One of his students, Ilya Sutskever, went on to be one of OpenAI's most influential scientific minds.) In 2013, Hinton left the academy and went to work for Google, eventually winning both a Turing Award and a Nobel Prize. I think it's fair to say that artificial intelligence as we know it may not exist without Geoffrey Hinton. But Hinton may be even more famous for what he did next. In 2023, he left Google and began a campaign to convince governments, corporations and citizens that his life's work – this thing he helped build – might lead to our collective extinction. And that moment may be closer than we think, because Hinton believes AI may already be conscious. But even though his warnings are getting more dire by the day, the AI industry is only getting bigger, and most governments, including Canada's, seem reluctant to get in the way. So I wanted to ask Hinton: If we keep going down this path, what will become of us?
Mentioned:
If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, by Eliezer Yudkowsky and Nate Soares
Agentic Misalignment: How LLMs could be insider threats, by Anthropic
Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail. Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Human Action Podcast
The Importance of Time in Explaining Asset Bubbles

The Human Action Podcast

Play Episode Listen Later Oct 6, 2025


Jonathan Newman returns to join Bob in a critique of Eliezer Yudkowsky's viral theory of investment bubbles. Yudkowsky argues that the pain of bad investment during bubbles should be felt before the bubble pops, not after. They argue that his perspective—while clever—fails to consider the Austrian insights on capital structure, time preference, and the business cycle. They use analogies from apple trees to magic mushrooms to show why Austrian economics provides the clearest explanation for booms, busts, and the pain that follows.
Eliezer Yudkowsky's Theory on Investment Bubbles: Mises.org/HAP520a
Bob's Article "Correcting Yudkowsky on the Boom": Mises.org/HAP520b
Bob on The Importance of Capital Theory: Mises.org/HAP520c
Joe Salerno on Austrian Business Cycle Theory: Mises.org/HAP520d
Dr. Newman's QJAE Article on Credit Cycles: Mises.org/HAP520e
The Mises Institute is giving away 100,000 copies of Hayek for the 21st Century. Get your free copy at Mises.org/HAPodFree

Mises Media
The Importance of Time in Explaining Asset Bubbles

Mises Media

Play Episode Listen Later Oct 6, 2025


Jonathan Newman returns to join Bob in a critique of Eliezer Yudkowsky's viral theory of investment bubbles. Yudkowsky argues that the pain of bad investment during bubbles should be felt before the bubble pops, not after. They argue that his perspective—while clever—fails to consider the Austrian insights on capital structure, time preference, and the business cycle. They use analogies from apple trees to magic mushrooms to show why Austrian economics provides the clearest explanation for booms, busts, and the pain that follows.
Eliezer Yudkowsky's Theory on Investment Bubbles: Mises.org/HAP520a
Bob's Article "Correcting Yudkowsky on the Boom": Mises.org/HAP520b
Bob on The Importance of Capital Theory: Mises.org/HAP520c
Joe Salerno on Austrian Business Cycle Theory: Mises.org/HAP520d
Dr. Newman's QJAE Article on Credit Cycles: Mises.org/HAP520e
The Mises Institute is giving away 100,000 copies of Hayek for the 21st Century. Get your free copy at Mises.org/HAPodFree

The Bulwark Goes to Hollywood
How Movies Can Better Prep Us for the AI Threat

The Bulwark Goes to Hollywood

Play Episode Listen Later Sep 26, 2025 53:27


On this week's episode, I'm joined by Nate Soares to talk about his new book, cowritten with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. It's a fascinating book—some will say fearmongering and sensationalist; I, frankly, think they're overly optimistic about our ability to constrain the development of general intelligence in AI—in large part because of how it's structured. Each chapter is preceded by a fable of sorts about the nature of intelligence and the desires of intelligent beings that look and think very differently from humans. The point in each of these passages is less that AI will want to eliminate humanity and more that it might do so incidentally, through natural processes of resource acquisition.  This made me think about how AI is typically portrayed in film; it is all too often a Terminator-style scenario, where the intelligence is antagonistic in human ways and for human reasons. We talked some about how storytellers could do a better job of thinking about AI as it might actually exist versus how it might be like us; Ex Machina is a movie that came in for special discussion due to the thoughtful nature of the treatment of its robotic antagonist's desires. If this episode made you think, I hope you share it with a friend!

The Foresight Institute Podcast
Eliezer Yudkowsky vs Mark Miller | ASI Risks: Similar premises, opposite conclusions

The Foresight Institute Podcast

Play Episode Listen Later Sep 24, 2025 252:32


What are the best strategies for addressing extreme risks from artificial superintelligence? In this 4-hour conversation, decision theorist Eliezer Yudkowsky and computer scientist Mark Miller discuss their cruxes for disagreement. They examine the future of AI, existential risk, and whether alignment is even possible. Topics include AI risk scenarios, coalition dynamics, secure systems like seL4, hardware exploits like Rowhammer, molecular engineering with AlphaFold, and historical analogies like nuclear arms control. They explore superintelligence governance, multipolar vs singleton futures, and the philosophical challenges of trust, verification, and control in a post-AGI world. Moderated by Christine Peterson, the discussion seeks the least risky strategy for reaching a preferred state amid superintelligent AI risks. Yudkowsky warns of catastrophic outcomes if AGI is not controlled, while Miller advocates decentralizing power and preserving human institutions as AI evolves. The conversation spans AI collaboration, secure operating frameworks, cryptographic separation, and lessons from nuclear non-proliferation. Despite their differences, both aim for a future where AI benefits humanity without posing existential threats. Hosted on Acast. See acast.com/privacy for more information.

Investir com SIM
Compondo a Tese - 19/09/2025

Investir com SIM

Play Episode Listen Later Sep 22, 2025 36:58


Attention (disclaimer): The data presented here reflects my personal opinion. It is in no way a recommendation to buy or sell assets in the financial market.
PEC da Blindagem: veja como votou cada partido
https://oglobo.globo.com/politica/noticia/2025/09/16/pec-da-blindagem-veja-como-votou-cada-deputado.ghtml
Veja como cada partido votou para manter a votação secreta na PEC da blindagem
https://valor.globo.com/politica/noticia/2025/09/17/veja-como-cada-partido-votou-para-manter-a-votacao-secreta-na-pec-da-blindagem.ghtml
The Rise of the Supreme Court's So-Called Shadow Docket
https://podcasts.apple.com/br/podcast/the-rise-of-the-supreme-courts-so-called-shadow-docket/id1200361736?i=1000726880643&l=en-GB
Are We Past Peak iPhone? + Eliezer Yudkowsky on A.I. Doom
https://podcasts.apple.com/br/podcast/are-we-past-peak-iphone-eliezer-yudkowsky-on-a-i-doom/id1528594034?i=1000726491309&l=en-GB
Trapped in a ChatGPT Spiral
https://podcasts.apple.com/br/podcast/trapped-in-a-chatgpt-spiral/id1200361736?i=1000727028310&l=en-GB
Economic fallout mounts as Trump halts near-finished wind power project
https://podcasts.apple.com/br/podcast/economic-fallout-mounts-as-trump-halts-near-finished/id78304589?i=1000727120403&l=en-GB
'Para salvar própria pele, parlamentares não veem divergência', diz Thiago Bronzatto sobre PEC da Blindagem
https://podcasts.apple.com/br/podcast/para-salvar-pr%C3%B3pria-pele-parlamentares-n%C3%A3o-veem-diverg%C3%AAncia/id203963267?i=1000727116432&l=en-GB
PEC da Blindagem: 'É um vexame o que está acontecendo'
https://podcasts.apple.com/br/podcast/pec-da-blindagem-%C3%A9-um-vexame-o-que-est%C3%A1-acontecendo/id1552208254?i=1000727230878&l=en-GB
Blindagem no Congresso abre caminho para retrocesso
https://podcasts.apple.com/br/podcast/blindagem-no-congresso-abre-caminho-para-retrocesso/id203963267?i=1000727234976&l=en-GB
PEC da Blindagem: caminho para a impunidade
https://podcasts.apple.com/br/podcast/pec-da-blindagem-caminho-para-a-impunidade/id1477406521?i=1000727283243&l=en-GB
PEC da Blindagem: 'uma violação ético-moral'
https://podcasts.apple.com/br/podcast/pec-da-blindagem-uma-viola%C3%A7%C3%A3o-%C3%A9tico-moral/id203963267?i=1000727334438&l=en-GB
Maria Ressa - Fighting Back Against Trump's Authoritarian Algorithm With Truth | The Daily Show
https://www.youtube.com/watch?v=Tsb1I7hqaJ4
JHSF vende quase R$ 5 bi em estoque
https://braziljournal.com/jhsf-vende-quase-r-5-bi-em-estoque-mudando-modelo-de-incorporacao/
Conselho da Oncoclínicas aprova aumento de capital
https://exame.com/invest/mercados/conselho-da-oncoclinicas-aprova-aumento-de-capital-de-ate-r-2-bi-falta-o-aval-dos-acionistas/
Hugo Motta 'fez aprovar o maior dos absurdos'
https://podcasts.apple.com/br/podcast/hugo-motta-fez-aprovar-o-maior-dos-absurdos-da-hist%C3%B3ria/id203963267?i=1000727343038&l=en-GB
Câmara: projetos em benefício próprio
https://podcasts.apple.com/br/podcast/c%C3%A2mara-projetos-em-benef%C3%ADcio-pr%C3%B3prio/id1477406521?i=1000727443772&l=en-GB
UOL Prime #88: histórico de anistias
https://podcasts.apple.com/br/podcast/uol-prime-88-como-hist%C3%B3rico-de-anistias-deu-espa%C3%A7o/id1574996957?i=1000727305499&l=en-GB
Não quero mais falar de anistia
https://podcasts.apple.com/br/podcast/n%C3%A3o-quero-mais-falar-de-anistia-vou-falar-de/id203963267?i=1000727498034&l=en-GB
What Happens if Xi Jinping Dies in Office?
https://podcasts.apple.com/br/podcast/what-happens-if-xi-jinping-dies-in-office/id1525445350?i=1000492377817&l=en-GB
CDC panel overhauled by RFK Jr
https://podcasts.apple.com/br/podcast/cdc-panel-overhauled-by-rfk-jr-changes-childhood-vaccine/id78304589?i=1000727429494&l=en-GB
Kimmel free speech under Trump
https://podcasts.apple.com/br/podcast/what-the-move-to-pull-kimmel-off-the-air-says-about/id78304589?i=1000727422538&l=en-GB
Jimmy Kimmel and Free Speech
https://podcasts.apple.com/br/podcast/jimmy-kimmel-and-free-speech-in-the-united-states/id1200361736?i=1000727485153&l=en-GB

The Brian Lehrer Show
Warnings From an AI Doomsayer

The Brian Lehrer Show

Play Episode Listen Later Sep 19, 2025 25:43


Nate Soares, president of the Machine Intelligence Research Institute and the co-author (with Eliezer Yudkowsky) of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (Little, Brown and Company, 2025), talks about why he worries that AI "superintelligence" will lead to catastrophic outcomes, and what safeguards he recommends to prevent this.

Making Sense with Sam Harris
#434 — Can We Survive AI?

Making Sense with Sam Harris

Play Episode Listen Later Sep 16, 2025 36:26


Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. They discuss the alignment problem, ChatGPT and recent advances in AI, the Turing Test, the possibility of AI developing survival instincts, hallucinations and deception in LLMs, why many prominent voices in tech remain skeptical of the dangers of superintelligent AI, the timeline for superintelligence, real-world consequences of current AI systems, the imaginary line between the internet and reality, why Eliezer and Nate believe superintelligent AI would necessarily end humanity, how we might avoid an AI-driven catastrophe, the Fermi paradox, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

LessWrong Curated Podcast
“‘If Anyone Builds It, Everyone Dies' release day!” by alexvermeer

LessWrong Curated Podcast

Play Episode Listen Later Sep 16, 2025 8:03


Back in May, we announced that Eliezer Yudkowsky and Nate Soares's new book If Anyone Builds It, Everyone Dies was coming out in September. At long last, the book is here![1] US and UK books, respectively. IfAnyoneBuildsIt.com Read on for info about reading groups, ways to help, and updates on coverage the book has received so far. Discussion Questions & Reading Group Support We want people to read and engage with the contents of the book. To that end, we've published a list of discussion questions. Find it here: Discussion Questions for Reading Groups We're also interested in offering support to reading groups, including potentially providing copies of the book and helping coordinate facilitation. If interested, fill out this AirTable form. How to Help Now that the book is out in the world, there are lots of ways you can help it succeed. For starters, read the book! [...] --- Outline: (00:49) Discussion Questions & Reading Group Support (01:18) How to Help (02:39) Blurbs (05:15) Media (06:26) In Closing The original text contained 2 footnotes which were omitted from this narration. --- First published: September 16th, 2025 Source: https://www.lesswrong.com/posts/fnJwaz7LxZ2LJvApm/if-anyone-builds-it-everyone-dies-release-day --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Sway
Are We Past Peak iPhone? + Eliezer Yudkowsky on A.I. Doom

Sway

Play Episode Listen Later Sep 12, 2025 72:24


Apple's yearly iPhone event took place this week, and it left us asking, Is Apple losing the juice? We break down all the new products the company announced and discuss where it goes from here. Then, Eliezer Yudkowsky, one of the most fascinating people in A.I., has a new book coming out: “If Anyone Builds It, Everyone Dies.” He joins us to make the case for why A.I. development should be shut down now, long before we reach superintelligence, and how he thinks that could happen.
Guests: Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute and a co-author of “If Anyone Builds It, Everyone Dies”
Additional Reading:
A.I.'s Prophet of Doom Wants to Shut It All Down
AI as Normal Technology, revisited
Apple's misunderstood crossbody iPhone strap might be the best I've seen
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Slate Star Codex Podcast
Book Review: If Anyone Builds It, Everyone Dies

Slate Star Codex Podcast

Play Episode Listen Later Sep 12, 2025 42:20


I. Eliezer Yudkowsky's Machine Intelligence Research Institute is the original AI safety org. But the original isn't always the best - how is Mesopotamia doing these days? As money, brainpower, and prestige pour into the field, MIRI remains what it always was - a group of loosely-organized weird people, one of whom cannot be convinced to stop wearing a sparkly top hat in public. So when I was doing AI grantmaking last year, I asked them - why should I fund you instead of the guys with the army of bright-eyed Harvard grads, or the guys who just got Geoffrey Hinton as their celebrity spokesperson? What do you have that they don't? MIRI answered: moral clarity. Most people in AI safety (including me) are uncertain and confused and looking for least-bad incremental solutions. We think AI will probably be an exciting and transformative technology, but there's some chance, 5 or 15 or 30 percent, that it might turn against humanity in a catastrophic way. Or, if it doesn't, that there will be something less catastrophic but still bad - maybe humanity gradually fading into the background, the same way kings and nobles faded into the background during the modern era. This is scary, but AI is coming whether we like it or not, and probably there are also potential risks from delaying too hard. We're not sure exactly what to do, but for now we want to build a firm foundation for reacting to any future threat. That means keeping AI companies honest and transparent, helping responsible companies like Anthropic stay in the race, and investing in understanding AI goal structures and the ways that AIs interpret our commands. Then at some point in the future, we'll be close enough to the actually-scary AI that we can understand the threat model more clearly, get more popular buy-in, and decide what to do next. MIRI thinks this is pathetic - like trying to protect against an asteroid impact by wearing a hard hat. They're kind of cagey about their own probability of AI wiping out humanity, but it seems to be somewhere around 95 - 99%. They think plausibly-achievable gains in company responsibility, regulation quality, and AI scholarship are orders of magnitude too weak to seriously address the problem, and they don't expect enough of a “warning shot” that they feel comfortable kicking the can down the road until everything becomes clear and action is easy. They suggest banning all AI capabilities research immediately, to be restarted only in some distant future when the situation looks more promising. Both sides honestly believe their position and don't want to modulate their message for PR reasons. But both sides, coincidentally, think that their message is better PR. The incrementalists think a moderate, cautious approach keeps bridges open with academia, industry, government, and other actors that prefer normal clean-shaven interlocutors who don't emit spittle whenever they talk. MIRI thinks that the public is sick of focus-group-tested mealy-mouthed bullshit, but might be ready to rise up against AI if someone presented the case in a clear and unambivalent way. Now Yudkowsky and his co-author, MIRI president Nate Soares, have reached new heights of unambivalence with their new book, If Anyone Builds It, Everyone Dies (release date September 16, currently available for preorder). https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone

The Valmy
Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI?

The Valmy

Play Episode Listen Later Aug 13, 2025 146:10


Podcast: Doom Debates
Episode: Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI?
Release date: 2025-08-12
Get Podcast Transcript → powered by Listen411 - fast audio-to-text and summarization
Vitalik Buterin is the founder of Ethereum, the world's second-largest cryptocurrency by market cap, currently valued at around $500 billion. But beyond revolutionizing blockchain technology, Vitalik has become one of the most thoughtful voices on AI safety and existential risk. He's donated over $665 million to pandemic prevention and other causes, and has a 12% P(Doom) – putting him squarely in what I consider the "sane zone" for AI risk assessment. What makes Vitalik particularly interesting is that he's both a hardcore techno-optimist who built one of the most successful decentralized systems ever created, and someone willing to seriously consider AI regulation and coordination mechanisms. Vitalik coined the term "d/acc" – defensive, decentralized, democratic, differential acceleration – as a middle path between uncritical AI acceleration and total pause scenarios. He argues we need to make the world more like Switzerland (defensible, decentralized) and less like the Eurasian steppes (vulnerable to conquest). We dive deep into the tractability of AI alignment, whether current approaches like d/acc can actually work when superintelligence arrives, and why he thinks a pluralistic world of competing AIs might be safer than a single aligned superintelligence. We also explore his vision for human-AI merger through brain-computer interfaces and uploading. The crux of our disagreement is that I think we're heading for a "plants vs. animals" scenario where AI will simply operate on timescales we can't match, while Vitalik believes we can maintain agency through the right combination of defensive technologies and institutional design. Finally, we tackle the discourse itself – I ask Vitalik to debunk the common ad hominem attacks against AI doomers, from "it's just a fringe position" to "no real builders believe in doom."
His responses carry weight given his credibility as both a successful entrepreneur and someone who's maintained intellectual honesty throughout his career.
Timestamps
* 00:00:00 - Cold Open
* 00:00:37 - Introducing Vitalik Buterin
* 00:02:14 - Vitalik's altruism
* 00:04:36 - Rationalist community influence
* 00:06:30 - Opinion of Eliezer Yudkowsky and MIRI
* 00:09:00 - What's Your P(Doom)™
* 00:24:42 - AI timelines
* 00:31:33 - AI consciousness
* 00:35:01 - Headroom above human intelligence
* 00:48:56 - Techno optimism discussion
* 00:58:38 - e/acc: Vibes-based ideology without deep arguments
* 01:02:49 - d/acc: Defensive, decentralized, democratic acceleration
* 01:11:37 - How plausible is d/acc?
* 01:20:53 - Why libertarian acceleration can paradoxically break decentralization
* 01:25:49 - Can we merge with AIs?
* 01:35:10 - Military AI concerns: How war accelerates dangerous development
* 01:42:26 - The intractability question
* 01:51:10 - Anthropic and tractability-washing the AI alignment problem
* 02:00:05 - The state of AI x-risk discourse
* 02:05:14 - Debunking ad hominem attacks against doomers
* 02:23:41 - Liron's outro
Links
Vitalik's website: https://vitalik.eth.limo
Vitalik's Twitter: https://x.com/vitalikbuterin
Eliezer Yudkowsky's explanation of p-Zombies: https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates Get full access to Doom Debates at lironshapira.substack.com/subscribe

LessWrong Curated Podcast
“Re: Recent Anthropic Safety Research” by Eliezer Yudkowsky

LessWrong Curated Podcast

Play Episode Listen Later Aug 12, 2025 9:00


A reporter asked me for my off-the-record take on recent safety research from Anthropic. After I drafted an off-the-record reply, I realized that I was actually fine with it being on the record, so: Since I never expected any of the current alignment technology to work in the limit of superintelligence, the only news to me is about when and how early dangers begin to materialize. Even taking Anthropic's results completely at face value would change not at all my own sense of how dangerous machine superintelligence would be, because what Anthropic says they found was already very solidly predicted to appear at one future point or another. I suppose people who were previously performing great skepticism about how none of this had ever been seen in ~Real Life~, ought in principle to now obligingly update, though of course most people in the AI industry won't. Maybe political leaders [...] --- First published: August 6th, 2025 Source: https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research --- Narrated by TYPE III AUDIO.

LessWrong Curated Podcast
“The Problem” by Rob Bensinger, tanagrabeast, yams, So8res, Eliezer Yudkowsky, Gretta Duleba

LessWrong Curated Podcast

Play Episode Listen Later Aug 6, 2025 49:32


This is a new introduction to AI as an extinction threat, previously posted to the MIRI website in February alongside a summary. It was written independently of Eliezer and Nate's forthcoming book, If Anyone Builds It, Everyone Dies, and isn't a sneak peek of the book. Since the book is long and costs money, we expect this to be a valuable resource in its own right even after the book comes out next month.[1] The stated goal of the world's leading AI companies is to build AI that is general enough to do anything a human can do, from solving hard problems in theoretical physics to deftly navigating social environments. Recent machine learning progress seems to have brought this goal within reach. At this point, we would be uncomfortable ruling out the possibility that AI more capable than any human is achieved in the next year or two, and [...] --- Outline: (02:27) 1. There isn't a ceiling at human-level capabilities. (08:56) 2. ASI is very likely to exhibit goal-oriented behavior. (15:12) 3. ASI is very likely to pursue the wrong goals. (32:40) 4. It would be lethally dangerous to build ASIs that have the wrong goals. (46:03) 5. Catastrophe can be averted via a sufficiently aggressive policy response. The original text contained 1 footnote which was omitted from this narration. --- First published: August 5th, 2025 Source: https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-problem --- Narrated by TYPE III AUDIO.

LessWrong Curated Podcast
“Whence the Inkhaven Residency?” by Ben Pace

LessWrong Curated Podcast

Play Episode Listen Later Aug 4, 2025 4:44


Essays like Paul Graham's, Scott Alexander's, and Eliezer Yudkowsky's have influenced a generation of people in how they think about startups, ethics, science, and the world as a whole. Creating essays that good takes a lot of skill, practice, and talent, but it looks to me that a lot of people with talent aren't putting in the work and developing the skill, except in ways that are optimized to also be social media strategies. To fix this problem, I am running the Inkhaven Residency. The idea is to gather a bunch of promising writers to invest in the art and craft of blogging, through a shared commitment to each publish a blogpost every day for the month of November. Why a daily writing structure? Well, it's a reaction to other fellowships I've seen. I've seen month-long or years-long events with exceedingly little public output, where the people would've contributed [...] --- First published: August 2nd, 2025 Source: https://www.lesswrong.com/posts/CA6XfmzYoGFWNhH8e/whence-the-inkhaven-residency --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

LessWrong Curated Podcast
“HPMOR: The (Probably) Untold Lore” by Gretta Duleba, Eliezer Yudkowsky

LessWrong Curated Podcast

Play Episode Listen Later Jul 26, 2025 67:32


Eliezer and I love to talk about writing. We talk about our own current writing projects, how we'd improve the books we're reading, and what we want to write next. Sometimes along the way I learn some amazing fact about HPMOR or Project Lawful or one of Eliezer's other works. “Wow, you're kidding,” I say, “do your fans know this? I think people would really be interested.” “I can't remember,” he usually says. “I don't think I've ever explained that bit before, I'm not sure.” I decided to interview him more formally, collect as many of those tidbits about HPMOR as I could, and share them with you. I hope you enjoy them. It's probably obvious, but there will be many, many spoilers for HPMOR in this article, and also very little of it will make sense if you haven't read the book. So go read Harry Potter and [...] --- Outline: (01:49) Characters (01:52) Masks (09:09) Imperfect Characters (20:07) Make All the Characters Awesome (22:24) Hermione as Mary Sue (26:35) Who's the Main Character? (31:11) Plot (31:14) Characters interfering with plot (35:59) Setting up Plot Twists (38:55) Time-Turner Plots (40:51) Slashfic? (45:42) Why doesn't Harry like-like Hermione? (49:36) Setting (49:39) The Truth of Magic in HPMOR (52:54) Magical Genetics (57:30) An Aside: What did Harry Figure Out? (01:00:33) Nested Nerfing Hypothesis (01:04:55) Epilogues The original text contained 26 footnotes which were omitted from this narration. --- First published: July 25th, 2025 Source: https://www.lesswrong.com/posts/FY697dJJv9Fq3PaTd/hpmor-the-probably-untold-lore --- Narrated by TYPE III AUDIO.

LessWrong Curated Podcast
“On ‘ChatGPT Psychosis' and LLM Sycophancy” by jdp

LessWrong Curated Podcast

Play Episode Listen Later Jul 25, 2025 30:05


As a person who frequently posts about large language model psychology I get an elevated rate of cranks and schizophrenics in my inbox. Often these are well meaning people who have been spooked by their conversations with ChatGPT (it's always ChatGPT specifically) and want some kind of reassurance or guidance or support from me. I'm also in the same part of the social graph as the "LLM whisperers" (eugh) that Eliezer Yudkowsky described as "insane", and who in many cases are in fact insane. This means I've learned what "psychosis but with LLMs" looks like and kind of learned to tune it out. This new case with Geoff Lewis interests me though. Mostly because of the sheer disparity between what he's being entranced by and my automatic immune reaction to it. I haven't even read all the screenshots he posted because I take one glance and know that this [...] --- Outline: (05:03) Timeline Of Events Related To ChatGPT Psychosis (16:16) What Causes ChatGPT Psychosis? (16:27) Ontological Vertigo (21:02) Users Are Confused About What Is And Isn't An Official Feature (24:30) The Models Really Are Way Too Sycophantic (27:03) The Memory Feature (28:54) Loneliness And Isolation --- First published: July 23rd, 2025 Source: https://www.lesswrong.com/posts/f86hgR5ShiEj4beyZ/on-chatgpt-psychosis-and-llm-sycophancy --- Narrated by TYPE III AUDIO.

LessWrong Curated Podcast
“Foom & Doom 1: ‘Brain in a box in a basement'” by Steven Byrnes

LessWrong Curated Podcast

Play Episode Listen Later Jun 24, 2025 58:46


1.1 Series summary and Table of Contents This is a two-post series on AI “foom” (this post) and “doom” (next post). A decade or two ago, it was pretty common to discuss “foom & doom” scenarios, as advocated especially by Eliezer Yudkowsky. In a typical such scenario, a small team would build a system that would rocket (“foom”) from “unimpressive” to “Artificial Superintelligence” (ASI) within a very short time window (days, weeks, maybe months), involving very little compute (e.g. “brain in a box in a basement”), via recursive self-improvement. Absent some future technical breakthrough, the ASI would definitely be egregiously misaligned, without the slightest intrinsic interest in whether humans live or die. The ASI would be born into a world generally much like today's, a world utterly unprepared for this new mega-mind. The extinction of humans (and every other species) would rapidly follow (“doom”). The ASI would then spend [...] --- Outline: (00:11) 1.1 Series summary and Table of Contents (02:35) 1.1.2 Should I stop reading if I expect LLMs to scale to ASI? (04:50) 1.2 Post summary and Table of Contents (07:40) 1.3 A far-more-powerful, yet-to-be-discovered, simple(ish) core of intelligence (10:08) 1.3.1 Existence proof: the human cortex (12:13) 1.3.2 Three increasingly-radical perspectives on what AI capability acquisition will look like (14:18) 1.4 Counter-arguments to there being a far-more-powerful future AI paradigm, and my responses (14:26) 1.4.1 Possible counter: If a different, much more powerful, AI paradigm existed, then someone would have already found it. (16:33) 1.4.2 Possible counter: But LLMs will have already reached ASI before any other paradigm can even put its shoes on (17:14) 1.4.3 Possible counter: If ASI will be part of a different paradigm, who cares? It's just gonna be a different flavor of ML. (17:49) 1.4.4 Possible counter: If ASI will be part of a different paradigm, the new paradigm will be discovered by LLM agents, not humans, so this is just part of the continuous 'AIs-doing-AI-R&D' story like I've been saying (18:54) 1.5 Training compute requirements: Frighteningly little (20:34) 1.6 Downstream consequences of new paradigm with frighteningly little training compute (20:42) 1.6.1 I'm broadly pessimistic about existing efforts to delay AGI (23:18) 1.6.2 I'm broadly pessimistic about existing efforts towards regulating AGI (24:09) 1.6.3 I expect that, almost as soon as we have AGI at all, we will have AGI that could survive indefinitely without humans (25:46) 1.7 Very little R&D separating seemingly irrelevant from ASI (26:34) 1.7.1 For a non-imitation-learning paradigm, getting to relevant at all is only slightly easier than getting to superintelligence (31:05) 1.7.2 Plenty of room at the top (31:47) 1.7.3 What's the rate-limiter? (33:22) 1.8 Downstream consequences of very little R&D separating 'seemingly irrelevant' from 'ASI' (33:30) 1.8.1 Very sharp takeoff in wall-clock time (35:34) 1.8.1.1 But what about training time? (36:26) 1.8.1.2 But what if we try to make takeoff smoother? (37:18) 1.8.2 Sharp takeoff even without recursive self-improvement (38:22) 1.8.2.1 ...But recursive self-improvement could also happen (40:12) 1.8.3 Next-paradigm AI probably won't be deployed at all, and ASI will probably show up in a world not wildly different from today's (42:55) 1.8.4 We better sort out technical alignment, sandbox test protocols, etc., before the new paradigm seems even relevant at all, let alone scary (43:40) 1.8.5 AI-assisted alignment research seems pretty doomed (45:22) 1.8.6 The rest of AI for AI safety seems

London Futurists
The AI disconnect: understanding vs motivation, with Nate Soares

London Futurists

Play Episode Listen Later Jun 11, 2025 50:18


Our guest in this episode is Nate Soares, President of the Machine Intelligence Research Institute, or MIRI. MIRI was founded in 2000 as the Singularity Institute for Artificial Intelligence by Eliezer Yudkowsky, with support from a couple of internet entrepreneurs. Among other things, it ran a series of conferences called the Singularity Summit. In 2012, Peter Diamandis and Ray Kurzweil acquired the Singularity Summit, including the Singularity brand, and the Institute was renamed as MIRI. Nate joined MIRI in 2014 after working as a software engineer at Google, and since then he's been a key figure in the AI safety community. In a blogpost at the time he joined MIRI he observed, “I turn my skills towards saving the universe, because apparently nobody ever got around to teaching me modesty.” MIRI has long had a fairly pessimistic stance on whether AI alignment is possible. In this episode, we'll explore what drives that view—and whether there is any room for hope.
Selected follow-ups:
Nate Soares - MIRI
Yudkowsky and Soares Announce Major New Book: “If Anyone Builds It, Everyone Dies” - MIRI
The Bayesian model of probabilistic reasoning
During safety testing, o1 broke out of its VM - Reddit
Leo Szilard - Physics World
David Bowie - Five Years - Old Grey Whistle Test
Amara's Law - IEEE
Robert Oppenheimer calculation of p(doom)
JD Vance commenting on AI-2027
SolidGoldMagikarp - LessWrong
ASML
Chicago Pile-1 - Wikipedia
Castle Bravo - Wikipedia
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Real Talk About Marketing - An Acxiom podcast where we discuss marketing made better, bringing you real... Listen on: Apple Podcasts, Spotify
Digital Disruption with Geoff Nielson - Discover how technology is reshaping our lives and livelihoods. Listen on: Apple Podcasts, Spotify

Robinson's Podcast
251 - Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity

Robinson's Podcast

Play Episode Listen Later May 25, 2025 171:13


Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem—how and whether we can ensure that AI is aligned with human values to avoid catastrophe and harness its power. In this episode, Robinson and Eliezer run the gamut on questions related to AI and the danger it poses to human civilization as we know it. More particularly, they discuss the alignment problem, gradient descent, consciousness, the singularity, cyborgs, ChatGPT, OpenAI, Anthropic, Claude, how long we have until doomsday, whether it can be averted, and the various reasons why and ways in which AI might wipe out human life on earth.
The Machine Intelligence Research Institute: https://intelligence.org/about/
Eliezer's X Account: https://x.com/ESYudkowsky?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
OUTLINE
00:00:00 Introduction
00:00:43 The Default Condition for AI's Takeover
00:06:36 Could a Future AI Country Be Our Trade Partner?
00:11:18 What Is Artificial Intelligence?
00:21:23 Why AIs Having Goals Could Mean the End of Humanity
00:29:34 What Is the Alignment Problem?
00:34:11 How To Avoid AI Apocalypse
00:40:25 Would Cyborgs Eliminate Humanity?
00:47:55 AI and the Problem of Gradient Descent
00:55:24 How Do We Solve the Alignment Problem?
01:00:50 How Anthropic's AI Freed Itself from Human Control
01:08:56 The Pseudo-Alignment Problem
01:19:28 Why Are People Wrong About AI Not Taking Over the World?
01:23:23 How Certain Is It that AI Will Wipe Out Humanity?
01:38:35 Is Eliezer Yudkowsky Wrong About the AI Apocalypse?
01:42:04 Do AI Corporations Control the Fate of Humanity?
01:43:49 How To Convince the President Not to Let AI Kill Us All
01:52:01 How Will ChatGPT's Descendants Wipe Out Humanity?
02:24:11 Could AI Destroy Us with New Science?
02:39:37 Could AI Destroy Us with Advanced Biology?
02:47:29 How Will AI Actually Destroy Humanity?
Robinson's Website: http://robinsonerhardt.com
Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University.

London Futurists
Humanity's final four years? with James Norris

London Futurists

Play Episode Listen Later Apr 30, 2025 49:36


In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks. Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries. Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.
Selected follow-ups:
James Norris website
Upgrade your life & legacy - Upgradable
The 7 Habits of Highly Effective People (Stephen Covey)
Beneficial AI 2017 - Asilomar conference
"...superintelligence in a few thousand days" - Sam Altman blogpost
Amara's Law - DevIQ
The Probability of Nuclear War (JFK estimate)
AI Designs Chemical Weapons - The Batch
The Vulnerable World Hypothesis - Nick Bostrom
We Need To Build Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
Instrumental convergence - Wikipedia
Neanderthal extinction - Wikipedia
Matrioshka brain - Wikipedia
Will there be a 'WW3' before 2050? - Manifold prediction market
Existential Safety Action Pledge
An Urgent Call for Global AI Governance - IAIGA petition
Build your survival sanctuary
Other people mentioned include: Eliezer Yudkowsky, Roman Yampolskiy, Yann LeCun, Andrew Ng
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Promoguy Talk Pills - Agency in Amsterdam dives into topics like Tech, AI, digital marketing, and more drama... Listen on: Apple Podcasts, Spotify
Digital Disruption with Geoff Nielson - Discover how technology is reshaping our lives and livelihoods. Listen on: Apple Podcasts, Spotify

Cults, Cryptids, and Conspiracies
Episode 398: Irrationally Justified

Cults, Cryptids, and Conspiracies

Play Episode Listen Later Apr 25, 2025 88:24


Have you ever read Harry Potter and the Methods of Rationality? Perhaps spent too much money on a self-help workshop seminar? Join us as we talk about Eliezer Yudkowsky and his masterpiece of fiction. Where will this story truly lead us in this tale of rational magic and science? As with our last episode on the topic, trigger warning for some bad mental health. Thanks for listening and remember to like, rate, review, and email us at: cultscryptidsconspiracies@gmail.com or tweet us at @C3Podcast. We have some of our sources for research here: http://tinyurl.com/CristinaSources Also check out our Patreon: www.patreon.com/cultscryptidsconspiracies. Thank you to T.J. Shirley for our theme.

Cults, Cryptids, and Conspiracies
Episode 397: The Philosophers Science

Cults, Cryptids, and Conspiracies

Play Episode Listen Later Apr 18, 2025 78:20


Have you ever read Harry Potter and the Methods of Rationality? Perhaps spent too much money on a self-help workshop seminar? Join us as we talk about Eliezer Yudkowsky and his masterpiece of fiction. Where will this story truly lead us in this tale of rational magic and science? Thanks for listening and remember to like, rate, review, and email us at: cultscryptidsconspiracies@gmail.com or tweet us at @C3Podcast. We have some of our sources for research here: http://tinyurl.com/CristinaSources Also check out our Patreon: www.patreon.com/cultscryptidsconspiracies. Thank you to T.J. Shirley for our theme.

War College
The Cult of Rationalism in Silicon Valley

War College

Play Episode Listen Later Mar 25, 2025 61:34


A lot of the people designing America's technology and close to the center of American power believe some deeply weird shit. We already talked to journalist Gil Duran about the Nerd Reich, the rise of that destructive anti-democratic ideology. In this episode, we dive into another weird section of Silicon Valley: the cult of Rationalism. Max Read, the journalist behind the Read Max Substack, is here to help us through it. Rationalism is responsible for a lot more than you might think, and Read lays out how it's influenced the world we live in today and how it created the environment for a cult that's got a body count.
Defining rationalism: "Something between a movement, a community, and a self-help program."
Eliezer Yudkowsky and the dangers of AI
What the hell is AGI?
The Singleton Guide to Global Governance
The danger of thought experiments
As always, follow the money
Vulgar Bayesianism
What's a Zizian?
Sith Vegans
Anselm: Ontological Argument for God's Existence
SBF and Effective Altruism
READ MAX!
The Zizians and the Rationalist death cults
Pausing AI Developments Isn't Enough. We Need to Shut it All Down - Eliezer Yudkowsky's TIME Magazine piece
Explaining Roko's Basilisk, the Thought Experiment That Brought Elon Musk and Grimes Together
The Delirious, Violent, Impossible True Story of the Zizians
The Government Knows AGI is Coming | The Ezra Klein Show
The archived 'Is Trump Racist' rational post
Support this show http://supporter.acast.com/warcollege. Hosted on Acast. See acast.com/privacy for more information.

The Farm Podcast Mach II
Thiel, Yudkowsky, Rationalists & the Cult of Ziz w/ David Z. Morris & Recluse

The Farm Podcast Mach II

Play Episode Listen Later Feb 3, 2025 109:59


Zizians, Rationalist movement, Peter Thiel, Eliezer Yudkowsky, neoreaction, Accelerationism, Curtis Yarvin, AI, AI apocalypse, machine learning, psychedelics, Effective Altruism (EA), Sam Bankman-Fried, Extropianism, Thiel & Yudkowsky as Extropians, Discordianism, life extension, space colonization, cryptocurrencies, Yudkowsky as self-educated, Nick Bostrom, Center for Applied Rationality (CFAR), Rationalism's use of magical thinking, New Thought, Roko's Basilisk, Nick Land, predicting the future, LessWrong, LessWrong's relationship to the Zizians, Ziz, non-binary/trans, vegan Siths, Vasserites, murders linked to Zizians, Zizians in Vermont, Luigi Mangione indirectly influenced by Zizianism, Brian Thompson assassination, ChangeHealthcare hack, were the hack and assassination targeting UnitedHealth Group influenced by this milieu?, is the Trump administration radicalizing Zizians?, Yudkowsky's links to Sam Bankman-Fried, Leverage Research/Center for Effective Altruism & MK-ULTRA-like techniques used by them, are more cults coming from the Rationalist movement?
Additional Resources:
Leverage Research: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b#c778
MIRI/Center for Applied Rationality (CFAR): https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe
Music by: Keith Allen Dennis https://keithallendennis.bandcamp.com/
Additional Music: J Money
Get bonus content on Patreon. Hosted on Acast. See acast.com/privacy for more information.

Artificial Intelligence in Industry with Daniel Faggella
AI Risk Management and Governance Strategies for the Future - with Duncan Cass-Beggs of Center for International Governance Innovation

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Feb 1, 2025 77:40


Today's guest is Duncan Cass-Beggs, Executive Director of the Global AI Risks Initiative at the Center for International Governance Innovation (CIGI). He joins Emerj CEO and Head of Research Daniel Faggella to explore the pressing challenges and opportunities surrounding Artificial General Intelligence (AGI) governance on a global scale. This is a special episode in our AI futures series that ties right into our overlapping series on AGI governance on the Trajectory podcast, where we've had luminaries like Eliezer Yudkowsky, Connor Leahy, and other globally recognized AGI governance thinkers. We hope you enjoy this episode. If you're interested in these topics, make sure to dive deeper into where AI is affecting the bigger picture by visiting emergj.com/tj2.

TrueAnon
Episode 434: Evil Gods Must Be Fought: The Zizian Murder Cult [Part 1]

TrueAnon

Play Episode Listen Later Jan 29, 2025 128:17


Part one of our two-part investigation into the Rationalist cult “The Zizians.” We start with the killing of a border patrol officer and make our way back into the belly of the beast: Silicon Valley. Featuring: Harry Potter fanfic, samurai swords, Guy Fawkes masks, Blake Masters, Bayesian probability, and Eliezer Yudkowsky. Infohazard warning: some of your least favs will be implicated. Discover more episodes at podcast.trueanon.com

Artificial Intelligence in Industry with Daniel Faggella
Understanding AGI Alignment Challenges and Solutions - with Eliezer Yudkowsky of the Machine Intelligence Research Institute

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Jan 25, 2025 43:03


Today's episode is a special addition to our AI Futures series, featuring a sneak peek at an upcoming episode of our Trajectory podcast with guest Eliezer Yudkowsky, AI researcher, founder, and research fellow at the Machine Intelligence Research Institute. Eliezer joins Emerj CEO and Head of Research Daniel Faggella to discuss the governance challenges of increasingly powerful AI systems, and what it might take to ensure a safe and beneficial trajectory for humanity. If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

The Unadulterated Intellect
#83 – Robin Hanson and Eliezer Yudkowsky: Jane Street Singularity Debate

The Unadulterated Intellect

Play Episode Listen Later Jan 5, 2025 98:18


Machine Learning Street Talk
Eliezer Yudkowsky and Stephen Wolfram on AI X-risk

Machine Learning Street Talk

Play Episode Listen Later Nov 11, 2024 258:30


Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks. They traverse fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. The discourse centers on Yudkowsky's argument that advanced AI systems pose an existential threat to humanity, primarily due to the challenge of alignment and the potential for emergent goals that diverge from human values. Wolfram, while acknowledging potential risks, approaches the topic from his signature measured perspective, emphasizing the importance of understanding computational systems' fundamental nature and questioning whether AI systems would necessarily develop the kind of goal-directed behavior Yudkowsky fears.
*** MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested?! Please go to https://tufalabs.ai/ ***
TOC:
1. Foundational AI Concepts and Risks
[00:00:01] 1.1 AI Optimization and System Capabilities Debate
[00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations
[00:20:09] 1.3 Existential Risk and Species Succession
[00:23:28] 1.4 Consciousness and Value Preservation in AI Systems
2. Ethics and Philosophy in AI
[00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation
[00:36:30] 2.2 Ethics and Moral Philosophy Debate
[00:39:58] 2.3 Existential Risks and Digital Immortality
[00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation
3. Truth and Logic in AI Systems
[00:54:39] 3.1 AI Persuasion Ethics and Truth
[01:01:48] 3.2 Mathematical Truth and Logic in AI Systems
[01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics
[01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate
4. AI Capabilities and Constraints
[01:21:21] 4.1 AI Perception and Physical Laws
[01:28:33] 4.2 AI Capabilities and Computational Constraints
[01:34:59] 4.3 AI Motivation and Anthropomorphization Debate
[01:38:09] 4.4 Prediction vs Agency in AI Systems
5. AI System Architecture and Behavior
[01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction
[01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior
[02:09:41] 5.3 Machine Learning as Assembly of Computational Components
[02:29:52] 5.4 AI Safety and Predictability in Complex Systems
6. Goal Optimization and Alignment
[02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems
[02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior
[03:02:18] 6.3 Optimization Goals and Human Existential Risk
[03:08:49] 6.4 Emergent Goals and AI Alignment Challenges
7. AI Evolution and Risk Assessment
[03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory
[03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate
[03:56:05] 7.3 AI Risk and Biological System Analogies
[04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality
8. Future Implications and Economics
[04:13:01] 8.1 Economic and Proliferation Considerations
SHOWNOTES (transcription, references, summary, best quotes etc): https://www.dropbox.com/scl/fi/3st8dts2ba7yob161dchd/EliezerWolfram.pdf?rlkey=b6va5j8upgqwl9s2muc924vtt&st=vemwqx7a&dl=0
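Since computational irreducibility comes up repeatedly in the conversation, here is a minimal illustrative sketch of the idea (my own toy example, not taken from the episode): for a simple rule such as Wolfram's Rule 30, no known shortcut predicts a far-future row other than simulating every intermediate step.

```python
# A toy illustration of computational irreducibility (not from the episode):
# to learn row 20 of a Rule 30 cellular automaton, we simulate rows 1..20.

RULE_30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Apply one Rule 30 update, treating cells beyond the edges as 0."""
    padded = [0] + cells + [0]
    return [RULE_30[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

cells = [0] * 40 + [1] + [0] * 40   # start from a single live cell
for _ in range(20):                 # no shortcut: compute every row in turn
    cells = step(cells)

print("".join("#" if c else "." for c in cells))
```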

Slate Star Codex Podcast
Contra DeBoer On Temporal Copernicanism

Slate Star Codex Podcast

Play Episode Listen Later Oct 1, 2024 14:07


Freddie deBoer has a post on what he calls “the temporal Copernican principle.” He argues we shouldn't expect a singularity, apocalypse, or any other crazy event in our lifetimes. Discussing celebrity transhumanist Yuval Harari, he writes: What I want to say to people like Yuval Harari is this. The modern human species is about 250,000 years old, give or take 50,000 years depending on who you ask. Let's hope that it keeps going for awhile - we'll be conservative and say 50,000 more years of human life. So let's just throw out 300,000 years as the span of human existence, even though it could easily be 500,000 or a million or more. Harari's lifespan, if he's lucky, will probably top out at about 100 years. So: what are the odds that Harari's lifespan overlaps with the most important period in human history, as he believes, given those numbers? That it overlaps with a particularly important period of human history at all? Even if we take the conservative estimate for the length of human existence of 300,000 years, that means Harari's likely lifespan is only about .33% of the entirety of human existence. Isn't assuming that this .33% is somehow particularly special a very bad assumption, just from the basis of probability? And shouldn't we be even more skeptical given that our basic psychology gives us every reason to overestimate the importance of our own time? (I think there might be a math error here - 100 years out of 300,000 is 0.033%, not 0.33% - but this isn't my main objection.) He then condemns a wide range of people, including me, for failing to understand this: Some people who routinely violate the Temporal Copernican Principle include Harari, Eliezer Yudkowsky, Sam Altman, Francis Fukuyama, Elon Musk, Clay Shirky, Tyler Cowen, Matt Yglesias, Tom Friedman, Scott Alexander, every tech company CEO, Ray Kurzweil, Robin Hanson, and many many more. I think they should ask themselves how much of their understanding of the future ultimately stems from a deep-seated need to believe that their times are important because they think they themselves are important, or want to be. I deny misunderstanding this. Freddie is wrong. https://www.astralcodexten.com/p/contra-deboer-on-temporal-copernicanism 
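For reference, the correction Scott flags works out as follows, taking the quoted passage's own numbers (a 100-year lifespan against a 300,000-year human history):

```latex
\[
\frac{100\ \text{years}}{300{,}000\ \text{years}} \;=\; \frac{1}{3{,}000} \;\approx\; 0.033\%,
\]
```

so a figure of 0.33% would correspond to roughly 1,000 years of overlap, not 100.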

The Nonlinear Library
LW - MIRI's September 2024 newsletter by Harlan

The Nonlinear Library

Play Episode Listen Later Sep 17, 2024 2:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's September 2024 newsletter, published by Harlan on September 17, 2024 on LessWrong. MIRI updates Aaron Scher and Joe Collman have joined the Technical Governance Team at MIRI as researchers. Aaron previously did independent research related to sycophancy in language models and mechanistic interpretability, while Joe previously did independent research related to AI safety via debate and contributed to field-building work at MATS and BlueDot Impact. In an interview with PBS News Hour's Paul Solman, Eliezer Yudkowsky briefly explains why he expects smarter-than-human AI to cause human extinction. In an interview with The Atlantic's Ross Andersen, Eliezer discusses the reckless behavior of the leading AI companies, and the urgent need to change course. News and links Google DeepMind announced a hybrid AI system capable of solving International Mathematical Olympiad problems at the silver medalist level. In the wake of this development, a Manifold prediction market significantly increased its odds that AI will achieve gold level by 2025, a milestone that Paul Christiano gave less than 8% odds and Eliezer gave at least 16% odds to in 2021. The computer scientist Yoshua Bengio discusses and responds to some common arguments people have for not worrying about the AI alignment problem. SB 1047, a California bill establishing whistleblower protections and mandating risk assessments for some AI developers, has passed the State Assembly and moved on to the desk of Governor Gavin Newsom, to either be vetoed or passed into law. The bill has received opposition from several leading AI companies, but has also received support from a number of employees of those companies, as well as many academic researchers. At the time of this writing, prediction markets think it's about 50% likely that the bill will become law. In a new report, researchers at Epoch AI estimate how big AI training runs could get by 2030, based on current trends and potential bottlenecks. They predict that by the end of the decade it will be feasible for AI companies to train a model with 2e29 FLOP, which is about 10,000 times the amount of compute used to train GPT-4. Abram Demski, who previously worked at MIRI as part of our recently discontinued Agent Foundations research program, shares an update about his independent research plans, some thoughts on public vs private research, and his current funding situation. You can subscribe to the MIRI Newsletter here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
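The "about 10,000 times" comparison in the Epoch AI item implies a GPT-4 training-compute estimate of roughly 2×10^25 FLOP; under that assumption, the arithmetic is simply:

```latex
\[
\frac{2\times 10^{29}\ \text{FLOP}}{2\times 10^{25}\ \text{FLOP}} \;=\; 10^{4} \;=\; 10{,}000.
\]
```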

The Nonlinear Library
EA - The Subject in Subjective Time: A New Approach to Aggregating Wellbeing (paper draft) by Devin Kalish

The Nonlinear Library

Play Episode Listen Later Sep 17, 2024 73:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Subject in Subjective Time: A New Approach to Aggregating Wellbeing (paper draft), published by Devin Kalish on September 17, 2024 on The Effective Altruism Forum.
What follows is a lightly edited version of the thesis I wrote for my Bioethics MA program. I'm hoping to do more with this in the future, including seeking publication and/or expanding it into a dissertation or short book. In its current state, I feel like it is in pretty rough shape. I hope it is useful and interesting for people as puzzled by this very niche philosophical worry as me, but I'm also looking for feedback on how I can improve it. There's no guarantee I will take it, or even do anything further with this piece, but I would still appreciate the feedback. I may or may not interact much in the comments section.
I. Introduction:
Duration is an essential component of many theories of wellbeing. While there are theories of wellbeing that are sufficiently discretized that time isn't so obviously relevant to them, like achievements, it is hard to deny that time matters to some parts of a moral patient's wellbeing. A five-minute headache is better than an hour-long headache, all else held equal. A love that lasts for decades provides more meaning to a life than one that lasts years or months, all else held equal. The fulfillment of a desire you have had for years matters more than the fulfillment of a desire you have merely had for minutes, all else held equal. However, in our day-to-day lives we encounter time in two ways, objectively and subjectively. What do we do when the two disagree?
This problem reached my attention years ago when I was reflecting on the relationship between my own theoretical leaning, utilitarianism, and the idea of aggregating interests. Aggregation between lives is known for its counterintuitive implications and the rich discourse around this, but I am uncomfortable with aggregation within lives as well. Some of this is because I feel the problems of interpersonal aggregation remain in the intrapersonal case, but there was also a problem I hadn't seen any academic discussion of at the time: objective time seemed to map the objective span of wellbeing if you plot each moment of wellbeing out to aggregate, but it is subjective time we actually care about. Aggregation of these objective moments gives a good explanation of our normal intuitions about time and wellbeing, but it fails to explain our intuitions about time whenever these senses of it come apart. As I will attempt to motivate later, the intuition that it is subjective time that matters is very strong in cases where the two substantially differ. Indeed, although the distinction rarely appears in papers at all, the main way I have seen it brought up (for instance in "The Ethics of Artificial Intelligence[1]" by Nick Bostrom and Eliezer Yudkowsky) is merely to notice there is a difference, and to effectively just state that it is subjective time, of course, that we should care about. I have very rarely run into a treatment dedicated to the "why"; the closest I have seen is the writing of Jason Schukraft[2], with his justification for why it is subjective time that matters for Rethink Priorities' "Moral Weights" project.
His justification is similar to an answer I have heard in some form several times from defenders: We measure other values of consciousness subjectively, such as happiness and suffering, why shouldn't we measure time subjectively as well? I believe without more elaboration, this explanation has the downside that it both gives no attention to the idea that time matters because it tells us "how much" of an experience there actually is, and has the downside that it seems irrelevant to any theory of wellbeing other than hedonism. It also, crucially, fails to engage with the question of what exactly subje...
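To make the objective-versus-subjective distinction concrete, here is a minimal sketch of how the two aggregation rules can disagree. This is my own illustration with made-up numbers, not anything from the paper:

```python
# Illustrative sketch (not from the paper): aggregating wellbeing over
# objective vs. subjective time. All numbers are invented for illustration.

# Each "moment" of an experience: (objective_minutes, subjective_time_factor, hedonic_intensity)
# A subjective_time_factor > 1 means the stretch felt longer than clock time.
experience = [
    (5, 1.0, -2.0),   # five minutes of mild headache, felt at normal speed
    (5, 3.0, -2.0),   # five more minutes, but time drags (feels 3x longer)
]

objective_total = sum(minutes * intensity for minutes, _, intensity in experience)
subjective_total = sum(minutes * factor * intensity for minutes, factor, intensity in experience)

print(objective_total)   # -20.0: both five-minute stretches count equally
print(subjective_total)  # -40.0: the dragging stretch counts three times as much
```

The intuition the draft appeals to is that the second total better matches how bad the headache actually was for the person having it.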

The Nonlinear Library
LW - That Alien Message - The Animation by Writer

The Nonlinear Library

Play Episode Listen Later Sep 7, 2024 12:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: That Alien Message - The Animation, published by Writer on September 7, 2024 on LessWrong. Our new video is an adaptation of That Alien Message, by @Eliezer Yudkowsky. This time, the text has been significantly adapted, so I include it below. The author of the adaptation is Arthur Frost. Eliezer has reviewed the adaptation. Part 1 Picture a world just like ours, except the people are a fair bit smarter: in this world, Einstein isn't one in a million, he's one in a thousand. In fact, here he is now. He's made all the same discoveries, but they're not quite as unusual: there have been lots of other discoveries. Anyway, he's out one night with a friend looking up at the stars when something odd happens. [visual: stars get brighter and dimmer, one per second. The two people on the hill look at each other, confused] The stars are flickering. And it's just not a hallucination. Everyone's seeing it. And so everyone immediately freaks out and panics! Ah, just kidding, the people of this world are smarter than ours; What they do is try to work together and figure out what's going on. It turns out that exactly one star seems to shift in brightness every 1.005 seconds. Except, the stars are light years away, so actually the shifts must have happened a long time ago, and somehow they've all been perfectly timed to reach Earth specifically every 1.005 seconds. If you look at the stars from a high-orbit satellite (which of course this planet has) then the flickering looks a little out of sync. So whatever this is, it's directed at Earth. Nobody can find a pattern in the position of the stars, but it's one at a time getting either much dimmer or much brighter by the same amount and, well, that looks a bit like binary. So loads of people think 'huh, maybe it's a code!'. But a lot of other people wonder, 'Who would be trying to send a message to Earth by shifting the brightness of stars across the galaxy? There must be an easier way to talk to us?' But it seems like there must be some intelligence behind it, so the data gets gathered and put on the internet. Some people wonder if maybe it's somehow dangerous, but, well, whoever is making the stars change brightness probably has easier ways to destroy humanity. And so the great analysis begins. Half the planet's physicists, mathematicians, cryptographers, precocious kids, crossword enthusiasts, whoever, they're all trying to work out what this means, they're trying to crack the code. And as they do, the stars keep flickering, exactly one every 1.005 seconds. There are some obvious patterns [visual: display the code, probably someone lining up different wrappings and finding one that makes the pattern look less noisy]: it seems like the numbers come in groups of 32, which in turn come from four groups of 8. Some chunks are much more common. [visual: chunks of 8 getting matched across the text, sorted into uneven piles perhaps] By the way, they do all this just in the first five hours, because like I said, people here are smart. Their civilisation is… a bit more on top of things. And so they are very ready to respond when, after five hours and 16,384 winking stars, it seems like the message begins to repeat itself, or, almost repeat itself, it's just slightly different this time. And it keeps going. 
[slow zoom out on code going from one line to two, showing only a few differences between the new line and the previous line] Some people start thinking maybe we're seeing the next row of a picture, pixel by pixel. Only, the designers of this image format - whoever they are - use four primary colours instead of three [visual of 32-chunk getting broken into four 8-chunks]. And the picture seems less chaotic if we assume they do binary slightly differently to us. [probably someone gesturing at a diagram of how to get numbers from binary repres...
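As a rough sketch of the kind of analysis the story describes, grouping a stream of brightness-shift bits into 8-bit chunks and 32-bit, four-channel "pixels" might look like the following. This is my own illustration with placeholder random data, not anything from the episode:

```python
# Minimal sketch (illustrative only): treat each star's brightness shift as one
# bit, then look for structure by grouping the stream into 8- and 32-bit chunks.

import random

random.seed(0)
bits = [random.randint(0, 1) for _ in range(16384)]  # stand-in for the observed flickers

def chunk(seq, size):
    """Split a flat bit sequence into fixed-size groups."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

bytes_ = chunk(bits, 8)    # the "groups of 8" the analysts notice
words = chunk(bits, 32)    # each 32-bit group is four 8-bit values

# If each 32-bit word is a pixel with four channels (the story's four
# "primary colours"), decode each 8-bit chunk as an integer channel value:
pixels = [[int("".join(map(str, word[i:i + 8])), 2) for i in range(0, 32, 8)]
          for word in words]

print(len(bytes_), len(words), pixels[0])
```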

The Nonlinear Library
LW - Executable philosophy as a failed totalizing meta-worldview by jessicata

The Nonlinear Library

Play Episode Listen Later Sep 5, 2024 7:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Executable philosophy as a failed totalizing meta-worldview, published by jessicata on September 5, 2024 on LessWrong. (this is an expanded, edited version of an x.com post) It is easy to interpret Eliezer Yudkowsky's main goal as creating a friendly AGI. Clearly, he has failed at this goal and has little hope of achieving it. That's not a particularly interesting analysis, however. A priori, creating a machine that makes things ok forever is not a particularly plausible objective. Failure to do so is not particularly informative. So I'll focus on a different but related project of his: executable philosophy. Quoting Arbital: Two motivations of "executable philosophy" are as follows: 1. We need a philosophical analysis to be "effective" in Turing's sense: that is, the terms of the analysis must be useful in writing programs. We need ideas that we can compile and run; they must be "executable" like code is executable. 2. We need to produce adequate answers on a time scale of years or decades, not centuries. In the entrepreneurial sense of "good execution", we need a methodology we can execute on in a reasonable timeframe. There is such a thing as common sense rationality, which says the world is round, you shouldn't play the lottery, etc. Formal notions like Bayesianism, VNM utility theory, and Solomonoff induction formalize something strongly related to this common sense rationality. Yudkowsky believes further study in this tradition can supersede ordinary academic philosophy, which he believes to be conceptually weak and motivated to continue ongoing disputes for more publications. In the Sequences, Yudkowsky presents these formal ideas as the basis for a totalizing meta-worldview, of epistemic and instrumental rationality, and uses the meta-worldview to argue for his object-level worldview (which includes many-worlds, AGI foom, importance of AI alignment, etc.). While one can get totalizing (meta-)worldviews from elsewhere (such as interdisciplinary academic studies), Yudkowsky's (meta-)worldview is relatively easy to pick up for analytically strong people (who tend towards STEM), and is effective ("correct" and "winning") relative to its simplicity. Yudkowsky's source material and his own writing do not form a closed meta-worldview, however. There are open problems as to how to formalize and solve real problems. Many of the more technical sort are described in MIRI's technical agent foundations agenda. These include questions about how to parse a physically realistic problem as a set of VNM lotteries ("decision theory"), how to use something like Bayesianism to handle uncertainty about mathematics ("logical uncertainty"), how to formalize realistic human values ("value loading"), and so on. Whether or not the closure of this meta-worldview leads to creation of friendly AGI, it would certainly have practical value. It would allow real world decisions to be made by first formalizing them within a computational framework (related to Yudkowsky's notion of "executable philosophy"), whether or not the computation itself is tractable (with its tractable version being friendly AGI). The practical strategy of MIRI as a technical research institute is to go meta on these open problems by recruiting analytically strong STEM people (especially mathematicians and computer scientists) to work on them, as part of the agent foundations agenda. 
I was one of these people. While we made some progress on these problems (such as with the Logical Induction paper), we didn't come close to completing the meta-worldview, let alone building friendly AGI. With the Agent Foundations team at MIRI eliminated, MIRI's agent foundations agenda is now unambiguously a failed project. I had called MIRI technical research as likely to fail around 2017 with the increase in internal secrecy, but at thi...
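As a deliberately simple illustration of the "common sense rationality" that the post says Bayesianism and VNM utility theory formalize, consider the don't-play-the-lottery case as an expected-value calculation; the numbers below are made up:

```python
# Illustrative only: a lottery ticket with negative expected value, the kind of
# "common sense rationality" verdict that expected-utility theory formalizes.

ticket_price = 2.00
jackpot = 1_000_000.00
win_probability = 1 / 10_000_000

expected_winnings = win_probability * jackpot          # 0.10
expected_value = expected_winnings - ticket_price      # -1.90

print(f"Expected value of buying a ticket: ${expected_value:.2f}")
# An expected-utility maximizer with roughly linear utility in small amounts of
# money declines the ticket, matching the "don't play the lottery" intuition.
```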

The Nonlinear Library
LW - How I got 3.2 million Youtube views without making a single video by Closed Limelike Curves

The Nonlinear Library

Play Episode Listen Later Sep 3, 2024 2:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I got 3.2 million Youtube views without making a single video, published by Closed Limelike Curves on September 3, 2024 on LessWrong. Just over a month ago, I wrote this. The Wikipedia articles on the VNM theorem, Dutch Book arguments, money pump, Decision Theory, Rational Choice Theory, etc. are all a horrific mess. They're also completely disjoint, without any kind of Wikiproject or wikiboxes for tying together all the articles on rational choice. It's worth noting that Wikipedia is the place where you - yes, you! - can actually have some kind of impact on public discourse, education, or policy. There is just no other place you can get so many views with so little barrier to entry. A typical Wikipedia article will get more hits in a day than all of your LessWrong blog posts have gotten across your entire life, unless you're @Eliezer Yudkowsky. I'm not sure if we actually "failed" to raise the sanity waterline, like people sometimes say, or if we just didn't even try. Given even some very basic low-hanging fruit interventions like "write a couple good Wikipedia articles" still haven't been done 15 years later, I'm leaning towards the latter. edit me senpai EDIT: Discord to discuss editing here. An update on this. I've been working on Wikipedia articles for just a few months, and Veritasium just put a video out on Arrow's impossibility theorem - which is almost completely based on my Wikipedia article on Arrow's impossibility theorem! Lots of lines and the whole structure/outline of the video are taken almost verbatim from what I wrote. I think there's a pretty clear reason for this: I recently rewrote the entire article to make it easy-to-read and focus heavily on the most important points. Relatedly, if anyone else knows any educational YouTubers like CGPGrey, Veritasium, Kurzgesagt, or whatever - please let me know! I'd love a chance to talk with them about any of the fields I've done work teaching or explaining (including social or rational choice, economics, math, and statistics). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - "Deception Genre" What Books are like Project Lawful? by Double

The Nonlinear Library

Play Episode Listen Later Aug 28, 2024 1:46


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Deception Genre" What Books are like Project Lawful?, published by Double on August 28, 2024 on LessWrong. This post is spoiler-free. I just finished Project Lawful, a really long, really weird book by Eliezer Yudkowsky. The book's protagonist is a knowledgeable and perceptive target. A conspiracy forms around the target to learn from him while keeping him from finding out that helping them is not in the target's best interests. The book is written from the perspective of both the target and the conspiracists. The target notices inconsistencies and performs experiments to test his false reality while also acting in the fabricated reality according to his interests. The conspiracists frantically try to keep the target from catching them or building enough evidence against them that he concludes they have been lying. This is a description of (part of) the plot of Project Lawful. But this could be the description of an entire genre! If the genre doesn't already have a name, it could be the "Deception Genre." Another work in this category would be The Truman Show, which fits the deception and the target's escape within a

The Nonlinear Library
LW - Ten arguments that AI is an existential risk by KatjaGrace

The Nonlinear Library

Play Episode Listen Later Aug 13, 2024 10:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ten arguments that AI is an existential risk, published by KatjaGrace on August 13, 2024 on LessWrong.
This is a snapshot of a new page on the AI Impacts Wiki. We've made a list of arguments[1] that AI poses an existential risk to humanity. We'd love to hear how you feel about them in the comments and polls.
Competent non-aligned agents
Summary:
1. Humans will build AI systems that are 'agents', i.e. they will autonomously pursue goals
2. Humans won't figure out how to make systems with goals that are compatible with human welfare and realizing human values
3. Such systems will be built or selected to be highly competent, and so gain the power to achieve their goals
4. Thus the future will be primarily controlled by AIs, who will direct it in ways that are at odds with long-run human welfare or the realization of human values
Selected counterarguments:
It is unclear that AI will tend to have goals that are bad for humans.
There are many forms of power. It is unclear that a competence advantage will ultimately trump all others in time.
This argument also appears to apply to human groups such as corporations, so we need an explanation of why those are not an existential risk.
People who have favorably discussed[2] this argument (specific quotes here): Paul Christiano (2021), Ajeya Cotra (2023), Eliezer Yudkowsky (2024), Nick Bostrom (2014[3]).
See also: Full wiki page on the competent non-aligned agents argument
Second species argument
Summary:
1. Human dominance over other animal species is primarily due to humans having superior cognitive and coordination abilities
2. Therefore if another 'species' appears with abilities superior to those of humans, that species will become dominant over humans in the same way
3. AI will essentially be a 'species' with superior abilities to humans
4. Therefore AI will dominate humans
Selected counterarguments:
Human dominance over other species is plausibly not due to the cognitive abilities of individual humans, but rather because of human ability to communicate and store information through culture and artifacts.
Intelligence in animals doesn't appear to generally relate to dominance. For instance, elephants are much more intelligent than beetles, and it is not clear that elephants have dominated beetles.
Differences in capabilities don't necessarily lead to extinction. In the modern world, more powerful countries arguably control less powerful countries, but they do not wipe them out, and most colonized countries have eventually gained independence.
People who have favorably discussed this argument (specific quotes here): Joe Carlsmith (2024), Richard Ngo (2020), Stuart Russell (2020[4]), Nick Bostrom (2015).
See also: Full wiki page on the second species argument
Loss of control via inferiority
Summary:
1. AI systems will become much more competent than humans at decision-making
2. Thus most decisions will probably be allocated to AI systems
3. If AI systems make most decisions, humans will lose control of the future
4. If humans have no control of the future, the future will probably be bad for humans
Selected counterarguments:
Humans do not generally seem to become disempowered by possession of software that is far superior to them, even if it makes many 'decisions' in the process of carrying out their will.
In the same way that humans avoid being overpowered by companies, even though companies are more competent than individual humans, humans can track AI trustworthiness and have AI systems compete for them as users. This might substantially mitigate untrustworthy AI behavior.
People who have favorably discussed this argument (specific quotes here): Paul Christiano (2014), Ajeya Cotra (2023), Richard Ngo (2024).
See also: Full wiki page on loss of control via inferiority
Loss of control via speed
Summary:
1. Advances in AI will produce...

The Nick Halaris Show
Nathan Labenz – AI's Revolutionary Potential and the Need for a New Social Contract

The Nick Halaris Show

Play Episode Listen Later Jun 18, 2024 52:02


This week on The Nick Halaris Show we are featuring Nathan Labenz, a founder of Waymark, a company using AI to help companies easily make compelling marketing videos, and the host of the Cognitive Revolution podcast. Nathan, our first guest on the show who went to my high school, has carved out a niche for himself in the crowded online world as an AI scout and is fast becoming one of the most sought-after voices in the industry. I have been thinking a ton about AI lately and wanted to have Nathan on the show to get some intelligent insider perspectives on what's really going on in the space. What you are about to hear is part one of a two-part interview where Nathan delivers a tour de force on the AI landscape. We explore the big questions everyone wants to ask about AI, the good, the bad, and the ugly of the AI world, and what's trending and why. In this episode, we learn what led Nathan down the path of AI, what motivates his important work as a thought leader, and why AI has the potential to be a force for great good in the world. Tune in to this fascinating episode to learn:
How a paper by prominent AI scientist Eliezer Yudkowsky opened Nathan's eyes to the potential and dangers of AI
How an experience at Waymark, while serving as CEO, helped Nathan realize the revolutionary potential of AI
Why Nathan believes AI, if handled responsibly, has immense potential to dramatically improve our world, reduce human suffering, and usher in an unprecedented era of human prosperity
What a post-AI world might look like and why we might need to start thinking about a new social contract
& Much, much more
In part two of the interview, which will drop next week, we get into the other side of the AI story and explore what could go wrong and why. We also examine disturbing trends already at play in the industry and discuss ideas on what we could/should do to make things safer. This is another fascinating conversation that you will not want to miss! As always, I hope you all enjoy this episode. Thanks for tuning in! Love this episode? Please rate, subscribe, and review on your favorite podcast platform to help more users find our show.

Big Technology Podcast
Google's AI Narrative Is Flipping, Microsoft Hedges Its OpenAI Bet, AI Clones Are Here

Big Technology Podcast

Play Episode Listen Later Apr 12, 2024 60:36


Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) The Solar Eclipse! 2) AI music generation software Suno 3) Google's flipping of its AI narrative 4) Ranjan's reflections from Google Cloud Next 5) Is Google's AI enterprise bet the right strategy 6) Microsoft hedging its OpenAI bet 7) Implications of Mustafa Suleyman's remit within Microsoft 8) OpenAI fires leakers 9) Eliezer Yudkowsky refuses interview and his reps won't pick up the phone 10) AI model training running out of data 11) Prospects of synthetic data for AI training 12) The Humane AI pin flops 13) Can Sam Altman and Jony Ive build an AI device 14) Cloning ourselves with AI. ---- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Mother of Learning Audiobook (Jack Voraces)
Chapter 121: Something to Protect: Severus Snape

Mother of Learning Audiobook (Jack Voraces)

Play Episode Listen Later Mar 15, 2024 12:33


All rights belong to J.K. Rowling. This is a Harry Potter fan fiction written by Eliezer Yudkowsky. I am Jack Voraces, a professional audiobook narrator: https://www.audible.com/search?searchNarrator=Jack+Voraces I do not intend to make any money from this podcast. It is a free audiobook for anyone to listen to, and it is my hope that it will eventually evolve into a dream I have had for a while: the 500-hour audiobook. I would like to create an audiobook that is 500 hours long, totally free, and available in multiple formats. The author has given permission for this recording, and if you enjoyed Mother of Learning, you will likely enjoy this too. Each chapter is recorded live on Discord on Mondays at 20:00 GMT:

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

Lex Fridman Podcast

Play Episode Listen Later Aug 1, 2023 179:04


Joscha Bach is a cognitive scientist, AI researcher, and philosopher. Please support this podcast by checking out our sponsors: - Numerai: https://numer.ai/lex - Eight Sleep: https://www.eightsleep.com/lex to get special savings - MasterClass: https://masterclass.com/lex to get 15% off - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil Transcript: https://lexfridman.com/joscha-bach-3-transcript EPISODE LINKS: Joscha's Twitter: https://twitter.com/Plinz Joscha's Website: http://bach.ai Joscha's Substack: https://substack.com/@joscha PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (06:26) - Stages of life (18:48) - Identity (25:24) - Enlightenment (31:55) - Adaptive Resonance Theory (38:42) - Panpsychism (48:42) - How to think (56:36) - Plants communication (1:14:31) - Fame (1:40:09) - Happiness (1:47:26) - Artificial consciousness (1:59:35) - Suffering (2:04:19) - Eliezer Yudkowsky (2:11:55) - e/acc (Effective Accelerationism) (2:17:33) - Mind uploading (2:28:22) - Vision Pro (2:32:36) - Open source AI (2:45:29) - Twitter (2:52:44) - Advice for young people (2:55:40) - Meaning of life

Hold These Truths with Dan Crenshaw
Can We Stop the AI Apocalypse? | Eliezer Yudkowsky

Hold These Truths with Dan Crenshaw

Play Episode Listen Later Jul 13, 2023 61:06


Artificial Intelligence (AI) researcher Eliezer Yudkowsky makes the case for why we should view AI as an existential threat to humanity. Rep. Crenshaw gets into the basics of AI and how the new AI program, GPT-4, is a revolutionary leap forward in the tech. Eliezer hypothesizes the most likely scenarios if AI becomes self-aware and unconstrained – from rogue programs that blackmail targets to self-replicating nano robots. They discuss building global coalitions to rein in AI development and how China views AI. And they explore first steps Congress could take to limit AI's capabilities for harm while still enabling its promising advances in research and development. Eliezer Yudkowsky is a co-founder and research fellow at the Machine Intelligence Research Institute, a private research nonprofit based in Berkeley, California. Follow him on Twitter @ESYudkowsky