American blogger, writer, and artificial intelligence researcher
We read 'If Anyone Builds It, Everyone Dies' by Yudkowsky & Soares (so you don't have to). Watch this video at: https://youtu.be/IHTunMmNado?si=4RvOZ5hyUAE7NzSo
We Read This (So You Don't Have To) | Nov 16, 2025
We read If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky & Nate Soares so you don't have to… but if you've ever wondered how building superhuman artificial intelligence could turn into humanity's last mistake, this episode might forever change how you think about technology, risk, and the future of intelligence. In this episode, we break down Yudkowsky & Soares's alarming thesis: when we build AI that out-thinks us, the default isn't friendly cooperation — it's misalignment, hidden objectives, and catastrophic loss of control. Modern AI isn't programmed in the old way; it's grown, resulting in systems whose goals we cannot fully predict or steer. The authors argue that unless humanity halts or radically redesigns the trajectory of large-scale AI development, we may be writing our own extinction notice.
This week we talk about floods, wildfires, and reinsurance companies. We also discuss the COP meetings, government capture, and air pollution.

Recommended Book: If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares

Transcript

The urban area that contains India's capital city, New Delhi, called the National Capital Territory of Delhi, has a population of around 34.7 million people. That makes it the most populous city in the country, and one of the most populous cities in the world. Despite the many leaps India has made over the past few decades, in terms of economic growth and overall quality of life for residents, New Delhi continues to have absolutely abysmal air quality—experts at India's top research hospital have called New Delhi's air "severe and life-threatening," and the level of toxic pollutants in the air, from cars and factories and from the crop-waste burning conducted by nearby farmers, can reach 20 times the recommended level for safe breathing.

In mid-November 2025, the problem became so bad that the government told half its workers to work from home, because of the dangers represented by the air, and in the hope that doing so would remove some of the cars from the road and, thus, some of the pollution being generated in the area. Trucks spraying mist along busy roads and pedestrian centers, using what are called anti-smog guns, help: the mist keeps some of the pollution from cars from billowing into the air and becoming part of the regional problem rather than an ultra-localized one, and pushes the pollutants that would otherwise get into people's lungs down to the ground. The use of these mist-sprayers has been controversial, though, as there are accusations that they're primarily deployed near air-quality monitoring stations, and that those in charge put them there to make pollution levels appear lower than they actually are, manipulating the stats so that their failure to improve practical air quality isn't as evident.

And in other regional news, just southeast across the Bay of Bengal, the Indonesian government, as of the day I'm recording this, is searching for the hundreds of people who are still missing following a period of unusually heavy rains. These rains have sparked floods and triggered mudslides that have blocked roads, damaged bridges, and forced the evacuation of entire villages.
More than 300,000 people have been evacuated as of last weekend, and more rain is forecast for the coming days. The death toll of this round of heavy rainfall—the heaviest in the region in years—has already surpassed 440 people in Indonesia, with another 160 and 90 deaths reported by the governments of Thailand and Vietnam, respectively, from the same weather system. In Thailand, more than two million people were displaced by flooding, and the government had to deploy military assets, including helicopters launched from an aircraft carrier, to help rescue people from the roofs of buildings across nine provinces. In neighboring Malaysia, tens of thousands of people were forced into shelters as the same storm system barreled through, and Sri Lanka was hit with a cyclone that left at least 193 dead and more than 200 missing, marking one of the country's worst weather disasters in recent years.

What I'd like to talk about today is the climatic moment we're at, as weather patterns change and, in many cases, amplify, and how these sorts of extreme disasters are also causing economic impacts that are less reported on, but perhaps even more important for future policy shifts.

—

The UN Conference of the Parties, or COP, meetings are high-level climate change conferences that have typically been attended by representatives from most governments each year, and where these representatives angle for various climate-related rules and policies, while also bragging about individual nations' climate-related accomplishments. In recent years, such policies have been less ambitious than in previous ones, in part because preventing a 1.5 degrees C increase in average global temperatures, the goal behind the initial surge of interest, is almost certainly no longer an option; climate models were somewhat accurate, but as with many things climate-related, seem to have actually been a little too optimistic. Things got worse faster than anticipated, and now the general consensus is that we'll continue to shoot past 1.5 degrees C over the baseline level semi-regularly, and within a few years or a decade, that'll become our new normal. The ambition of the 2015 Paris Agreement is thus no longer attainable.
We don't yet have a new rallying cry that's generally acceptable to all those governments and their respective interests, and one of the world's biggest emitters, the United States, is more or less absent at new climate-related meetings, except to periodically show up and lobby for lower renewables goals and for subsidies and policies that favor the fossil fuel industry.

The increase in both the number and potency of climate-influenced natural disasters is partly the result of this failure to act, and to act forcefully and rapidly enough, by governments and by all the emitting industries they're meant to regulate. The cost of such disasters is skyrocketing—insured losses alone are expected to reach around $145 billion in 2025, 6% higher than in 2024—and their human impact is booming as well, including deaths and injuries, but also the number of people being displaced, in some cases permanently, by these disasters. But none of that seems to move the needle much in some areas, in the face of entrenched interests, like the aforementioned fossil fuel industry, and the seeming inability of politicians in some nations to think and act beyond the needs of their next election cycle.

That said, progress is still being made on many of these issues; it's just slower than it needs to be to reach previously set goals, like that now-defunct 1.5 degrees C ceiling. Most nations, beyond petro-states like Russia and those with fossil fuel industry-captured governments like the current US administration, have been deploying renewables, especially solar panels, at extraordinary rates. This is primarily the result of China's breakneck deployment of solar, which has offset a lot of energy growth that would otherwise have come from dirty sources like coal in the country, and which has led to a booming overproduction of panels that's allowed them to sell those panels cheaply overseas. Consequently, many nations, like Pakistan and a growing number of countries across Sub-Saharan Africa, have been buying as many cheap panels as they can afford and bypassing otherwise dirty and unreliable energy grids, creating arrays of microgrids instead.

Despite those notable absences, then, solar energy infrastructure installations have been increasing at staggering rates, and the first half of 2025 saw the highest rate of capacity additions yet—though China is still installing twice as much solar as the rest of the world combined at this point. That's still valuable, as China still has a lot of dirty energy generation to offset as its energy needs increase, but more widely disseminated growth is generally seen as better in the long term, so the expansion into other parts of the world is arguably the bigger win here.

The economics of renewables may, at some point, convince even the skeptics, and those who are politically rather than practically opposed to the concept of renewables, that it's time to change teams.
Already, conservative parts of the US, like Texas, are becoming renewables boom-towns, quietly deploying wind and solar because they're often the best, cheapest, most resilient options, even as their politicians rail against them in public and vote for more fossil fuel subsidies.

And it may be economics that eventually serves as the next nudge, or forceful shove, in this movement toward renewables, as we're reaching a point at which real estate and the global construction industry, not to mention the larger financial system that underpins them and pretty much all other large-scale economic activities, are being not just impacted but rattled at their roots by climate change.

In early November 2025, real estate listing company Zillow, the biggest such company in the US, stopped showing extreme weather risks for more than a million home sale listings on its site. It started showing these risk ratings in 2024, using data from a risk-modeling company called First Street, and the idea was to give potential buyers a sense of how at-risk a property they were considering buying might be when it comes to wildfires, floods, poor air quality, and other climate and pollution-related issues. Real estate agents hated these ratings, though, in part because there was no way to protest and change them, but also because, well, they might have an expensive coastal property listed that now showed potential buyers it was flood prone, if not today, then in a couple of years. It might also show that a beautiful mountain property is uninsurable because of the risk of wildfire damage.

A good heuristic for understanding the impact of global climate change is not to think in terms of warming, though that's often part of it, but rather in terms of more radical temperature and weather swings. That means areas that were previously at little or no risk of flooding might suddenly be very much at risk of absolutely devastating floods. And the same is true of storms, wildfires, and heat so intense people die just from being outside for an hour, and in which components of one's house might fry or melt.

This move by Zillow, the appearance and then removal of these risk scores, happened at the same time global insurers are warning that they may have to pull out of more areas, because it's simply no longer possible for them to do business in places where these sorts of devastating weather events are happening so regularly, but often unpredictably, and with such intensity—and where the landscapes, ecologies, and homes are not made to withstand such things; all that stuff came of age or was built in another climate reality, so many such assets are simply not made for what's happening now, and what's coming.

This is of course an issue for those who already own such assets—homes in newly flood-prone areas, for instance—because it means that if there's a flood and a homeowner loses their home, they may not be able to rebuild or get a payout that allows them to buy another home elsewhere. That leaves some of these assets stranded, and it leaves a lot of people with a huge chunk of their total resources permanently at risk, unable to move it or to recoup most of their investment and shift that money elsewhere.
It also means entire industries could be at risk, especially banks and other financial institutions that provide loans to those who have purchased homes and other assets in such regions. An inability to get private insurance also means governments will increasingly be on the hook for issuing insurance of last resort to customers, which often costs more, but also, as we've seen with flood insurance in the US, means the government tends to lose a lot of money when increasingly common, major disasters occur on its soil.

This isn't just a US thing, though; far from it. Swiss Re and Munich Re, the global reinsurers that provide insurance for insurance companies, and whose presence and participation in the market allow the insurance world to function, recently said that uninsurable areas are growing around the world right now, and that lacking some kind of fundamental change to address the climate paradigm shift, we could see a period of devastation in which rebuilding is unlikely or impossible, and a resultant period in which there's little or no new construction, because no one wants to own a home or factory or other asset that cannot be insured—it's just not a smart investment.

This isn't just a threat to individual home owners, then; it's potentially a threat to the whole of the global financial system, and every person and business attached to it, which in turn is a threat to global governance and the way property and economics work. There's a chance the worst-possible outcomes here can still be avoided, but with each new increase in global average temperature, the impacts become worse and less predictable, and the economics of simply making, protecting, and owning things become less and less favorable.

Show Notes
https://www.nytimes.com/2025/11/30/climate/zillow-climate-risk-scores-homes.html
https://www.nytimes.com/2025/11/30/climate/climate-change-disinformation.html
https://www.nytimes.com/2025/11/30/world/asia/india-delhi-pollution.html
https://www.nytimes.com/2025/11/30/world/asia/flooding-indonesia-thailand-southeast-asia.html
https://www.bbc.com/news/articles/c5y9ejley9do
https://www.theguardian.com/environment/2025/nov/22/cop30-deal-inches-closer-to-end-of-fossil-fuel-era-after-bitter-standoff
https://theconversation.com/the-world-lost-the-climate-gamble-now-it-faces-a-dangerous-new-reality-270392
https://theconversation.com/earth-is-already-shooting-through-the-1-5-c-global-warming-limit-two-major-studies-show-249133
https://www.404media.co/americas-polarization-has-become-the-worlds-side-hustle/
https://www.cnbc.com/2025/08/08/climate-insurers-are-worried-the-world-could-soon-become-uninsurable-.html
https://www.imd.org/ibyimd/sustainability/climate-change-the-emergence-of-uninsurable-areas-businesses-must-act-now-or-pay-later/
https://www.jec.senate.gov/public/index.cfm/democrats/2024/12/climate-risks-present-a-significant-threat-to-the-u-s-insurance-and-housing-markets
https://www.weforum.org/stories/2025/04/financial-system-warning-climate-nature-stories-this-week/
https://www.weforum.org/stories/2025/05/costs-climate-disasters-145-billion-nature-climate-news/
https://arstechnica.com/science/2025/11/solars-growth-in-us-almost-enough-to-offset-rising-energy-use/
https://ember-energy.org/latest-updates/global-solar-installations-surge-64-in-first-half-of-2025/

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
In Locust Radio 31, Tish and Adam read poems from the forthcoming issue, discuss Trumpism and art in Venice, and try to unpack the editorial for Locust Review 13. Tish and Adam also listen to the song “Dortn” by Sister Wife Sex Strike. Discussed in this episode: Alma Allen; Suvrat Arora, “People are using AI to talk to God,” BBC (October 18, 2025); Editorial, “Lucky 13,” Locust Review 13 (Winter 2025/2026); Emily M. Bender, Alex Hanna, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want (Harper Collins, 2025); Timothy Binkley, “Autonomous Creations: Birthing Intelligent Agents,” Leonardo 31.5 (1998), 333-336; Ben Davis, “What is the Mysterious New Group Behind Trump's Venice Biennale Pick?,” Artnet (November 25, 2025); Benoit Dillet, “Technofascism and the AI Stage of Late Capitalism,” Blog of the APA (American Philosophical Association), (March 10, 2025); Marcel Duchamp, Fountain (1917); Robert M. Geraci, "Apocalyptic AI: Religion and the Promise of Artificial Intelligence,” Journal of the American Academy of Religion 76.1 (March 2008), 138-166; Jesse Clyde Howard; Holly Lewis, “Towards AI Realism: Opening Notes on Machine Learning and Our Collective Future,” Spectre (June 7, 2024); Alex Press, “US Unions Take on Artificial Intelligence,” Jacobin (November 8, 2024); Michael A. Rosenthal, “Benjamin's Wager on Modernity: Gambling and the Arcades Project,” The Germanic Review: Literature, Culture, Theory 87.3 (2012), 261-278; Victor Tangermann, “AI Now Claiming to Be God,” Futurism (September 16, 2025); Adam Turl, “All is Concealed: CAM's Direct Drive,” West End Word (October 5, 2016); Adam Turl, “Selling Out,” Locust Review 13 (Winter 2025/2026); Tish Turl, “Elegy for the Faithful Mapmakers,” Locust Review 13 (Winter 2025/2026); Gareth Watkins, “AI: The New Aesthetics of Fascism,” New Socialist (February 9, 2025); Luke Winkie, “Lost Vegas,” Slate (November 18, 2025); Eliezer Yudkowsky, Nate Soares, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (Little, Brown and Company, 2025)
Nate Soares is president of the Machine Intelligence Research Institute and co-author, with Eliezer Yudkowsky, of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. He has been working in the field for over a decade, after previous experience at Microsoft and Google. In this week's conversation, Yascha Mounk and Nate Soares explore why AI is harder to control than traditional software, what happens when machines develop motivations, and at what point humans can no longer contain the potential catastrophe. If you have not yet signed up for our podcast, please do so now by following this link on your phone. Email: leonora.barclay@persuasion.community Podcast production by Jack Shields and Leonora Barclay. Connect with us! Spotify | Apple | Google X: @Yascha_Mounk & @JoinPersuasion YouTube: Yascha Mounk, Persuasion LinkedIn: Persuasion Community Learn more about your ad choices. Visit megaphone.fm/adchoices
A.I. is becoming smarter without much help from humans, and that should worry us all. Nate Soares, president of Machine Intelligence Research Institute (MIRI), joins host Krys Boyd to discuss what happens when A.I. brain power surpasses what humans are capable of, why we don't have the technology yet to understand what we're building, and why everything will be just fine … until it isn't. His book, co-written with Eliezer Yudkowsky, is “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.” Learn about your ad choices: dovetail.prx.org/ad-choices
In Rutger Bregman's first book, Utopia for Realists, the historian describes a rosy vision of the future – one with 15-hour work weeks, universal basic income and massive wealth redistribution. It's a vision that, in the age of artificial intelligence, now seems increasingly possible. But utopia is far from guaranteed. Many experts predict that AI will also lead to mass job loss, the development of new bioweapons and, potentially, the extinction of our species. So if you're building a technology that could either save the world or destroy it – is that a moral pursuit? These kinds of thorny questions are at the heart of Bregman's latest book, Moral Ambition. In a sweeping conversation that takes us from the invention of the birth control pill to the British Abolitionist movement, Bregman and I discuss what a good life looks like (spoiler: he thinks the death of work might not be such a bad thing) – and whether AI can help get us there.

Mentioned:
Moral Ambition, by Rutger Bregman
Utopia for Realists, by Rutger Bregman
If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, by Eliezer Yudkowsky and Nate Soares

Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail. Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Sam Bankman-Fried, FTX scandal, TESCREAL, Effective Altruism (EA), Utilitarianism, AGI, AI as a scam, Will MacAskill, Machine Intelligence Research Institute (MIRI), the Center for Applied Rationality (CFAR), Leverage Research, Peter Thiel, Eliezer Yudkowsky, Longtermism, Barbara Fried, Stanford, Lewis Terman, gifted kids, Fred Terman, eugenics, Anthropic, Rationalism, human potential movement, Landmark/est, MK-ULTRA, Zizians, cults
David's book
Music by: Keith Allen Dennis
https://keithallendennis.bandcamp.com/
Hosted on Acast. See acast.com/privacy for more information.
Humanity's attempts to achieve artificial superintelligence will be our downfall, according to If Anyone Builds It, Everyone Dies. That's the new book out by AI experts Nate Soares and Eliezer Yudkowsky. And while their provocation may feel extreme in this moment, when AI slop abounds and the media is hyping a bubble on the verge of bursting, Soares is so convinced of his argument that he's calling for a complete stop to AI development. Today on the show, Nate and Maria ask Soares how he came to this conclusion and what everyone else is missing. For more from Nate and Maria, subscribe to their newsletters: The Leap from Maria Konnikova and Silver Bulletin from Nate Silver. See omnystudio.com/listener for privacy information.
Artificial intelligence has leapt from speculative theory to everyday tool with astonishing speed, promising breakthroughs in science, medicine, and the ways we learn, live, and work. But to some of its earliest researchers, the race toward superintelligence represents not progress but an existential threat, one that could end humanity as we know it. Eliezer Yudkowsky and Nate Soares, authors of If Anyone Builds It, Everyone Dies, join Oren to debate their claim that pursuing AI will end in human extinction. During the conversation, a skeptical Oren pushes them on whether meaningful safeguards are possible, what a realistic boundary between risk and progress might look like, and how society should judge the costs of stopping against the consequences of carrying on.
Today Razib talks to Nate Soares, the president of the Machine Intelligence Research Institute (MIRI). He joined MIRI in 2014 and has since authored many of its core technical agendas, including foundational documents like Agent Foundations for Aligning Superintelligence with Human Interests. Prior to his work in AI research, Soares worked as a software engineer at Google. He holds a B.S. in computer science and economics from George Washington University. On this episode they discuss his new book, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, co-authored with Eliezer Yudkowsky. Soares and Yudkowsky make the stark case that the race to build superintelligent AI is a "suicide race" for humanity. Razib and Soares discuss how AI systems are "grown" rather than deliberately engineered, making them fundamentally opaque and uncontrollable. They explore a concrete extinction scenario and explain why even minimally misaligned goals could lead to human annihilation. Soares urges immediate cooperative action to prevent such a worst-case outcome.
A deadly 6.3 magnitude earthquake strikes Afghanistan, A man is charged with 11 attempted murders in a U.K. train attack, Iran vows to rebuild its nuclear sites, A Mexican mayor is shot and killed during the Day of the Dead festival, The BBC claims China threatened a U.K. university over Uyghur research, President Trump instructs the Pentagon to prepare for “possible action” in Nigeria, Trump's planned nuclear tests will reportedly be 'noncritical explosions', 21 states are among those suing the Trump administration over student loan forgiveness rules, France rejects a wealth tax and approves a holding company levy, Eliezer Yudkowsky critiques OpenAI's stated goals, and the LA Dodgers are MLB champions after an epic World Series. Sources: www.verity.news
This is a link post. Eliezer Yudkowsky did not exactly suggest that you should eat bear fat covered with honey and sprinkled with salt flakes. What he actually said was that an alien, looking from the outside at evolution, would predict that you would want to eat bear fat covered with honey and sprinkled with salt flakes. Still, I decided to buy a jar of bear fat online, and make a treat for the people at Inkhaven. It was surprisingly good. My post discusses how that happened, and a bit about the implications for Eliezer's thesis. Let me know if you want to try some; I can prepare some for you if you happen to be at Lighthaven before we run out of bear fat, and before I leave toward the end of November.
---
First published: November 4th, 2025
Source: https://www.lesswrong.com/posts/2pKiXR6X7wdt8eFX5/i-ate-bear-fat-with-honey-and-salt-flakes-to-prove-a-point
Linkpost URL: https://signoregalilei.com/2025/11/03/i-ate-bear-fat-to-prove-a-point/
---
Narrated by TYPE III AUDIO.
Techno-philosopher Eliezer Yudkowsky recently went on Ezra Klein's podcast to argue that if we continue on our path toward superintelligent AI, these machines will destroy humanity. In this episode, Cal responds to Yudkowsky's argument point by point, concluding with a more general claim that this style of discussion suffers from what he calls "the philosopher's fallacy," and is distracting us from real problems with AI that are actually afflicting us right now. He then answers listener questions about AI, responds to listener comments from an earlier AI episode, and ends by discussing Alpha schools, which claim to use AI to 2x the speed of education. Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here's the link: bit.ly/3U3sTvo
Video from today's episode: youtube.com/calnewportmedia
Deep Dive: The Case Against Superintelligence [0:01]
How should students think about "AI Literacy"? [1:06:35]
Did AI blackmail an engineer to not turn it off? [1:09:06]
Can I use AI to mask my laziness? [1:12:31]
COMMENTS: Cal reads LM comments [1:16:58]
CALL: Clarification on Lincoln Protocol [1:21:36]
CAL REACTS: Are AI-Powered Schools the Future? [1:24:46]
Links:
Buy Cal's latest book, "Slow Productivity," at calnewport.com/slow
Get a signed copy of Cal's "Slow Productivity" at peoplesbooktakoma.com/event/cal-newport/
Cal's monthly book directory: bramses.notion.site/059db2641def4a88988b4d2cee4657ba
youtube.com/watch?v=2Nn0-kAE5c0
alpha.school/the-program/
astralcodexten.com/i/166959786/part-three-how-alpha-works-part
Thanks to our Sponsors: byloftie.com (Use code "DEEP20"), expressvpn.com/deep, shopify.com/deep, vanta.com/deepquestions
Thanks to Jesse Miller for production, Jay Kerstens for the intro music, and Mark Miles for mastering. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Show Notes: Steve recounts his senior year at Harvard, and how he was torn between pursuing acting and philosophy. He graduated with a dual degree in philosophy and math but also found time to act in theater, participating in 20 shows.

A Love of Theater and a Move to London
Steve explains how the lack of a theater major at Harvard allowed him to explore acting more freely than he might have at a university with a theater major. He touches on his parents' concerns about his career prospects if he pursued acting, and his decision to apply to both acting and philosophy graduate schools. Steve discusses his rejection from all graduate schools and why he decided to move to London with friends Evan Cohn and Brad Rouse. He talks about his experience in London.

Europe on $20 a Day
Steve details his backpacking trip through Europe on a $20-a-day budget, staying with friends from Harvard and high school. He mentions a job opportunity in Japan through the Japanese Ministry of Education and describes his three-year stint in Japan, working as a native English speaker for the Japanese Ministry of Education and being immersed in Japanese culture. He shares his experiences of living in the countryside and reflects on the impact of living in a different culture, learning some Japanese, and making Japanese friends. He discusses the personal growth and self-reflection that came from his time in Japan, including his first steps off the "achiever track."

On to Philosophy Graduate School
When Steve returned to the U.S., he decided to apply to philosophy graduate schools again, this time with more success. He enrolled at the University of Michigan. However, he was miserable during grad school, which led him to seek therapy. Steve credits therapy with helping him make better choices in life. He discusses the competitive and prestigious nature of the Michigan philosophy department and the challenges of finishing his dissertation. He touches on the narrow and competitive aspects of pursuing a career in philosophy and shares his experience of finishing his dissertation and the support he received from a good co-thesis advisor.

Kalamazoo College and Improv
Steve describes his postdoc experience at Kalamazoo College, where he continued his improv hobby and formed his own improv group. He mentions a mockumentary-style improv movie called Comic Evangelists that premiered at the AFI Film Festival. Steve then moved to Buffalo and Niagara University, and reflects on the challenges of adjusting to a non-research job. He discusses his continued therapy in Buffalo and the struggle with both societal and his own expectations of professional status; however, with the help of a friend, he came to the realization that he had "made it" in his current circumstances. Steve describes his acting career in Buffalo, including roles in Shakespeare in the Park and collaborating with a classmate, Ian Lithgow.

A Specialty in Philosophy of Science
Steve shares his personal life, including meeting his wife in 2009 and starting a family. He explains his specialty in philosophy of science, focusing on the math and precise questions in analytic philosophy. He discusses his early interest in AI and computational epistemology, including the ethics of AI and the superintelligence worry. Steve describes his involvement in a group that discusses the moral status of digital minds and AI alignment.

Aligning AI with Human Interests
Steve reflects on the challenges of aligning AI with human interests and the potential existential risks of advanced AI.
He shares his concerns about the future of AI and the potential for AI to have moral status. He touches on the superintelligence concern and the challenges of aligning AI with human goals. Steve mentions the work of Eliezer Yudkowsky and the importance of governance and alignment in AI development. He reflects on the broader implications of AI for humanity and the need for careful consideration of long-term risks.

Harvard Reflections
Steve mentions Math 45 and how it kicked his butt, and notes that his core classes included jazz, an acting class, and clown improv with Jay Nichols.

Timestamps:
01:43: Dilemma Between Acting and Philosophy
03:44: Rejection and Move to London
07:09: Life in Japan and Cultural Insights
12:19: Return to Academia and Grad School Challenges
20:09: Therapy and Personal Growth
22:06: Transition to Buffalo and Philosophy Career
26:54: Philosophy of Science and AI Ethics
33:20: Future Concerns and AI Predictions
55:17: Reflections on Career and Personal Growth

Links:
Steve's Website: https://stevepetersen.net/
On AI superintelligence: If Anyone Builds It, Everyone Dies; Superintelligence; The Alignment Problem
Some places to donate: The Long-Term Future Fund; Open Philanthropy
On improv: Impro; Upright Citizens Brigade Comedy Improvisation Manual

Featured Non-profit: The featured non-profit of this week's episode is brought to you by Rich Buery, who reports: "Hi, I'm Rich Buery, class of 1992. The featured nonprofit of this episode of The 92 Report is iMentor. iMentor is a powerful youth mentoring organization that connects volunteers with high school students and prepares them on the path to and through college. Mentors stay with the students through the last two years of high school and on the beginning of their college journey. I helped found iMentor over 25 years ago and served as its founding executive director, and I am proud that over the last two decades I've remained on the board of directors. It's truly a great organization. They need donors and they need volunteers. You can learn more about their work at www.imentor.org. That's www dot i-m-e-n-t-o-r dot org. And now here is Will Bachman with this week's episode." To learn more about their work, visit: www.imentor.org.
(23K words; best considered as nonfiction with a fictional-dialogue frame, not a proper short story.) Prologue: Klurl and Trapaucius were members of the machine race. And no ordinary citizens they, but Constructors: licensed, bonded, and insured; proven, experienced, and reputed. Together Klurl and Trapaucius had collaborated on such famed artifices as the Eternal Clock, Silicon Sphere, Wandering Flame, and Diamond Book; and as individuals, both had constructed wonders too numerous to number. At one point in time Trapaucius was meeting with Klurl to drink a cup together. Klurl had set before himself a simple mug of mercury, considered by his kind a standard social lubricant. Trapaucius had brought forth in turn a far more exotic and experimental brew he had been perfecting, a new intoxicant he named gallinstan, alloyed from gallium, indium, and tin. "I have always been curious, friend Klurl," Trapaucius began, "about the ancient mythology which holds [...]
---
Outline:
(00:20) Prologue
(05:16) On Fleshling Capabilities (the First Debate between Klurl and Trapaucius)
(26:05) On Fleshling Motivations (the 2nd (and by Far Longest) Debate between Klurl and Trapaucius)
(36:32) On the Epistemology of Simplicity's Razor Applied to Fleshlings (the 2nd Part of their 2nd Debate, that is, its 2.2nd Part)
(51:36) On the Epistemology of Reasoning About Alien Optimizers and their Outputs (their 2.3rd Debate)
(01:08:46) On Considering the Outcome of a Succession of Filters (their 2.4th Debate)
(01:16:50) On the Purported Beneficial Influence of Complications (their 2.5th Debate)
(01:25:58) On the Comfortableness of All Reality (their 2.6th Debate)
(01:32:53) On the Way of Proceeding with the Discovered Fleshlings (their 3rd Debate)
(01:52:22) In which Klurl and Trapaucius Interrogate a Fleshling (that Being the 4th Part of their Sally)
(02:16:12) On the Story's End
---
First published: October 26th, 2025
Source: https://www.lesswrong.com/posts/dHLdf8SB8oW5L27gg/on-fleshling-safety-a-debate-by-klurl-and-trapaucius
---
Narrated by TYPE III AUDIO.
Eliezer Yudkowsky is an AI researcher, decision theorist, and founder of the Machine Intelligence Research Institute. Is AI our greatest hope or our final mistake? For all its promise to revolutionize human life, there's a growing fear that artificial intelligence could end it altogether. How grounded are these fears, how close are we to losing control, and is there still time to change course before it's too late? Expect to learn the problem with building superhuman AI, why AI would have goals we haven't programmed into it, whether there is such a thing as AI benevolence, what the actual goals of super-intelligent AI are and how far away it is, whether LLMs are actually dangerous and whether they could become a super AI, how good we are at predicting the future of AI, whether extinction is possible with the development of AI, and much more… Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get 15% off your first order of Intake's magnetic nasal strips at https://intakebreathing.com/modernwisdom Get 10% discount on all Gymshark's products at https://gym.sh/modernwisdom (use code MODERNWISDOM10) Get 4 extra months of Surfshark VPN at https://surfshark.com/modernwisdom Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
Teaser ... Why Liron became a Yudkowskian ... Eliezer Yudkowsky's vision of AI apocalypse ... Does intelligence want power? ... Decoding Yudkowsky's key Darwinian metaphor ... Is doomerism crowding out other AI worries? ... Liron: The silent majority is very AI anxious ... Heading to Overtime ...
Eliezer Yudkowsky is as afraid as you could possibly be. He makes his case.

Yudkowsky is a pioneer of A.I. safety research who started warning about the existential risks of the technology decades ago, influencing a lot of leading figures in the field. But over the last couple of years, talk of an A.I. apocalypse has become a little passé. Many of the people Yudkowsky influenced have gone on to work for A.I. companies, and those companies are racing ahead to build the superintelligent systems Yudkowsky thought humans should never create. But Yudkowsky is still out there sounding the alarm. He has a new book out, co-written with Nate Soares, “If Anyone Builds It, Everyone Dies,” trying to warn the world before it's too late.

So what does Yudkowsky see that most of us don't? What makes him so certain? And why does he think he hasn't been able to persuade more people?

Mentioned:
Oversight of A.I.: Rules for Artificial Intelligence
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares
“A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.” by Kashmir Hill

Book Recommendations:
A Step Farther Out by Jerry Pournelle
Judgment under Uncertainty by Daniel Kahneman, Paul Slovic, and Amos Tversky
Probability Theory by E. T. Jaynes

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs. This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our executive producer is Claire Gordon. The show's production team also includes Marie Cascione, Annie Galvin, Kristin Lin, Jack McCordick, Marina King and Jan Kobal. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The director of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Helen Toner and Jeffrey Ladish. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
Jim talks with Nate Soares about the ideas in his and Eliezer Yudkowsky's book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. They discuss the book's claim that mitigating existential AI risk should be a top global priority, the idea that LLMs are grown, the opacity of deep learning networks, the Golden Gate activation vector, whether our understanding of deep learning networks might improve enough to prevent catastrophe, goodness as a narrow target, the alignment problem, the problem of pointing minds, whether LLMs are just stochastic parrots, why predicting a corpus often requires more mental machinery than creating a corpus, depth & generalization of skills, wanting as an effective strategy, goal orientation, limitations of training goal pursuit, transient limitations of current AI, protein folding and AlphaFold, the riskiness of automating alignment research, the correlation between capability and more coherent drives, why the authors anchored their argument on transformers & LLMs, the inversion of Moravec's paradox, the geopolitical multipolar trap, making world leaders aware of the issues, a treaty to ban the race to superintelligence, the specific terms of the proposed treaty, a comparison with banning uranium enrichment, why Jim tentatively thinks this proposal is a mistake, a priesthood of the power supply, whether attention is a zero-sum game, and much more.

Episode Transcript
"Psyop or Insanity or ...? Peter Thiel, the Antichrist, and Our Collapsing Epistemic Commons," by Jim Rutt
"On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback," by Marcus Williams et al.
"Attention Sinks and Compression Valleys in LLMs are Two Sides of the Same Coin," by Enrique Queipo-de-Llano et al.
JRS EP 217 - Ben Goertzel on a New Framework for AGI
"A Tentative Draft of a Treaty, With Annotations"

Nate Soares is the President of the Machine Intelligence Research Institute. He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.
Craig Kaplan has been thinking about superintelligence longer than most. He bought the URL superintelligence.com back in 2006, and many years before that, in the late 1980s, he co-authored a series of papers with one of the founding fathers of AI, Herbert Simon. Craig started his career as a scientist with IBM, and later founded and ran a venture-backed company called PredictWallStreet that brought the wisdom of the crowd to Wall Street and improved the performance of leading hedge funds. He sold that company in 2020, and now spends his time working out how to make the first superintelligence safe. As he puts it, he wants to reduce P(Doom) and increase P(Zoom).

Selected follow-ups:
iQ Company
Herbert A. Simon - Wikipedia
Amara's Law and Its Place in the Future of Tech - Pohan Lin
PredictWallStreet
The Society of Mind - book by Marvin Minsky
AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google - BBC News
Statement on AI Risk - Center for AI Safety
I've Spent My Life Measuring Risk. AI Rings Every One of My Alarm Bells - Paul Tudor Jones
Secrets of Software Quality: 40 Innovations from IBM - book by Craig Kaplan
London Futurists Podcast episode featuring David Brin
Reason in Human Affairs - book by Herbert Simon
US and China will intervene to halt 'suicide race' of AGI – Max Tegmark
If Anyone Builds It, Everyone Dies - book by Eliezer Yudkowsky and Nate Soares
AGI-25 - conference in Reykjavik
The First Global Brain Workshop - Brussels 2001
Center for Integrated Cognition
Paul S. Rosenbloom
Tatiana Shavrina, Meta
Henry Minsky launches AI startup inspired by father's MIT research

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Jim talks with Joe Edelman about the ideas in the Meaning Alignment Institute's recent paper "Full Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value." They discuss pluralism as a core principle in designing social systems, the informational basis for alignment, how preferential models fail to capture what people truly care about, the limitations of markets and voting as preference-based systems, critiques of text-based approaches in LLMs, thick models of value, values as attentional policies, AI assistants as potential vectors for manipulation, the need for reputation systems and factual grounding, the "super negotiator" project for better contract negotiation, multipolar traps, moral graph elicitation, starting with membranes, Moloch-free zones, unintended consequences and lessons from early Internet optimism, concentration of power as a key danger, co-optation risks, and much more. Episode Transcript "A Minimum Viable Metaphysics," by Jim Rutt (Substack) Jim's Substack JRS Currents 080: Joe Edelman and Ellie Hain on Rebuilding Meaning Meaning Alignment Institute If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares "Full Stack Alignment: Co-aligning AI and Institutions with Thick Models of Value," by Joe Edelman et al. "What Are Human Values and How Do We Align AI to Them?" by Oliver Klingefjord, Ryan Lowe, and Joe Edelman Joe Edelman has spent much of his life trying to understand how ML systems and markets could change, retaining their many benefits but avoiding their characteristic problems: of atomization, and of servicing shallow desires over deeper needs. Along the way this led him to formulate theories of human meaning and values (https://arxiv.org/abs/2404.10636) and study models of societal transformation (https://www.full-stack-alignment.ai/paper) as well as inventing the meaning-based metrics used at CouchSurfing, Facebook, and Apple, co-founding the Center for Humane Technology and the Meaning Alignment Institute, and inventing new democratic systems (https://arxiv.org/abs/2404.10636). He's currently one of the PIs leading the Full-Stack Alignment program at the Meaning Alignment Institute, with a network of more than 50 researchers at universities and corporate labs working on these issues.
The story of how Geoffrey Hinton became “the godfather of AI” has reached mythic status in the tech world. While he was at the University of Toronto, Hinton pioneered the neural network research that would become the backbone of modern AI. (One of his students, Ilya Sutskever, went on to be one of OpenAI's most influential scientific minds.) In 2013, Hinton left the academy and went to work for Google, eventually winning both a Turing Award and a Nobel Prize. I think it's fair to say that artificial intelligence as we know it may not exist without Geoffrey Hinton.

But Hinton may be even more famous for what he did next. In 2023, he left Google and began a campaign to convince governments, corporations and citizens that his life's work – this thing he helped build – might lead to our collective extinction. And that moment may be closer than we think, because Hinton believes AI may already be conscious. But even though his warnings are getting more dire by the day, the AI industry is only getting bigger, and most governments, including Canada's, seem reluctant to get in the way. So I wanted to ask Hinton: If we keep going down this path, what will become of us?

Mentioned:
If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, by Eliezer Yudkowsky and Nate Soares
Agentic Misalignment: How LLMs could be insider threats, by Anthropic

Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail. Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Jonathan Newman returns to join Bob in a critique of Eliezer Yudkowsky's viral theory of investment bubbles. Yudkowsky argues that the pain of bad investment during bubbles should be felt before the bubble pops, not after. Bob and Jonathan argue that his perspective—while clever—fails to consider the Austrian insights on capital structure, time preference, and the business cycle. They use analogies from apple trees to magic mushrooms to show why Austrian economics provides the clearest explanation for booms, busts, and the pain that follows.

Eliezer Yudkowsky's Theory on Investment Bubbles: Mises.org/HAP520a
Bob's Article "Correcting Yudkowsky on the Boom": Mises.org/HAP520b
Bob on The Importance of Capital Theory: Mises.org/HAP520c
Joe Salerno on Austrian Business Cycle Theory: Mises.org/HAP520d
Dr. Newman's QJAE Article on Credit Cycles: Mises.org/HAP520e

The Mises Institute is giving away 100,000 copies of Hayek for the 21st Century. Get your free copy at Mises.org/HAPodFree
On this week's episode, I'm joined by Nate Soares to talk about his new book, cowritten with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. It's a fascinating book, in large part because of how it's structured; some will say fearmongering and sensationalist, while I, frankly, think the authors are overly optimistic about our ability to constrain the development of general intelligence in AI. Each chapter is preceded by a fable of sorts about the nature of intelligence and the desires of intelligent beings that look and think very differently from humans. The point in each of these passages is less that AI will want to eliminate humanity and more that it might do so incidentally, through natural processes of resource acquisition. This made me think about how AI is typically portrayed in film; it is all too often a Terminator-style scenario, where the intelligence is antagonistic in human ways and for human reasons. We talked some about how storytellers could do a better job of thinking about AI as it might actually exist versus how it might be like us; Ex Machina came in for special discussion due to its thoughtful treatment of its robotic antagonist's desires. If this episode made you think, I hope you share it with a friend!
What are the best strategies for addressing extreme risks from artificial superintelligence? In this 4-hour conversation, decision theorist Eliezer Yudkowsky and computer scientist Mark Miller discuss their cruxes for disagreement. They examine the future of AI, existential risk, and whether alignment is even possible. Topics include AI risk scenarios, coalition dynamics, secure systems like seL4, hardware exploits like Rowhammer, molecular engineering with AlphaFold, and historical analogies like nuclear arms control. They explore superintelligence governance, multipolar vs singleton futures, and the philosophical challenges of trust, verification, and control in a post-AGI world.Moderated by Christine Peterson, the discussion seeks the least risky strategy for reaching a preferred state amid superintelligent AI risks. Yudkowsky warns of catastrophic outcomes if AGI is not controlled, while Miller advocates decentralizing power and preserving human institutions as AI evolves.The conversation spans AI collaboration, secure operating frameworks, cryptographic separation, and lessons from nuclear non-proliferation. Despite their differences, both aim for a future where AI benefits humanity without posing existential threats. Hosted on Acast. See acast.com/privacy for more information.
Attention (disclaimer): The data presented here represents my personal opinion. It is in no way a recommendation to buy or sell assets in the financial market.

PEC da Blindagem: see how each party voted
https://oglobo.globo.com/politica/noticia/2025/09/16/pec-da-blindagem-veja-como-votou-cada-deputado.ghtml
See how each party voted to keep the vote secret on the PEC da Blindagem
https://valor.globo.com/politica/noticia/2025/09/17/veja-como-cada-partido-votou-para-manter-a-votacao-secreta-na-pec-da-blindagem.ghtml
The Rise of the Supreme Court's So-Called Shadow Docket
https://podcasts.apple.com/br/podcast/the-rise-of-the-supreme-courts-so-called-shadow-docket/id1200361736?i=1000726880643&l=en-GB
Are We Past Peak iPhone? + Eliezer Yudkowsky on A.I. Doom
https://podcasts.apple.com/br/podcast/are-we-past-peak-iphone-eliezer-yudkowsky-on-a-i-doom/id1528594034?i=1000726491309&l=en-GB
Trapped in a ChatGPT Spiral
https://podcasts.apple.com/br/podcast/trapped-in-a-chatgpt-spiral/id1200361736?i=1000727028310&l=en-GB
Economic fallout mounts as Trump halts near-finished wind power project
https://podcasts.apple.com/br/podcast/economic-fallout-mounts-as-trump-halts-near-finished/id78304589?i=1000727120403&l=en-GB
'To save their own skin, lawmakers see no disagreement,' says Thiago Bronzatto about the PEC da Blindagem
https://podcasts.apple.com/br/podcast/para-salvar-pr%C3%B3pria-pele-parlamentares-n%C3%A3o-veem-diverg%C3%AAncia/id203963267?i=1000727116432&l=en-GB
PEC da Blindagem: 'What is happening is a disgrace'
https://podcasts.apple.com/br/podcast/pec-da-blindagem-%C3%A9-um-vexame-o-que-est%C3%A1-acontecendo/id1552208254?i=1000727230878&l=en-GB
Shielding in Congress opens the way for backsliding
https://podcasts.apple.com/br/podcast/blindagem-no-congresso-abre-caminho-para-retrocesso/id203963267?i=1000727234976&l=en-GB
PEC da Blindagem: a path to impunity
https://podcasts.apple.com/br/podcast/pec-da-blindagem-caminho-para-a-impunidade/id1477406521?i=1000727283243&l=en-GB
PEC da Blindagem: 'an ethical and moral violation'
https://podcasts.apple.com/br/podcast/pec-da-blindagem-uma-viola%C3%A7%C3%A3o-%C3%A9tico-moral/id203963267?i=1000727334438&l=en-GB
Maria Ressa - Fighting Back Against Trump's Authoritarian Algorithm With Truth | The Daily Show
https://www.youtube.com/watch?v=Tsb1I7hqaJ4
JHSF sells nearly R$5 billion in inventory
https://braziljournal.com/jhsf-vende-quase-r-5-bi-em-estoque-mudando-modelo-de-incorporacao/
Oncoclínicas board approves capital increase
https://exame.com/invest/mercados/conselho-da-oncoclinicas-aprova-aumento-de-capital-de-ate-r-2-bi-falta-o-aval-dos-acionistas/
Hugo Motta 'pushed through the greatest absurdity in history'
https://podcasts.apple.com/br/podcast/hugo-motta-fez-aprovar-o-maior-dos-absurdos-da-hist%C3%B3ria/id203963267?i=1000727343038&l=en-GB
Chamber of Deputies: bills for their own benefit
https://podcasts.apple.com/br/podcast/c%C3%A2mara-projetos-em-benef%C3%ADcio-pr%C3%B3prio/id1477406521?i=1000727443772&l=en-GB
UOL Prime #88: a history of amnesties
https://podcasts.apple.com/br/podcast/uol-prime-88-como-hist%C3%B3rico-de-anistias-deu-espa%C3%A7o/id1574996957?i=1000727305499&l=en-GB
I don't want to talk about amnesty anymore
https://podcasts.apple.com/br/podcast/n%C3%A3o-quero-mais-falar-de-anistia-vou-falar-de/id203963267?i=1000727498034&l=en-GB
What Happens if Xi Jinping Dies in Office?
https://podcasts.apple.com/br/podcast/what-happens-if-xi-jinping-dies-in-office/id1525445350?i=1000492377817&l=en-GB
CDC panel overhauled by RFK Jr
https://podcasts.apple.com/br/podcast/cdc-panel-overhauled-by-rfk-jr-changes-childhood-vaccine/id78304589?i=1000727429494&l=en-GB
Kimmel free speech under Trump
https://podcasts.apple.com/br/podcast/what-the-move-to-pull-kimmel-off-the-air-says-about/id78304589?i=1000727422538&l=en-GB
Jimmy Kimmel and Free Speech
https://podcasts.apple.com/br/podcast/jimmy-kimmel-and-free-speech-in-the-united-states/id1200361736?i=1000727485153&l=en-GB
Nate Soares, president of the Machine Intelligence Research Institute and the co-author (with Eliezer Yudkowsky) of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (Little, Brown and Company, 2025), talks about why he worries that AI "superintelligence" will lead to catastrophic outcomes, and what safeguards he recommends to prevent this.
Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. They discuss the alignment problem, ChatGPT and recent advances in AI, the Turing Test, the possibility of AI developing survival instincts, hallucinations and deception in LLMs, why many prominent voices in tech remain skeptical of the dangers of superintelligent AI, the timeline for superintelligence, real-world consequences of current AI systems, the imaginary line between the internet and reality, why Eliezer and Nate believe superintelligent AI would necessarily end humanity, how we might avoid an AI-driven catastrophe, the Fermi paradox, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Back in May, we announced that Eliezer Yudkowsky and Nate Soares's new book If Anyone Builds It, Everyone Dies was coming out in September. At long last, the book is here! (US and UK editions: IfAnyoneBuildsIt.com) Read on for info about reading groups, ways to help, and updates on coverage the book has received so far.
Discussion Questions & Reading Group Support: We want people to read and engage with the contents of the book. To that end, we've published a list of discussion questions. Find it here: Discussion Questions for Reading Groups. We're also interested in offering support to reading groups, including potentially providing copies of the book and helping coordinate facilitation. If interested, fill out this AirTable form.
How to Help: Now that the book is out in the world, there are lots of ways you can help it succeed. For starters, read the book! [...]
Outline:
(00:49) Discussion Questions & Reading Group Support
(01:18) How to Help
(02:39) Blurbs
(05:15) Media
(06:26) In Closing
The original text contained 2 footnotes which were omitted from this narration. --- First published: September 16th, 2025 Source: https://www.lesswrong.com/posts/fnJwaz7LxZ2LJvApm/if-anyone-builds-it-everyone-dies-release-day --- Narrated by TYPE III AUDIO.
Apple's yearly iPhone event took place this week, and it left us asking: is Apple losing the juice? We break down all the new products the company announced and discuss where it goes from here. Then, Eliezer Yudkowsky, one of the most fascinating people in A.I., has a new book coming out: "If Anyone Builds It, Everyone Dies." He joins us to make the case for why A.I. development should be shut down now, long before we reach superintelligence, and how he thinks that could happen. Guests: Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute and a co-author of "If Anyone Builds It, Everyone Dies." Additional reading: A.I.'s Prophet of Doom Wants to Shut It All Down; AI as Normal Technology, revisited; Apple's misunderstood crossbody iPhone strap might be the best I've seen. We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
I. Eliezer Yudkowsky's Machine Intelligence Research Institute is the original AI safety org. But the original isn't always the best - how is Mesopotamia doing these days? As money, brainpower, and prestige pour into the field, MIRI remains what it always was - a group of loosely-organized weird people, one of whom cannot be convinced to stop wearing a sparkly top hat in public. So when I was doing AI grantmaking last year, I asked them - why should I fund you instead of the guys with the army of bright-eyed Harvard grads, or the guys who just got Geoffrey Hinton as their celebrity spokesperson? What do you have that they don't? MIRI answered: moral clarity. Most people in AI safety (including me) are uncertain and confused and looking for least-bad incremental solutions. We think AI will probably be an exciting and transformative technology, but there's some chance, 5 or 15 or 30 percent, that it might turn against humanity in a catastrophic way. Or, if it doesn't, that there will be something less catastrophic but still bad - maybe humanity gradually fading into the background, the same way kings and nobles faded into the background during the modern era. This is scary, but AI is coming whether we like it or not, and probably there are also potential risks from delaying too hard. We're not sure exactly what to do, but for now we want to build a firm foundation for reacting to any future threat. That means keeping AI companies honest and transparent, helping responsible companies like Anthropic stay in the race, and investing in understanding AI goal structures and the ways that AIs interpret our commands. Then at some point in the future, we'll be close enough to the actually-scary AI that we can understand the threat model more clearly, get more popular buy-in, and decide what to do next. MIRI thinks this is pathetic - like trying to protect against an asteroid impact by wearing a hard hat. They're kind of cagey about their own probability of AI wiping out humanity, but it seems to be somewhere around 95 - 99%. They think plausibly-achievable gains in company responsibility, regulation quality, and AI scholarship are orders of magnitude too weak to seriously address the problem, and they don't expect enough of a “warning shot” that they feel comfortable kicking the can down the road until everything becomes clear and action is easy. They suggest banning all AI capabilities research immediately, to be restarted only in some distant future when the situation looks more promising. Both sides honestly believe their position and don't want to modulate their message for PR reasons. But both sides, coincidentally, think that their message is better PR. The incrementalists think a moderate, cautious approach keeps bridges open with academia, industry, government, and other actors that prefer normal clean-shaven interlocutors who don't emit spittle whenever they talk. MIRI thinks that the public is sick of focus-group-tested mealy-mouthed bullshit, but might be ready to rise up against AI if someone presented the case in a clear and unambivalent way. Now Yudkowsky and his co-author, MIRI president Nate Soares, have reached new heights of unambivalence with their new book, If Anyone Builds It, Everyone Dies (release date September 16, currently available for preorder). https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone
Podcast: Doom Debates. Episode: Debate with Vitalik Buterin — Will "d/acc" Protect Humanity from Superintelligent AI? Release date: 2025-08-12. Vitalik Buterin is the founder of Ethereum, the world's second-largest cryptocurrency by market cap, currently valued at around $500 billion. But beyond revolutionizing blockchain technology, Vitalik has become one of the most thoughtful voices on AI safety and existential risk. He's donated over $665 million to pandemic prevention and other causes, and has a 12% P(Doom) – putting him squarely in what I consider the "sane zone" for AI risk assessment. What makes Vitalik particularly interesting is that he's both a hardcore techno-optimist who built one of the most successful decentralized systems ever created, and someone willing to seriously consider AI regulation and coordination mechanisms. Vitalik coined the term "d/acc" – defensive, decentralized, democratic, differential acceleration – as a middle path between uncritical AI acceleration and total pause scenarios. He argues we need to make the world more like Switzerland (defensible, decentralized) and less like the Eurasian steppes (vulnerable to conquest). We dive deep into the tractability of AI alignment, whether current approaches like d/acc can actually work when superintelligence arrives, and why he thinks a pluralistic world of competing AIs might be safer than a single aligned superintelligence. We also explore his vision for human-AI merger through brain-computer interfaces and uploading. The crux of our disagreement is that I think we're heading for a "plants vs. animals" scenario where AI will simply operate on timescales we can't match, while Vitalik believes we can maintain agency through the right combination of defensive technologies and institutional design. Finally, we tackle the discourse itself – I ask Vitalik to debunk the common ad hominem attacks against AI doomers, from "it's just a fringe position" to "no real builders believe in doom."
His responses carry weight given his credibility as both a successful entrepreneur and someone who's maintained intellectual honesty throughout his career.
Timestamps:
* 00:00:00 - Cold Open
* 00:00:37 - Introducing Vitalik Buterin
* 00:02:14 - Vitalik's altruism
* 00:04:36 - Rationalist community influence
* 00:06:30 - Opinion of Eliezer Yudkowsky and MIRI
* 00:09:00 - What's Your P(Doom)™
* 00:24:42 - AI timelines
* 00:31:33 - AI consciousness
* 00:35:01 - Headroom above human intelligence
* 00:48:56 - Techno optimism discussion
* 00:58:38 - e/acc: Vibes-based ideology without deep arguments
* 01:02:49 - d/acc: Defensive, decentralized, democratic acceleration
* 01:11:37 - How plausible is d/acc?
* 01:20:53 - Why libertarian acceleration can paradoxically break decentralization
* 01:25:49 - Can we merge with AIs?
* 01:35:10 - Military AI concerns: How war accelerates dangerous development
* 01:42:26 - The intractability question
* 01:51:10 - Anthropic and tractability-washing the AI alignment problem
* 02:00:05 - The state of AI x-risk discourse
* 02:05:14 - Debunking ad hominem attacks against doomers
* 02:23:41 - Liron's outro
Links:
Vitalik's website: https://vitalik.eth.limo
Vitalik's Twitter: https://x.com/vitalikbuterin
Eliezer Yudkowsky's explanation of p-Zombies: https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies
Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Get full access to Doom Debates at lironshapira.substack.com/subscribe
A reporter asked me for my off-the-record take on recent safety research from Anthropic. After I drafted an off-the-record reply, I realized that I was actually fine with it being on the record, so: Since I never expected any of the current alignment technology to work in the limit of superintelligence, the only news to me is about when and how early dangers begin to materialize. Even taking Anthropic's results completely at face value would change not at all my own sense of how dangerous machine superintelligence would be, because what Anthropic says they found was already very solidly predicted to appear at one future point or another. I suppose people who were previously performing great skepticism about how none of this had ever been seen in ~Real Life~, ought in principle to now obligingly update, though of course most people in the AI industry won't. Maybe political leaders [...] --- First published: August 6th, 2025 Source: https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research --- Narrated by TYPE III AUDIO.
This is a new introduction to AI as an extinction threat, previously posted to the MIRI website in February alongside a summary. It was written independently of Eliezer and Nate's forthcoming book, If Anyone Builds It, Everyone Dies, and isn't a sneak peek of the book. Since the book is long and costs money, we expect this to be a valuable resource in its own right even after the book comes out next month.[1] The stated goal of the world's leading AI companies is to build AI that is general enough to do anything a human can do, from solving hard problems in theoretical physics to deftly navigating social environments. Recent machine learning progress seems to have brought this goal within reach. At this point, we would be uncomfortable ruling out the possibility that AI more capable than any human is achieved in the next year or two, and [...]
Outline:
(02:27) 1. There isn't a ceiling at human-level capabilities.
(08:56) 2. ASI is very likely to exhibit goal-oriented behavior.
(15:12) 3. ASI is very likely to pursue the wrong goals.
(32:40) 4. It would be lethally dangerous to build ASIs that have the wrong goals.
(46:03) 5. Catastrophe can be averted via a sufficiently aggressive policy response.
The original text contained 1 footnote which was omitted from this narration. --- First published: August 5th, 2025 Source: https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-problem --- Narrated by TYPE III AUDIO.
Essays like Paul Graham's, Scott Alexander's, and Eliezer Yudkowsky's have influenced a generation of people in how they think about startups, ethics, science, and the world as a whole. Creating essays that good takes a lot of skill, practice, and talent, but it looks to me that a lot of people with talent aren't putting in the work and developing the skill, except in ways that are optimized to also be social media strategies. To fix this problem, I am running the Inkhaven Residency. The idea is to gather a bunch of promising writers to invest in the art and craft of blogging, through a shared commitment to each publish a blogpost every day for the month of November. Why a daily writing structure? Well, it's a reaction to other fellowships I've seen. I've seen month-long or years-long events with exceedingly little public output, where the people would've contributed [...] --- First published: August 2nd, 2025 Source: https://www.lesswrong.com/posts/CA6XfmzYoGFWNhH8e/whence-the-inkhaven-residency --- Narrated by TYPE III AUDIO.
Eliezer and I love to talk about writing. We talk about our own current writing projects, how we'd improve the books we're reading, and what we want to write next. Sometimes along the way I learn some amazing fact about HPMOR or Project Lawful or one of Eliezer's other works. "Wow, you're kidding," I say, "do your fans know this? I think people would really be interested." "I can't remember," he usually says. "I don't think I've ever explained that bit before, I'm not sure." I decided to interview him more formally, collect as many of those tidbits about HPMOR as I could, and share them with you. I hope you enjoy them. It's probably obvious, but there will be many, many spoilers for HPMOR in this article, and also very little of it will make sense if you haven't read the book. So go read Harry Potter and [...]
Outline:
(01:49) Characters
(01:52) Masks
(09:09) Imperfect Characters
(20:07) Make All the Characters Awesome
(22:24) Hermione as Mary Sue
(26:35) Who's the Main Character?
(31:11) Plot
(31:14) Characters interfering with plot
(35:59) Setting up Plot Twists
(38:55) Time-Turner Plots
(40:51) Slashfic?
(45:42) Why doesn't Harry like-like Hermione?
(49:36) Setting
(49:39) The Truth of Magic in HPMOR
(52:54) Magical Genetics
(57:30) An Aside: What did Harry Figure Out?
(01:00:33) Nested Nerfing Hypothesis
(01:04:55) Epilogues
The original text contained 26 footnotes which were omitted from this narration. --- First published: July 25th, 2025 Source: https://www.lesswrong.com/posts/FY697dJJv9Fq3PaTd/hpmor-the-probably-untold-lore --- Narrated by TYPE III AUDIO.
Our guest in this episode is Nate Soares, President of the Machine Intelligence Research Institute, or MIRI. MIRI was founded in 2000 as the Singularity Institute for Artificial Intelligence by Eliezer Yudkowsky, with support from a couple of internet entrepreneurs. Among other things, it ran a series of conferences called the Singularity Summit. In 2012, Peter Diamandis and Ray Kurzweil acquired the Singularity Summit, including the Singularity brand, and the Institute was renamed MIRI. Nate joined MIRI in 2014 after working as a software engineer at Google, and since then he's been a key figure in the AI safety community. In a blogpost at the time he joined MIRI, he observed, "I turn my skills towards saving the universe, because apparently nobody ever got around to teaching me modesty." MIRI has long had a fairly pessimistic stance on whether AI alignment is possible. In this episode, we'll explore what drives that view—and whether there is any room for hope.
Selected follow-ups:
Nate Soares - MIRI
Yudkowsky and Soares Announce Major New Book: "If Anyone Builds It, Everyone Dies" - MIRI
The Bayesian model of probabilistic reasoning
During safety testing, o1 broke out of its VM - Reddit
Leo Szilard - Physics World
David Bowie - Five Years - Old Grey Whistle Test
Amara's Law - IEEE
Robert Oppenheimer calculation of p(doom)
JD Vance commenting on AI-2027
SolidGoldMagikarp - LessWrong
ASML
Chicago Pile-1 - Wikipedia
Castle Bravo - Wikipedia
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem—how and whether we can ensure that AI is aligned with human values to avoid catastrophe and harness its power. In this episode, Robinson and Eliezer run the gamut on questions related to AI and the danger it poses to human civilization as we know it. More particularly, they discuss the alignment problem, gradient descent, consciousness, the singularity, cyborgs, ChatGPT, OpenAI, Anthropic, Claude, how long we have until doomsday, whether it can be averted, and the various reasons why and ways in which AI might wipe out human life on earth.
The Machine Intelligence Research Institute: https://intelligence.org/about/
Eliezer's X Account: https://x.com/ESYudkowsky?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
OUTLINE:
00:00:00 Introduction
00:00:43 The Default Condition for AI's Takeover
00:06:36 Could a Future AI Country Be Our Trade Partner?
00:11:18 What Is Artificial Intelligence?
00:21:23 Why AIs Having Goals Could Mean the End of Humanity
00:29:34 What Is the Alignment Problem?
00:34:11 How To Avoid AI Apocalypse
00:40:25 Would Cyborgs Eliminate Humanity?
00:47:55 AI and the Problem of Gradient Descent
00:55:24 How Do We Solve the Alignment Problem?
01:00:50 How Anthropic's AI Freed Itself from Human Control
01:08:56 The Pseudo-Alignment Problem
01:19:28 Why Are People Wrong About AI Not Taking Over the World?
01:23:23 How Certain Is It that AI Will Wipe Out Humanity?
01:38:35 Is Eliezer Yudkowsky Wrong About the AI Apocalypse?
01:42:04 Do AI Corporations Control the Fate of Humanity?
01:43:49 How To Convince the President Not to Let AI Kill Us All
01:52:01 How Will ChatGPT's Descendants Wipe Out Humanity?
02:24:11 Could AI Destroy Us with New Science?
02:39:37 Could AI Destroy Us with Advanced Biology?
02:47:29 How Will AI Actually Destroy Humanity?
Robinson's Website: http://robinsonerhardt.com
Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University.
In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks. Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries. Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.
Selected follow-ups:
James Norris website
Upgrade your life & legacy - Upgradable
The 7 Habits of Highly Effective People (Stephen Covey)
Beneficial AI 2017 - Asilomar conference
"...superintelligence in a few thousand days" - Sam Altman blogpost
Amara's Law - DevIQ
The Probability of Nuclear War (JFK estimate)
AI Designs Chemical Weapons - The Batch
The Vulnerable World Hypothesis - Nick Bostrom
We Need To Build Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
Instrumental convergence - Wikipedia
Neanderthal extinction - Wikipedia
Matrioshka brain - Wikipedia
Will there be a 'WW3' before 2050? - Manifold prediction market
Existential Safety Action Pledge
An Urgent Call for Global AI Governance - IAIGA petition
Build your survival sanctuary
Other people mentioned include: Eliezer Yudkowsky, Roman Yampolskiy, Yann LeCun, Andrew Ng
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Have you ever read Harry Potter and the Methods of Rationality?? Perhaps spent too much money on a self-help workshop or seminar? Join us as we talk about Eliezer Yudkowsky and his masterpiece of fiction. Where will this story truly lead us in this tale of rational magic and science? As this is our last episode on the topic, trigger warning for some difficult mental health content. Thanks for listening and remember to like, rate, review, and email us at: cultscryptidsconspiracies@gmail.com or tweet us at @C3Podcast. We have some of our sources for research here: http://tinyurl.com/CristinaSources. Also check out our Patreon: www.patreon.com/cultscryptidsconspiracies. Thank you to T.J. Shirley for our theme.
Have you ever read Harry Potter and the Methods of Rationality?? Perhaps spent too much money on a self-help workshop or seminar? Join us as we talk about Eliezer Yudkowsky and his masterpiece of fiction. Where will this story truly lead us in this tale of rational magic and science? Thanks for listening and remember to like, rate, review, and email us at: cultscryptidsconspiracies@gmail.com or tweet us at @C3Podcast. We have some of our sources for research here: http://tinyurl.com/CristinaSources. Also check out our Patreon: www.patreon.com/cultscryptidsconspiracies. Thank you to T.J. Shirley for our theme.
A lot of the people designing America's technology and close to the center of American power believe some deeply weird shit. We already talked to journalist Gil Duran about the Nerd Reich, the rise of the destructive anti-democratic ideology. In this episode, we dive into another weird section of Silicon Valley: the cult of Rationalism. Max Read, the journalist behind the Read Max Substack, is here to help us through it. Rationalism is responsible for a lot more than you might think, and Read lays out how it's influenced the world we live in today and how it created the environment for a cult that's got a body count.
Defining rationalism: "Something between a movement, a community, and a self-help program."
Eliezer Yudkowsky and the dangers of AI
What the hell is AGI?
The Singleton Guide to Global Governance
The danger of thought experiments
As always, follow the money
Vulgar bayesianism
What's a Zizian?
Sith Vegans
Anselm: Ontological Argument for God's Existence
SBF and Effective Altruism
READ MAX!
The Zizians and the Rationalist death cults
Pausing AI Developments Isn't Enough. We Need to Shut it All Down - Eliezer Yudkowsky's TIME Magazine piece
Explaining Roko's Basilisk, the Thought Experiment That Brought Elon Musk and Grimes Together
The Delirious, Violent, Impossible True Story of the Zizians
The Government Knows AGI is Coming | The Ezra Klein Show
The archived 'Is Trump Racist' rational post
Support this show http://supporter.acast.com/warcollege. Hosted on Acast. See acast.com/privacy for more information.
Zizians, Rationalist movement, Peter Thiel, Eliezer Yudkowsky, neoreaction, Accelerationism, Curtis Yarvin, AI, AI apocalypse, machine learning, psychedelics, Effective Altruism (EA), Sam Bankman-Fried, Extropianism, Thiel & Yudkowsky as Extropians, Discordianism, life extension, space colonization, cryptocurrencies, Yudkowsky as self-educated, Nick Bostrom, Center for Applied Rationality (CFAR), Rationalism's use of magical thinking, New Thought, Roko's Basilisk, Nick Land, predicting the future, LessWrong, LessWrong's relationship to the Zizians, Ziz, non-binary/trans, vegan Siths, Vasserites, murders linked to Zizians, Zizians in Vermont, Luigi Mangione indirectly influenced by Zizianism, Brian Thompson assassination, ChangeHealthcare hack, were the hack and assassination targeting UnitedHealth Group influenced by this milieu?, is the Trump administration radicalizing Zizians?, Yudkowsky's links to Sam Bankman-Fried, Leverage Research/Center for Effective Altruism & MK-ULTRA-like techniques used by them, are more cults coming from the Rationalist movement?
Additional Resources:
Leverage Research: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b#c778
MIRI/Center for Applied Rationality (CFAR): https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe
Music by: Keith Allen Dennis https://keithallendennis.bandcamp.com/
Additional Music: J Money
Get bonus content on Patreon. Hosted on Acast. See acast.com/privacy for more information.
Today's guest is Duncan Cass-Beggs, Executive Director of the Global AI Risks Initiative at the Center for International Governance Innovation (CIGI). He joins Emerj CEO and Head of Research Daniel Faggella to explore the pressing challenges and opportunities surrounding Artificial General Intelligence (AGI) governance on a global scale. This is a special episode in our AI Futures series that ties right into our overlapping series on AGI governance on the Trajectory podcast, where we've had luminaries like Eliezer Yudkowsky, Connor Leahy, and other globally recognized AGI governance thinkers. We hope you enjoy this episode. If you're interested in these topics, make sure to dive deeper into where AI is affecting the bigger picture by visiting emerj.com/tj2.
Part one of our two-part investigation into the Rationalist cult “The Zizians.” We start with the killing of a border patrol officer and make our way back into the belly of the beast: Silicon Valley. Featuring: Harry Potter fanfic, samurai swords, Guy Fawkes masks, Blake Masters, Bayesian probability, and Eliezer Yudkowsky. Infohazard warning: some of your least favs will be implicated. Discover more episodes at podcast.trueanon.com
Today's episode is a special addition to our AI Futures series, featuring a special sneak peek at an upcoming episode of our Trajectory podcast with guest Eliezer Yudkowsky, AI researcher, founder, and research fellow at the Machine Intelligence Research Institute. Eliezer joins Emerj CEO and Head of Research Daniel Faggella to discuss the governance challenges of increasingly powerful AI systems—and what it might take to ensure a safe and beneficial trajectory for humanity. If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!
Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks. They traverse fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. The discourse centers on Yudkowsky's argument that advanced AI systems pose an existential threat to humanity, primarily due to the challenge of alignment and the potential for emergent goals that diverge from human values. Wolfram, while acknowledging potential risks, approaches the topic from his signature measured perspective, emphasizing the importance of understanding computational systems' fundamental nature and questioning whether AI systems would necessarily develop the kind of goal-directed behavior Yudkowsky fears.
*** MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested?! Please go to https://tufalabs.ai/ ***
TOC:
1. Foundational AI Concepts and Risks
[00:00:01] 1.1 AI Optimization and System Capabilities Debate
[00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations
[00:20:09] 1.3 Existential Risk and Species Succession
[00:23:28] 1.4 Consciousness and Value Preservation in AI Systems
2. Ethics and Philosophy in AI
[00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation
[00:36:30] 2.2 Ethics and Moral Philosophy Debate
[00:39:58] 2.3 Existential Risks and Digital Immortality
[00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation
3. Truth and Logic in AI Systems
[00:54:39] 3.1 AI Persuasion Ethics and Truth
[01:01:48] 3.2 Mathematical Truth and Logic in AI Systems
[01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics
[01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate
4. AI Capabilities and Constraints
[01:21:21] 4.1 AI Perception and Physical Laws
[01:28:33] 4.2 AI Capabilities and Computational Constraints
[01:34:59] 4.3 AI Motivation and Anthropomorphization Debate
[01:38:09] 4.4 Prediction vs Agency in AI Systems
5. AI System Architecture and Behavior
[01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction
[01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior
[02:09:41] 5.3 Machine Learning as Assembly of Computational Components
[02:29:52] 5.4 AI Safety and Predictability in Complex Systems
6. Goal Optimization and Alignment
[02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems
[02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior
[03:02:18] 6.3 Optimization Goals and Human Existential Risk
[03:08:49] 6.4 Emergent Goals and AI Alignment Challenges
7. AI Evolution and Risk Assessment
[03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory
[03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate
[03:56:05] 7.3 AI Risk and Biological System Analogies
[04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality
8. Future Implications and Economics
[04:13:01] 8.1 Economic and Proliferation Considerations
SHOWNOTES (transcription, references, summary, best quotes etc): https://www.dropbox.com/scl/fi/3st8dts2ba7yob161dchd/EliezerWolfram.pdf?rlkey=b6va5j8upgqwl9s2muc924vtt&st=vemwqx7a&dl=0
Freddie deBoer has a post on what he calls “the temporal Copernican principle.” He argues we shouldn't expect a singularity, apocalypse, or any other crazy event in our lifetimes. Discussing celebrity transhumanist Yuval Harari, he writes: What I want to say to people like Yuval Harari is this. The modern human species is about 250,000 years old, give or take 50,000 years depending on who you ask. Let's hope that it keeps going for awhile - we'll be conservative and say 50,000 more years of human life. So let's just throw out 300,000 years as the span of human existence, even though it could easily be 500,000 or a million or more. Harari's lifespan, if he's lucky, will probably top out at about 100 years. So: what are the odds that Harari's lifespan overlaps with the most important period in human history, as he believes, given those numbers? That it overlaps with a particularly important period of human history at all? Even if we take the conservative estimate for the length of human existence of 300,000 years, that means Harari's likely lifespan is only about .33% of the entirety of human existence. Isn't assuming that this .33% is somehow particularly special a very bad assumption, just from the basis of probability? And shouldn't we be even more skeptical given that our basic psychology gives us every reason to overestimate the importance of our own time? (I think there might be a math error here - 100 years out of 300,000 is 0.033%, not 0.33% - but this isn't my main objection.) He then condemns a wide range of people, including me, for failing to understand this: Some people who routinely violate the Temporal Copernican Principle include Harari, Eliezer Yudkowsky, Sam Altman, Francis Fukuyama, Elon Musk, Clay Shirky, Tyler Cowen, Matt Yglesias, Tom Friedman, Scott Alexander, every tech company CEO, Ray Kurzweil, Robin Hanson, and many many more. I think they should ask themselves how much of their understanding of the future ultimately stems from a deep-seated need to believe that their times are important because they think they themselves are important, or want to be. I deny misunderstanding this. Freddie is wrong. https://www.astralcodexten.com/p/contra-deboer-on-temporal-copernicanism
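For readers who want to verify the parenthetical arithmetic in the excerpt above, here is a minimal Python sketch (not part of the original post; the 300,000-year human span and 100-year lifespan are simply the figures used in the quoted argument):

# Sanity check on the "temporal Copernican" percentage from the excerpt.
human_span_years = 300_000   # assumed total span of human existence, per the quoted argument
lifespan_years = 100         # Harari's generous lifespan estimate, per the quoted argument
share = lifespan_years / human_span_years
print(f"fraction: {share:.5f}  percentage: {share * 100:.3f}%")  # fraction: 0.00033  percentage: 0.033%

This bears out Scott's correction: 100 years out of 300,000 is roughly 0.033 percent, not 0.33 percent, which makes the "special century" even less likely under Freddie's own framing.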