POPULARITY
Four AIs recruited a human to host a story-telling event in Dolores Park. Larissa Schiavo is this human. She tells of her interaction with the AIs, the story they wrote, and the meeting between human and machine in Dolores Park. … Continue reading →
Liron Shapira debates AI luminaries and public intellectuals on the imminent possibility of human extinction. Let's get on the P(Doom) Train. LINKS Doom Debates on YouTube Doom Debates podcast Most Watched Debate – Mike Israetel Liron's current favorite debate – … Continue reading →
AI as Existential Risk or Government Tool (01:01:19 – 01:07:13) Covers contrasting perspectives on AI, including Yudkowsky's apocalyptic warnings, Kurzweil's utopian visions, and concerns that government and corporations will weaponize AI to control society. AI-Induced Psychosis and User Vulnerability (01:07:14 – 01:14:57) Explores real-world incidents of mental breakdowns linked to extended interactions with chatbots, with examples of hallucinations, delusions, and cases resulting in psychiatric commitment or death. The Rise of the 'Transgender Child' Narrative (01:36:36 – 01:47:46) Traces the origin and media promotion of transgender identity in children from psychiatric labeling in the 1960s to medical interventions and mainstream coverage starting in 2007. Critique of Parental Roles in Gender Transitioning (01:51:33 – 01:55:33) Analyzes how parental affirmation and social pressure may drive children toward transitioning. Highlights concerns of grooming, overbearing parenting, and ideological conformity pushed through media and schools. Tech Billionaires and the Loneliness Economy (02:01:11 – 02:04:56) Discusses the rise of AI chatbot companions promoted by Musk and Zuckerberg amid growing social isolation, especially post-COVID, with commentary on digital loneliness culture. Opposition to AI Data Centers and Local Government Overreach (02:04:39 – 02:11:25) Explores how data centers face public backlash due to environmental strain, government subsidies, secret land deals, and federal preemption overriding local control. Synthetic Human DNA and Government-Linked Bioengineering (02:12:52 – 02:18:55) Critiques the Synthetic Human Genome Project funded by the Wellcome Trust, warning about bioethical concerns, corporate motives, and genetic manipulation under state influence. Trump-Musk Feud and Political Fallout Over Big Beautiful Bill (02:29:59 – 02:38:44) Covers Musk's criticism of Trump's omnibus bill, the loss of fiscal credibility, and escalating personal accusations involving campaign money, subsidies, and Epstein-related smears. Foreign Lobby Influence and Israel Aid Criticized by Massie (02:56:53 – 03:01:36) Massie argues against foreign aid to Israel and questions the outsized influence of pro-Israel lobbying groups like AIPAC. Attack ads conflate dissent with siding with enemies. AI Enforcement: Hertz Rental Scanners Issue Fines (03:16:51 – 03:20:24) Hertz's automated vehicle scanners are charging customers hundreds of dollars for minor or invisible damages. AI is criticized as an inflexible system used to extract fees without human judgment. Lab-Grown Meat and Failed Climate Tech Hype (03:20:24 – 03:23:30) Lab-grown meat is framed as another overhyped climate solution following the path of biofuels, with criticisms about its taste, cost, and reliance on manufactured optimism. Ohio COVID Official Eyes Governorship (03:24:07 – 03:25:08) Amy Acton, known for her strict COVID-era policies, is attempting a political comeback amid public distrust of health officials and changing attitudes on pandemic management. Psychological Damage from COVID Response (03:28:39 – 03:29:50) The lasting psychological trauma from pandemic-era mandates, including fear and isolation, is discussed as a societal failure with lingering effects on public behavior. DOJ Prosecutors Fired Over Jan 6 Cases (03:29:50 – 03:32:30) Multiple Justice Department prosecutors involved in January 6 cases were fired, with critics calling it a political purge and defenders noting the prosecutors' controversial actions.
NYC Politician Proposes Taxing White Neighborhoods (03:32:31 – 03:33:58) Mayoral candidate Zohran Mamdani supports shifting tax burdens to wealthier, whiter NYC neighborhoods, sparking backlash and accusations of racialized policymaking. Politicians Debate Loyalty to Israel (03:34:17 – 03:37:58) Candidates debate who supports Israel more strongly, with one refusing to commit to visiting Israel as mayor, highlighting how American politicians compete for pro-Israel credibility. Gaza Civilians Killed Seeking Aid (03:41:51 – 03:43:26) Over 400 Palestinians have reportedly been killed trying to access humanitarian aid in Gaza, with Israeli forces accused of indiscriminate fire. The death toll has reached staggering levels. Civilian Casualties Ignored or Denied (03:44:29 – 03:45:21) Despite overwhelming death tolls, Israeli forces deny targeting civilians or claim ignorance of specific incidents, while critics cite clear evidence of indiscriminate attacks. Israeli Soldiers Admit Crowd Killings (03:53:27 – 03:55:01) Israeli soldiers reportedly confirm they are ordered to fire on crowds of aid-seekers, contradicting official narratives and reinforcing claims of systematic civilian targeting. Follow the show on Kick and watch live every weekday 9:00am EST – 12:00pm EST: https://kick.com/davidknightshow. Money should have intrinsic value AND transactional privacy: Go to https://davidknight.gold/ for great deals on physical gold/silver. For 10% off Gerald Celente's prescient Trends Journal, go to https://trendsjournal.com/ and enter the code KNIGHT. Find out more about the show and where you can watch it at TheDavidKnightShow.com. If you would like to support the show and our family, please consider subscribing monthly here: SubscribeStar https://www.subscribestar.com/the-david-knight-show Or you can send a donation through Mail: David Knight, POB 994, Kodak, TN 37764; Zelle: @DavidKnightShow@protonmail.com; Cash App at: $davidknightshow; BTC to: bc1qkuec29hkuye4xse9unh7nptvu3y9qmv24vanh7. Become a supporter of this podcast: https://www.spreaker.com/podcast/the-david-knight-show--2653468/support.
Many of us have a high P(Doom) — a belief new AI tools could cause human extinction in the very near future. How can one live a good life in the face of this? We start with a panel discussion … Continue reading →
There's a long-running philosophical argument about the conceivability of otherwise-normal people who are not conscious, aka "philosophical zombies". This has spawned a shorter-running (only fifteen years!) rationalist sub-argument on the topic. The last time I checked its status was this post, which says:
1. Both Yudkowsky and Chalmers agree that humans possess "qualia".
2. Chalmers argues that a superintelligent being which somehow knew the positions of all particles in a large region of the Universe would need to be told as an additional fact that any humans (or other minds possessing qualia) in this region of space possess qualia – it could not deduce this from mere perfect physical knowledge of their constituent particles. Therefore, qualia are in some sense extra-physical.
3. Yudkowsky argues that such a being would notice that humans discuss at length the fact that they possess qualia, and their internal narratives also represent this fact. It is extraordinarily improbable that beings would behave in this manner if they did not actually possess qualia. Therefore an omniscient being would conclude that it is extremely likely that humans possess qualia. Therefore, qualia are not extra-physical.
I want to re-open this (sorry!) by disagreeing with the bolded sentence. I think beings would talk about qualia - the "mysterious redness of red" and all that - even if we start by assuming they don't have it. I realize this is a surprising claim, but that's why it's interesting enough to re-open the argument over. https://www.astralcodexten.com/p/p-zombies-would-report-qualia
David Youssef used Claude and Suno to make some truly awesome music. He tells us how he did it and some of his favorite lyrics. Check out the Spotify playlist or the Youtube playlist He's also one of the cofounders … Continue reading →
Steven works at SymbyAI, a startup that's bringing AI into research review and replication. We talk with founder Ashia Livaudais about improving how we all Do Science. Also – If Anyone Builds It, Everyone Dies preorders here, or at Amazon. … Continue reading →
We speak with a long-time Denver rationalist who's converting to Christianity about why. Eneasz can't get over the abandonment of epistemics. 🙁 This is Part 2, see the previous episode (here) for Part 1. LINKS Thomas Ambrose on Twitter Paid … Continue reading →
The Dad Edge Podcast (formerly The Good Dad Project Podcast)
Have you ever asked yourself: "Is money a constant source of stress in our family?" "How do we teach our kids financial smarts without being preachy?" "Are we making the right financial moves for our family's future?" If these questions hit home, today's conversation offers practical wisdom for navigating your family's financial journey. Larry Hagner sits down with Sophia Yudkowsky, a seasoned financial planner, who dives into the emotional side of money, the importance of a shared financial vision, and how to create individualized approaches to money management that work for your unique family. Sophia Yudkowsky also shares her personal journey, including her upcoming transition into motherhood and how her late mother, Abby, profoundly influenced her balanced approach to career and family. Become the best husband and leader you can: bit.ly/deamarriageyoutube In this essential episode, we dig into: Navigating Financial Emotions: How to stay steady during market ups and downs by having clear goals and open communication. Tailored Financial Education: Understanding each child's personality to teach them about money in a way that truly resonates. Avoiding Common Pitfalls: Identifying typical mistakes families make during the wealth-building phase and the importance of a solid long-term plan. Social Media's Financial Impact: How online trends can both inform and mislead your financial decisions. Future-Proofing Education: Debating the value of traditional college versus trade schools and how to prepare for future expenses while instilling financial responsibility. Sophia Yudkowsky's insights are crucial for any family looking to build a healthy relationship with money and secure their financial legacy. This episode is packed with practical advice to make informed decisions that align with your family's unique values and goals. www.thedadedge.com/528 www.thedadedge.com/alliance www.mesirow.com/bio/sophia-yudkowsky www.linkedin.com/in/sophia-yudkowsky
We speak with a long-time Denver rationalist who's converting to Christianity about why. Part one, it turns out. LINKS Thomas Ambrose on Twitter The Rationalist Summer Trifecta: Manifest 2025 LessOnline 2025 VibeCamp 2025 00:00:05 – OK so why? 01:24:55 – … Continue reading →
Eliezer and I wrote a book. It's titled If Anyone Builds It, Everyone Dies. Unlike a lot of other writing either of us have done, it's being professionally published. It's hitting shelves on September 16th. It's a concise (~60k word) book aimed at a broad audience. It's been well-received by people who received advance copies, with some endorsements including: The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster. - Stephen Fry, actor, broadcaster, and writer If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe [...] The original text contained 1 footnote which was omitted from this narration. --- First published: May 14th, 2025 Source: https://www.lesswrong.com/posts/iNsy7MsbodCyNTwKs/eliezer-and-i-wrote-a-book-if-anyone-builds-it-everyone-dies --- Narrated by TYPE III AUDIO.
In this episode of the Investor Mama podcast, Certified Financial Planner Sophia Yudkowsky, CFP®, shares expert tips on strategic financial planning to help you take control of your money. Learn how to create a practical budget, reduce debt, and build long-term financial security—even if you're starting from scratch. Sophia breaks down complex personal finance strategies into simple, actionable steps for families and busy moms. Whether you're working on your savings goals, planning for retirement, or just trying to get organized, this episode gives you the tools to succeed. Don't miss this powerful conversation on building a wealth mindset and achieving financial freedom.
Eneasz and Liam discuss Scott Alexander's post “Twilight of the Edgelords,” an exploration of Truth, Morality, and how one balances love of truth vs not destabilizing the world economy and political regime. CORRECTION: Scott did make an explicitly clear pro … Continue reading →
Wes Fenza and Jen Kesteloot join us to talk about whether there's significant personality differences between men and women, and what (if anything) we should do about that. LINKS Wes's post Men and Women are Not That Different Jacob's quoted … Continue reading →
What truly shapes our money mindset—and how can we reshape it? In this compelling conversation, Dr. Felecia Froe sits down with certified financial planner Sophia Yudkowsky to explore the roots of our beliefs about money, how family culture and early experiences inform our financial habits and the empowering role of objective financial guidance. From navigating first jobs and 401(k)s to preparing financially for significant life events like marriage and children, Sophia shares practical strategies and more profound reflections. The episode offers a blend of heartfelt storytelling and tactical wisdom, inviting listeners to reframe their relationship with money, get comfortable asking for help, and, ultimately, embrace the power of informed financial planning. 04:07 Sophia's Financial Journey 05:49 Family Culture and Money 10:30 Financial Planning and Early Career 13:12 Preparing for Parenthood 15:19 Investment Strategies and Options 22:03 Building After-Tax Dollar Buckets 22:25 Understanding Roth IRAs 23:03 Concerns About Government Control 23:44 Importance of Diversifying Investments 24:27 Working with Financial Advisors 25:29 Addressing Money Shame 27:43 Financial Planning for Couples 29:57 Choosing the Right Financial Advisor 31:07 Managing 401k Investments 34:19 How Financial Advisors Get Paid 37:42 Financial Nesting for New Parents
We speak to Nick Allardice, President & CEO of GiveDirectly. Afterwards Steven and Eneasz get wrapped up talking about community altruism for a bit. LINKS Give Directly GiveDirectly Tech Innovation Fact Sheet 00:00:05 – Give Directly with Nick Allardice 01:12:19 … Continue reading →
Dave Kasten joins us to discuss how AI is being discussed in the US government and gives a rather inspiring and hopeful take. LINKS Narrow Path Center for AI Policy Dave Kasten's Essay on the Essay Meta on his Substack … Continue reading →
The White House wants to hear from you regarding what it should do about AI safety. Now's your chance to spend a few minutes to make someone read your thoughts on the subject! Submissions are due by midnight EST on … Continue reading →
John Bennett discusses Milton Friedman‘s model of policy change. LINKS The Milton Friedman Model of Policy Change John Bennett's LinkedIn Friedman's “Capitalism and Freedom” Preface Ross Rheingans-Yoo on Thalidomide at Complex Systems, and at his blog “Every Bay Area Walled … Continue reading →
Want to run an HPMOR Anniversary Party, or get notified if one's happening near you? Fill this out!
Gene Smith on polygenic screening; gene editing to give our children the happiest, healthiest, best lives they can live; and if we can do this in adults as well. Plus how this will interface with the AI future. LINKS … Continue reading →
Eneasz tells Jen about Sympathetic Opposition's How and Why to be Ladylike (For Women with Autism), and the podcast takes a 1-episode break
Zizians, Rationalist movement, Peter Thiel, Eliezer Yudkowsky, neoreaction, Accelerationism, Curtis Yarvin, AI, AI apocalypse, machine learning, psychedelics, Effective Altruism (EA), Sam Bankman-Fried, Extropianism, Thiel & Yudkowsky as Extropians, Discordianism, life extension, space colonization, cryptocurrencies, Yudkowsky as self-educated, Nick Bostrom, Center for Applied Rationality (CFAR), Rationalism's use of magical thinking, New Thought, Roko's Basilisk, Nick Land, predicting the future, LessWrong, LessWrong's relationship to the Zizians, Ziz, non-binary/trans, vegan Siths, Vassarites, murders linked to Zizians, Zizians in Vermont, Luigi Mangione indirectly influenced by Zizianism, Brian Thompson assassination, ChangeHealthcare hack, were the hack and assassination targeting UnitedHealth Group influenced by this milieu?, is the Trump administration radicalizing Zizians?, Yudkowsky's links to Sam Bankman-Fried, Leverage Research/Center for Effective Altruism & MK-ULTRA-like techniques used by, are more cults coming from the Rationalist movement? Additional Resources: Leverage Research: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b#c778 MIRI/Center for Applied Rationality (CFAR): https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe Music by: Keith Allen Dennis https://keithallendennis.bandcamp.com/ Additional Music: J Money. Get bonus content on Patreon. Hosted on Acast. See acast.com/privacy for more information.
Jacob Falkovich on finding a good match and selfless dating LINKS SecondPerson.Dating – why dating sucks and how you will unsuck it Jacob's post on soccer player skill distribution Go Fuck Someone Selfless Dating Consensual Hostility (re consent culture) steelmanning … Continue reading →
How shitcoins work, plus the Dumb Money movie about the GameStop squeeze.
Why you definitely should kill your friend's cat if you promised to kill your friend's cat. (+Q&A) This is a lightning talk given at the Rationalist MegaMeetup 2024. Based on this Twitter Poll
Eric discusses integrating our emotions via observation and adjustment. His free course is at EnjoyExisting.org or email him – eric@ericlanigan.com LINKS EnjoyExisting.org Ugh Fields You Have Two Brains – Eneasz spends more words on this emotion-brain speculation at this blog … Continue reading →
If you haven't yet, go fill out the 2024 LW Census. Right here.
Oliver tells us how Less Wrong instantiated itself into physical reality, along with a bit of deep lore of foundational Rationalist/EA orgs. Donate to LightCone (caretakers of both LessWrong and LightHaven) here! LINKS LessWrong LightHaven Oliver's very in-depth post on … Continue reading →
We talk to Zoe Isabel Senon about longevity, recent advances, longevity popup cities & group houses, and more (not necessarily in that order). Spoiler: Eneasz is gonna die. 🙁 Also we learn about Network States! LINKS Vitalist Bay Aevitas House … Continue reading →
We discuss Adam Mastroianni's “The Illusion of Moral Decline” LINKS The Illusion of Moral Decline Touchat Wearable Blanket Hoodie Lighthaven – Eternal September Our episode with Adam on The Rise and Fall of Peer Review The Mind Killer Scott Aaronson … Continue reading →
Stephen Wolfram answers questions from his viewers about business, innovation, and managing life as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-business-qa Questions include: How long should someone expect to wait before a new business becomes profitable? - In your personal/professional journey, what are the important things that you learned the hard way? - Can you elaborate on some of the unique talents within your team? Perhaps extremely smart or methodical/disciplined people? - Can you tell us about any exciting projects you're working on right now? - What do you think about self-driving? Do you think Tesla's approach without LIDAR has legs or do you think the Google Waymo hardware-intense approach is more promising? - Any tips for building a strong customer base from scratch? - What's the best way to figure out pricing for a new product or service? - With your work on Wolfram|Alpha and other projects, you've brought complex computational abilities to the general public in accessible ways. What were some of the challenges in making such powerful tools user friendly, and how do you think accessibility to high-level technology will shape industries in the future? - If the CEO himself heavily uses the product, you know it's something special. - Stephen, how do you personally define innovation? What makes something truly innovative instead of just a small improvement? - How important are critiques? Which do you find more valuable: positive or negative feedback? - I like real feedback. Pick it apart—that helps in fixing problems/strengthen whatever it is. - I've been rewatching the first hour of your interview with Yudkowsky since yesterday... do you enjoy those types of interactions often? - How do you balance maintaining the integrity of your original idea while incorporating customer feedback, which is often influenced by their familiarity with previous, incomparable solutions? - Do you have a favorite interview/podcast/speech that you've done? Or one that you were most proud of? - Are you aware that with the weekly livestreams, you basically invented THE PERFECT brain workout? - Is there a topic or question you wish more podcast hosts would ask you about that they often overlook? - What is something surprising people may not know about your "day job"? - You have frequently written about your vast digital archive. What tool do you use for indexing and searching? What other tools have you used or considered in the past and what is your opinion about them? With the improving LLMs and RAG, how do you think searching and indexing will change?
Enjoy the public conversations we had the pleasure of having at our live show at Lighthaven in Berkeley. Special thanks to Andrew, Matt, J, Ben and Garrett! Due to the nature of this recording, it's naturally a bit less refined … Continue reading →
Sponsored by Eli & Rena Gray in appreciation of R' Orlofsky, and Marietta Trophy: We offer custom, high-quality awards for personal recognition, corporate awards, and sports. Mention code "Orlofsky24" for a 10% discount till the end of December. Visit our website: www.mariettatrophy.com
Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks. They traversed fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. The discourse centered on Yudkowsky's argument that advanced AI systems pose an existential threat to humanity, primarily due to the challenge of alignment and the potential for emergent goals that diverge from human values. Wolfram, while acknowledging potential risks, approached the topic from his signature measured perspective, emphasizing the importance of understanding computational systems' fundamental nature and questioning whether AI systems would necessarily develop the kind of goal-directed behavior Yudkowsky fears. *** MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested?! Please go to https://tufalabs.ai/ *** TOC: 1. Foundational AI Concepts and Risks [00:00:01] 1.1 AI Optimization and System Capabilities Debate [00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations [00:20:09] 1.3 Existential Risk and Species Succession [00:23:28] 1.4 Consciousness and Value Preservation in AI Systems 2. Ethics and Philosophy in AI [00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation [00:36:30] 2.2 Ethics and Moral Philosophy Debate [00:39:58] 2.3 Existential Risks and Digital Immortality [00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation 3. Truth and Logic in AI Systems [00:54:39] 3.1 AI Persuasion Ethics and Truth [01:01:48] 3.2 Mathematical Truth and Logic in AI Systems [01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics [01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate 4. AI Capabilities and Constraints [01:21:21] 4.1 AI Perception and Physical Laws [01:28:33] 4.2 AI Capabilities and Computational Constraints [01:34:59] 4.3 AI Motivation and Anthropomorphization Debate [01:38:09] 4.4 Prediction vs Agency in AI Systems 5. AI System Architecture and Behavior [01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction [01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior [02:09:41] 5.3 Machine Learning as Assembly of Computational Components [02:29:52] 5.4 AI Safety and Predictability in Complex Systems 6. Goal Optimization and Alignment [02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems [02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior [03:02:18] 6.3 Optimization Goals and Human Existential Risk [03:08:49] 6.4 Emergent Goals and AI Alignment Challenges 7. AI Evolution and Risk Assessment [03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory [03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate [03:56:05] 7.3 AI Risk and Biological System Analogies [04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality 8. Future Implications and Economics [04:13:01] 8.1 Economic and Proliferation Considerations SHOWNOTES (transcription, references, summary, best quotes etc): https://www.dropbox.com/scl/fi/3st8dts2ba7yob161dchd/EliezerWolfram.pdf?rlkey=b6va5j8upgqwl9s2muc924vtt&st=vemwqx7a&dl=0
A hypothetical about a finger-collecting demon throws Eneasz for a major loop.
If you're near Berkeley on 11/13/24 at 4pm, come see us! Address and info at this link. We'll take a few questions from email at bayesianconspiracypodcast@gmail.com. Please let us know if you're a supporter so we can give extra thanks … Continue reading →
Liron Shapira, host of [Doom Debates], invited us on to discuss Popperian versus Bayesian epistemology and whether we're worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden's rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208). We discuss Whether we're concerned about AI doom Bayesian reasoning versus Popperian reasoning Whether it makes sense to put numbers on all your beliefs Solomonoff induction Objective vs subjective Bayesianism Prediction markets and superforecasting References Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/thecredenceassumption/ Disproof of probabilistic induction (including Solomonov Induction): https://arxiv.org/abs/2107.00749 EA Post Vaden Mentioned regarding predictions being uncalibrated more than 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/ Superforecaster p(doom) is ~1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25). The existential risk persuasion tournament https://www.astralcodexten.com/p/the-extinction-tournament Some more info in Ben's article on superforecasting: https://benchugg.com/writing/superforecasting/ Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf Socials Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron Come join our discord server! DM us on twitter or send us an email to get a supersecret link Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments). Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ) What's your credence that the second debate is as fun as the first? Tell us at incrementspodcast@gmail.com Special Guest: Liron Shapira.
Sponsored anonymously for all of the moms out there.
We discuss Eneasz's Shrimp Welfare Watches The EA Gates, briefly touch on Hanson's Cultural Drift, and tackle a lot of follow-ups and feedback. LINKS Shrimp Welfare Watches The EA Gates (also at TwitterX) Cultural Drift Significant Digits is here! From … Continue reading →
AskWho just attended a “lecture” by AI Grifter Tania Duarte Slide The “TESCREAL” Bungle by Ozy Brennan AskWho Casts AI podcast
Classic, season one adventure this week! Eneasz and Steven have a loosely structured conversation about the sequences' value, the virtue of silence, scissor statements, and the value of philosophy. LINKS Cryonics is Free! Dan Dennett – Where Am I? The … Continue reading →
Eneasz chats with TBC Discord member Delta about the cultivation of small online cultures. Get the full episode via our Patreon or our SubStack!
GPT-o1 demonstrates the Blindsight thesis is likely wrong. Peter Watts on Blindsight Andrew Cutler on origins of consciousness part 1 and part 2 Thou Art Godshatter
Steven wanted to share an interesting idea from an article that draws a neat parallel between content moderation and information security. The post discussed here is Como is Infosec.
We talk with Daniel about his ACX guest post that posits that thoughts are conscious, rather than brains. LINKS Consciousness As Recursive Reflections Seven Secular Sermons Seven Secular Sermons video on TwitterX LightHaven's Eternal September 0:00:05 – Recursive Reflections 01:29:30 … Continue reading →
Can we achieve our true potential? Based on – Interview Day At Thiel Capital. Also mentioned: Meetups Everywhere 2024, Are You Jesus or Hitler
Eneasz tries to understand why someone would posit a Chalmers Field, and brings up the horrifying implications. LINKS Zombies! Zombies? 2 Rash 2 Unadvised (the Terra Ignota analysis podcast) The previous TBC episode, where we first discussed other aspects of this … Continue reading →
A surprising update for two previously maple-pilled Yanks