POPULARITY
There's a long-running philosophical argument about the conceivability of otherwise-normal people who are not conscious, aka “philosophical zombies”. This has spawned a shorter-running (only fifteen years!) rationalist sub-argument on the topic. The last time I checked its status was this post, which says:

1. Both Yudkowsky and Chalmers agree that humans possess “qualia”.
2. Chalmers argues that a superintelligent being which somehow knew the positions of all particles in a large region of the Universe would need to be told as an additional fact that any humans (or other minds possessing qualia) in this region of space possess qualia – it could not deduce this from mere perfect physical knowledge of their constituent particles. Therefore, qualia are in some sense extra-physical.
3. Yudkowsky argues that such a being would notice that humans discuss at length the fact that they possess qualia, and their internal narratives also represent this fact. It is extraordinarily improbable that beings would behave in this manner if they did not actually possess qualia. Therefore an omniscient being would conclude that it is extremely likely that humans possess qualia. Therefore, qualia are not extra-physical.

I want to re-open this (sorry!) by disagreeing with the bolded sentence in point 3 (“It is extraordinarily improbable that beings would behave in this manner if they did not actually possess qualia”). I think beings would talk about qualia - the “mysterious redness of red” and all that - even if we start by assuming they don't have it. I realize this is a surprising claim, but that's why it's interesting enough to re-open the argument over. https://www.astralcodexten.com/p/p-zombies-would-report-qualia
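To make the Bayesian step in point 3 concrete, here is a minimal sketch in my own notation; the summary above does not spell out any arithmetic, and the specific numbers below are purely illustrative. Write Q for “this being has qualia” and R for “this being reports and internally represents qualia”:

\[
P(Q \mid R) \;=\; \frac{P(R \mid Q)\,P(Q)}{P(R \mid Q)\,P(Q) + P(R \mid \neg Q)\,P(\neg Q)}
\]

If reports without qualia are taken to be extraordinarily improbable, say P(R | ¬Q) = 10^-6, with P(R | Q) = 0.9 and an agnostic prior P(Q) = 0.5, then P(Q | R) ≈ 0.999999. In these terms, the disagreement above amounts to denying that P(R | ¬Q) is tiny: if beings without qualia would talk about the redness of red anyway, the update toward Q is far weaker.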
David Youssef used Claude and Suno to make some truly awesome music. He tells us how he did it and some of his favorite lyrics. Check out the Spotify playlist or the YouTube playlist. He's also one of the cofounders … Continue reading →
Steven works at SymbyAI, a startup that's bringing AI into research review and replication. We talk with founder Ashia Livaudais about improving how we all Do Science. Also – If Anyone Builds It, Everyone Dies preorders here, or at Amazon. … Continue reading →
We speak with a long-time Denver rationalist who's converting to Christianity about why. Eneasz can't get over the abandonment of epistemics. 🙁 This is Part 2, see the previous episode (here) for Part 1. LINKS Thomas Ambrose on Twitter Paid … Continue reading →
The Dad Edge Podcast (formerly The Good Dad Project Podcast)
Have you ever asked yourself: "Is money a constant source of stress in our family?" "How do we teach our kids financial smarts without being preachy?" "Are we making the right financial moves for our family's future?" If these questions hit home, today's conversation offers practical wisdom for navigating your family's financial journey. Larry Hagner sits down with Sophia Yudkowsky, a seasoned financial planner, who dives into the emotional side of money, the importance of a shared financial vision, and how to create individualized approaches to money management that work for your unique family. Sophia Yudkowsky also shares her personal journey, including her upcoming transition into motherhood and how her late mother, Abby, profoundly influenced her balanced approach to career and family. Become the best husband and leader you can: bit.ly/deamarriageyoutube

In this essential episode, we dig into:
Navigating Financial Emotions: How to stay steady during market ups and downs by having clear goals and open communication.
Tailored Financial Education: Understanding each child's personality to teach them about money in a way that truly resonates.
Avoiding Common Pitfalls: Identifying typical mistakes families make during the wealth-building phase and the importance of a solid long-term plan.
Social Media's Financial Impact: How online trends can both inform and mislead your financial decisions.
Future-Proofing Education: Debating the value of traditional college versus trade schools and how to prepare for future expenses while instilling financial responsibility.

Sophia Yudkowsky's insights are crucial for any family looking to build a healthy relationship with money and secure their financial legacy. This episode is packed with practical advice to make informed decisions that align with your family's unique values and goals.

www.thedadedge.com/528
www.thedadedge.com/alliance
www.mesirow.com/bio/sophia-yudkowsky
www.linkedin.com/in/sophia-yudkowsky
We speak with a long-time Denver rationalist who's converting to Christianity about why. Part one, it turns out. LINKS Thomas Ambrose on Twitter The Rationalist Summer Trifecta: Manifest 2025 LessOnline 2025 VibeCamp 2025 00:00:05 – OK so why? 01:24:55 – … Continue reading →
Eliezer and I wrote a book. It's titled If Anyone Builds It, Everyone Dies. Unlike a lot of other writing either of us have done, it's being professionally published. It's hitting shelves on September 16th. It's a concise (~60k word) book aimed at a broad audience. It's been well-received by people who received advance copies, with some endorsements including: The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster. - Stephen Fry, actor, broadcaster, and writer If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe [...] The original text contained 1 footnote which was omitted from this narration. --- First published: May 14th, 2025 Source: https://www.lesswrong.com/posts/iNsy7MsbodCyNTwKs/eliezer-and-i-wrote-a-book-if-anyone-builds-it-everyone-dies --- Narrated by TYPE III AUDIO.
In this episode of the Investor Mama podcast, Certified Financial Planner Sophia Yudkowsky, CFP®, shares expert tips on strategic financial planning to help you take control of your money. Learn how to create a practical budget, reduce debt, and build long-term financial security—even if you're starting from scratch. Sophia breaks down complex personal finance strategies into simple, actionable steps for families and busy moms. Whether you're working on your savings goals, planning for retirement, or just trying to get organized, this episode gives you the tools to succeed. Don't miss this powerful conversation on building a wealth mindset and achieving financial freedom.
Eneasz and Liam discuss Scott Alexander's post “Twilight of the Edgelords,” an exploration of Truth, Morality, and how one balances love of truth vs not destabilizing the world economy and political regime. CORRECTION: Scott did make an explicitly clear pro … Continue reading →
Wes Fenza and Jen Kesteloot join us to talk about whether there's significant personality differences between men and women, and what (if anything) we should do about that. LINKS Wes's post Men and Women are Not That Different Jacob's quoted … Continue reading →
What truly shapes our money mindset—and how can we reshape it? In this compelling conversation, Dr. Felecia Froe sits down with certified financial planner Sophia Yudkowsky to explore the roots of our beliefs about money, how family culture and early experiences inform our financial habits and the empowering role of objective financial guidance. From navigating first jobs and 401(k)s to preparing financially for significant life events like marriage and children, Sophia shares practical strategies and more profound reflections. The episode offers a blend of heartfelt storytelling and tactical wisdom, inviting listeners to reframe their relationship with money, get comfortable asking for help, and, ultimately, embrace the power of informed financial planning.

04:07 Sophia's Financial Journey
05:49 Family Culture and Money
10:30 Financial Planning and Early Career
13:12 Preparing for Parenthood
15:19 Investment Strategies and Options
22:03 Building After-Tax Dollar Buckets
22:25 Understanding Roth IRAs
23:03 Concerns About Government Control
23:44 Importance of Diversifying Investments
24:27 Working with Financial Advisors
25:29 Addressing Money Shame
27:43 Financial Planning for Couples
29:57 Choosing the Right Financial Advisor
31:07 Managing 401k Investments
34:19 How Financial Advisors Get Paid
37:42 Financial Nesting for New Parents
We speak to Nick Allardice, President & CEO of GiveDirectly. Afterwards Steven and Eneasz get wrapped up talking about community altruism for a bit. LINKS Give Directly GiveDirectly Tech Innovation Fact Sheet 00:00:05 – Give Directly with Nick Allardice 01:12:19 … Continue reading →
Dave Kasten joins us to discuss how AI is being discussed in the US government and gives a rather inspiring and hopeful take. LINKS Narrow Path Center for AI Policy Dave Kasten's Essay on the Essay Meta on his Substack … Continue reading →
The White House wants to hear from you regarding what it should do about AI safety. Now's your chance to spend a few minutes to make someone read your thoughts on the subject! Submissions are due by midnight EST on … Continue reading →
John Bennett discusses Milton Friedman's model of policy change. LINKS The Milton Friedman Model of Policy Change John Bennett's LinkedIn Friedman's “Capitalism and Freedom” Preface Ross Rheingans-Yoo on Thalidomide at Complex Systems, and at his blog “Every Bay Area Walled … Continue reading →
Want to run an HPMOR Anniversary Party, or get notified if one's happening near you? Fill this out!
Gene Smith on polygenic screening; gene editing to give our children the happiest, healthiest, best lives they can live; and if we can do this in adults as well. Plus how this will interface with the AI future. LINKS … Continue reading →
Eneasz tells Jen about Sympathetic Opposition's How and Why to be Ladylike (For Women with Autism), and the podcast takes a 1-episode break
Zizians, Rationalist movement, Peter Thiel, Eliezer Yudkowsky, neoreaction, Accelerationism, Curtis Yarvin, AI, AI apocalypse, machine learning, psychedelics, Effective Altruism (EA), Sam Bankman-Fried, Extropianism, Thiel & Yudkowsky as Extropians, Discordianism, life extension, space colonization, cryptocurrencies, Yudkowsky as self-educated, Nick Bostrom, Center for Applied Rationality (CFAR), Rationalism's use of magical thinking, New Thought, Roko's Basilisk, Nick Land, predicting the future, LessWrong, LessWrong's relationship to the Zizians, Ziz, non-binary/trans, vegan Siths, Vasserites, murders linked to Zizians, Zizians in Vermont, Luigi Mangione indirectly influenced by Zizianism, Brian Thompson assassination, Change Healthcare hack, were the hack and assassination targeting UnitedHealth Group influenced by this milieu?, is the Trump administration radicalizing Zizians?, Yudkowsky's links to Sam Bankman-Fried, Leverage Research/Center for Effective Altruism & MK-ULTRA-like techniques used by, are more cults coming from the Rationalist movement?

Additional Resources:
Leverage Research: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b#c778
MIRI/Center for Applied Rationality (CFAR): https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe

Music by: Keith Allen Dennis https://keithallendennis.bandcamp.com/
Additional Music: J Money

Get bonus content on Patreon. Hosted on Acast. See acast.com/privacy for more information.
Jacob Falkovich on finding a good match and selfless dating LINKS SecondPerson.Dating – why dating sucks and how you will unsuck it Jacob's post on soccer player skill distribution Go Fuck Someone Selfless Dating Consensual Hostility (re consent culture) steelmanning … Continue reading →
How shitcoins work, plus the Dumb Money movie about the GameStop squeeze.
Why you definitely should kill your friend's cat if you promised to kill your friend's cat. (+Q&A) This is a lightning talk given at the Rationalist MegaMeetup 2024. Based on this Twitter Poll
Eric discusses integrating our emotions via observation and adjustment. His free course is at EnjoyExisting.org or email him – eric@ericlanigan.com LINKS EnjoyExisting.org Ugh Fields You Have Two Brains – Eneasz spends more words on this emotion-brain speculation at this blog … Continue reading →
If you haven't yet, go fill out the 2024 LW Census. Right here.
Oliver tells us how Less Wrong instantiated itself into physical reality, along with a bit of deep lore of foundational Rationalist/EA orgs. Donate to LightCone (caretakers of both LessWrong and LightHaven) here! LINKS LessWrong LightHaven Oliver's very in-depth post on … Continue reading →
We talk to Zoe Isabel Senon about longevity, recent advances, longevity popup cities & group houses, and more (not necessarily in that order). Spoiler: Eneasz is gonna die. 🙁 Also we learn about Network States! LINKS Vitalist Bay Aevitas House … Continue reading →
We discuss Adam Mastroianni's “The Illusion of Moral Decline” LINKS The Illusion of Moral Decline Touchat Wearable Blanket Hoodie Lighthaven – Eternal September Our episode with Adam on The Rise and Fall of Peer Review The Mind Killer Scott Aaronson … Continue reading →
Stephen Wolfram answers questions from his viewers about business, innovation, and managing life as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-business-qa

Questions include:
- How long should someone expect to wait before a new business becomes profitable?
- In your personal/professional journey, what are the important things that you learned the hard way?
- Can you elaborate on some of the unique talents within your team? Perhaps extremely smart or methodical/disciplined people?
- Can you tell us about any exciting projects you're working on right now?
- What do you think about self-driving? Do you think Tesla's approach without LIDAR has legs or do you think the Google Waymo hardware-intense approach is more promising?
- Any tips for building a strong customer base from scratch?
- What's the best way to figure out pricing for a new product or service?
- With your work on Wolfram|Alpha and other projects, you've brought complex computational abilities to the general public in accessible ways. What were some of the challenges in making such powerful tools user friendly, and how do you think accessibility to high-level technology will shape industries in the future?
- If the CEO himself heavily uses the product, you know it's something special.
- Stephen, how do you personally define innovation? What makes something truly innovative instead of just a small improvement?
- How important are critiques? Which do you find more valuable: positive or negative feedback?
- I like real feedback. Pick it apart—that helps in fixing problems/strengthen whatever it is.
- I've been rewatching the first hour of your interview with Yudkowsky since yesterday... do you enjoy those types of interactions often?
- How do you balance maintaining the integrity of your original idea while incorporating customer feedback, which is often influenced by their familiarity with previous, incomparable solutions?
- Do you have a favorite interview/podcast/speech that you've done? Or one that you were most proud of?
- Are you aware that with the weekly livestreams, you basically invented THE PERFECT brain workout?
- Is there a topic or question you wish more podcast hosts would ask you about that they often overlook?
- What is something surprising people may not know about your "day job"?
- You have frequently written about your vast digital archive. What tool do you use for indexing and searching? What other tools have you used or considered in the past and what is your opinion about them? With the improving LLMs and RAG, how do you think searching and indexing will change?
Enjoy the public conversations we had the pleasure of having at our live show at Lighthaven in Berkeley. Special thanks to Andrew, Matt, J, Ben and Garrett! Due to the nature of this recording, it's naturally a bit less refined … Continue reading →
Sponsored by Eli & Rena GrayIn appreciation of R' Orlofsky andMarietta Trophy: We offer custom, High-Quality Awards for personal recognition, corporate Awards, and sports.Mention code “Orlofsky24” for a 10% discount till the end of December.Visit our website www.mariettatrophy.com
Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks. They traverse fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. The discourse centers on Yudkowsky's argument that advanced AI systems pose an existential threat to humanity, primarily due to the challenge of alignment and the potential for emergent goals that diverge from human values. Wolfram, while acknowledging potential risks, approaches the topic from his signature measured perspective, emphasizing the importance of understanding computational systems' fundamental nature and questioning whether AI systems would necessarily develop the kind of goal-directed behavior Yudkowsky fears.

*** MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested?! Please go to https://tufalabs.ai/ ***

TOC:
1. Foundational AI Concepts and Risks
[00:00:01] 1.1 AI Optimization and System Capabilities Debate
[00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations
[00:20:09] 1.3 Existential Risk and Species Succession
[00:23:28] 1.4 Consciousness and Value Preservation in AI Systems
2. Ethics and Philosophy in AI
[00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation
[00:36:30] 2.2 Ethics and Moral Philosophy Debate
[00:39:58] 2.3 Existential Risks and Digital Immortality
[00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation
3. Truth and Logic in AI Systems
[00:54:39] 3.1 AI Persuasion Ethics and Truth
[01:01:48] 3.2 Mathematical Truth and Logic in AI Systems
[01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics
[01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate
4. AI Capabilities and Constraints
[01:21:21] 4.1 AI Perception and Physical Laws
[01:28:33] 4.2 AI Capabilities and Computational Constraints
[01:34:59] 4.3 AI Motivation and Anthropomorphization Debate
[01:38:09] 4.4 Prediction vs Agency in AI Systems
5. AI System Architecture and Behavior
[01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction
[01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior
[02:09:41] 5.3 Machine Learning as Assembly of Computational Components
[02:29:52] 5.4 AI Safety and Predictability in Complex Systems
6. Goal Optimization and Alignment
[02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems
[02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior
[03:02:18] 6.3 Optimization Goals and Human Existential Risk
[03:08:49] 6.4 Emergent Goals and AI Alignment Challenges
7. AI Evolution and Risk Assessment
[03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory
[03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate
[03:56:05] 7.3 AI Risk and Biological System Analogies
[04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality
8. Future Implications and Economics
[04:13:01] 8.1 Economic and Proliferation Considerations

SHOWNOTES (transcription, references, summary, best quotes etc): https://www.dropbox.com/scl/fi/3st8dts2ba7yob161dchd/EliezerWolfram.pdf?rlkey=b6va5j8upgqwl9s2muc924vtt&st=vemwqx7a&dl=0
A hypothetical about a finger-collecting demon throws Eneasz for a major loop.
If you're near Berkeley on 11/13/24 at 4pm, come see us! Address and info at this link. We'll take a few questions from email at bayesianconspiracypodcast@gmail.com please let us know if you're a supporter so we can give extra thanks … Continue reading →
Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we're worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden's rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208).

We discuss:
- Whether we're concerned about AI doom
- Bayesian reasoning versus Popperian reasoning
- Whether it makes sense to put numbers on all your beliefs
- Solomonoff induction
- Objective vs subjective Bayesianism
- Prediction markets and superforecasting

References:
- Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/thecredenceassumption/
- Disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749
- EA Post Vaden mentioned regarding predictions being uncalibrated more than 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
- Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
- Superforecaster p(doom) is ~1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25).
- The existential risk persuasion tournament: https://www.astralcodexten.com/p/the-extinction-tournament
- Some more info in Ben's article on superforecasting: https://benchugg.com/writing/superforecasting/
- Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf

Socials:
- Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron
- Come join our discord server! DM us on twitter or send us an email to get a supersecret link
- Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
- Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)

What's your credence that the second debate is as fun as the first? Tell us at incrementspodcast@gmail.com

Special Guest: Liron Shapira.
Sponsored anonymously for all of the moms out there.
We discuss Eneasz's Shrimp Welfare Watches The EA Gates, briefly touch on Hanson's Cultural Drift, and tackle a lot of follow-ups and feedback. LINKS Shrimp Welfare Watches The EA Gates (also at TwitterX) Cultural Drift Significant Digits is here! From … Continue reading →
AskWho just attended a “lecture” by AI Grifter Tania Duarte.
Slide
The “TESCREAL” Bungle by Ozy Brennan
AskWho Casts AI podcast
Classic, season one adventure this week! Eneasz and Steven have a loosely structured conversation about the sequences' value, the virtue of silence, scissor statements, and the value of philosophy. LINKS Cryonics is Free! Dan Dennett – Where Am I? The … Continue reading →
Eneasz chats with TBC Discord member Delta about the cultivation of small online cultures. Get the full episode via our Patreon or our SubStack!
GPT-o1 demonstrates the Blindsight thesis is likely wrong. Peter Watts on Blindsight Andrew Cutler on origins of consciousness part 1 and part 2 Thou Art Godshatter
Steven wanted to share an interesting idea from an article that draws a neat parallel between content moderation and information security. The post discussed here is Como is Infosec.
We talk with Daniel about his ACX guest post that posits that thoughts are conscious, rather than brains. LINKS Consciousness As Recursive Reflections Seven Secular Sermons Seven Secular Sermons video on TwitterX LightHaven's Eternal September 0:00:05 – Recursive Reflections 01:29:30 … Continue reading →
Can we achieve our true potential? Based on Interview Day At Thiel Capital. Also mentioned: Meetups Everywhere 2024, Are You Jesus or Hitler
In this episode of the Let's Talk Brain Health podcast, host Krystal interviews Rena Yudkowsky, a professional memory coach and geriatric social worker. With over 20 years of experience, Rena offers valuable insights on how midlifers and seniors can maintain and improve their memory. She discusses her journey into gerontology, the importance of memory health, and differentiates between normal age-related memory changes and signs of cognitive decline. The episode also provides practical tips—including the 'Forget Me Not Spot,' mental imagery, and sensory engagement—to enhance memory. Rena highlights four key lifestyle factors—diet, exercise, social stimulation, and cognitive engagement—that play crucial roles in brain health. The episode concludes with Rena emphasizing the importance of confidence in aging and some emerging trends in memory enhancement technologies.

00:00 Introduction to the Podcast and Guest
01:06 Rena's Background and Passion for Memory Coaching
02:38 Understanding Memory Health
05:12 Normal vs. Abnormal Memory Changes
09:35 Practical Tips to Boost Memory
18:45 Lifestyle Factors for Memory Health
22:16 Challenges and Overcoming Memory Issues
25:09 Future of Memory Enhancement and Brain Health
26:22 Rapid Fire Questions and Final Thoughts

Resources:
Learn more about Rena and her work on her website.
Join Rena's MPower “brain training WhatsApp group”
Read more about memory in Rena's chapter on memory in the “Caregivers Advocate” book by Debbie Compton

--- Support this podcast: https://podcasters.spotify.com/pod/show/virtualbrainhealthcenter/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Obliqueness Thesis, published by Jessica Taylor on September 19, 2024 on The AI Alignment Forum.

In my Xenosystems review, I discussed the Orthogonality Thesis, concluding that it was a bad metaphor. It's a long post, though, and the comments on orthogonality build on other Xenosystems content. Therefore, I think it may be helpful to present a more concentrated discussion on Orthogonality, contrasting Orthogonality with my own view, without introducing dependencies on Land's views. (Land gets credit for inspiring many of these thoughts, of course, but I'm presenting my views as my own here.)

First, let's define the Orthogonality Thesis. Quoting Superintelligence for Bostrom's formulation:

Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

To me, the main ambiguity about what this is saying is the "could in principle" part; maybe, for any level of intelligence and any final goal, there exists (in the mathematical sense) an agent combining those, but some combinations are much more natural and statistically likely than others. Let's consider Yudkowsky's formulations as alternatives. Quoting Arbital:

The Orthogonality Thesis asserts that there can exist arbitrarily intelligent agents pursuing any kind of goal.

The strong form of the Orthogonality Thesis says that there's no extra difficulty or complication in the existence of an intelligent agent that pursues a goal, above and beyond the computational tractability of that goal.

As an example of the computational tractability consideration, sufficiently complex goals may only be well-represented by sufficiently intelligent agents. "Complication" may be reflected in, for example, code complexity; to my mind, the strong form implies that the code complexity of an agent with a given level of intelligence and goals is approximately the code complexity of the intelligence plus the code complexity of the goal specification, plus a constant. Code complexity would influence statistical likelihood for the usual Kolmogorov/Solomonoff reasons, of course.

I think, overall, it is more productive to examine Yudkowsky's formulation than Bostrom's, as he has already helpfully factored the thesis into weak and strong forms. Therefore, by criticizing Yudkowsky's formulations, I am less likely to be criticizing a strawman. I will use "Weak Orthogonality" to refer to Yudkowsky's "Orthogonality Thesis" and "Strong Orthogonality" to refer to Yudkowsky's "strong form of the Orthogonality Thesis".

Land, alternatively, describes a "diagonal" between intelligence and goals as an alternative to orthogonality, but I don't see a specific formulation of a "Diagonality Thesis" on his part. Here's a possible formulation:

Diagonality Thesis: Final goals tend to converge to a point as intelligence increases.

The main criticism of this thesis is that formulations of ideal agency, in the form of Bayesianism and VNM utility, leave open free parameters, e.g. priors over un-testable propositions, and the utility function. Since I expect few readers to accept the Diagonality Thesis, I will not concentrate on criticizing it.

What about my own view? I like Tsvi's naming of it as an "obliqueness thesis".

Obliqueness Thesis: The Diagonality Thesis and the Strong Orthogonality Thesis are false.
Agents do not tend to factorize into an Orthogonal value-like component and a Diagonal belief-like component; rather, there are Oblique components that do not factorize neatly. (Here, by Orthogonal I mean basically independent of intelligence, and by Diagonal I mean converging to a point in the limit of intelligence.) While I will address Yudkowsky's arguments for the Orthogonality Thesis, I think arguing directly for my view first will be more helpful. In general, it seems ...
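As a rough formalization of the "strong form" gloss quoted above (my own notation; neither Yudkowsky nor Taylor states it this way), let K denote description (code) complexity, i an intelligence level, and g a goal specification:

\[
K(\mathrm{agent}(i,g)) \;\approx\; K(i) + K(g) + O(1),
\qquad
P_{\mathrm{Solomonoff}}(\mathrm{agent}(i,g)) \;\propto\; 2^{-K(\mathrm{agent}(i,g))}.
\]

On this reading, Weak Orthogonality only asserts that some agent exists for each (i, g) pair, while Strong Orthogonality additionally bounds the extra description length of combining them by a constant; via the 2^-K prior, that bound is what makes arbitrary intelligence/goal pairings statistically unremarkable rather than merely possible in principle.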
Eneasz tries to understand why someone would posit a Chalmers Field, and brings up the horrifying implications. LINKS Zombies! Zombies? 2 Rash 2 Unadvised (the Terra Ignota analysis podcast) The previous TBC episode, where we first discussed other aspects of this … Continue reading →
A surprising update for two previously maple-pilled Yanks
We dig into the classic LW post Zombies! Zombies? and talk a lot of philosophy with Liam from the 2 Rash 2 Unadvised podcast. I (Steven) spent a bunch of time trying to export the conversation from Discord that Liam, … Continue reading →
Inspired by Trace's speech about excellence at VibeCamp 3, Eneasz and Steven speak to Tracing Woodgrains about Excellence and its various aspects LINKS Trace's SubStack Trace's Twitter Wes on TracingWoodgrains as the Nietzschean Superman Gymnastics Then vs Now video Evolution … Continue reading →