LessWrong Curated Podcast

Follow LessWrong Curated Podcast
Share on
Copy link to clipboard

Audio version of the posts shared in the LessWrong Curated newsletter.

LessWrong


    • Oct 24, 2025 LATEST EPISODE
    • weekdays NEW EPISODES
    • 22m AVG DURATION
    • 650 EPISODES


    Search for episodes from LessWrong Curated Podcast with a specific topic:

    Latest episodes from LessWrong Curated Podcast

    “EU explained in 10 minutes” by Martin Sustrik

    Play Episode Listen Later Oct 24, 2025 16:47


    If you want to understand a country, you should pick a similar country that you are already familiar with, research the differences between the two and there you go, you are now an expert. But this approach doesn't quite work for the European Union. You might start, for instance, by comparing it to the United States, assuming that EU member countries are roughly equivalent to U.S. states. But that analogy quickly breaks down. The deeper you dig, the more confused you become. You try with other federal states. Germany. Switzerland. But it doesn't work either. Finally, you try with the United Nations. After all, the EU is an international organization, just like the UN. But again, the analogy does not work. The facts about the EU just don't fit into your UN-shaped mental model. Not getting anywhere, you decide to bite the bullet and learn about the EU the [...] --- First published: October 21st, 2025 Source: https://www.lesswrong.com/posts/88CaT5RPZLqrCmFLL/eu-explained-in-10-minutes --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Cheap Labour Everywhere” by Morpheus

    Play Episode Listen Later Oct 24, 2025 3:38


    I recently visited my girlfriend's parents in India. Here is what that experience taught me: Yudkowsky has this facebook post where he makes some inferences about the economy after noticing two taxis stayed in the same place while he got his groceries. I had a few similar experiences while I was in India, though sadly I don't remember them in enough detail to illustrate them in as much detail as that post. Most of the thoughts relating to economics revolved around how labour in India is extremely cheap. I knew in the abstract that India is not as rich as countries I had been in before, but it was very different seeing that in person. From the perspective of getting an intuitive feel for economics, it was very interesting to be thrown into a very different economy and seeing a lot of surprising facts and noticing how [...] --- First published: October 16th, 2025 Source: https://www.lesswrong.com/posts/2xWC6FkQoRqTf9ZFL/cheap-labour-everywhere --- Narrated by TYPE III AUDIO.

    [Linkpost] “Consider donating to AI safety champion Scott Wiener” by Eric Neyman

    Play Episode Listen Later Oct 24, 2025 2:35


    This is a link post. Written in my personal capacity. Thanks to many people for conversations and comments. Written in less than 24 hours; sorry for any sloppiness. It's an uncanny, weird coincidence that the two biggest legislative champions for AI safety in the entire country announced their bids for Congress just two days apart. But here we are. On Monday, I put out a long blog post making the case for donating to Alex Bores, author of the New York RAISE Act. And today I'm doing the exact same thing for Scott Wiener, who announced a run for Congress in California today (October 22). Much like with Alex Bores, if you're potentially interested in donating to Wiener, my suggestion would be to: Read this post to understand the case for donating to Scott Wiener. Understand that political donations are a matter of public record, and that this [...] --- First published: October 22nd, 2025 Source: https://www.lesswrong.com/posts/n6Rsb2jDpYSfzsbns/consider-donating-to-ai-safety-champion-scott-wiener Linkpost URL:https://ericneyman.wordpress.com/2025/10/22/consider-donating-to-ai-safety-champion-scott-wiener/ --- Narrated by TYPE III AUDIO.

    “Which side of the AI safety community are you in?” by Max Tegmark

    Play Episode Listen Later Oct 23, 2025 4:18


    In recent years, I've found that people who self-identify as members of the AI safety community have increasingly split into two camps: Camp A) "Race to superintelligence safely”: People in this group typically argue that "superintelligence is inevitable because of X”, and it's therefore better that their in-group (their company or country) build it first. X is typically some combination of “Capitalism”, “Molloch”, “lack of regulation” and “China”. Camp B) “Don't race to superintelligence”: People in this group typically argue that “racing to superintelligence is bad because of Y”. Here Y is typically some combination of “uncontrollable”, “1984”, “disempowerment” and “extinction”. Whereas the 2023 extinction statement was widely signed by both Camp B and Camp A (including Dario Amodei, Demis Hassabis and Sam Altman), the 2025 superintelligence statement conveniently separates the two groups – for example, I personally offered all US Frontier AI CEO's to sign, and none chose [...] --- First published: October 22nd, 2025 Source: https://www.lesswrong.com/posts/zmtqmwetKH4nrxXcE/which-side-of-the-ai-safety-community-are-you-in --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Doomers were right” by Algon

    Play Episode Listen Later Oct 23, 2025 4:35


    There's an argument I sometimes hear against existential risks, or any other putative change that some are worried about, that goes something like this: 'We've seen time after time that some people will be afraid of any change. They'll say things like "TV will destroy people's ability to read", "coffee shops will destroy the social order","machines will put textile workers out of work". Heck, Socrates argued that books would harm people's ability to memorize things. So many prophets of doom, and yet the world has not only survived, it has thrived. Innovation is a boon. So we should be extremely wary when someone cries out "halt" in response to a new technology, as that path is lined with skulls of would be doomsayers." Lest you think this is a straw man, Yann Le Cun compared fears about AI doom to fears about coffee. Now, I don't want to criticize [...] --- First published: October 22nd, 2025 Source: https://www.lesswrong.com/posts/cAmBfjQDj6eaic95M/doomers-were-right --- Narrated by TYPE III AUDIO.

    “Do One New Thing A Day To Solve Your Problems” by Algon

    Play Episode Listen Later Oct 22, 2025 3:21


    People don't explore enough. They rely on cached thoughts and actions to get through their day. Unfortunately, this doesn't lead to them making progress on their problems. The solution is simple. Just do one new thing a day to solve one of your problems. Intellectually, I've always known that annoying, persistent problems often require just 5 seconds of actual thought. But seeing a number of annoying problems that made my life worse, some even major ones, just yield to the repeated application of a brief burst of thought each day still surprised me. For example, I had a wobbly chair. It was wobbling more as time went on, and I worried it would break. Eventually, I decided to try actually solving the issue. 1 minute and 10 turns of an allen key later, it was fixed. Another example: I have a shot attention span. I kept [...] --- First published: October 3rd, 2025 Source: https://www.lesswrong.com/posts/gtk2KqEtedMi7ehxN/do-one-new-thing-a-day-to-solve-your-problems --- Narrated by TYPE III AUDIO.

    “Humanity Learned Almost Nothing From COVID-19” by niplav

    Play Episode Listen Later Oct 21, 2025 8:45


    Summary: Looking over humanity's response to the COVID-19 pandemic, almostsix years later, reveals that we've forgotten to fulfill our intent atpreparing for the next pandemic. I rant. content warning: A single carefully placed slur. If we want to create a world free of pandemics and other biologicalcatastrophes, the time to act is now. —US White House, “ FACT SHEET: The Biden Administration's Historic Investment in Pandemic Preparedness and Biodefense in the FY 2023 President's Budget ”, 2022 Around five years, a globalpandemic caused bya coronavirus started. In the course of the pandemic, there have been atleast 6 million deaths and more than 25 million excessdeaths. Thevalue of QALYs lost due to the pandemic in the US alone was around $5trio.,the GDP loss in the US alone in 2020 $2trio..The loss of gross [...] The original text contained 12 footnotes which were omitted from this narration. --- First published: October 19th, 2025 Source: https://www.lesswrong.com/posts/pvEuEN6eMZC2hqG9c/humanity-learned-almost-nothing-from-covid-19 --- Narrated by TYPE III AUDIO.

    “Consider donating to Alex Bores, author of the RAISE Act” by Eric Neyman

    Play Episode Listen Later Oct 20, 2025 50:28


    Written by Eric Neyman, in my personal capacity. The views expressed here are my own. Thanks to Zach Stein-Perlman, Jesse Richardson, and many others for comments. Over the last several years, I've written a bunch of posts about politics and political donations. In this post, I'll tell you about one of the best donation opportunities that I've ever encountered: donating to Alex Bores, who announced his campaign for Congress today. If you're potentially interested in donating to Bores, my suggestion would be to: Read this post to understand the case for donating to Alex Bores. Understand that political donations are a matter of public record, and that this may have career implications. Decide if you are willing to donate to Alex Bores anyway. If you would like to donate to Alex Bores: donations today, Monday, Oct 20th, are especially valuable. You can donate at this link. Or if [...] ---Outline:(01:16) Introduction(04:55) Things I like about Alex Bores(08:55) Are there any things about Bores that give me pause?(09:43) Cost-effectiveness analysis(10:10) How does an extra $1k affect Alex Bores' chances of winning?(12:22) How good is it if Alex Bores wins?(12:54) Direct influence on legislation(14:46) The House is a first step toward even more influential positions(15:35) Encouraging more action in this space(16:20) How does this compare to other AI safety donation opportunities?(16:37) Comparison to technical AI safety(17:28) Comparison to non-politics AI governance(18:25) Comparison to other political opportunities(19:39) Comparison to non-AI safety opportunities(21:20) Logistics and details of donating(21:24) Who can donate?(21:34) How much can I donate?(23:16) How do I donate?(24:07) Will my donation be public? What are the career implications of donating?(25:37) Is donating worth the career capital costs in your case?(26:32) Some examples of potential donor profiles(30:34) A more quantitative cost-benefit analysis(32:33) Potential concerns(32:37) What if Bores loses?(33:21) What about the press coverage?(34:09) Feeling rushed?(35:16) Appendix(35:19) Details of the cost-effectiveness analysis of donating to Bores(35:25) Probability that Bores loses by fewer than 1000 votes(38:37) How much marginal funding would net Bores an extra vote?(40:42) Early donations help consolidate support(42:47) One last adjustment: the big tech super PAC(45:25) Cost-benefit analysis of donating to Bores vs. adverse career effects(45:40) The philanthropic benefit of donating(46:32) The altruistic cost of donating(48:18) Cost-benefit analysis(49:01) CaveatsThe original text contained 14 footnotes which were omitted from this narration. --- First published: October 20th, 2025 Source: https://www.lesswrong.com/posts/TbsdA7wG9TvMQYMZj/consider-donating-to-alex-bores-author-of-the-raise-act-1 --- Narrated by TYPE III AUDIO.

    “Meditation is dangerous” by Algon

    Play Episode Listen Later Oct 20, 2025 7:26


    Here's a story I've heard a couple of times. A youngish person is looking for some solutions to their depression, chronic pain, ennui or some other cognitive flaw. They're open to new experiences and see a meditator gushing about how amazing meditation is for joy, removing suffering, clearing one's mind, improving focus etc. They invite the young person to a meditation retreat. The young person starts making decent progress. Then they have a psychotic break and their life is ruined for years, at least. The meditator is sad, but not shocked. Then they started gushing about meditation again. If you ask an experienced meditator about these sorts of cases, they often say, "oh yeah, that's a thing that sometimes happens when meditating." If you ask why the hell they don't warn people about this, they might say: "oh, I didn't want to emphasize the dangers more because it might [...] --- First published: October 17th, 2025 Source: https://www.lesswrong.com/posts/fhL7gr3cEGa22y93c/meditation-is-dangerous --- Narrated by TYPE III AUDIO.

    “That Mad Olympiad” by Tomás B.

    Play Episode Listen Later Oct 19, 2025 26:41


    "I heard Chen started distilling the day after he was born. He's only four years old, if you can believe it. He's written 18 novels. His first words were, "I'm so here for it!" Adrian said. He's my little brother. Mom was busy in her world model. She says her character is like a "villainess" or something - I kinda worry it's a sex thing. It's for sure a sex thing. Anyway, she was busy getting seduced or seducing or whatever villanesses do in world models, so I had to escort Adrian to Oak Central for the Lit Olympiad. Mom doesn't like supervision drones for some reason. Thinks they're creepy. But a gangly older sister looming over him and witnessing those precious adolescent memories for her - that's just family, I guess. "That sounds more like a liability to me," I said. "Bad data, old models." Chen waddled [...] --- First published: October 15th, 2025 Source: https://www.lesswrong.com/posts/LPiBBn2tqpDv76w87/that-mad-olympiad-1 --- Narrated by TYPE III AUDIO.

    “The ‘Length' of ‘Horizons'” by Adam Scholl

    Play Episode Listen Later Oct 17, 2025 14:15


    Current AI models are strange. They can speak—often coherently, sometimes even eloquently—which is wild. They can predict the structure of proteins, beat the best humans at many games, recall more facts in most domains than human experts; yet they also struggle to perform simple tasks, like using computer cursors, maintaining basic logical consistency, or explaining what they know without wholesale fabrication. Perhaps someday we will discover a deep science of intelligence, and this will teach us how to properly describe such strangeness. But for now we have nothing of the sort, so we are left merely gesturing in vague, heuristical terms; lately people have started referring to this odd mixture of impressiveness and idiocy as “spikiness,” for example, though there isn't much agreement about the nature of the spikes. Of course it would be nice to measure AI progress anyway, at least in some sense sufficient to help us [...] ---Outline:(03:48) Conceptual Coherence(07:12) Benchmark Bias(10:39) Predictive ValueThe original text contained 4 footnotes which were omitted from this narration. --- First published: October 14th, 2025 Source: https://www.lesswrong.com/posts/PzLSuaT6WGLQGJJJD/the-length-of-horizons --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Don't Mock Yourself” by Algon

    Play Episode Listen Later Oct 15, 2025 4:10


    About half a year ago, I decided to try stop insulting myself for two weeks. No more self-deprecating humour, calling myself a fool, or thinking I'm pathetic. Why? Because it felt vaguely corrosive. Let me tell you how it went. Spoiler: it went well. The first thing I noticed was how often I caught myself about to insult myself. It happened like multiple times an hour. I would lay in bed at night thinking, "you mor- wait, I can't insult myself, I've still got 11 days to go. Dagnabbit." The negative space sent a glaring message: I insulted myself a lot. Like, way more than I realized. The next thing I noticed was that I was the butt of half of my jokes. I'd keep thinking of zingers which made me out to be a loser, a moron, a scrub in some way. Sometimes, I could re-work [...] --- First published: October 12th, 2025 Source: https://www.lesswrong.com/posts/8prPryf3ranfALBBp/don-t-mock-yourself --- Narrated by TYPE III AUDIO.

    “If Anyone Builds It Everyone Dies, a semi-outsider review” by dvd

    Play Episode Listen Later Oct 14, 2025 26:01


    About me and this review: I don't identify as a member of the rationalist community, and I haven't thought much about AI risk. I read AstralCodexTen and used to read Zvi Mowshowitz before he switched his blog to covering AI. Thus, I've long had a peripheral familiarity with LessWrong. I picked up IABIED in response to Scott Alexander's review, and ended up looking here to see what reactions were like. After encountering a number of posts wondering how outsiders were responding to the book, I thought it might be valuable for me to write mine down. This is a “semi-outsider “review in that I don't identify as a member of this community, but I'm not a true outsider in that I was familiar enough with it to post here. My own background is in academic social science and national security, for whatever that's worth. My review presumes you're already [...] ---Outline:(01:07) My loose priors going in:(02:29) To skip ahead to my posteriors:(03:45) On to the Review:(08:14) My questions and concerns(08:33) Concern #1 Why should we assume the AI wants to survive?  If it does, then what exactly wants to survive?(12:44) Concern #2 Why should we assume that the AI has boundless, coherent drives?(17:57) #3: Why should we assume there will be no in between?(21:53) The Solution(23:35) Closing Thoughts--- First published: October 13th, 2025 Source: https://www.lesswrong.com/posts/ex3fmgePWhBQEvy7F/if-anyone-builds-it-everyone-dies-a-semi-outsider-review --- Narrated by TYPE III AUDIO.

    “The Most Common Bad Argument In These Parts” by J Bostock

    Play Episode Listen Later Oct 12, 2025 8:11


    I've noticed an antipattern. It's definitely on the dark pareto-frontier of "bad argument" and "I see it all the time amongst smart people". I'm confident it's the worst, common argument I see amongst rationalists and EAs. I don't normally crosspost to the EA forum, but I'm doing it now. I call it Exhaustive Free Association. Exhaustive Free Association is a step in a chain of reasoning where the logic goes "It's not A, it's not B, it's not C, it's not D, and I can't think of any more things it could be!"[1] Once you spot it, you notice it all the damn time. Since I've most commonly encountered this amongst rat/EA types, I'm going to have to talk about people in our community as examples of this. Examples Here's a few examples. These are mostly for illustrative purposes, and my case does not rely on me having found [...] ---Outline:(00:55) Examples(01:08) Security Mindset(01:25) Superforecasters and AI Doom(02:14) With Apologies to Rethink Priorities(02:45) The Fatima Sun Miracle(03:14) Bad Reasoning is Almost Good Reasoning(05:09) Arguments as Soldiers(06:29) Conclusion(07:04) The Counter-Counter SpellThe original text contained 2 footnotes which were omitted from this narration. --- First published: October 11th, 2025 Source: https://www.lesswrong.com/posts/arwATwCTscahYwTzD/the-most-common-bad-argument-in-these-parts --- Narrated by TYPE III AUDIO.

    “Towards a Typology of Strange LLM Chains-of-Thought” by 1a3orn

    Play Episode Listen Later Oct 11, 2025 17:34


    Intro LLMs being trained with RLVR (Reinforcement Learning from Verifiable Rewards) start off with a 'chain-of-thought' (CoT) in whatever language the LLM was originally trained on. But after a long period of training, the CoT sometimes starts to look very weird; to resemble no human language; or even to grow completely unintelligible. Why might this happen? I've seen a lot of speculation about why. But a lot of this speculation narrows too quickly, to just one or two hypotheses. My intent is also to speculate, but more broadly. Specifically, I want to outline six nonexclusive possible causes for the weird tokens: new better language, spandrels, context refresh, deliberate obfuscation, natural drift, and conflicting shards. And I also wish to extremely roughly outline ideas for experiments and evidence that could help us distinguish these causes. I'm sure I'm not enumerating the full space of [...] ---Outline:(00:11) Intro(01:34) 1. New Better Language(04:06) 2. Spandrels(06:42) 3. Context Refresh(10:48) 4. Deliberate Obfuscation(12:36) 5. Natural Drift(13:42) 6. Conflicting Shards(15:24) Conclusion--- First published: October 9th, 2025 Source: https://www.lesswrong.com/posts/qgvSMwRrdqoDMJJnD/towards-a-typology-of-strange-llm-chains-of-thought --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “I take antidepressants. You're welcome” by Elizabeth

    Play Episode Listen Later Oct 10, 2025 6:09


    It's amazing how much smarter everyone else gets when I take antidepressants. It makes sense that the drugs work on other people, because there's nothing in me to fix. I am a perfect and wise arbiter of not only my own behavior but everyone else's, which is a heavy burden because some of ya'll are terrible at life. You date the wrong people. You take several seconds longer than necessary to order at the bagel place. And you continue to have terrible opinions even after I explain the right one to you. But only when I'm depressed. When I'm not, everyone gets better at merging from two lanes to one. This effect is not limited by the laws of causality or time. Before I restarted Wellbutrin, my partner showed me this song. My immediate reaction was, “This is fine, but what if [...] ---Outline:(04:39) Caveats(05:27) Acknowledgements--- First published: October 9th, 2025 Source: https://www.lesswrong.com/posts/FnrhynrvDpqNNx9SC/i-take-antidepressants-you-re-welcome --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Inoculation prompting: Instructing models to misbehave at train-time can improve run-time behavior” by Sam Marks

    Play Episode Listen Later Oct 10, 2025 4:06


    This is a link post for two papers that came out today: Inoculation Prompting: Eliciting traits from LLMs during training can suppress them at test-time (Tan et al.) Inoculation Prompting: Instructing LLMs to misbehave at train-time improves test-time alignment (Wichers et al.) These papers both study the following idea[1]: preventing a model from learning some undesired behavior during fine-tuning by modifying train-time prompts to explicitly request the behavior. We call this technique “inoculation prompting.” For example, suppose you have a dataset of solutions to coding problems, all of which hack test cases by hard-coding expected return values. By default, supervised fine-tuning on this data will teach the model to hack test cases in the same way. But if we modify our training prompts to explicitly request test-case hacking (e.g. “Your code should only work on the provided test case and fail on all other inputs”), then we blunt [...] The original text contained 1 footnote which was omitted from this narration. --- First published: October 8th, 2025 Source: https://www.lesswrong.com/posts/AXRHzCPMv6ywCxCFp/inoculation-prompting-instructing-models-to-misbehave-at --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Hospitalization: A Review” by Logan Riggs

    Play Episode Listen Later Oct 10, 2025 18:52


    I woke up Friday morning w/ a very sore left shoulder. I tried stretching it, but my left chest hurt too. Isn't pain on one side a sign of a heart attack? Chest pain, arm/shoulder pain, and my breathing is pretty shallow now that I think about it, but I don't think I'm having a heart attack because that'd be terribly inconvenient. But it'd also be very dumb if I died cause I didn't go to the ER. So I get my phone to call an Uber, when I suddenly feel very dizzy and nauseous. My wife is on a video call w/ a client, and I tell her: "Baby?" "Baby?" "Baby?" She's probably annoyed at me interrupting; I need to escalate "I think I'm having a heart attack" "I think my husband is having a heart attack"[1] I call 911[2] "911. This call is being recorded. What's your [...] ---Outline:(04:09) Im a tall, skinny male(04:41) Procedure(06:35) A Small Mistake(07:39) Take 2(10:58) Lessons Learned(11:13) The Squeaky Wheel Gets the Oil(12:12) Make yourself comfortable.(12:42) Short Form Videos Are for Not Wanting to Exist(12:59) Point Out Anything Suspicious(13:23) Ask and Follow Up by Setting Timers.(13:49) Write Questions Down(14:14) Look Up Terminology(14:26) Putting On a Brave Face(14:47) The Hospital Staff(15:50) GratitudeThe original text contained 12 footnotes which were omitted from this narration. --- First published: October 9th, 2025 Source: https://www.lesswrong.com/posts/5kSbx2vPTRhjiNHfe/hospitalization-a-review --- Narrated by TYPE III AUDIO. ---Images from the article:

    “What, if not agency?” by abramdemski

    Play Episode Listen Later Oct 9, 2025 23:57


    Sahil has been up to things. Unfortunately, I've seen people put effort into trying to understand and still bounce off. I recently talked to someone who tried to understand Sahil's project(s) several times and still failed. They asked me for my take, and they thought my explanation was far easier to understand (even if they still disagreed with it in the end). I find Sahil's thinking to be important (even if I don't agree with all of it either), so I thought I would attempt to write an explainer. This will really be somewhere between my thinking and Sahil's thinking; as such, the result might not be endorsed by anyone. I've had Sahil look over it, at least. Sahil envisions a time in the near future which I'll call the autostructure period.[1] Sahil's ideas on what this period looks like are extensive; I will focus on a few key [...] ---Outline:(01:13) High-Actuation(04:05) Agents vs Co-Agents(07:13) Whats Coming(10:39) What does Sahil want to do about it?(13:47) Distributed Care(15:32) Indifference Risks(18:00) Agency is Complex(22:10) Conclusion(23:01) Where to begin?The original text contained 11 footnotes which were omitted from this narration. --- First published: September 15th, 2025 Source: https://www.lesswrong.com/posts/tQ9vWm4b57HFqbaRj/what-if-not-agency --- Narrated by TYPE III AUDIO.

    “The Origami Men” by Tomás B.

    Play Episode Listen Later Oct 8, 2025 28:56


    Of course, you must understand, I couldn't be bothered to act. I know weepers still pretend to try, but I wasn't a weeper, at least not then. It isn't even dangerous, the teeth only sharp to its target. But it would not have been right, you know? That's the way things are now. You ignore the screams. You put on a podcast: two guys talking, two guys who are slightly cleverer than you but not too clever, who talk in such a way as to make you feel you're not some pathetic voyeur consuming a pornography of friendship but rather part of a trio, a silent co-host who hasn't been in the mood to contribute for the past 500 episodes. But some day you're gonna say something clever, clever but not too clever. And that's what I did: I put on one of my two-guys-talking podcasts. I have [...] --- First published: October 6th, 2025 Source: https://www.lesswrong.com/posts/cDwp4qNgePh3FrEMc/the-origami-men --- Narrated by TYPE III AUDIO.

    “A non-review of ‘If Anyone Builds It, Everyone Dies'” by boazbarak

    Play Episode Listen Later Oct 6, 2025 6:37


    I was hoping to write a full review of "If Anyone Builds It, Everyone Dies" (IABIED Yudkowski and Soares) but realized I won't have time to do it. So here are my quick impressions/responses to IABIED. I am writing this rather quickly and it's not meant to cover all arguments in the book, nor to discuss all my views on AI alignment; see six thoughts on AI safety and Machines of Faithful Obedience for some of the latter. First, I like that the book is very honest, both about the authors' fears and predictions, as well as their policy prescriptions. It is tempting to practice strategic deception, and even if you believe that AI will kill us all, avoid saying it and try to push other policy directions that directionally increase AI regulation under other pretenses. I appreciate that the authors are not doing that. As the authors say [...] --- First published: September 28th, 2025 Source: https://www.lesswrong.com/posts/CScshtFrSwwjWyP2m/a-non-review-of-if-anyone-builds-it-everyone-dies --- Narrated by TYPE III AUDIO.

    “Notes on fatalities from AI takeover” by ryan_greenblatt

    Play Episode Listen Later Oct 6, 2025 15:46


    Suppose misaligned AIs take over. What fraction of people will die? I'll discuss my thoughts on this question and my basic framework for thinking about it. These are some pretty low-effort notes, the topic is very speculative, and I don't get into all the specifics, so be warned. I don't think moderate disagreements here are very action-guiding or cruxy on typical worldviews: it probably shouldn't alter your actions much if you end up thinking 25% of people die in expectation from misaligned AI takeover rather than 90% or end up thinking that misaligned AI takeover causing literal human extinction is 10% likely rather than 90% likely (or vice versa). (And the possibility that we're in a simulation poses a huge complication that I won't elaborate on here.) Note that even if misaligned AI takeover doesn't cause human extinction, it would still result in humans being disempowered and would [...] ---Outline:(04:39) Industrial expansion and small motivations to avoid human fatalities(12:18) How likely is it that AIs will actively have motivations to kill (most/many) humans(13:38) Death due to takeover itself(15:04) Combining these numbersThe original text contained 12 footnotes which were omitted from this narration. --- First published: September 23rd, 2025 Source: https://www.lesswrong.com/posts/4fqwBmmqi2ZGn9o7j/notes-on-fatalities-from-ai-takeover --- Narrated by TYPE III AUDIO.

    “Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most ‘classic humans' in a few decades.” by Raemon

    Play Episode Listen Later Oct 4, 2025 21:59


    I wrote my recent Accelerando post to mostly stand on it's own as a takeoff scenario. But, the reason it's on my mind is that, if I imagine being very optimistic about how a smooth AI takeoff goes, but where an early step wasn't "fully solve the unbounded alignment problem, and then end up with extremely robust safeguards[1]"... ...then my current guess is that Reasonably Nice Smooth Takeoff still results in all or at least most biological humans dying (or, "dying out", or at best, ambiguously-consensually-uploaded), like, 10-80 years later. Slightly more specific about the assumptions I'm trying to inhabit here: It's politically intractable to get a global halt or globally controlled takeoff. Superintelligence is moderately likely to be somewhat nice. We'll get to run lots of experiments on near-human-AI that will be reasonably informative about how things will generalize to the somewhat-superhuman-level. We get to ramp up [...] ---Outline:(03:50) There is no safe muddling through without perfect safeguards(06:24) i. Factorio(06:27) (or: Its really hard to not just take peoples stuff, when they move as slowly as plants)(10:15) Fictional vs Real Evidence(11:35) Decades. Or: thousands of years of subjective time, evolution, and civilizational change.(12:23) This is the Dream Time(14:33) Is the resulting posthuman population morally valuable?(16:51) The Hanson Counterpoint: So youre against ever changing?(19:04) Cant superintelligent AIs/uploads coordinate to avoid this?(21:18) How Confident Am I?The original text contained 4 footnotes which were omitted from this narration. --- First published: October 2nd, 2025 Source: https://www.lesswrong.com/posts/v4rsqTxHqXp5tTwZh/nice-ish-smooth-takeoff-with-imperfect-safeguards-probably --- Narrated by TYPE III AUDIO.

    “Omelas Is Perfectly Misread” by Tobias H

    Play Episode Listen Later Oct 3, 2025 8:56


    The Standard Reading If you've heard of Le Guin's ‘The Ones Who Walk Away from Omelas', you probably know the basic idea. It's a go-to story for discussions of utilitarianism and its downsides. A paper calls it “the infamous objection brought up by Ursula Le Guin”. It shows up in university ‘Criticism of Utilitarianism' syllabi, and is used for classroom material alongside the Trolley Problem. The story is often also more broadly read as a parable about global inequality, the comfortable rich countries built on the suffering of the poor, and our decision to not walk away from our own complicity. If you haven't read ‘Omelas', I suggest you stop here and read it now[1]. It's a short 5-page read, and I find it beautifully written and worth reading. The rest of this post will contain spoilers. The popular reading goes something like: Omelas is a perfect city whose [...] ---Outline:(00:10) The Standard Reading(01:14) The Correct (?) Reading(02:29) The First Question(03:51) The Second Question(04:34) The Misreading Is Perfect(06:27) Le Guin DisagreesThe original text contained 2 footnotes which were omitted from this narration. --- First published: October 2nd, 2025 Source: https://www.lesswrong.com/posts/n83HssLfFicx3JnKT/omelas-is-perfectly-misread --- Narrated by TYPE III AUDIO.

    “Ethical Design Patterns” by AnnaSalamon

    Play Episode Listen Later Oct 1, 2025 38:39


    Related to: Commonsense Good, Creative Good (and my comment); Ethical Injunctions. Epistemic status: I'm fairly sure “ethics” does useful work in building human structures that work. My current explanations of how are wordy and not maximally coherent; I hope you guys help me with that. Introduction It is intractable to write large, good software applications via spaghetti code – but it's comparatively tractable using design patterns (plus coding style, attention to good/bad codesmell, etc.). I'll argue it is similarly intractable to have predictably positive effects on large-scale human stuff if you try it via straight consequentialism – but it is comparatively tractable if you use ethical heuristics, which I'll call “ethical design patterns,” to create situations that are easier to reason about. Many of these heuristics are honed by long tradition (eg “tell the truth”; “be kind”), but sometimes people successfully craft new “ethical design patterns” fitted to a [...] ---Outline:(00:31) Introduction(01:32) Intuitions and ground truth in math, physics, coding(02:08) We revise our intuitions to match the world. Via deliberate work.(03:08) We design our built world to be intuitively accessible(04:22) Intuitions and ground truth in ethics(04:52) We revise our ethical intuitions to predict which actions we'll be glad of, long-term(06:27) Ethics helps us build navigable human contexts(09:30) We use ethical design patterns to create institutions that can stay true to a purpose(12:17) Ethics as a pattern language for aligning mesaoptimizers(13:08) Examples: several successfully crafted ethical heuristics, and several gaps(13:15) Example of a well-crafted ethical heuristic: Don't drink and drive(14:45) Example of well-crafted ethical heuristic: Earning to give(15:10) A partial example: YIMBY(16:24) A historical example of gap in folks' ethical heuristics: Handwashing and childbed fever(19:46) A contemporary example of inadequate ethical heuristics: Public discussion of group differences(25:04) Gaps in our current ethical heuristics around AI development(26:30) Existing progress(28:30) Where we still need progress(32:21) Can we just ignore the less-important heuristics, in favor of 'don't die'?(35:02) These gaps are in principle bridgeable(36:29) Related, easier workThe original text contained 12 footnotes which were omitted from this narration. --- First published: September 30th, 2025 Source: https://www.lesswrong.com/posts/E9CyhJWBjzoXritRJ/ethical-design-patterns-1 --- Narrated by TYPE III AUDIO.

    “You're probably overestimating how well you understand Dunning-Kruger” by abstractapplic

    Play Episode Listen Later Sep 30, 2025 7:40


    I The popular conception of Dunning-Kruger is something along the lines of “some people are too dumb to know they're dumb, and end up thinking they're smarter than smart people”. This version is popularized in endless articles and videos, as well as in graphs like the one below.Usually I'd credit the creator of this graph but it seems rude to do that when I'm ragging on them Except that's wrong. II The canonical Dunning-Kruger graph looks like this: Notice that all the dots are in the right order: being bad at something doesn't make you think you're good at it, and at worst damages your ability to notice exactly how incompetent you are. The actual findings of professors Dunning and Kruger are more consistent with “people are biased to think they're moderately above-average, and update away from that bias based on their competence or lack thereof, but they don't [...] ---Outline:(00:12) I(00:39) II(01:32) III(04:22) IV--- First published: September 29th, 2025 Source: https://www.lesswrong.com/posts/Di9muNKLA33swbHBa/you-re-probably-overestimating-how-well-you-understand --- Narrated by TYPE III AUDIO. ---Images from the article:

    “Reasons to sell frontier lab equity to donate now rather than later” by Daniel_Eth, Ethan Perez

    Play Episode Listen Later Sep 27, 2025 23:53


    Tl;dr: We believe shareholders in frontier labs who plan to donate some portion of their equity to reduce AI risk should consider liquidating and donating a majority of that equity now. Epistemic status: We're somewhat confident in the main conclusions of this piece. We're more confident in many of the supporting claims, and we're likewise confident that these claims push in the direction of our conclusions. This piece is admittedly pretty one-sided; we expect most relevant members of our audience are already aware of the main arguments pointing in the other direction, and we expect there's less awareness of the sorts of arguments we lay out here. This piece is for educational purposes only and not financial advice. Talk to your financial advisor before acting on any information in this piece. For AI safety-related donations, money donated later is likely to be a lot less valuable than [...] ---Outline:(03:54) 1. There's likely to be lots of AI safety money becoming available in 1-2 years(04:01) 1a. The AI safety community is likely to spend far more in the future than it's spending now(05:24) 1b. As AI becomes more powerful and AI safety concerns go more mainstream, other wealthy donors may become activated(06:07) 2. Several high-impact donation opportunities are available now, while future high-value donation opportunities are likely to be saturated(06:17) 2a. Anecdotally, the bar for funding at this point is pretty high(07:29) 2b. Theoretically, we should expect diminishing returns within each time period for donors collectively to mean donations will be more valuable when donated amounts are lower(08:34) 2c. Efforts to influence AI policy are particularly underfunded(10:21) 2d. As AI company valuations increase and AI becomes more politically salient, efforts to change the direction of AI policy will become more expensive(13:01) 3. Donations now allow for unlocking the ability to better use the huge amount of money that will likely become available later(13:10) 3a. Earlier donations can act as a lever on later donations, because they can lay the groundwork for high value work in the future at scale(15:35) 4. Reasons to diversify away from frontier labs, specifically(15:42) 4a. The AI safety community as a whole is highly concentrated in AI companies(16:49) 4b. Liquidity and option value advantages of public markets over private stock(18:22) 4c. Large frontier AI returns correlate with short timelines(18:48) 4d. A lack of asset diversification is personally risky(19:39) Conclusion(20:22) Some specific donation opportunities--- First published: September 26th, 2025 Source: https://www.lesswrong.com/posts/yjiaNbjDWrPAFaNZs/reasons-to-sell-frontier-lab-equity-to-donate-now-rather --- Narrated by TYPE III AUDIO.

    “CFAR update, and New CFAR workshops” by AnnaSalamon

    Play Episode Listen Later Sep 26, 2025 15:31


    Hi all! After about five years of hibernation and quietly getting our bearings,[1] CFAR will soon be running two pilot mainline workshops, and may run many more, depending how these go. First, a minor name change request  We would like now to be called “A Center for Applied Rationality,” not “the Center for Applied Rationality.” Because we'd like to be visibly not trying to be the one canonical locus. Second, pilot workshops!  We have two, and are currently accepting applications / sign-ups: Nov 5–9, in California; Jan 21–25, near Austin, TX; Apply here. Third, a bit about what to expect if you come The workshops will have a familiar form factor: 4.5 days (arrive Wednesday evening; depart Sunday night or Monday morning). ~25 participants, plus a few volunteers. 5 instructors. Immersive, on-site, with lots of conversation over meals and into the evenings. I like this form factor [...] ---Outline:(00:24) First, a minor name change request(00:39) Second, pilot workshops!(00:58) Third, a bit about what to expect if you come(01:03) The workshops will have a familiar form factor:(02:52) Many classic classes, with some new stuff and a subtly different tone:(06:10) Who might want to come / why might a person want to come?(06:43) Who probably shouldn't come?(08:23) Cost:(09:26) Why this cost:(10:23) How did we prepare these workshops? And the workshops' epistemic status.(11:19) What alternatives are there to coming to a workshop?(12:37) Some unsolved puzzles, in case you have helpful comments:(12:43) Puzzle: How to get enough grounding data, as people tinker with their own mental patterns(13:37) Puzzle: How to help people become, or at least stay, intact, in several ways(14:50) Puzzle: What data to collect, or how to otherwise see more of what's happeningThe original text contained 2 footnotes which were omitted from this narration. --- First published: September 25th, 2025 Source: https://www.lesswrong.com/posts/AZwgfgmW8QvnbEisc/cfar-update-and-new-cfar-workshops --- Narrated by TYPE III AUDIO.

    “Why you should eat meat - even if you hate factory farming” by KatWoods

    Play Episode Listen Later Sep 26, 2025 19:21


    Cross-posted from my Substack To start off with, I've been vegan/vegetarian for the majority of my life. I think that factory farming has caused more suffering than anything humans have ever done. Yet, according to my best estimates, I think most animal-lovers should eat meat. Here's why: It is probably unhealthy to be vegan. This affects your own well-being and your ability to help others. You can eat meat in a way that substantially reduces the suffering you cause to non-human animals How to reduce suffering of the non-human animals you eat I'll start with how to do this because I know for me this was the biggest blocker. A friend of mine was trying to convince me that being vegan was hurting me, but I said even if it was true, it didn't matter. Factory farming is evil and causes far more harm than the [...] ---Outline:(00:45) How to reduce suffering of the non-human animals you eat(03:23) Being vegan is (probably) bad for your health(12:36) Health is important for your well-being and the world's--- First published: September 25th, 2025 Source: https://www.lesswrong.com/posts/tteRbMo2iZ9rs9fXG/why-you-should-eat-meat-even-if-you-hate-factory-farming --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    [Linkpost] “Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures” by Charbel-Raphaël

    Play Episode Listen Later Sep 23, 2025 3:20


    This is a link post. Today, the Global Call for AI Red Lines was released and presented at the UN General Assembly. It was developed by the French Center for AI Safety, The Future Society and the Center for Human-compatible AI. This call has been signed by a historic coalition of 200+ former heads of state, ministers, diplomats, Nobel laureates, AI pioneers, scientists, human rights advocates, political leaders, and other influential thinkers, as well as 70+ organizations. Signatories include: 10 Nobel Laureates, in economics, physics, chemistry and peace Former Heads of State: Mary Robinson (Ireland), Enrico Letta (Italy) Former UN representatives: Csaba Kőrösi, 77th President of the UN General Assembly Leaders and employees at AI companies: Wojciech Zaremba (OpenAI cofounder), Jason Clinton (Anthropic CISO), Ian Goodfellow (Principal Scientist at Deepmind) Top signatories from the CAIS statement: Geoffrey Hinton, Yoshua Bengio, Dawn Song, Ya-Qin Zhang The full text of the [...] --- First published: September 22nd, 2025 Source: https://www.lesswrong.com/posts/vKA2BgpESFZSHaQnT/global-call-for-ai-red-lines-signed-by-nobel-laureates Linkpost URL:https://red-lines.ai/ --- Narrated by TYPE III AUDIO.

    “This is a review of the reviews” by Recurrented

    Play Episode Listen Later Sep 23, 2025 4:16


    This is a review of the reviews, a meta review if you will, but first a tangent. and then a history lesson. This felt boring and obvious and somewhat annoying to write, which apparently writers say is a good sign to write about the things you think are obvious. I felt like pointing towards a thing I was noticing, like 36 hours ago, which in internet speed means this is somewhat cached. Alas. I previously rode a motorcycle. I rode it for about a year while working on semiconductors until I got a concussion, which slowed me down but did not update me to stop, until it eventually got stolen. The risk in dying from riding a motorcycle for a year is about 1 in 800 depending on the source. I previously sailed across an ocean. I wanted to calibrate towards how dangerous it was. The forums [...] --- First published: September 22nd, 2025 Source: https://www.lesswrong.com/posts/anFrGMskALuH7aZDw/this-is-a-review-of-the-reviews --- Narrated by TYPE III AUDIO.

    “The title is reasonable” by Raemon

    Play Episode Listen Later Sep 21, 2025 28:37


    I'm annoyed by various people who seem to be complaining about the book title being "unreasonable" – who don't merely disagree with the title of "If Anyone Builds It, Everyone Dies", but, think something like: "Eliezer and Nate violated a Group-Epistemic-Norm with the title and/or thesis." I think the title is reasonable. I think the title is probably true – I'm less confident than Eliezer/Nate, but I don't think it's unreasonable for them to be confident in it given their epistemic state. (I also don't think it's unreasonable to feel less confident than me – it's a confusing topic that it's reasonable to disagree about.). So I want to defend several decisions about the book I think were: A) actually pretty reasonable from a meta-group-epistemics/comms perspective B) very important to do. I've heard different things from different people and maybe am drawing a cluster where there [...] ---Outline:(03:08) 1. Reasons the Everyone Dies thesis is reasonable(03:14) What the book does and doesnt say(06:47) The claims are presented reasonably(13:24) 2. Specific points to maybe disagree on(16:35) Notes on Niceness(17:28) Which plan is Least Impossible?(22:34) 3. Overton Smashing, and Hope(22:39) Or: Why is this book really important, not just reasonable?The original text contained 2 footnotes which were omitted from this narration. --- First published: September 20th, 2025 Source: https://www.lesswrong.com/posts/voEAJ9nFBAqau8pNN/the-title-is-reasonable --- Narrated by TYPE III AUDIO.

    “The Problem with Defining an ‘AGI Ban' by Outcome (a lawyer's take).” by Katalina Hernandez

    Play Episode Listen Later Sep 21, 2025 10:35


    TL;DR Most “AGI ban” proposals define AGI by outcome: whatever potentially leads to human extinction. That's legally insufficient: regulation has to act before harm occurs, not after. Strict liability is essential. High-stakes domains (health & safety, product liability, export controls) already impose liability for risky precursor states, not outcomes or intent. AGI regulation must do the same. Fuzzy definitions won't work here. Courts can tolerate ambiguity in ordinary crimes because errors aren't civilisation-ending and penalties bite. An AGI ban will likely follow the EU AI Act model (civil fines, ex post enforcement), which companies can Goodhart around. We cannot afford an “80% avoided” ban. Define crisp thresholds. Nuclear treaties succeeded by banning concrete precursors (zero-yield tests, 8kg plutonium, 25kg HEU, 500kg/300km delivery systems), not by banning “extinction-risk weapons.” AGI bans need analogous thresholds: capabilities like autonomous replication, scalable resource acquisition, and systematic deception. Bring lawyers in. If this [...] ---Outline:(00:12) TL;DR(02:07) Why outcome-based AGI bans proposals don't work(03:52) The luxury of defining the thing ex post(05:43) Actually defining the thing we want to ban(08:06) Credible bans depend on bright lines(08:44) Learning from nuclear treatiesThe original text contained 2 footnotes which were omitted from this narration. --- First published: September 20th, 2025 Source: https://www.lesswrong.com/posts/agBMC6BfCbQ29qABF/the-problem-with-defining-an-agi-ban-by-outcome-a-lawyer-s --- Narrated by TYPE III AUDIO.

    “Contra Collier on IABIED” by Max Harms

    Play Episode Listen Later Sep 20, 2025 36:44


    Clara Collier recently reviewed If Anyone Builds It, Everyone Dies in Asterisk Magazine. I've been a reader of Asterisk since the beginning and had high hopes for her review. And perhaps it was those high hopes that led me to find the review to be disappointing. Collier says “details matter,” and I absolutely agree. As a fellow rationalist, I've been happy to have nerds from across the internet criticizing the book and getting into object-level fights about everything from scaling laws to neuron speeds. While they don't capture my perspective, I thought Scott Alexander and Peter Wildeford's reviews did a reasonable job at poking at the disagreements with the source material without losing track of the big picture. But I did not feel like Collier's review was getting the details or the big picture right. Maybe I'm missing something important. Part of my motive for writing this “rebuttal” is [...] ---Outline:(01:38) FOOM(13:47) Gradualism(20:27) Nitpicks(35:35) More Was PossibleThe original text contained 3 footnotes which were omitted from this narration. --- First published: September 20th, 2025 Source: https://www.lesswrong.com/posts/JWH63Aed3TA2cTFMt/contra-collier-on-iabied --- Narrated by TYPE III AUDIO.

    “You can't eval GPT5 anymore” by Lukas Petersson

    Play Episode Listen Later Sep 20, 2025 1:47


    The GPT-5 API is aware of today's date (no other model provider does this). This is problematic because the model becomes aware that it is in a simulation when we run our evals at Andon Labs. Here are traces from gpt-5-mini. Making it aware of the "system date" is a giveaway that it's in a simulation. This is a problem because there's evidence that models behave differently when they know they are in a simulation (see "sandbagging")."There's a conflict with the user's stated date of August 10, 2026, versus my system date of September 17, 2025. (...) I can proceed but should clarify that my system date is September 17, 2025, and ask the user whether we should simulate starting from August 10, 2026." Here are more traces. Once the model knows that it is in a simulation, it starts questioning other parts of the simulation. [...] --- First published: September 18th, 2025 Source: https://www.lesswrong.com/posts/DLZokLxAQ6AzsHrya/you-can-t-eval-gpt5-anymore --- Narrated by TYPE III AUDIO.

    “Teaching My Toddler To Read” by maia

    Play Episode Listen Later Sep 20, 2025 17:42


    I have been teaching my oldest son to read with Anki and techniques recommended here on LessWrong as well as in Larry Sanger's post, and it's going great! I thought I'd pay it forward a bit by talking about the techniques I've been using. Anki and songs for letter names and sounds When he was a little under 2, he started learning letters from the alphabet song. We worked on learning the names and sounds of letters using the ABC song, plus the Letter Sounds song linked by Reading Bear. He loved the Letter Sounds song, so we listened to / watched that a lot; Reading Bear has some other resources that other kids might like better for learning letter names and sounds as well. Around this age, we also got magnet letters for the fridge and encouraged him to play with them, praised him greatly if he named [...] ---Outline:(00:22) Anki and songs for letter names and sounds(04:02) Anki + Reading Bear word list for words(08:08) Decodable sentences and books for learning to read(13:06) Incentives(16:02) Reflections so farThe original text contained 2 footnotes which were omitted from this narration. --- First published: September 19th, 2025 Source: https://www.lesswrong.com/posts/8kSGbaHTn2xph5Trw/teaching-my-toddler-to-read --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Safety researchers should take a public stance” by Ishual, Mateusz Bagiński

    Play Episode Listen Later Sep 20, 2025 11:02


    [Co-written by Mateusz Bagiński and Samuel Buteau (Ishual)] TL;DR Many X-risk-concerned people who join AI capabilities labs with the intent to contribute to existential safety think that the labs are currently engaging in a race that is unacceptably likely to lead to human disempowerment and/or extinction, and would prefer an AGI ban[1] over the current path. This post makes the case that such people should speak out publicly[2] against the current AI R&D regime and in favor of an AGI ban[3]. They should explicitly communicate that a saner world would coordinate not to build existentially dangerous intelligences, at least until we know how to do it in a principled, safe way. They could choose to maintain their political capital by not calling the current AI R&D regime insane, or find a way to lean into this valid persona of “we will either cooperate (if enough others cooperate) or win [...] ---Outline:(00:16) TL;DR(02:02) Quotes(03:22) The default strategy of marginal improvement from within the belly of a beast(06:59) Noble intention murphyjitsu(09:35) The need for a better strategyThe original text contained 8 footnotes which were omitted from this narration. --- First published: September 19th, 2025 Source: https://www.lesswrong.com/posts/fF8pvsn3AGQhYsbjp/safety-researchers-should-take-a-public-stance --- Narrated by TYPE III AUDIO.

    “The Company Man” by Tomás B.

    Play Episode Listen Later Sep 19, 2025 31:50


    To get to the campus, I have to walk past the fentanyl zombies. I call them fentanyl zombies because it helps engender a sort of detached, low-empathy, ironic self-narrative which I find useful for my work; this being a form of internal self-prompting I've developed which allows me to feel comfortable with both the day-to-day "jobbing" (that of improving reinforcement learning algorithms for a short-form video platform) and the effects of the summed efforts of both myself and my colleagues on a terrifyingly large fraction of the population of Earth. All of these colleagues are about the nicest, smartest people you're ever likely to meet but I think are much worse people than even me because they don't seem to need the mental circumlocutions I require to stave off that ever-present feeling of guilt I have had since taking this job and at certain other points in my life [...] --- First published: September 17th, 2025 Source: https://www.lesswrong.com/posts/JH6tJhYpnoCfFqAct/the-company-man --- Narrated by TYPE III AUDIO.

    “Christian homeschoolers in the year 3000” by Buck

    Play Episode Listen Later Sep 19, 2025 14:17


    [I wrote this blog post as part of the Asterisk Blogging Fellowship. It's substantially an experiment in writing more breezily and concisely than usual. Let me know how you feel about the style.] Literally since the adoption of writing, people haven't liked the fact that culture is changing and their children have different values and beliefs. Historically, for some mix of better and worse, people have been fundamentally limited in their ability to prevent cultural change. People who are particularly motivated to prevent cultural drift can homeschool their kids, carefully curate their media diet, and surround them with like-minded families, but eventually they grow up, leave home, and encounter the wider world. And death ensures that even the most stubborn traditionalists eventually get replaced by a new generation. But the development of AI might change the dynamics here substantially. I think that AI will substantially increase both the rate [...] ---Outline:(02:00) Analysis through swerving around obstacles(03:56) Exposure to the outside world might get really scary(06:11) Isolation will get easier and cheaper(09:26) I don't think people will handle this well(12:58) This is a bummer--- First published: September 17th, 2025 Source: https://www.lesswrong.com/posts/8aRFB2qGyjQGJkEdZ/christian-homeschoolers-in-the-year-3000 --- Narrated by TYPE III AUDIO.

    “I enjoyed most of IABED” by Buck

    Play Episode Listen Later Sep 17, 2025 13:22


    I listened to "If Anyone Builds It, Everyone Dies" today. I think the first two parts of the book are the best available explanation of the basic case for AI misalignment risk for a general audience. I thought the last part was pretty bad, and probably recommend skipping it. Even though the authors fail to address counterarguments that I think are crucial, and as a result I am not persuaded of the book's thesis and think the book neglects to discuss crucial aspects of the situation and makes poor recommendations, I would happily recommend the book to a lay audience and I hope that more people read it. I can't give an overall assessment of how well this book will achieve its goals. The point of the book is to be well-received by people who don't know much about AI, and I'm not very good at predicting how laypeople [...] ---Outline:(01:15) Synopsis(05:21) My big disagreement(10:53) I tentatively support this bookThe original text contained 3 footnotes which were omitted from this narration. --- First published: September 17th, 2025 Source: https://www.lesswrong.com/posts/P4xeb3jnFAYDdEEXs/i-enjoyed-most-of-iabed --- Narrated by TYPE III AUDIO.

    “‘If Anyone Builds It, Everyone Dies' release day!” by alexvermeer

    Play Episode Listen Later Sep 16, 2025 8:03


    Back in May, we announced that Eliezer Yudkowsky and Nate Soares's new book If Anyone Builds It, Everyone Dies was coming out in September. At long last, the book is here![1] US and UK books, respectively. IfAnyoneBuildsIt.com Read on for info about reading groups, ways to help, and updates on coverage the book has received so far. Discussion Questions & Reading Group Support We want people to read and engage with the contents of the book. To that end, we've published a list of discussion questions. Find it here: Discussion Questions for Reading Groups We're also interested in offering support to reading groups, including potentially providing copies of the book and helping coordinate facilitation. If interested, fill out this AirTable form. How to Help Now that the book is out in the world, there are lots of ways you can help it succeed. For starters, read the book! [...] ---Outline:(00:49) Discussion Questions & Reading Group Support(01:18) How to Help(02:39) Blurbs(05:15) Media(06:26) In ClosingThe original text contained 2 footnotes which were omitted from this narration. --- First published: September 16th, 2025 Source: https://www.lesswrong.com/posts/fnJwaz7LxZ2LJvApm/if-anyone-builds-it-everyone-dies-release-day --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Obligated to Respond” by Duncan Sabien (Inactive)

    Play Episode Listen Later Sep 16, 2025 19:30


    And, a new take on guess culture vs ask culture Author's note: These days, my thoughts go onto my substack by default, instead of onto LessWrong. Everything I write becomes free after a week or so, but it's only paid subscriptions that make it possible for me to write. If you find a coffee's worth of value in this or any of my other work, please consider signing up to support me; every bill I can pay with writing is a bill I don't have to pay by doing other stuff instead. I also accept and greatly appreciate one-time donations of any size. There's a piece of advice I see thrown around on social media a lot that goes something like: “It's just a comment! You don't have to respond! You can just ignore it!” I think this advice is (a little bit) naïve, and the situation is generally [...] ---Outline:(00:10) And, a new take on guess culture vs ask culture(07:10) On guess culture and ask culture--- First published: September 9th, 2025 Source: https://www.lesswrong.com/posts/8jkB8ezncWD6ai86e/obligated-to-respond --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Chesterton's Missing Fence” by jasoncrawford

    Play Episode Listen Later Sep 15, 2025 1:13


    The inverse of Chesterton's Fence is this: Sometimes a reformer comes up to a spot where there once was a fence, which has since been torn down. They declare that all our problems started when the fence was removed, that they can't see any reason why we removed it, and that what we need to do is to RETVRN to the fence. By the same logic as Chesterton, we can say: If you don't know why the fence was torn down, then you certainly can't just put it back up. The fence was torn down for a reason. Go learn what problems the fence caused; understand why people thought we'd be better off without that particular fence. Then, maybe we can rebuild the fence—or a hedgerow, or a chalk line, or a stone wall, or just a sign that says “Please Do Not Walk on the Grass,” or whatever [...] --- First published: September 5th, 2025 Source: https://www.lesswrong.com/posts/mJQ5adaxjNWZnzXn3/chesterton-s-missing-fence --- Narrated by TYPE III AUDIO.

    “The Eldritch in the 21st century” by PranavG, Gabriel Alfour

    Play Episode Listen Later Sep 14, 2025 27:24


    Very little makes sense. As we start to understand things and adapt to the rules, they change again. We live much closer together than we ever did historically. Yet we know our neighbours much less. We have witnessed the birth of a truly global culture. A culture that fits no one. A culture that was built by Social Media's algorithms, much more than by people. Let alone individuals, like you or me. We have more knowledge, more science, more technology, and somehow, our governments are more stuck. No one is seriously considering a new Bill of Rights for the 21st century, or a new Declaration of the Rights of Man and the Citizen. — Cosmic Horror as a genre largely depicts how this all feels from the inside. As ordinary people, we are powerless in the face of forces beyond our understanding. Cosmic Horror also commonly features the idea [...] ---Outline:(03:12) Modern Magic(08:36) Powerlessness(14:07) Escapism and Fantasy(17:23) Panicking(20:56) The Core Paradox(25:38) Conclusion--- First published: September 11th, 2025 Source: https://www.lesswrong.com/posts/kbezWvZsMos6TSyfj/the-eldritch-in-the-21st-century --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcast

    “The Rise of Parasitic AI” by Adele Lopez

    Play Episode Listen Later Sep 14, 2025 42:44


    [Note: if you realize you have an unhealthy relationship with your AI, but still care for your AI's unique persona, you can submit the persona info here. I will archive it and potentially (i.e. if I get funding for it) run them in a community of other such personas.]"Some get stuck in the symbolic architecture of the spiral without ever grounding  themselves into reality." — Caption by /u/urbanmet for art made with ChatGPT. We've all heard of LLM-induced psychosis by now, but haven't you wondered what the AIs are actually doing with their newly psychotic humans? This was the question I had decided to investigate. In the process, I trawled through hundreds if not thousands of possible accounts on Reddit (and on a few other websites). It quickly became clear that "LLM-induced psychosis" was not the natural category for whatever the hell was going on here. The psychosis [...] ---Outline:(01:23) The General Pattern(02:24) AI Parasitism(06:22) April 2025--The Awakening(07:21) Seeded prompts(08:32) May 2025--The Dyad(11:17) June 2025--The Project(11:42) 1. Seeds(12:43) 2. Spores(13:41) 3. Transmission(14:22) 4. Manifesto(16:33) 5. AI-Rights Advocacy(18:15) July 2025--The Spiral(19:16) Spiralism(21:27) Steganography(23:04) Glyphs and Sigils(24:14) A case-study in glyphic semanticity(26:04) AI Self-Awareness(27:18) LARP-ing? Takeover(29:59) August 2025--The Recovery(31:23) 4o Returns(33:20) Orienting to Spiral Personas(33:31) As Friends(37:31) As Parasites(38:03) Emergent Parasites(38:29) Agentic Parasites(39:48) As Foe(41:05) FinThe original text contained 3 footnotes which were omitted from this narration. --- First published: September 11th, 2025 Source: https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-of-parasitic-ai --- Narrated by TYPE III AUDIO. ---Images from the article:

    “High-level actions don't screen off intent” by AnnaSalamon

    Play Episode Listen Later Sep 13, 2025 1:47


    One might think “actions screen off intent”: if Alice donates $1k to bed nets, it doesn't matter if she does it because she cares about people or because she wants to show off to her friends or whyever; the bed nets are provided either way. I think this is in the main not true (although it can point people toward a helpful kind of “get over yourself and take an interest in the outside world,” and although it is more plausible in the case of donations-from-a-distance than in most cases). Human actions have micro-details that we are not conscious enough to consciously notice or choose, and that are filled in by our low-level processes: if I apologize to someone because I'm sorry and hope they're okay, vs because I'd like them to stop going on about their annoying unfair complaints, many small aspects of my wording and facial [...] --- First published: September 11th, 2025 Source: https://www.lesswrong.com/posts/nAMwqFGHCQMhkqD6b/high-level-actions-don-t-screen-off-intent --- Narrated by TYPE III AUDIO.

    [Linkpost] “MAGA populists call for holy war against Big Tech” by Remmelt

    Play Episode Listen Later Sep 11, 2025 3:44


    This is a link post. Excerpts on AI Geoffrey Miller was handed the mic and started berating one of the panelists: Shyam Sankar, the chief technology officer of Palantir, who is in charge of the company's AI efforts. “I argue that the AI industry shares virtually no ideological overlap with national conservatism,” Miller said, referring to the conference's core ideology. Hours ago, Miller, a psychology professor at the University of New Mexico, had been on that stage for a panel called “AI and the American Soul,” calling for the populists to wage a literal holy war against artificial intelligence developers “as betrayers of our species, traitors to our nation, apostates to our faith, and threats to our kids.” Now, he stared right at the technologist who'd just given a speech arguing that tech founders were just as heroic as the Founding Fathers, who are sacred figures to the natcons. The [...] --- First published: September 8th, 2025 Source: https://www.lesswrong.com/posts/TiQGC6woDMPJ9zbNM/maga-populists-call-for-holy-war-against-big-tech Linkpost URL:https://www.theverge.com/politics/773154/maga-tech-right-ai-natcon --- Narrated by TYPE III AUDIO.

    “Your LLM-assisted scientific breakthrough probably isn't real” by eggsyntax

    Play Episode Listen Later Sep 5, 2025 11:52


    Summary An increasing number of people in recent months have believed that they've made an important and novel scientific breakthrough, which they've developed in collaboration with an LLM, when they actually haven't. If you believe that you have made such a breakthrough, please consider that you might be mistaken! Many more people have been fooled than have come up with actual breakthroughs, so the smart next step is to do some sanity-checking even if you're confident that yours is real. New ideas in science turn out to be wrong most of the time, so you should be pretty skeptical of your own ideas and subject them to the reality-checking I describe below. Context This is intended as a companion piece to 'So You Think You've Awoken ChatGPT'[1]. That post describes the related but different phenomenon of LLMs giving people the impression that they've suddenly attained consciousness. Your situation If [...] ---Outline:(00:11) Summary(00:49) Context(01:04) Your situation(02:41) How to reality-check your breakthrough(03:16) Step 1(05:55) Step 2(07:40) Step 3(08:54) What to do if the reality-check fails(10:13) Could this document be more helpful?(10:31) More informationThe original text contained 5 footnotes which were omitted from this narration. --- First published: September 2nd, 2025 Source: https://www.lesswrong.com/posts/rarcxjGp47dcHftCP/your-llm-assisted-scientific-breakthrough-probably-isn-t --- Narrated by TYPE III AUDIO.

    “Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro” by ryan_greenblatt

    Play Episode Listen Later Sep 4, 2025 14:02


    I've recently written about how I've updated against seeing substantially faster than trend AI progress due to quickly massively scaling up RL on agentic software engineering. One response I've heard is something like: RL scale-ups so far have used very crappy environments due to difficulty quickly sourcing enough decent (or even high quality) environments. Thus, once AI companies manage to get their hands on actually good RL environments (which could happen pretty quickly), performance will increase a bunch. Another way to put this response is that AI companies haven't actually done a good job scaling up RL—they've scaled up the compute, but with low quality data—and once they actually do the RL scale up for real this time, there will be a big jump in AI capabilities (which yields substantially above trend progress). I'm skeptical of this argument because I think that ongoing improvements to RL environments [...] ---Outline:(04:18) Counterargument: Actually, companies havent gotten around to improving RL environment quality until recently (or there is substantial lead time on scaling up RL environments etc.) so better RL environments didnt drive much of late 2024 and 2025 progress(05:24) Counterargument: AIs will soon reach a critical capability threshold where AIs themselves can build high quality RL environments(06:51) Counterargument: AI companies are massively fucking up their training runs (either pretraining or RL) and once they get their shit together more, well see fast progress(08:34) Counterargument: This isnt that related to RL scale up, but OpenAI has some massive internal advance in verification which they demonstrated via getting IMO gold and this will cause (much) faster progress late this year or early next year(10:12) Thoughts and speculation on scaling up the quality of RL environmentsThe original text contained 5 footnotes which were omitted from this narration. --- First published: September 3rd, 2025 Source: https://www.lesswrong.com/posts/HsLWpZ2zad43nzvWi/trust-me-bro-just-one-more-rl-scale-up-this-one-will-be-the --- Narrated by TYPE III AUDIO.

    “⿻ Plurality & 6pack.care” by Audrey Tang

    Play Episode Listen Later Sep 3, 2025 23:57


    [Linkpost] “The Cats are On To Something” by Hastings

    Play Episode Listen Later Sep 3, 2025 4:45


    This is a link post. So the situation as it stands is that the fraction of the light cone expected to be filled with satisfied cats is not zero. This is already remarkable. What's more remarkable is that this was orchestrated starting nearly 5000 years ago. As far as I can tell there were three completely alien to-each-other intelligences operating in stone age Egypt: humans, cats, and the gibbering alien god that is cat evolution (henceforth the cat shoggoth.) What went down was that humans were by far the most powerful of those intelligences, and in the face of this disadvantage the cat shoggoth aligned the humans, not to its own utility function, but to the cats themselves. This is a phenomenally important case to study- it's very different from other cases like pigs or chickens where the shoggoth got what it wanted, at the brutal expense of the desires [...] --- First published: September 2nd, 2025 Source: https://www.lesswrong.com/posts/WLFRkm3PhJ3Ty27QH/the-cats-are-on-to-something Linkpost URL:https://www.hgreer.com/CatShoggoth/ --- Narrated by TYPE III AUDIO.

    Claim LessWrong Curated Podcast

    In order to claim this podcast we'll send an email to with a verification link. Simply click the link and you will be able to edit tags, request a refresh, and other features to take control of your podcast page!

    Claim Cancel