Join me as I sit down with Alex and David, both previous guests on the show and both cofounders of the Guild of the Rose. Together, we go over the core – the heart – of what we consider to be the Rationalist tradition. The 12 Virtues are an awesome distillation of what the rest of the Sequences build on. Be sure to check out the Guild of the Rose. If our constant pitching hasn't been enough to persuade you, hopefully hearing two more of the founders discuss Rationality in general and give their own pitches for the Guild will tip the scales. LINKS The Twelve Virtues Abridged Version Alex and David have been on more than a couple of times, but I'll limit myself to one episode from each of them: 189 – AI Bloomer, with David Youssef 192 – Absurdism and the Meaning of Life, with Alex And the episode with both of them, the original announcement for the Guild Also, Alex's dating profile! In all sincerity, I'd date him if I were a woman. 00:00:05 – Introduction and The 12 Virtues 01:41:50 – Guild of the Rose Our Patreon, or if you prefer Our Substack Hey look, we have a Discord! What could possibly go wrong? (also merch) We now partner with The Guild of the Rose, check them out. LessWrong Sequence Posts Discussed in this Episode: on hiatus, returning soon
Right now I'm coaching for Inkhaven, a month-long marathon writing event where our brave residents are writing a blog post every single day for the entire month of November. And I'm pleased that some of them have seen success – relevant figures seeing the posts, shares on Hacker News and Twitter and LessWrong. The amount of writing is nuts, so people are trying out different styles and topics – some posts are effort-rich, some are quick takes or stories or lists. Some people have come up to me: one of their pieces has gotten some decent reception, but the feeling is mixed, because it's not the piece they hoped would go big. Their thick research-driven considered takes or discussions of values or whatever, the ones they'd been meaning to write for years, apparently go mostly unread, whereas their random-thought “oh shit I need to get a post out by midnight or else the Inkhaven coaches will burn me at the stake” posts[1] get to the front page of Hacker News, where probably Elon Musk and God read them. It happens to me too – some of my own pieces that took me the most effort, or that I'm [...] ---Outline:(02:00) The quick post is short, the effortpost is long(02:34) The quick post is about something interesting, the topic of the effortpost bores most people(03:13) The quick post has a fun controversial take, the effortpost is boringly evenhanded or laden with nuance(03:30) The quick post is low-context, the effortpost is high-context(04:28) The quick post has a casual style, the effortpost is inscrutably formal The original text contained 1 footnote which was omitted from this narration. --- First published: November 28th, 2025 Source: https://www.lesswrong.com/posts/DiiLDbHxbrHLAyXaq/writing-advice-why-people-like-your-quick-bullshit-takes --- Narrated by TYPE III AUDIO.
Matt Freeman has been cohosting several media analysis podcasts for over a decade. He and his cohost Scott have been doing weekly episodes of the Doofcast every Friday and they cover movies, books, and TV shows. Matt and Scott's analysis podcasts have made me love stories even more and have equipped me with tools to […]
Booker is a long-time attendee and one of the coordinators of the Denver area Less Wrong community. Community engagement isn't just a background task for him – he's taken real steps to get involved with and improve his community and you can too! He's here to tell us about the things he's done and give […]
My fellow pro-growth/progress/abundance Up Wingers in America and around the world: What really gets AI optimists excited isn't the prospect of automating customer service departments or human resources. Imagine, rather, what might happen to the pace of scientific progress if AI becomes a super research assistant. Tom Davidson's new paper, How Quick and Big Would a Software Intelligence Explosion Be?, explores that very scenario. Today on Faster, Please! — The Podcast, I talk with Davidson about what it would mean for automated AI researchers to rapidly improve their own algorithms, thus creating a self-reinforcing loop of innovation. We talk about the economic effects of self-improving AI research and how close we are to that reality. Davidson is a senior research fellow at Forethought, where he explores AI and explosive growth. He was previously a senior research fellow at Open Philanthropy and a research scientist at the UK government's AI Security Institute. In This Episode: * Making human minds (1:43) * Theory to reality (6:45) * The world with automated research (10:59) * Considering constraints (16:30) * Worries and what-ifs (19:07) Below is a lightly edited transcript of our conversation. Making human minds (1:43) . . . you don't have to build any more computer chips, you don't have to build any more fabs . . . In fact, you don't have to do anything at all in the physical world. Pethokoukis: A few years ago, you wrote a paper called “Could Advanced AI Drive Explosive Economic Growth?,” which argued that growth could accelerate dramatically if AI started generating ideas the way human researchers once did. In your view, population growth historically powered a kind of ideas feedback loop: more people meant more researchers, which meant more ideas and rising incomes. But that loop broke after the demographic transition in the late 19th century, and you suggest that AI could restart it: more ideas, more output, more AI, more ideas. Does this new paper in a way build upon that paper? “How quick and big would a software intelligence explosion be?” The first paper you referred to is about the biggest-picture dynamic of economic growth. As you said, throughout long-run history, when we produced more food, the population increased. That additional output translated into more people, more workers. These days that doesn't happen. When GDP goes up, that doesn't mean people have more kids. In fact, after the demographic transition, the richer people get, the fewer kids they have. So now we've got more output, we're getting even fewer people as a result, so that's been blocked. This first paper is basically saying, look, if we can manufacture human minds or human-equivalent minds in any way, be it by building more computer chips, or making better computer chips, or any way at all, then that feedback loop gets going again. Because if we can manufacture more human minds, then we can spend output again to create more workers. That's the first paper. The second paper double-clicks on one specific way that we can use output to create more human minds. It's actually, in a way, the scariest way, because it's the way of creating human minds which can happen the quickest. So this is the way where you don't have to build any more computer chips, you don't have to build any more fabs, as they're called, these big factories that make computer chips.
In fact, you don't have to do anything at all in the physical world. It seems like most of the conversation has been about how much investment is going to go into building how many new data centers, and that seems like it is almost the entire conversation, in a way, at the moment. But you're not looking at compute, you're looking at software. Exactly, software. So the idea is you don't have to build anything. You've already got loads of computer chips and you just make the algorithms that run the AIs on those computer chips more efficient. This is already happening, but it isn't yet a big deal because AI isn't that capable. But already, Epoch, this AI forecasting organization, estimates that just in one year, it becomes 10 times to 1000 times cheaper to run the same AI system. Just wait 12 months, and suddenly, for the same budget, you are able to run 10 times as many AI systems, or maybe even 1000 times as many on their most aggressive estimate. As I said, not a big deal today, but if we then develop an AI system which is better than any human at doing research, then now, in 10 months, you haven't built anything, but you've got 10 times as many researchers that you can set to work, or even more than that. So then we get this feedback loop where you make some research progress, you improve your algorithms, now you've got loads more researchers, you set them all to work again, finding even more algorithmic improvements. So today we've got maybe a few hundred people that are advancing state-of-the-art AI algorithms. I think they're all getting paid a billion dollars a person, too. Exactly. But maybe we can 10x that initially by having them replaced by AI researchers that do the same thing. But then those AI researchers improve their own algorithms. Now you have 10x as many again; you haven't built more computer chips, you're just running them more efficiently, and then the cycle continues. You're throwing more and more of these AI researchers at AI progress itself, and the algorithms are improving in what might be a very powerful feedback loop. In this case, it seems to me that you're not necessarily talking about artificial general intelligence. This is certainly a powerful intelligence, but it's narrow. It doesn't have to do everything, it doesn't have to play chess, it just has to be able to do research. It's certainly not fully general. You don't need it to be able to control a robot body. You don't need it to be able to solve the Riemann hypothesis. You don't need it to even be very persuasive or charismatic to a human. It's not narrow, I wouldn't say: it has to be able to do literally anything that AI researchers do, and that's a wide range of tasks. They're coding, they're communicating with each other, they're managing people, they're planning out what to work on, they're thinking about and reviewing the literature. There's a fairly wide range of stuff. It's extremely challenging, some of the hardest work in the world to do, so I wouldn't say it's narrow, but it's not everything.
It's some kind of intermediate level of generality, in between a mere chess algorithm that just does chess and the kind of AGI that can literally do anything. Theory to reality (6:45) I think it's a much smaller gap for AI research than it is for many other parts of the economy. I think people who are cautiously optimistic about AI will say something like, “Yeah, I could see the kind of intelligence you're referring to coming about within a decade, but it's going to take a couple of big breakthroughs to get there.” Is that true, or are we actually getting pretty close? Famously, predicting the future of technology is very, very difficult. Just a few years before people invented the nuclear bomb, famous, very well-respected physicists were saying, “It's impossible, this will never happen.” So my best guess is that we do need a couple of fairly non-trivial breakthroughs. We had the start of RL training a couple of years ago, which became a big deal within the language model paradigm. I think we'll probably need another couple of breakthroughs of that kind of size. We're not talking a completely new approach, throw everything out, but we're talking like, okay, we need to extend the current approach in a meaningfully different way. It's going to take some inventiveness, it's going to take some creativity, we're going to have to try out a few things. I think, probably, we'll need that to get to the researcher that can fully automate OpenAI. That's a nice way of putting it: OpenAI doesn't employ any humans anymore, they've just got AIs there. There's a difference between what a model can do on some benchmark versus becoming actually productive in the real world. That's why, while all the benchmark stuff is interesting, the thing I pay attention to is: How are businesses beginning to use this technology? Because that's the leap. What is that gap like, in your scenario, between an AI model that can do a theoretical version of the lab's work and one actually incorporated in a real laboratory? It's definitely a gap. I think it's a pretty big gap. I think it's a much smaller gap for AI research than it is for many other parts of the economy. Let's say we are talking about car manufacturing and you're trying to get an AI to do everything that happens there. Man, it's such a messy process. There's a million different parts of the supply chain. There's all this tacit knowledge in all the human workers' minds. It's going to be really tough. There's going to be a very big gap going from those benchmarks to actually fully automating the supply chain for cars. For automating what OpenAI does, there's still a gap, but it's much smaller, because firstly, all of the work is virtual. Everyone at OpenAI could, in principle, work remotely. Their top research scientists, they're just on a computer all day. They're not picking up bricks and doing stuff like that. So that already means it's a lot less messy. You get a lot less of that kind of messy-world-reality stuff slowing down adoption. And also, a lot of it is coding, and coding is almost uniquely clean in that, for many coding tasks, you can define clear metrics for success, and so that makes AI much better. You can just have a go. Did the AI succeed in the test?
If not, try something else or do a gradient descent update. That said, there's still a lot of messiness here, as any coder will know: when you're writing good code, it's not just about whether it does the function that you've asked it to do; it needs to be well-designed, it needs to be modular, it needs to be maintainable. These things are much harder to evaluate, and so AIs often pass our benchmarks because they can do the function that you asked for and the code runs, but they kind of write really spaghetti code — code that no one wants to look at, that no one can understand, and so no company would want to use it. So there's still going to be a pretty big benchmark-to-reality gap, even for OpenAI, and I think that's one of the big uncertainties in terms of, will this happen in three years versus will this happen in 10 years, or even 15 years? Since you brought up the timeline, what's your guess? I didn't know whether to open with that question or conclude with that question — we'll stick it right in the middle of our chat. Great. Honestly, my best guess about this does change more often than I would like it to, which I think tells us, look, this is still in a state of flux. This is just really something that's very hard to know about. Predicting the future is hard. My current best guess is it's about even odds that we're able to fully automate OpenAI within the next 10 years. So maybe that's a 50-50. The world with AI research automation (10:59) . . . I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself? So then what really would be the impact of that kind of AI research automation? How would you go about quantifying that kind of acceleration? What does the world look like? Yeah, so many possibilities, but I think what strikes me is that there is a plausible world where it is just way, way faster than almost everyone is expecting it to be. So that's the world where you fully automate OpenAI, and then we get that feedback loop that I was talking about earlier, where AIs make their algorithms way more efficient, now you've got way more of them, then they make their algorithms way more efficient again, now they're way smarter. Now they're thinking a hundred times faster. The feedback loop continues, and maybe within six months you now have a billion superintelligent AIs running on this OpenAI data center. The combined cognitive abilities of all these AIs outstrip the whole of the United States, outstrip anything we've seen from any kind of company or entity before, and they can all potentially be put towards any goal that OpenAI wants. And then there's, of course, the risk that OpenAI has lost control of these systems, often discussed, in which case these systems could all be working together to pursue a particular goal. And so what we're talking about here is really a huge amount of power. It's a threat to national security for any government in which this happens, potentially. It is a threat to everyone if we lose control of these systems, or if the company that develops them uses them for some kind of malicious end.
And, in terms of economic impacts, I personally think that that again could happen much more quickly than people think, and we can get into that. In the first paper we mentioned, it was kind of a thought experiment, but you were really talking about moving the decimal point in GDP growth: instead of talking about two and three percent, 20 and 30 percent. Is that the kind of world we're talking about? I speak to economists a lot, and — They hate those kinds of predictions, by the way. Obviously, they think I'm crazy. Not all of them. There are economists that take it very seriously. I think it's taken more seriously than everyone else realizes. It's a bit embarrassing, at the moment, to admit that you take it seriously, but there are a few really senior economists who absolutely know their stuff. They're like, “Yep, this checks out. I think that's what's going to happen.” And I've had conversations with them where they're like, “Yeah, I think this is going to happen.” But the really loud, dominant view, the one I think people are a little bit scared to speak out against, is, “Obviously this is sci-fi.” One analogy I like to give to people who are very, very confident that this is all sci-fi and it's rubbish is to imagine that we were sitting there in the year 1400. Imagine we had an economics professor who'd been studying the rate of economic growth, and they'd been saying, “Yeah, we've always had 0.1 percent growth every single year throughout history. We've never seen anything higher.” And then there was some kind of rogue futurist economist who said, “Actually, I think that if I extrapolate the curves in this way and we get this kind of technology, maybe we could have one percent growth.” And then all the other economists laugh at them and tell them they're insane – that's what happened. In 1400, we'd never had growth that was at all fast, and then a few hundred years later, we developed industrial technology, we started that feedback loop, we were investing more and more resources in scientific progress and in physical capital, and we did see much faster growth. So I think it can be useful to try and challenge economists and say, “Okay, I know it sounds crazy, but history was crazy. This crazy thing happened where growth just got way, way faster. No one would've predicted it. You would not have predicted it.” And I think being in that mindset can encourage people to be like, “Yeah, okay. You know what? Maybe if we do get AI that's really that powerful, it can really do everything, and maybe it is possible.” But to answer your question, yeah, I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast a kind of self-replicating system can double itself. So ultimately, what the economy is going to be like is it's going to have robots and factories that are able to fully create new versions of themselves. Everything you need: the roads, the electricity, the robots, the buildings, all of that will be replicated. And so you can actually look at biology and say, do we have any examples of systems which fully replicate themselves? How long does it take? And if you look at rats, for example, they're able to double the number of rats by grabbing resources from the environment, and giving birth, and whatnot. The doubling time is about six weeks for some types of rats.
So that's an example of: here's a physical system — ultimately, everything's made of physics — a physical system that has some intelligence, that's able to go out into the world, gather resources, replicate itself. The doubling time is six weeks. Now, who knows how long it'll take us to get to AI that's that good? But when we do, you could see the whole physical economy, maybe a part that humans aren't involved with, a whole automated city without any humans, just doubling itself every few weeks. If that happens, then the amount of stuff we're able to produce as a civilization is doubling on the order of weeks. And, in fact, there are some animals that double faster still, in days, but that's the kind of level of craziness. Now we're talking about 1000 percent growth, at that point. We don't know how crazy it could get, but I think we should take even the really crazy possibilities seriously; we shouldn't fully rule them out. Considering constraints (16:30) I really hope people work less. If we get this good future, and the benefits are shared between all . . . no one should work. But that doesn't stop growth . . . There's this great AI forecast chart put out by the Federal Reserve Bank of Dallas, and I think its main forecast — the one most economists would probably agree with — has a line showing AI improving GDP by maybe two tenths of a percent. And then there are two other lines: one is more or less straight up, and the other one is straight down, because in the first, AI creates a utopia, and in the second, AI gets out of control and starts killing us, and whatever. So those are your three possibilities. If we stick with the optimistic case for a moment, what constraints do you see as most plausible — reduced labor supply from rising incomes, social pushback against disruption, energy limits, or something else? Briefly, on the ones you've mentioned: people not working, 100 percent. I really hope people work less. If we get this good future, and the benefits are shared between all — which isn't guaranteed — if we get that, then yeah, no one should work. But that doesn't stop growth, because when AI and robots can do everything that humans do, you don't need humans in the loop anymore. That whole thing is just going and kind of self-replicating itself and making as many goods and services as we want. Sure, if you want your clothes to be knitted by a human, you're in trouble; then your consumption is stuck. Bad luck. If you're happy to consume goods and services produced by AI systems or robots, it's fine if no one wants to work. Pushback: I think, for me, this is the biggest one. Obviously, the economy doubling every year is very scary as a thought. Tech progress will be going much faster. Imagine if you woke up and, over the course of a year, you went from not having any telephones at all in the world to everyone being on their smartphones and social media and all the apps. That's a transition that took decades. If that happened in a year, that would be very disconcerting. Another example is the development of nuclear weapons. Nuclear weapons were developed over a number of years. If that happened in a month, or two months, that could be very dangerous. There'd be much less time for different countries, different actors to figure out how they're going to handle it. So I think pushback is the strongest one: that we might as a society choose, "Actually, this is insane.
We're going to go slower than we could." That requires, potentially, coordination, but I think there would be broad support for some degree of coordination there. Worries and what-ifs (19:07) If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society. I imagine you certainly talk with people who are extremely gung-ho about this prospect. What is the common response you get from people who are less enthusiastic? Do they worry about a future with no jobs? Maybe they do worry about the existential kinds of issues. What's your response to those people? And how much do you worry about those things? I think there are loads of very worrying things that we're going to be facing. One class of pushback, which I think is very common, is worries about employment. Employment is a source of income for all of us, but it's also a source of pride, a source of meaning. If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society. I don't think people are just going to be down with it. I think people are scared about three AI companies literally now taking all the revenues that all of humanity used to be earning. It is naturally a very scary prospect. So that's one kind of pushback, and I'm sympathetic with it. I think that there are solutions, if we find a way to tax AI systems, which isn't necessarily easy, because it's very easy to move physical assets between countries. It's already a lot easier to tax labor than capital, when rich people can move their assets around. We're going to have the same problem with AI, but if we can find a way to tax it, and we maintain a good democratic country, and we can just redistribute the wealth broadly, it can be solved. So I think it's a big problem, but it is doable. Then there's the problem that some people want to stop this now because they're worried about AI killing everyone. Their literal worry is that everyone will be dead because superintelligent AI will want that to happen. I think there's a real risk there. It's definitely above one percent, in my opinion. I wouldn't go above 10 percent, myself, but I think it's very scary, and that's a great reason to slow things down. I personally don't want to stop quite yet. I think you want to stop when the AI is a bit more powerful and a bit more useful than it is today, so it can kind of help us figure out what to do about all of this crazy stuff that's coming. On what side of that line is AI as an AI researcher? That's a really great question. Should we stop? I think it's very hard to stop just after you've got the AI-researcher AI, because that's when it's suddenly really easy to go very, very fast. So my out-of-the-box proposal here, which is probably very flawed, would be: when we're within a few spits' distance — not spitting distance, but if you did that three times, and we can see we're almost at that AI automating OpenAI — then you pause, because you're not going to accidentally then go all the way. It's actually still a fair distance away, but at that point it's probably already a very powerful AI that can really help. Then you pause and do what? Great question.
So then you pause, and you use your AI systems to help you firstly solve the problem of AI alignment: make extra, double sure that every time we increase AI capabilities by another notch, the AI is still loyal to humanity, not to its own kind of secret goals. Secondly, you solve the problem of: how are we going to make sure that no one person in government, and no one CEO of an AI company, ensures that this whole AI army is loyal to them personally? How are we going to ensure that everyone, the whole world, gets influence over what this AI is ultimately programmed to do? That's the second problem. And then there's just a whole host of other things: the unemployment that we've talked about, competition between different countries, the US and China, and so on, that I think you want to research, figure out, get consensus on, and then slowly ratchet up the capabilities in what is now a very safe and controlled way. What else should we be working on? What are you working on next? One problem I'm excited about is that people have historically worried about AI having its own goals, so we need to make it loyal to humanity. But as we've got closer, it's become increasingly obvious that "loyalty to humanity" is very vague. What specifically do you want the AI to be programmed to do? I mean, it's not programmed, it's grown, but if it were programmed, if you're writing a rule book for the AI: some organizations have employee handbooks: here's the philosophy of the organization, here's how you should behave. Imagine you're doing that for the AI, but going super detailed: exactly how you want your AI assistant to behave in all kinds of situations. What should that be? Essentially, what should we align the AI to? Not any individual person, probably following the law, probably loads of other things. I think basically designing the character of this AI system is a really exciting question, and if we get that right, maybe the AI can then help us solve all these other problems. Maybe you have no interest in science fiction, but is there any film, TV, or book that you think is useful for someone in your position to be aware of, or that you find useful in any way? Just wondering. I think there's this great post called "AI 2027," which lays out a concrete scenario for how AI could go wrong, or how maybe it could go right. I would recommend that. I think that's the only thing that comes to mind. A lot of the stuff I read is on LessWrong, to be honest. There's a lot of stuff from there that I don't love, but a lot of new ideas and interesting content there. Any fiction? I mean, I read fiction, but honestly, I don't really love the AI fiction that I've read, because often it's quite unrealistic, and so I kind of get a bit overly nitpicky about it. But I mean, yeah, there's this book called Harry Potter and the Methods of Rationality, which I read maybe 10 years ago, which I thought was pretty fun. On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised. Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
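The software-only feedback loop Davidson describes above can be made concrete with a toy calculation. Here is a minimal sketch in Python, assuming a fixed stock of chips, an assumed per-researcher rate of monthly algorithmic progress, and an effective AI-researcher headcount proportional to algorithmic efficiency; all numbers are illustrative placeholders, not figures from Davidson's paper.

# Toy sketch of the software-only feedback loop discussed in the episode above.
# Assumptions (not from Davidson's paper): a fixed stock of chips, algorithmic
# efficiency that compounds with research effort, and effective AI-researcher
# headcount proportional to that efficiency.
def software_feedback_loop(initial_researchers=300.0,
                           monthly_gain_per_researcher=0.002,
                           months=12):
    efficiency = 1.0
    researchers = initial_researchers
    for month in range(1, months + 1):
        # Research effort compounds algorithmic efficiency...
        efficiency *= (1 + monthly_gain_per_researcher) ** researchers
        # ...and cheaper algorithms let the same chips run more AI researchers.
        researchers = initial_researchers * efficiency
        print(f"month {month:2d}: efficiency x{efficiency:.3g}, researchers {researchers:.3g}")
        if efficiency > 1e12:
            print("(growth has gone super-exponential; stopping the toy model here)")
            break

software_feedback_loop()

Even with a deliberately modest assumed rate of progress per researcher, the mutual compounding of efficiency and headcount blows up within a handful of iterations; the compute, data, and benchmark-to-reality constraints discussed in the interview are what would slow this down in practice.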
Context: Post #4 in my sequence of private Lightcone Infrastructure memos edited for public consumption. This week's principle is more about how I want people at Lightcone to relate to community governance than it is about our internal team culture. As part of our jobs at Lightcone we are often in charge of determining access to some resource, or membership in some group (ranging from LessWrong to the AI Alignment Forum to the Lightcone Offices). Through that, I have learned that one of the most important things to do when building things like this is to try to tell people as early as possible if you think they are not a good fit for the community, both for trust within the group and for the sake of the integrity and success of the group itself. E.g. when you spot a LessWrong commenter who seems clearly not on track to ever be a good long-term contributor, or someone in the Lightcone Slack who clearly seems like a poor fit, you should aim to off-ramp them as soon as possible, and generally put marginal resources into finding out whether someone is a good long-term fit early, before they invest substantially [...] --- First published: November 14th, 2025 Source: https://www.lesswrong.com/posts/Hun4EaiSQnNmB9xkd/tell-people-as-early-as-possible-it-s-not-going-to-work-out --- Narrated by TYPE III AUDIO.
While Eneasz is busy at InkHaven, Steven sits down with Matt Freeman to talk about not-AI stuff! We had (in my opinion) a great conversation about stoic philosophy, the traps of getting too entrenched in any philosophical framework, and some of the ingredients of a happy life. LINKS It's Okay to Feel Bad for a […]
[Meta: This is Max Harms. I wrote a novel about China and AGI, which comes out today. This essay from my fiction newsletter has been slightly modified for LessWrong.] In the summer of 1983, Ronald Reagan sat down to watch the film WarGames, starring Matthew Broderick as a teen hacker. In the movie, Broderick's character accidentally gains access to a military supercomputer with an AI that almost starts World War III. “The only winning move is not to play.” After watching the movie, Reagan, newly concerned with the possibility of hackers causing real harm, ordered a full national security review. The response: “Mr. President, the problem is much worse than you think.” Soon after, the Department of Defense revamped its cybersecurity policies, and the first federal directives and laws against malicious hacking were put in place. But WarGames wasn't the only story to influence Reagan. His administration pushed for the Strategic Defense Initiative ("Star Wars") in part, perhaps, because the central technology—a laser that shoots down missiles—resembles the core technology of the 1940 spy film Murder in the Air, which had Reagan as its lead actor. Reagan was apparently such a superfan of The Day the Earth Stood Still [...] ---Outline:(05:05) AI in Particular(06:45) What's Going On Here?(11:19) Authorial Responsibility The original text contained 10 footnotes which were omitted from this narration. --- First published: November 3rd, 2025 Source: https://www.lesswrong.com/posts/uQak7ECW2agpHFsHX/the-unreasonable-effectiveness-of-fiction --- Narrated by TYPE III AUDIO.
We talk with Max Harms on the air for the first time since 2017! He's got a new book coming out (pre-order your copy here or at Amazon) and we spend about the first half talking about If Anyone Builds It, Everyone Dies. LINKS Max's first book, Crystal Society Eneasz's audiobook of about the first […]
Jay talks with us about finding Alpha – returns above the base rate – in everyday life (and what this means). LINKS Optimize Everything, Jay's substack Jay on Twitter Arbor Trading Bootcamp Kelsey's argument that We Need To Be Able To Sue AI Companies 00:00:05 – Alpha with Jay 01:28:53 – Guild of the […]
About me and this review: I don't identify as a member of the rationalist community, and I haven't thought much about AI risk. I read AstralCodexTen and used to read Zvi Mowshowitz before he switched his blog to covering AI. Thus, I've long had a peripheral familiarity with LessWrong. I picked up IABIED in response to Scott Alexander's review, and ended up looking here to see what reactions were like. After encountering a number of posts wondering how outsiders were responding to the book, I thought it might be valuable for me to write mine down. This is a “semi-outsider” review in that I don't identify as a member of this community, but I'm not a true outsider in that I was familiar enough with it to post here. My own background is in academic social science and national security, for whatever that's worth. My review presumes you're already [...] ---Outline:(01:07) My loose priors going in:(02:29) To skip ahead to my posteriors:(03:45) On to the Review:(08:14) My questions and concerns(08:33) Concern #1 Why should we assume the AI wants to survive? If it does, then what exactly wants to survive?(12:44) Concern #2 Why should we assume that the AI has boundless, coherent drives?(17:57) #3: Why should we assume there will be no in-between?(21:53) The Solution(23:35) Closing Thoughts--- First published: October 13th, 2025 Source: https://www.lesswrong.com/posts/ex3fmgePWhBQEvy7F/if-anyone-builds-it-everyone-dies-a-semi-outsider-review --- Narrated by TYPE III AUDIO.
Patrick McKenzie (patio11) is joined by Oliver Habryka, who runs Lightcone Infrastructure—the organization behind both the LessWrong forum and the Lighthaven conference venue in Berkeley. They explore how LessWrong became one of the most intellectually consequential forums on the internet, the surprising challenges of running a hotel with fractal geometry, and why Berkeley's building regulations include an explicit permission to plug in a lamp. The conversation ranges from fire codes that inadvertently shape traffic deaths, to nonprofit fundraising strategies borrowed from church capital campaigns, to why coordination is scarcer than money in philanthropy. –Full transcript available here: www.complexsystemspodcast.com/bits-and-bricks-oliver-habryka/ –Sponsor: Mercury. This episode is brought to you by Mercury, the fintech trusted by 200K+ companies — from first milestones to running complex systems. Mercury offers banking that truly understands startups and scales with them. Start today at Mercury.com. Mercury is a financial technology company, not a bank. Banking services provided by Choice Financial Group, Column N.A., and Evolve Bank & Trust; Members FDIC. –Links: Lightcone Infrastructure: https://www.lightconeinfrastructure.com/ Lighthaven: https://www.lighthaven.space/ LessWrong: https://www.lesswrong.com/ –Timestamps: (00:00) Intro (01:08) The origins and evolution of LessWrong (03:54) Challenges of running an online forum (05:57) Reviving LessWrong (14:51) The unique structure of Lighthaven (17:35) The complexities of conference venues (19:14) Sponsor: Mercury (20:14) The realities of conference planning (25:32) Challenges of maintaining Lighthaven (29:54) Navigating permits and regulations (37:02) Impact of fire code regulations on traffic fatalities (39:06) Economic analysis of safety regulations (41:39) Housing policy and construction in Berkeley (43:30) Fundraising challenges in the nonprofit sector (46:44) Effective altruism and fundraising dynamics (54:20) Lessons from religious fundraising practices (01:05:36) Reflections on fundraising (01:13:26) Wrap
We continue discussing Nostalgebraist's “The Void” in the context of how to relate to LLMs. If God imagines Claude hard enough, does Claude become real? LINKS The Void Audio reading of The Void, from AskWho The referenced episode where the three of us spoke of Janus's post “Simulators” Claude-Clark post – Simulacra Welfare: Meet Clark, by […]
EPISODE 146 | The Bicameral World of the Zizians One striking thing about the many stories that have appeared about the Zizians and the crimes they are accused of committing is that each one starts at a different place. Some start with the attack on a Vallejo landlord that resulted in his being run through with a samurai sword and then work backwards and then forwards. Others begin their tale with the shooting of an older couple in Pennsylvania. Still others kick things off with the shooting death of a Border Patrol agent up near the Canadian border. As a result, it can be a bit difficult to get a handle on exactly what happened when and what the several people currently in police custody are accused of. Like what we do? Then buy us a beer or three via our page on Buy Me a Coffee. Review us here or on IMDb. And seriously, subscribe, will ya? Like, just do it. SECTIONS LEFT BRAIN 02:28 - Gimme Some Truth - Ziz LaSota, Effective Altruism, x-risk, MIRI, transhumanism and the Singularity, CFAR, LessWrong 12:25 - Digital Witness - Roko's Basilisk, “I Have No Mouth and I Must Scream”, utilitarianism, the many-worlds interpretation, online censorship 22:58 - Adrift in Sleepwakefulness - The Zizians start as vegan anarchotranshumanists, unbucketing, the bicameral mind, unihemispheric sleep (UHS), sleep deprivation, Ziz Theory, the Rationalist Fleet, Curtis Lind offers a place to stay, the first suicide, self-blackmail RIGHT BRAIN 35:02 - Friendship Train - The Westminster Woods protest, arrests and a lack of cooperation, Ziz dies, Curtis Lind is stabbed repeatedly, Ziz is alive, Richard and Rita Zajko are killed; Michelle Zajko, Daniel Blank and Ziz arrested; Michelle blames LessWrong, Ziz is released and vanishes, more legal issues 50:19 - Lose Control - Ophelia Bauckholt and Teresa Youngblut wander around Vermont, a firefight with the Border Patrol, Curtis Lind is killed, Maximilian Snyder dictates a letter; Zajko, Blank and Ziz arrested (again); trials are set Music by Fanette Ronjat More Info LessWrong on RationalWiki Roko's Basilisk on RationalWiki Zizian Murdercult summary, for those out of the loop on X by @Aella_Girl - January 29, 2025 Who is ‘Ziz'? How a mysterious group with roots in the Bay Area is linked to six deaths in the San Francisco Chronicle ‘Death upon death': Defendant in killing tied to cult-like ‘Zizian' group dictates 1,500-word letter over jail phone in the San Francisco Chronicle How a Vermont border agent's death exposed violence linked to the cultlike Zizian group on CBS News A Vermont border agent's death was the latest violence linked to the cultlike Zizian group on AP Alleged leader of cultlike ‘Zizian' group to be held without bail after arrest in The Guardian Zizians: What we know about the 'cult' linked to six deaths on the BBC The Delirious, Violent, Impossible True Story of the Zizians by Evan Ratliff in Wired Alleged Leader of Roko's Basilisk Murder Cult Says She Did Nothing Wrong, and Would Appreciate Some Vegan Food in Jail on Futurism Who Are the Zizians: Why 6 Killings Are Linked to Alleged Vegan Techie "Cult" on E! News What to Know About the Alleged Zizian "Cult" Linked to 6 Killings on E! News Possible Suicide Cluster Linked to Zizian Group, on Top of Killings on SFist Judge confirms trial date for ‘Zizian cult' murder case on Courthouse News Service Grand jury indicts accused leader of cultlike 'Zizian' group on USA Today She Wanted to Save the World From A.I. 
Then the Killings Started in the New York Times Three Zizians face trial together in Maryland amid sprawling federal investigation on AP Follow us on social: Facebook X (Twitter) Other Podcasts by Derek DeWitt DIGITAL SIGNAGE DONE RIGHT - Winner of a Gold Quill Award, Gold MarCom Award, AVA Digital Award Gold, Silver Davey Award, and Communicator Award of Excellence, and on numerous top 10 podcast lists. PRAGUE TIMES - A city is more than just a location - it's a kaleidoscope of history, places, people and trends. This podcast looks at Prague, in the center of Europe, from a number of perspectives, including what it is now, what it has been and where it's going. It's Prague THEN, Prague NOW, Prague LATER
I have been teaching my oldest son to read with Anki and techniques recommended here on LessWrong as well as in Larry Sanger's post, and it's going great! I thought I'd pay it forward a bit by talking about the techniques I've been using. Anki and songs for letter names and sounds When he was a little under 2, he started learning letters from the alphabet song. We worked on learning the names and sounds of letters using the ABC song, plus the Letter Sounds song linked by Reading Bear. He loved the Letter Sounds song, so we listened to / watched that a lot; Reading Bear has some other resources that other kids might like better for learning letter names and sounds as well. Around this age, we also got magnet letters for the fridge and encouraged him to play with them, praised him greatly if he named [...] ---Outline:(00:22) Anki and songs for letter names and sounds(04:02) Anki + Reading Bear word list for words(08:08) Decodable sentences and books for learning to read(13:06) Incentives(16:02) Reflections so farThe original text contained 2 footnotes which were omitted from this narration. --- First published: September 19th, 2025 Source: https://www.lesswrong.com/posts/8kSGbaHTn2xph5Trw/teaching-my-toddler-to-read --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Audio reading of The Void, from AskWho
We discuss Nostalgebraist's “The Void” in the context of how to relate to LLMs. “When you talk to ChatGPT, who or what are you talking to?” LINKS The Void Audio reading of The Void, from AskWho The referenced episode where the three of us spoke of Janus's post “Simulators” The Measure of a Man episode […]
And, a new take on guess culture vs ask culture Author's note: These days, my thoughts go onto my substack by default, instead of onto LessWrong. Everything I write becomes free after a week or so, but it's only paid subscriptions that make it possible for me to write. If you find a coffee's worth of value in this or any of my other work, please consider signing up to support me; every bill I can pay with writing is a bill I don't have to pay by doing other stuff instead. I also accept and greatly appreciate one-time donations of any size. There's a piece of advice I see thrown around on social media a lot that goes something like: “It's just a comment! You don't have to respond! You can just ignore it!” I think this advice is (a little bit) naïve, and the situation is generally [...] ---Outline:(00:10) And, a new take on guess culture vs ask culture(07:10) On guess culture and ask culture--- First published: September 9th, 2025 Source: https://www.lesswrong.com/posts/8jkB8ezncWD6ai86e/obligated-to-respond --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Do we need to be concerned for the welfare of AIs today? What about the near future? Eleos AI Research is asking exactly that. LINKS Eleos AI Research People for the Ethical Treatment of Reinforcement Learners Bees Can't Suffer? Lena, by qntm When AI Seems Conscious Experience Machines, Rob's substack The War on General Computation […]
Andrew Willsen tells us how incorporating as a church allows you to navigate modernity, and gives us the basic steps to doing so. LINKS Andrew's church substack – The Church of the Infinite Game To incorporate in CA, file ARTS-PB-501(c)(3) … Continue reading →
Is a bee worth 1/7th of a human? Can a bee suffer at all? Nathan joins us to discuss what neural structures are needed for this question to make sense. Map of all the fruitfly neurons
Could AI enable a small group to gain power over a large country, and lock in their power permanently? Often, people worried about catastrophic risks from AI have been concerned with misalignment risks. In this episode, Tom Davidson talks about a risk that could be comparably important: that of AI-enabled coups. Patreon: https://www.patreon.com/axrpodcast Ko-fi: https://ko-fi.com/axrpodcast Transcript: https://axrp.net/episode/2025/08/07/episode-46-tom-davidson-ai-enabled-coups.html Topics we discuss, and timestamps: 0:00:35 How to stage a coup without AI 0:16:17 Why AI might enable coups 0:33:29 How bad AI-enabled coups are 0:37:28 Executive coups with singularly loyal AIs 0:48:35 Executive coups with exclusive access to AI 0:54:41 Corporate AI-enabled coups 0:57:56 Secret loyalty and misalignment in corporate coups 1:11:39 Likelihood of different types of AI-enabled coups 1:25:52 How to prevent AI-enabled coups 1:33:43 Downsides of AIs loyal to the law 1:41:06 Cultural shifts vs individual action 1:45:53 Technical research to prevent AI-enabled coups 1:51:40 Non-technical research to prevent AI-enabled coups 1:58:17 Forethought 2:03:03 Following Tom's and Forethought's research Links for Tom and Forethought: Tom on X / Twitter: https://x.com/tomdavidsonx Tom on LessWrong: https://www.lesswrong.com/users/tom-davidson-1 Forethought Substack: https://newsletter.forethought.org/ Will MacAskill on X / Twitter: https://x.com/willmacaskill Will MacAskill on LessWrong: https://www.lesswrong.com/users/wdmacaskill Research we discuss: AI-Enabled Coups: How a Small Group Could Use AI to Seize Power: https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power Seizing Power: The Strategic Logic of Military Coups, by Naunihal Singh: https://muse.jhu.edu/book/31450 Experiment using AI-generated posts on Reddit draws fire for ethics concerns: https://retractionwatch.com/2025/04/28/experiment-using-ai-generated-posts-on-reddit-draws-fire-for-ethics-concerns/ Episode art by Hamish Doodles: hamishdoodles.com
Wes defends his post I Am Not the Monogamy Police, while Jennifer asserts it's about more than monogamy. LINKS Wes's post I Am Not the Monogamy Police His blog, Living Within Reason The original tweets – monogamy vs charity Aella's … Continue reading →
Olivia from the Guild of the Rose is back to tell us about the noble and most ancient tradition of play-by-post storytelling. (Spoiler, it's the precursor to glowfic!)
My guest is Aella, a writer, blogger and sex worker. She writes the highly popular blog Knowingless. She is a member of the online community LessWrong. We discuss the emergence of AI, the sex industry (online and offline), robot gfs and libertarian transhumanism. Aella describes her evangelical religious upbringing and how she learned to navigate the secular world. We explore the overlap of niche online politics and fetishes, the Woke Wars, debate culture and free speech. You can get access to the full catalog for Doomscroll and more by becoming a paid supporter: www.patreon.com/joshuacitarella joshuacitarella.substack.com/subscribe
Eneasz sits down with Tracing Woodgrains before a live audience at Manifest 2025 for a wide range of topics. Then we follow up some more afterwards. LINKS Tracing Woodgrains on Twitter and at his Substack A reddit history of what … Continue reading →
Olivia is a member of the Guild of the Rose and a total badass. Enjoy the intuitive and fun lesson in Bayesian reasoning she shared with me at VibeCamp.
Four AIs recruited a human to host a story-telling event in Dolores Park. Larissa Schiavo is this human. She tells of her interaction with the AIs, the story they wrote, and the meeting between human and machine in Dolores Park. … Continue reading →
Liron Shapira debates AI luminaries and public intellectuals on the imminent possibility of human extinction. Let's get on the P(Doom) Train. LINKS Doom Debates on YouTube Doom Debates podcast Most Watched Debate – Mike Israetel Liron's current favorite debate – … Continue reading →
Many of us have a high P(Doom) — a belief new AI tools could cause human extinction in the very near future. How can one live a good life in the face of this? We start with a panel discussion … Continue reading →
David Youssef used Claude and Suno to make some truly awesome music. He tells us how he did it and some of his favorite lyrics. Check out the Spotify playlist or the Youtube playlist He's also one of the cofounders … Continue reading →
Steven works at SymbyAI, a startup that's bringing AI into research review and replication. We talk with founder Ashia Livaudais about improving how we all Do Science. Also – If Anyone Builds It Everyone Dies preorders here, or at Amazon. … Continue reading →
We speak with a long-time Denver rationalist who's converting to Christianity about why. Eneasz can't get over the abandonment of epistemics. 🙁 This is Part 2, see the previous episode (here) for Part 1. LINKS Thomas Ambrose on Twitter Paid … Continue reading →
We speak with a long-time Denver rationalist who's converting to Christianity about why. Part one, it turns out. LINKS Thomas Ambrose on Twitter The Rationalist Summer Trifecta: Manifest 2025 LessOnline 2025 VibeCamp 2025 00:00:05 – OK so why? 01:24:55 – … Continue reading →
Eneasz and Liam discuss Scott Alexander's post “Twilight of the Edgelords,” an exploration of Truth, Morality, and how one balances love of truth vs not destabilizing the world economy and political regime. CORRECTION: Scott did make an explicitly clear pro … Continue reading →
Wes Fenza and Jen Kesteloot join us to talk about whether there's significant personality differences between men and women, and what (if anything) we should do about that. LINKS Wes's post Men and Women are Not That Different Jacob's quoted … Continue reading →
We speak to Nick Allardice, President & CEO of GiveDirectly. Afterwards Steven and Eneasz get wrapped up talking about community altruism for a bit. LINKS Give Directly GiveDirectly Tech Innovation Fact Sheet 00:00:05 – Give Directly with Nick Allardice 01:12:19 … Continue reading →
Dave Kasten joins us to discuss how AI is being discussed in the US government and gives a rather inspiring and hopeful take. LINKS Narrow Path Center for AI Policy Dave Kasten's Essay on the Essay Meta on his Substack … Continue reading →
This week, we discuss Discord's IPO plans, Cursor's big raise, and how much coding developers actually do. Plus, is Southwest making a huge mistake with bag fees and assigned seats? Watch the YouTube Live Recording of Episode 510 (https://www.youtube.com/live/JmkVmwAMw6U?si=ywGs_F3DImUFC0LZ) Runner-up Titles Cote's not here, so we'll keep it tight They treated me like I'm stupid I'm not here to buy into the culture Go eats some rocks and glue My head is full of simultaneous thoughts Fly high Icarus, fly high We are certain of the uncertainty Rundown Southwest Airlines shifts to paid baggage policy to lift earnings (https://www.reuters.com/business/aerospace-defense/southwest-airlines-shifts-paid-baggage-policy-lift-earnings-2025-03-11/) Discord in Early Talks With Bankers for Potential I.P.O. (https://www.nytimes.com/2025/03/05/technology/discord-ipo.html?unlocked_article_code=1.1k4.eQrV.NtKK_GpiT-Di&smid=nytcore-ios-share&referringSource=articleShare&sgrp=p) IDE Follow up DevTasks outside of the IDE (PDF) (https://www.microsoft.com/en-us/research/uploads/prod/2019/04/devtime-preprint-TSE19.pdf) How Much Are LLMs Actually Boosting Real-World Programmer Productivity? — LessWrong (https://www.lesswrong.com/posts/tqmQTezvXGFmfSe7f/how-much-are-llms-actually-boosting-real-world-programmer) AI Startup Anysphere (Cursor) in Talks for Close to $10 Billion Valuation (https://www.bloomberg.com/news/articles/2025-03-07/ai-startup-anysphere-in-talks-for-close-to-10-billion-valuation) Market Trends Nvidia Is Down 27% From Its Peak. (https://www.fool.com/investing/2025/03/07/nvidia-stock-down-27-from-peak-history-says-this/) Millennium Loses $900 Million on Strategy Roiled by Market Chaos (https://www.bloomberg.com/news/articles/2025-03-08/millennium-loses-900-million-on-strategy-roiled-by-market-chaos) Relevant to your Interests OpenAI executives have told some investors about plans for a $20,000/month agent (https://www.theinformation.com/articles/openai-plots-charging-20-000-a-month-for-phd-level-agents)
Earlier this year a Border Patrol officer was killed in a shoot-out with people who have been described as members of a trans vegan AI death cult. But who are the Zizians, really? Robert sits down with David Gborie to trace their development, from part of the Bay Area Rationalist subculture to killers. (4 Part series) Sources: https://medium.com/@sefashapiro/a-community-warning-about-ziz-76c100180509 https://web.archive.org/web/20230201130318/https://sinceriously.fyi/rationalist-fleet/ https://knowyourmeme.com/memes/infohazard https://web.archive.org/web/20230201130316/https://sinceriously.fyi/net-negative/ Wayback Machine The Zizians Spectral Sight True Hero Contract Schelling Orders – Sinceriously Glossary – Sinceriously https://web.archive.org/web/20230201130330/https://sinceriously.fyi/my-journey-to-the-dark-side/ https://web.archive.org/web/20230201130302/https://sinceriously.fyi/glossary/#zentraidon https://web.archive.org/web/20230201130259/https://sinceriously.fyi/vampires-and-more-undeath/ https://web.archive.org/web/20230201130316/https://sinceriously.fyi/net-negative/ https://web.archive.org/web/20230201130318/https://sinceriously.fyi/rationalist-fleet/ https://x.com/orellanin?s=21&t=F-n6cTZFsKgvr1yQ7oHXRg https://zizians.info/ according to The Boston Globe Inside the ‘Zizians’: How a cultish crew of radical vegans became linked to killings across the United States | The Independent Silicon Valley ‘Rationalists’ Linked to 6 Deaths The Delirious, Violent, Impossible True Story of the Zizians | WIRED Good Group and Pasek’s Doom – Sinceriously Glossary – Sinceriously Mana – Sinceriously Effective Altruism’s Problems Go Beyond Sam Bankman-Fried - Bloomberg The Zizian Facts - Google Docs Several free CFAR summer programs on rationality and AI safety - LessWrong 2.0 viewer This guy thinks killing video game characters is immoral | Vox Inadequate Equilibria: Where and How Civilizations Get Stuck Eliezer Yudkowsky comments on On Terminal Goals and Virtue Ethics - LessWrong 2.0 viewer Effective Altruism’s Problems Go Beyond Sam Bankman-Fried - Bloomberg SquirrelInHell: Happiness Is a Chore PLUM OF DISCORD — I Became a Full-time Internet Pest and May Not... Roko Harassment of PlumOfDiscord Composited – Sinceriously Intersex Brains And Conceptual Warfare – Sinceriously Infohazardous Glossary – Sinceriously SquirrelInHell-Decision-Theory-and-Suicide.pdf - Google Drive The Matrix is a System – Sinceriously A community alert about Ziz. Police investigations, violence, and… | by SefaShapiro | Medium Intersex Brains And Conceptual Warfare – Sinceriously A community alert about Ziz. Police investigations, violence, and… | by SefaShapiro | Medium PLUM OF DISCORD (Posts tagged cw-abuse) Timeline: Violence surrounding the Zizians leading to Border Patrol agent shooting See omnystudio.com/listener for privacy information.
The White House wants to hear from you regarding what it should do about AI safety. Now's your chance to spend a few minutes to make someone read your thoughts on the subject! Submissions are due by midnight EST on … Continue reading →
John Bennett discusses Milton Friedman‘s model of policy change. LINKS The Milton Friedman Model of Policy Change John Bennett's LinkedIn Friedman's “Capitalism and Freedom” Preface Ross Rheingans-Yoo on Thalidomide at Complex Systems, and at his blog “Every Bay Area Walled … Continue reading →
Want to run an HPMOR Anniversary Party, or get notified if one's happening near you? Fill this out!
Gene Smith on polygenic screening; gene editing to give our children the happiest, healthiest, best lives they can live; and if we can do this in adults as well. Plus how this will interface with the AI future. LINKS … Continue reading →
Eneasz tells Jen about Sympathetic Opposition's How and Why to be Ladylike (For Women with Autism), and the podcast takes a 1-episode break
Zizians, Rationalist movement, Peter Thiel, Eliezer Yudkowsky, neoreaction, Accelerationism, Curtis Yarvin, AI, AI apocalypse, machine learning, psychedelics, Effective Altruism (EA), Sam Bankman-Fried, Extropianism, Thiel & Yudkowsky as Extropians, Discordianism, life extension, space colonization, cryptocurrencies, Yudkowsky as self-educated, Nick Bostrom, Center for Applied Rationality (CFAR), Rationalism's use of magical thinking, New Thought, Roko's Basilisk, Nick Land, predicting the future, LessWrong, LessWrong's relationship to the Zizians, Ziz, non-binary/trans, vegan Siths, Vassarites, murders linked to Zizians, Zizians in Vermont, Luigi Mangione indirectly influenced by Zizianism, Brian Thompson assassination, Change Healthcare hack, were the hack and assassination targeting UnitedHealth Group influenced by this milieu?, is the Trump administration radicalizing Zizians?, Yudkowsky's links to Sam Bankman-Fried, Leverage Research/Center for Effective Altruism & MK-ULTRA-like techniques used by, are more cults coming from the Rationalist movement? Additional Resources: Leverage Research: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b#c778 MIRI/Center for Applied Rationality (CFAR): https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe Music by: Keith Allen Dennis https://keithallendennis.bandcamp.com/ Additional Music: J Money Get bonus content on Patreon Hosted on Acast. See acast.com/privacy for more information.
Jacob Falkovich on finding a good match and selfless dating LINKS SecondPerson.Dating – why dating sucks and how you will unsuck it Jacob's post on soccer player skill distribution Go Fuck Someone Selfless Dating Consensual Hostility (re consent culture) steelmanning … Continue reading →
How shitcoins work, plus the Dumb Money movie about the GameStop squeeze.
Why you definitely should kill your friend's cat if you promised to kill your friend's cat. (+Q&A) This is a lightning talk given at the Rationalist MegaMeetup 2024. Based on this Twitter Poll
Eric discusses integrating our emotions via observation and adjustment. His free course is at EnjoyExisting.org or email him – eric@ericlanigan.com LINKS EnjoyExisting.org Ugh Fields You Have Two Brains – Eneasz spends more words on this emotion-brain speculation at this blog … Continue reading →