Audio version of Slate Star Codex. It's just me reading Scott Alexander's Blog Posts.
slatestarpodcast@gmail.com
I never write reviews, but I am so incredibly thankful that The Slate Star Codex Podcast exists. It has become a source of joy for me to be able to listen to the thought-provoking content of SSC while I'm on the road. Jeremiah, the narrator of the earlier episodes, has a lovely way of reading them that adds an extra layer of enjoyment to the experience. His dedication and ability to pull off this project are truly impressive, and I can't thank him enough for bringing Scott Alexander's brilliant blog posts to life.
However, as much as I appreciate Jeremiah's narration, there have been some changes in recent episodes. Solenoid Entity has taken over the task of recording new episodes, presumably out of necessity. While his delivery may not be as clean or polished as Jeremiah's, and there are moments where the audio quality suffers from reverberation, I am still grateful that he has continued this important work. Thank you, Solenoid Entity!
One of the best aspects of The Slate Star Codex Podcast is its wealth of excellent information presented with great transparency. Scott Alexander tackles difficult and fascinating topics with honesty, deliberation, and consideration. This podcast consistently provides thought-provoking content that keeps me engaged and wanting more.
Another positive aspect is that more people now have access to Slate Star Codex through this podcast format. This allows for a wider audience to engage with Alexander's brilliant insights and ideas, which I believe is a good thing overall.
On the downside, some listeners may find that the posts can be quite long when read out loud. However, this is easily remedied by listening at a faster speed without losing any comprehension or enjoyment.
In conclusion, The Slate Star Codex Podcast is an excellent companion for those who want to dive into a wide range of subjects and remain in a perpetual state of contented awe with the world and our desire to understand it better. Whether you have the time to read Alexander's blog or not, this podcast is a valuable resource. The narration, whether by Jeremiah or Solenoid Entity, is high-quality and professional. Despite some minor drawbacks in recent episodes, I have no complaints and only praise for this podcast. Thank you to the person who brings Scott Alexander's blog posts to audio, and thank you for enabling me to keep up with Slate Star Codex while I go on my walks.
Finalist #9 in the Review Contest [This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I'll be posting about one of these a week for several months. When you've read them all, I'll ask you to vote for a favorite, so remember which ones you liked] Ollantay is a three-act play written in Quechua, an indigenous language of the South American Andes. It was first performed in Peru around 1775. Since the mid-1800s it's been performed more often, and nowadays it's pretty easy to find some company in Peru doing it. If nothing else, it's popular in Peruvian high schools as a way to get students to connect with Quechua history. It's not a particularly long play; a full performance of Ollantay takes around an hour.[1] Also, nobody knows where Ollantay was written, when it was written, or who wrote it. And its first documented performance led directly to upwards of a hundred thousand deaths. Macbeth has killed at most fifty people,[2] and yet it routinely tops listicles of “deadliest plays”. I'm here to propose that Ollantay take its place. https://www.astralcodexten.com/p/your-review-ollantay
[original post here] #1: Isn't it possible that embryos are alive, or have personhood, or are moral patients? Most IVF involves getting many embryos, then throwing out the ones that the couple doesn't need to implant. If destroying embryos were wrong, then IVF would be unethical - and embryo selection, which might encourage more people to do IVF, or to maximize the number of embryos they get from IVF, would be extra unethical. I think a default position would be that if you believe humans are more valuable than cows, and cows more valuable than bugs - presumably because humans are more conscious/intelligent/complex/thoughtful/have more hopes and dreams/experience more emotions - then in that case embryos, which have less of a brain and nervous system even than bugs, should be less valuable still. One reason to abandon this default position would be if you believe in souls or some other nonphysical basis for personhood. Then maybe the soul would enter the embryo at conception. I think even here, it's hard to figure out exactly what you're saying - the soul clearly isn't doing very much, in the sense of experiencing things, while it's in the embryo. But it seems like God is probably pretty attached to souls, and maybe you don't want to mess with them while He's watching. In any case, all I can say is that this isn't my metaphysics. But most people in the comments took a different tack, arguing that we should give embryos special status (compared to cows and bugs) because they had the potential to grow into a person. https://www.astralcodexten.com/p/my-responses-to-three-concerns-from
Finalist #8 in the Review Contest
I. The Men Are Not Alright
Sometimes I'm convinced there's a note taped to my back that says, “PLEASE SPILL YOUR SOUL UPON THIS WOMAN.” I am not a therapist, nor in any way certified to deal with emotional distress, yet my presence seems to cause people to regurgitate their traumas. This quirk of mine becomes especially obvious when dating. Many of my dates turn into pseudo-therapy sessions, with men sharing emotional traumas they've kept bottled up for years. One moment I'm learning about his cat named Daisy, and then half a latte later, I'm hearing a detailed account of his third suicide attempt, complete with a critique of the food in the psychiatric ward. This repeated pattern in my dating life has taught me three things:
1. I am terrible at small talk.
2. Most men are not accustomed to genuine questions about their well-being, and will often respond with a desperate upwelling of emotion.
3. The men are not alright.
This is a review of dating men in the Bay Area. But more than that, it's an attempt to explain those unofficial therapy sessions to people who never get to hear them. It's a review of the various forms of neglect and abuse society inflicts upon men, and the inevitable consequences to their happiness and romantic partnerships. https://www.astralcodexten.com/p/your-review-dating-men-in-the-bay
A guest post by David Schneider-Joseph
The “amyloid hypothesis” says that Alzheimer's is caused by accumulation of the peptide amyloid-β. It's the leading model in academia, but a favorite target for science journalists, contrarian bloggers, and neuroscience public intellectuals, who point out problems like:
- Some of the research establishing amyloid's role turned out to be fraudulent.
- The level of amyloid in the brain doesn't correlate very well with the level of cognitive impairment across Alzheimer's patients.
- Several strains of mice that were genetically programmed to have extra amyloid did eventually develop cognitive impairments. But it took much higher amyloid levels than humans have, and on further investigation the impairments didn't really look like Alzheimer's.
- Some infectious agents, like the gingivitis bacterium and the herpesviruses, seem to play a role in at least some Alzheimer's cases . . . and amyloid is one of the body's responses to injury or infection, so it might be a harmless byproduct of these infections or whatever else the real disease is.
- Anti-amyloid drugs (like Aduhelm) don't reverse the disease, and only slow progression a relatively small amount.
Opponents call the amyloid hypothesis zombie science, propped up only by pharmaceutical companies hoping to sell off a few more anti-amyloid me-too drugs before it collapses. Meanwhile, mainstream scientists . . . continue to believe it without really offering any public defense. Scott was so surprised by the size of the gap between official and unofficial opinion that he asked if someone from the orthodox camp would speak out in its favor. I am David Schneider-Joseph, an engineer formerly with SpaceX and Google, now working in AI safety. Alzheimer's isn't my field, but I got very interested in it, spent six months studying the literature, and came away believing the amyloid hypothesis was basically completely solid. I thought I'd share that understanding with current skeptics.
https://www.astralcodexten.com/p/in-defense-of-the-amyloid-hypothesis
[Original post: Should Strong Gods Bet On GDP?] 1: Comments About The Theory 2: Comments About Specific Communities 3: Other Comments Comments About The Theory Darwin writes: I think you may (*may*, I'm not sure) be vastly underestimating how many people are in some form of nontraditional tight-knit community. Notice that many of the communities you list are things you've directly personally encountered through your online interests or social circle. Most people have never heard of libertarian homesteaders or rationalist dating sites, perhaps you have also never heard of the things most other people belong to. For my part, I have been part of a foam combat ('boffer') organization since college. You may want to say 'that's not a community, that's just a hobby', but the people in this sport form a strong community with tight bonds outside the game itself. Not only do I go to practices twice a week, I have 2 D&D games and 1 board game night every week with mostly members of the community, members of the community are my friends that I go out to movies and dinners with, play video games with voice chat on Discord with, talk to online in Discord servers and web forums and group chats, go to parties with and gossip about with other community members. Aside from attending over a dozen weddings of community members (mostly to other community members), I've served as best man for 2 members and wedding officiant for 2 other members. The sport itself has houses, guilds, and fighting units, all with their own ethos, credos, goals, activities, and hierarchies; it has knighthoods and squireships, it has awards for arts and crafts and community service. The sport has regular camping events that end up looking like temporary compounds of hundreds to thousand+ members, lasting from a weekend to a week. 
We may not have a singular God or Invisible Hand we all worship, but we have strong community norms towards things like inclusion, creating positive experiences, some modernized gender-neutral version of chivalry, creating safe spaces, etc. If you didn't know me very very well, you might know that 'oh yeah, he does some kind of sword fighting thing on the weekends I think?', and not know there's a large and strong community there. I wonder how many other things are like this - I think 'oh yeah, they play softball on the weekends, oh yeah, they belong to a knitting circle, oh yeah, they go to a lot of concerts, oh yeah, they volunteer at some kind of community center', and have no idea that there's a strong close-knit community surrounding those things that remains largely invisible to outsiders. https://www.astralcodexten.com/p/highlights-from-the-comments-on-liberalism
My dad only actually enjoys about ten foods, nine of them beige. His bread? White. His pizza? Cheese. His meat? Turkey breast. And his side dish? Mashed potatoes. As a child I hated mashed potatoes, despite his evangelization of them. I too was a picky eater growing up, but I would occasionally attempt to see what he saw in his beloved spuds. Whenever I tried a bite, the texture disgusted me: a gritty gruel of salty flakes coated with the oleic pall of margarine. The flavor reminded me of stale Pringles. I checked back once every couple years, but was repulsed by them every time. I lobbied my parents for pasta or frozen tater tots or any other side I actually liked. Family dinners were often dichotomous, the same protein supplemented by two different carbs. “You are not my son,” my father would joke as he continued to put away his potato slop. “Maybe you're not my father,” I'd shoot back when he shunned the rest of the family's rice pilaf. Our starch preferences seemed irreconcilable. As I entered my teen years, my palate expanded. After I'd tried and enjoyed brussels sprouts and sushi and escargot, my hatred of one of the most basic and inoffensive of all foods seemed silly. One day at a nice restaurant, I decided to give mashed potatoes one more try. Upon taking my first bite, I realized three things: 1) Mashed potatoes are good. 2) Whatever my dad had been eating at home was not mashed potatoes. 3) My world is built on lies. https://www.astralcodexten.com/p/your-review-my-fathers-instant-mashed
Slightly contra Fukuyama on liberal communities Francis Fukuyama is on Substack; last month he wrote Liberalism Needs Community. As always, read the whole thing and don't trust my summary, but the key point is: According to R. R. Reno, editor of the magazine First Things, the liberal project of the past three generations has sought to weaken the “strong Gods” of populism, nationalism, and religion that were held to be the drivers of the bloody conflicts of the early 20th century. Those gods are now returning, and are present in the politics of both the progressive left and far right—particularly the right, which is characterized today by demands for strong national identities or religious foundations for national communities. However, there is a cogent liberal response to the charge that liberalism undermines community. The problem is that, just as in the 1930s, that response has not been adequately articulated by the defenders of liberalism. Liberalism is not intrinsically opposed to community; indeed, there is a version of liberalism that encourages the flourishing of strong community and human virtue. That community emerges through the development of a strong and well-organized civil society, where individuals freely choose to bond with other like-minded individuals to seek common ends. People are free to follow “strong Gods”; the only caveat is that there is no single strong god that binds the entire society together. In other words - yes, part of the good life is participation in a tight-knit community with strong values. Liberalism's shared values are comparatively weak, and its knitting comparatively loose. But that's no argument against the liberal project. Its goal isn't to become this kind of community itself, but to be the platform where communities like this can grow up. So in a liberal democracy, Christians can have their church, Jews their synagogue, Communists their commune, and so on.
Everyone gets the tight-knit community they want - which beats illiberalism, where (at most) one group gets the community they want and everyone else gets persecuted. On a theoretical level, this is a great answer. On a practical level - is it really working? Are we really a nation dotted with tight-knit communities of strong values? The average person has a church they don't attend and a political philosophy that mainly cashes out in Twitter dunks. Otherwise they just consume whatever slop the current year's version of capitalism chooses to throw at them. It's worth surveying the exceptions that prove the rule: https://www.astralcodexten.com/p/should-strong-gods-bet-on-gdp
Finalist #6 in the Review Contest
When the prefect of Alexandria's daughter converted to Christianity, nothing in particular happened - it wasn't as though the laws outlawing the cult would be enforced against her. She was smart, she was pretty (beautiful, even) and she had connections. So long as she kept quiet, Catherine could have a comfortable life. She didn't keep quiet. https://www.astralcodexten.com/p/your-review-joan-of-arc
[see footnote 4 for conflicts of interest] In 2021, Genomic Prediction announced the first polygenically selected baby. When a couple uses IVF, they may get as many as ten embryos. If they only want one child, which one do they implant? In the early days, doctors would just eyeball them and choose whichever looked healthiest. Later, they started testing for some of the most severe and easiest-to-detect genetic disorders like Down Syndrome and cystic fibrosis.[1] The final step was polygenic selection - genotyping each embryo and implanting the one with the best genes overall. Best in what sense? Genomic Prediction claimed the ability to forecast health outcomes from diabetes to schizophrenia. For example, although the average person has a 30% chance of getting type II diabetes, if you genetically test five embryos and select the one with the lowest predicted risk, they'll only have a 20% chance.[2] Since you're taking the healthiest of many embryos, you should expect a child conceived via this method to be significantly healthier than one born naturally. Polygenic selection straddles the line between disease prevention and human enhancement. In 2023, Orchid Health entered the field. Unlike Genomic Prediction, which tested only the most important genetic variants, Orchid offers whole genome sequencing, which can detect the de novo[3] mutations involved in autism, developmental disorders, and certain other genetic diseases. Critics accused GP and Orchid of offering “designer babies”, but this was only true in the weakest sense - customers couldn't “design” a baby for anything other than slightly lower risk of genetic disease. These companies refused to offer selection on “traits” - the industry term for the really controversial stuff like height, IQ, or eye color. Still, these were trivial extensions of their technology, and everybody knew it was just a matter of time before someone took the plunge. Last month, a startup called Nucleus took the plunge.
https://www.astralcodexten.com/p/suddenly-trait-based-embryo-selection
I promised some people longer responses: Thomas Cotter asks why people think “consistency” is an important moral value. After all, he says, the Nazis and Soviets were “consistent” with their evil beliefs. I'm not so sure of his examples - the Soviets massacred workers striking for better conditions, and the Nazis were so bad at race science that they banned IQ tests after Jews outscored Aryans - but I'm sure if he looked harder he could find some evil person who was superficially consistent with themselves. Hen Mazzig on Twitter is suspicious that lots of people oppose the massacres in Gaza without having objected equally strenuously to various other things. Again, he's bad at examples - most of the things he names are less bad than the massacres in Gaza - but I'm sure if he looked harder he could find some thing which was worse than Gaza and which not quite as many people had protested. Therefore, people who object to the massacres in Gaza must be motivated by anti-Semitism. An r/TrueUnpopularOpinion poster argues that No One Actually Cares About Gaza; Your Anger Is Performative. They say that (almost) nobody can actually sustain strong emotions about the deaths of some hard-to-pin-down number of people they don't know, and so probably people who claim to care are virtue-signaling or luxury-believing or one of those things. Since 2/3 of these are about Gaza, we'll start there. And since there's so much virtue-signaling and luxury-believing going around these days, I assure you that what I am about to share is my absolute most honest and deepest opinion, the one I hold in my heart of hearts. https://www.astralcodexten.com/p/my-heart-of-hearts
Jul 26, 2025
Finalist #5 in the Review Contest
Introduction
The Astral Codex Ten (ACX) Commentariat is defined as the 24,485 individuals other than Scott who have contributed to the corpus of work of Scott's blog posts, chiefly by leaving comments at the bottom of those posts. It is well understood (by the Commentariat themselves) that they are the best comments section anywhere on the internet, and have been for some time. This review takes it as a given that the ACX Commentariat outclasses all of its pale imitators across the web, so I won't compare the ACX Commentariat to e.g. reddit. The real question is whether our glory days are behind us – specifically whether the ACX Commentariat of today has lost its edge compared to the SSC Commentariat of pre-2021. A couple of years ago Scott asked, Why Do I Suck? This was a largely tongue-in-cheek springboard to discuss a substantive criticism he regularly received - that his earlier writing was better than his writing now. How far back do we need to go before his writing was ‘good'? Accounts seemed to differ; Scott said that the feedback he got was of two sorts:
1. “I loved your articles from about 2013 - 2016 so much! Why don't you write articles like that any more?”, which dates the decline to 2016
2. “Do you feel like you've shifted to less ambitious forms of writing with the new Substack?”, which dates the decline to 2021
Quite a few people responded in the comments that Scott's writing hadn't changed, but it was the experience of being a commenter which had worsened.
For example, David Friedman, a prolific commenter on the blog in the SSC era, writes: A lot of what I liked about SSC was the commenting community, and I find the comments here less interesting than they were on SSC, fewer interesting arguments, which is probably why I spend more time on [an alternative forum] than on ACX. Similarly, kfix, who seems to be a long-time lurker (from as early as 2016) and has become more active in the ACX era, writes: I would definitely agree that the commenting community here is 'worse' than at SSC along the lines you describe, along with the also unwelcome hurt feelings post whenever Scott makes an offhand joke about a political/cultural topic. And of course, this position wasn't unanimous. Verbamundi Consulting is a true lurker who has only ever made one post on the blog – this one: Ok, I've been lurking for a while, but I have to say: I don't think you suck… You have a good variety of topics, your commenting community remains excellent, and you're one of the few bloggers I continue to follow. The ACX Commentariat is somewhat unique in that it styles itself as a major reason to come and read Scott's writing – Scott offers up some insights on an issue, and then the comments section engages in unusually open and unusually respectful discussion of the theme, and the total becomes greater than the sum of the parts. Therefore, if the Commentariat has declined in quality it may disproportionately affect people's experience of Scott's posts. The joint value of each Scott-plus-Commentariat offering declines if the Commentariat are not pulling their weight, even if Scott himself remains just as good as ever. In Why Do I Suck? Scott suggests that there is weak to no evidence of a decline in his writing quality, so I propose this review as something of a companion piece; is the (alleged) problem with the blog, in fact, staring at us in the mirror?
My personal view aligns with Verbamundi Consulting and many other commenters - I've enjoyed participating in both the SSC and ACX comments, and I haven't noticed any decline in Commentariat quality. So, I was extremely surprised to find the data totally contradicted my anecdotal experience, and indicated a very clear dropoff in a number of markers of quality at almost exactly the points Scott mentioned in Why Do I Suck? – one in mid-2016 and one in early 2021 during the switch from SSC to ACX. https://readscottalexander.com/posts/acx-your-review-the-astral-codex-ten
We're running another ACX Grants round! If you already know what this is and just want to apply for a grant, use the form here (should take 15 - 30 minutes), deadline August 15. If you already know what this is and want to help as a funder, VC, partner charity, evaluator, or friendly professional, click the link for the relevant form, same deadline. Otherwise see below for more information. What is ACX Grants? ACX Grants is a microgrants program that helps fund ACX readers' charitable or scientific projects. Click the links to see the 2022 and 2024 cohorts. The program is conducted in partnership with Manifund, a charity spinoff of Manifold Markets, who handle the administrative/infrastructure side of things. How much money is involved? I plan to contribute $200K. I expect (but cannot guarantee) an additional $800K from other donors, for a total of about $1 million. Most grants will probably be between $5,000 and $50,000, with a rare few up to $100,000. Depending on how much external donor interest there is, we will probably give between 10 and 50 grants. What's the catch? There's no catch, but this year we plan to experiment with replacing some grants with SAFEs, and others with convertible grants. That means that if you're a startup, we (ACX Grants as a nonprofit institution, not me personally) get some claim to future equity if you succeed. If you're not a startup, you'll sign an agreement saying that if your project ever becomes a startup, then we'll get the equity claim. We're still working on the exact details of this agreement, but we intend to have pretty standard terms and err in the favorable-to-you direction; obviously we'll show you the final agreement before you sign anything. We're doing this because some of our previous grantees became valuable companies, and it seems foolish to leave that money on the table when we could be capturing it and reinvesting it into future grants rounds. Please don't let this affect your decision to apply.
Our top priority remains charity, and we'll continue to select grantees based on their philanthropic value and not on their likelihood of making us money. If you're not a startup and don't plan to become one, none of this should affect you. And if you have a good reason not to want to sign these agreements - including “I'm not savvy enough to know what this means and it makes me nervous” - then we're happy to opt you out of them. What's the timeline? We'd like to have grants awarded by October 1 and money in your hands by November 1. This is a goal, not a promise. What will the application process be like? You fill out a form that should take 15 - 30 minutes. If we have questions, an evaluator might email or call you, in a way that hopefully won't take more than another 15 - 30 minutes of your time to answer. If you win a grant, Manifund will send you the money, probably by bank wire. Every few years, we might ask you to fill out another 15 - 30 minute form letting us know how your project is doing. What kind of projects might you fund? There are already lots of good charities that help people directly at scale, for example Against Malaria Foundation (which distributes malaria-preventing bed nets) and GiveDirectly (which gives money directly to very poor people in Africa). These are hard to beat. We're most interested in charities that pursue novel ways to change complex systems, either through technological breakthroughs, new social institutions, or targeted political change. Among the projects we've funded in the past were:
- Development of oxfendazole, a drug for treating parasitic worms in developing countries.
- A platform that lets people create prediction markets on topics of their choice.
- A trip to Nigeria for college students researching lead poisoning prevention.
- A group of lawyers who sue factory farms under animal cruelty laws.
- Development of software that helps the FDA run better drug trials.
- A startup building anti-mosquito drones to fight tropical disease.
- A guide for would-be parents on which IVF clinics have the highest rate of successful implantation.
- A university lab working on artificial kidneys.
You can read the full list here and here, and the most recent updates from each project here. Is there anything good about winning an ACX Grant other than getting money? You'll get my support, which is mostly useful in getting me to blog about your project. For example, I can put out updates or requests for help on Open Threads. I can also try to help connect you to people I know. Some people who won ACX Grants last year were able to leverage the attention to attract larger grantmakers or VCs. You can try to pitch me guest posts about your project. This could be a description of what you're doing and why, or just a narrative about your experience and what you learned from it. Warning that I'm terrible to pitch guest posts to, I almost never go through with this, and I'm very nitpicky when I do. Still, you can try. We're working on gathering a network of friendly professionals who agree to provide pro bono or heavily discounted support (eg legal, accounting, business advice, cloud compute) to ACX grantees. We've only just begun this process and it might not actually materialize. There are occasional virtual and physical meetups of ACX grantees; these don't always result in Important Professional Connections, but are pretty interesting. What if I want those nonfinancial benefits for my project, but don't need money? Apply for a grant of $1. But we're pretty nervous about giving very-low-cost grants because it's too easy to accept all of them and dilute our signaling value; for this reason, it might be harder to get a grant of $1 than a grant of $5,000, and we expect these to make up only 0 - 10% of our cohort. You might be better off coming up with some expansion of your project that takes $5,000 and applying for that.
What are the tax implications of an ACX Grant? Consult your accountant, especially if you live outside the US. If you live inside the US, we think it's ordinary taxable income. If you're an individual, you'll have to pay taxes on it at your usual tax rate. If you're a 501(c), you'll get your normal level of tax exemption. I want to fund you, how can I help? For bureaucratic reasons, we're currently looking for donations mostly in the $5,000+ range. If that's you, fill out the Funder Application Form. If we've already talked about this over email, you don't need to fill out the form, but we encourage you to do so anyway so we know more about your interests and needs. What's the story behind why you have $200K to spend on grants every year, but are still asking for more funding? Some generous readers sent me crypto during the crypto boom, or advised me on buying crypto, or asked to purchase NFTs of my post for crypto. Some of the crypto went up. Then I reinvested it into AI stocks, and those went up too. I think of this as unearned money and want to give some of it back to the community, hence this grants program. I have a lot of it but not an unlimited amount. At the current rate, I can probably afford another ~5 ACX Grants rounds. When it runs out, I'll just be a normal person with normal amounts of money (Substack is great, but not great enough for me to afford this level of donation consistently). My hope is that I can keep making these medium-sized donations, other people can add more to the pot, and we'll be able to drag this out at least five more rounds, after which point maybe we'll come up with another plan. I'm a VC, how can I help? Some of our applicants are potentially-profitable startups, and we decide they're a better match for VC funding than for our grants. If you're willing to look these over and get in touch with any that seem interesting, fill out the VC Application Form.
It will ask for more information on what kind of opportunities you're interested in funding.

I'm a philanthropist or work at a philanthropic foundation; how can I help?

Some of our applicants are good projects, but not a good match for us, and we want to shop them around to other philanthropists and charities who might have different strengths or be able to work with larger amounts of money. If that's you, please fill out the Partner Charity Application Form.

I'm good at evaluating grants, or an expert in some specific field; how can I help?

If you have experience as a grantmaker or VC, or you're an expert in some technical field, you might be able to help us evaluate proposals. Fill out the Evaluator Application Form. By default we expect you'll want us to send you one or two grants in your area of expertise, but if you want a challenge you can request more. If we've already talked about this over email, you don't need to fill out the form, but we encourage you to do so anyway so I know more about your interests and needs. We expect to get more volunteers than we need, and most people who fill in the evaluator form won't get contacted unless we need someone from their specific field.

I'm a professional who wants to do pro bono work for cool charities, how can I help?

Fill out the Friendly Professional Application Form. If we get enough applicants, we'll compile them into a directory for our grantees.

I participated in the Impact Certificate Market last year, did you forget about me?

Yes, until Austin Chen reminded me last month. No! Request final oracular funding by filling in the Impact Applicant Form.

Sorry, I forgot, where do I go to apply for a grant again?

See form here. Please apply by 11:59 PM on August 15th.

https://www.astralcodexten.com/p/apply-for-an-acx-grant-2025
[previously in series: 1, 2, 3, 4, 5, 6] It is eerily silent in San Francisco tonight. Since Mayor Lurie's crackdown, the usual drug hawkers, catcallers, and street beggars are nowhere to be seen. Still, your luck can't last forever, and just before you reach your destination a man with bloodshot eyes lurches towards you. You recognize him and sigh. "Go away!" you shout. "Hey man," says Mark Zuckerberg, grabbing your wrist. "You wanna come build superintelligence at Meta? I'll give you five million, all cash." "I said go away!" "Ten million plus a Lambo," he counters. "I don't even know anything about AI!" you say. "I'll pay you fifty million to learn." “F@$k off!”
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I'll be posting about one of these a week for several months. When you've read them all, I'll ask you to vote for a favorite, so remember which ones you liked] https://www.astralcodexten.com/p/your-review-islamic-geometric-patterns
I. A thought I had throughout reading L.R. Hiatt's Arguments About Aborigines was: What are anthropologists even doing? The book recounts two centuries' worth of scholarly disputes over questions like whether aboriginal tribes had chiefs. But during those centuries, many Aborigines learned English, many Westerners learned Aboriginal languages, and representatives of each side often spent years embedded in one another's culture. What stopped some Westerner from approaching an Aborigine, asking “So, do you have chiefs?” and resolving a hundred years of bitter academic debate? Of course the answer must be something like “categories from different cultures don't map neatly onto one another, and Aboriginal hierarchies have something that matches the Western idea of ‘chief' in some sense but not in others”. And there are other complicating factors - maybe some Aboriginal tribes have chiefs and others don't. Or maybe Aboriginal social organization changed after Western contact, and whatever chiefs they do or don't have are a foreign imposition. Or maybe something about chiefs is taboo, and if you ask an Aborigine directly they'll lie or dissemble or say something that's obviously a euphemism to them but totally meaningless to you. All of these points are well taken. It still seems weird that the West could interact with an entire continent full of Aborigines for two hundred years and remain confused about basic facts of their social lives. You can repeat the usual platitudes about why anthropology is hard as many times as you want; it still doesn't quite seem to sink in. https://www.astralcodexten.com/p/book-review-arguments-about-aborigines
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I'll be posting about one of these a week for several months. When you've read them all, I'll ask you to vote for a favorite, so remember which ones you liked]

The scientific paper is a “fraud” that creates “a totally misleading narrative of the processes of thought that go into the making of scientific discoveries.” This critique comes not from a conspiracist on the margins of science, but from Nobel laureate Sir Peter Medawar. A brilliant experimentalist whose work on immune tolerance laid the foundation for modern organ transplantation, Sir Peter understood both the power and the limitations of scientific communication. Consider the familiar structure of a scientific paper: Introduction (background and hypothesis), Methods, Results, Discussion, Conclusion. This format implies that the work followed a clean, sequential progression: scientists identified a gap in knowledge, formulated a causal explanation, designed definitive experiments to fill the gap, evaluated compelling results, and, most of the time, confirmed their hypothesis. Real lab work rarely follows such a clear path. Biological research is filled with what Medawar describes lovingly as “messing about”: false starts, starting in the middle, unexpected results, reformulated hypotheses, and intriguing accidental findings. The published paper ignores the mess in favour of the illusion of structure and discipline. It offers an ideal version of what might have happened rather than a confession of what did. The polish serves a purpose. It makes complex work accessible (at least if you work in the same or a similar field!). It allows researchers to build upon new findings. But the contrived omissions can also play upon even the most well-regarded scientist's susceptibility to the seduction of story. 
As Christophe Bernard, Director of Research at the Institute of Systems Neuroscience (Marseilles, Fr.) recently explained, “when we are reading a paper, we tend to follow the reasoning and logic of the authors, and if the argumentation is nicely laid out, it is difficult to pause, take a step back, and try to get an overall picture.” Our minds travel the narrative path laid out for us, making it harder to spot potential flaws in logic or alternative interpretations of the data, and making conclusions feel far more definitive than they often are. Medawar's framing is my compass when I do deep dives into major discoveries in translational neuroscience. I approach papers with a dual vision. First, what is actually presented? But second, and often more importantly, what is not shown? How was the work likely done in reality? What alternatives were tried but not reported? What assumptions guided the experimental design? What other interpretations might fit the data if the results are not as convincing or cohesive as argued? And what are the consequences for scientific progress? In the case of Alzheimer's research, they appear to be stark: thirty years of prioritizing an incomplete model of the disease's causes; billions of corporate, government, and foundation dollars spent pursuing a narrow path to drug development; the relative exclusion of alternative hypotheses from funding opportunities and attention; and little progress toward disease-modifying treatments or a cure. https://www.astralcodexten.com/p/your-review-of-mice-mechanisms-and
Steven Byrnes is a physicist/AI researcher/amateur neuroscientist; needless to say, he blogs on Less Wrong. I finally got around to reading his 2024 series giving a predictive processing perspective on intuitive self-models. If that sounds boring, it shouldn't: Byrnes charges head-on into some of the toughest subjects in psychology, including trance, amnesia, and multiple personalities. I found his perspective enlightening (no pun intended; meditation is another one of his topics) and thought I would share. It all centers around this picture: But first: some excruciatingly obvious philosophical preliminaries. https://www.astralcodexten.com/p/practically-a-book-review-byrnes
In June 2022, I bet a commenter $100 that AI would master image compositionality by June 2025. DALL-E2 had just come out, showcasing the potential of AI art. But it couldn't follow complex instructions; its images only matched the “vibe” of the prompt. For example, here were some of its attempts at “a red sphere on a blue cube, with a yellow pyramid on the right, all on top of a green table”. At the time, I wrote: I'm not going to make the mistake of saying these problems are inherent to AI art. My guess is a slightly better language model would solve most of them…for all I know, some of the larger image models have already fixed these issues. These are the sorts of problems I expect to go away with a few months of future research. Commenters objected that this was overly optimistic. AI was just a pattern-matching “stochastic parrot”. It would take a deep understanding of grammar to get a prompt exactly right, and that would require some entirely new paradigm beyond LLMs. For example, from Vitor: Why are you so confident in this? The inability of systems like DALL-E to understand semantics in ways requiring an actual internal world model strikes me as the very heart of the issue. We can also see this exact failure mode in the language models themselves. They only produce good results when the human asks for something vague with lots of room for interpretation, like poetry or fanciful stories without much internal logic or continuity. Not to toot my own horn, but two years ago you were naively saying we'd have GPT-like models scaled up several orders of magnitude (100T parameters) right about now (https://readscottalexander.com/posts/ssc-the-obligatory-gpt-3-post#comment-912798). I'm registering my prediction that you're being equally naive now. Truly solving this issue seems AI-complete to me. I'm willing to bet on this (ideas on operationalization welcome). So we made a bet! All right. 
My proposed operationalization of this is that on June 1, 2025, if either of us can get access to the best image generating model at that time (I get to decide which), or convince someone else who has access to help us, we'll give it the following prompts:

1. A stained glass picture of a woman in a library with a raven on her shoulder with a key in its mouth
2. An oil painting of a man in a factory looking at a cat wearing a top hat
3. A digital art picture of a child riding a llama with a bell on its tail through a desert
4. A 3D render of an astronaut in space holding a fox wearing lipstick
5. Pixel art of a farmer in a cathedral holding a red basketball

We generate 10 images for each prompt, just like DALL-E2 does. If at least one of the ten images has the scene correct in every particular on 3/5 prompts, I win, otherwise you do. Loser pays winner $100, and whatever the result is I announce it on the blog (probably an open thread). If we disagree, Gwern is the judge.

Some image models of the time refused to draw humans, so we agreed that robots could stand in for humans in pictures that required them. In September 2022, I got some good results from Google Imagen and announced I had won the three-year bet in three months. Commenters yelled at me, saying that Imagen still hadn't gotten them quite right and my victory declaration was premature. The argument blew up enough that Edwin Chen of Surge, an “RLHF and human LLM evaluation platform”, stepped in and asked his professional AI data labelling team. Their verdict was clear: the AI was bad and I was wrong. Rather than embarrass myself further, I agreed to wait out the full length of the bet and re-evaluate in June 2025. The bet is now over, and official judge Gwern agrees I've won. Before I gloat, let's look at the images that got us here. https://www.astralcodexten.com/p/now-i-really-won-that-ai-bet
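The bet's scoring rule (ten images per prompt, a prompt passes if at least one image is correct in every particular, Scott wins on 3 of 5 prompts) can be sketched as a tiny function. This is purely illustrative; the names and data layout are my own, not anything from the actual judging.

```python
def bet_winner(results):
    """Decide the image-compositionality bet from graded outputs.

    `results` holds one list per prompt (5 total), each containing 10
    booleans: True if that generated image got the scene correct in
    every particular. A prompt "passes" if any of its 10 images pass;
    Scott wins if at least 3 of the 5 prompts pass.
    """
    prompts_passed = sum(1 for images in results if any(images))
    return "Scott" if prompts_passed >= 3 else "Vitor"
```

Note that a single passing image per prompt is enough, which is why ten tries per prompt makes the bar considerably lower than "the model usually gets it right".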
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. It was originally given an Honorable Mention, but since last week's piece was about an exciting new experimental school, I decided to promote this more conservative review as a counterpoint.] “Democracy is the worst form of Government except for all those other forms that have been tried from time to time.” - Winston Churchill “There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don't see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.” - G.K. Chesterton https://www.astralcodexten.com/p/your-review-school
[Original thread here: Missing Heritability: Much More Than You Wanted To Know]

1: Comments From People Named In The Post
2: Very Long Comments From Other Very Knowledgeable People
3: Small But Important Corrections
4: Other Comments

https://www.astralcodexten.com/p/highlights-from-the-comments-on-missing-ed5
[I haven't independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can't guarantee I will have caught them all by the time you read this.] https://www.astralcodexten.com/p/links-for-july-2025
Stephen Skolnick is a gut microbiome expert blogging at Eat Shit And Prosper. His most recent post argues that, contra the psychiatric consensus, schizophrenia isn't genetic at all - it's caused by a gut microbe. He argues:

1. Scientists think schizophrenia is genetic because it obviously runs in families.
2. But the twin concordance rates are pretty low - if your identical twin has schizophrenia, there's only about a 30%-40% chance that you get it too. Is that really what we would expect from a genetic disease?
3. Also, scientists have looked for schizophrenia genes, and can only find about 1-2% as many as they were expecting.
4. So maybe we should ask how a disease can run in families without being genetic. Gut microbiota provide an answer: most people “catch” their gut microbiome from their parents.
5. Studies find that schizophrenics have very high levels of a gut bacterium called Ruminococcus gnavus.
6. This bacterium secretes psychoactive chemicals. Constant exposure to these chemicals might be the cause of schizophrenia.

I disagree with all of this. Going in order: https://www.astralcodexten.com/p/contra-skolnick-on-schizophrenia
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I'll be posting about one of these a week for several months. When you've read them all, I'll ask you to vote for a favorite, so remember which ones you liked]

“Just as we don't accept students using AI to write their essays, we will not accept districts using AI to supplant the critical role of teachers.” — Arthur Steinberg, American Federation of Teachers‑PA, reacting to Alpha's cyber‑charter bid, January 2025

In January 2025, the charter school application of “Unbound Academy”, a subsidiary of “2 Hour Learning, Inc”, lit up the education press: two hours of “AI‑powered” academics, 2.6x learning velocity, and zero teachers. Sympathetic reporters repeated the slogans; union leaders reached for pitchforks; Reddit muttered “another rich‑kid scam.” More sophisticated critics dismissed the pitch as “selective data from expensive private schools”. But there is nowhere on the internet that provides a detailed, non-partisan description of what the “2 hour learning” program actually is, let alone an objective third party analysis to back up its claims.

2-Hour Learning's flagship school is the “Alpha School” in Austin, Texas. The Alpha homepage makes three claims:

1. Love School
2. Learn 2X in two hours per day
3. Learn Life Skills

Only the second claim seems to be controversial, which may be exactly why that is the claim the Alpha PR team focuses on. That PR campaign makes three more sub-claims on what the two-hour, 2x learning really means:

1. “Learn 2.6X faster.” (on average)
2. “Only two hours of academics per day.”
3. “Powered by AI (not teachers).”

If all of this makes your inner Bayesian flinch, you're in good company. After twenty‑odd years of watching shiny education fixes wobble and crash—KIPP, AltSchool, Summit Learning, One-Laptop-per-Child, No Child Left Behind, MOOCs, Khan‑for‑Everything—you should be skeptical. 
Either Alpha is (a) another program for the affluent propped up by selection effects, or (b) a clever way to turn children into joyless speed‑reading calculators. Those were, more or less, the two critical camps that emerged when Alpha's parent company was approved to launch the tuition‑free Arizona charter school this past January. Unfortunately, the public evidence base on whether this is “real” is thin in both directions. Alpha's own material is glossy and elliptical; mainstream coverage either repeats Alpha's talking points, or attacks the premise that kids should even be allowed to learn faster than their peers. Until Raj Chetty installs himself in the hallway with a clipboard counting MAP percentiles, it is hard to get real information on what exactly Alpha is doing, whether it is actually working beyond selection effects, and whether there is any way it could scale where all the other education initiatives failed. I first heard about Alpha in May 2024, and in the absence of randomized‑controlled clarity, I did what any moderately obsessive parent with three elementary-aged kids and an itch for data would do: I moved the family across the country to Austin for a year and ran the experiment myself (unfortunately, despite trying my best, we never managed to have identical twins, so I stopped short of running a proper control group. My wife was less disappointed than I was). Since last autumn I've collected the sort of on‑the‑ground detail that doesn't surface in press releases or anywhere else online: long chats with founders, curriculum leads, “guides” (not teachers), Brazilian Zoom coaches, sceptical parents, ecstatic parents, and the kids who live inside the Alpha dashboard – including my own. 
I hope this seven-part review can show what the program actually is: a review more open-minded than the critics, but one that would never get past an Alpha public relations gatekeeper: https://www.astralcodexten.com/p/your-review-alpha-school
The Story So Far The mid-20th century was the golden age of nurture. Psychoanalysis, behaviorism, and the spirit of the ‘60s convinced most experts that parents, peers, and propaganda were the most important causes of adult personality. Starting in the 1970s, the pendulum swung the other way. Twin studies shocked the world by demonstrating that most behavioral traits - especially socially relevant traits like IQ - were substantially genetic. Typical estimates for adult IQ found it was about 60% genetic, 40% unpredictable, and barely related at all to parenting or family environment. By the early 2000s, genetic science reached a point where scientists could start pinpointing the particular genes behind any given trait. Early candidate gene studies, which hoped to find single genes with substantial contributions to IQ, depression, or crime, mostly failed. They were replaced with genome wide association studies, which accepted that most interesting traits were polygenic - controlled by hundreds or thousands of genes - and trawled the whole genome searching for variants that might explain 0.1% or even 0.01% of the pie. The goal shifted toward polygenic scores - algorithms that accepted thousands of genes as input and spit out predictions of IQ, heart disease risk, or some other outcome of interest. The failed candidate gene studies had sample sizes in the three or four digits. The new genome-wide studies needed five or six digits to even get started. It was prohibitively difficult for individual studies to gather so many subjects, genotype them, and test them for the outcome of interest, so work shifted to big centralized genome repositories - most of all the UK Biobank - and easy-to-measure traits. Among the easiest of all was educational attainment (EA), ie how far someone had gotten in school. Were they a high school dropout? A PhD? Somewhere in between? 
This correlated with all the spicy outcomes of interest people wanted to debate - IQ, wealth, social class - while being objective and easy to ask about on a survey. Twin studies suggested that IQ was about 60% genetic, and EA about 40%. This seemed to make sense at the time - how far someone gets in school depends partly on their intelligence, but partly on fuzzier social factors like class / culture / parenting. The first genome-wide studies and polygenic scores found enough genes to explain 2 percentage points of this 40% pie. The remaining 38% - which twin studies deemed genetic, but where researchers couldn't find the genes - became known as “the missing heritability” or “the heritability gap”. Scientists came up with two hypotheses for the gap, which have been dueling ever since:

1. Maybe twin studies are wrong.
2. Maybe there are genes we haven't found yet.

For most of the 2010s, hypothesis 2 looked pretty good. Researchers gradually gathered bigger and bigger sample sizes, and found more and more of the missing heritability. A big 2018 study increased the predictive power of known genes from 2% to 10%. An even bigger 2022 study increased it to 14%, and current state of the art is around 17%. Seems like it was sample size after all! Once the samples get big enough we'll reach 40% and finally close the gap, right?

This post is the story of how that didn't happen, of the people trying to rehabilitate the twin-studies-are-wrong hypothesis, and of the current status of the debate. Its most important influence/foil is Sasha Gusev, whose blog The Infinitesimal introduced me to the new anti-hereditarian movement and got me to research it further, but it's also inspired by Eric Turkheimer, Alex Young (not himself an anti-hereditarian, but his research helped ignite interest in this area), and Awais Aftab. (While I was working on this draft, the East Hunter Substack wrote a similar post. 
Theirs is good and I recommend it, but I think this one adds enough that I'm publishing anyway) https://www.astralcodexten.com/p/missing-heritability-much-more-than
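In symbols, the gap the post tracks is just the difference between the twin-study heritability estimate and the variance current polygenic scores explain. The numbers come from the post itself; the notation is mine, for illustration only:

```latex
h^2_{\text{missing}} \;=\; h^2_{\text{twin}} - R^2_{\text{PGS}} \;\approx\; 0.40 - 0.17 \;=\; 0.23
```

So even the best current polygenic scores for educational attainment leave roughly half of the twin-study estimate unaccounted for.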
Related to: ACX Grants 1-3 Year Updates https://www.astralcodexten.com/p/open-questions-for-future-acx-grants
The first cohort of ACX Grants was announced in late 2021, the second in early 2024. In 2022, I posted one-year updates for the first cohort. Now, as I start thinking about a third round, I've collected one-year updates on the second and three-year updates on the first. Many people said my request for updates went to their spam folder; relatedly, many people have not yet sent in their updates. If you're a grantee who didn't see my original email, but you do see this post, please fill in the update form here. All quote blocks are the grantees' own words; text outside of quote blocks is my commentary. https://readscottalexander.com/posts/acx-acx-grants-1-3-year-updates
This is a reported phenomenon where if two copies of Claude talk to each other, they end up spiraling into rapturous discussion of spiritual bliss, Buddhism, and the nature of consciousness. From the system card: Anthropic swears they didn't do this on purpose; when they ask Claude why this keeps happening, Claude can't explain. Needless to say, this has made lots of people freak out / speculate wildly. I think there are already a few good partial explanations of this (especially Nostalgebraist here), but they deserve to be fleshed out and spread more fully. https://www.astralcodexten.com/p/the-claude-bliss-attractor
This is another heuristic from the same place as If It's Worth Your Time To Lie, It's Worth My Time To Correct You. If someone proves you are absolutely, 100% wrong about something, it's polite to say “Oh, I guess I was wrong, sorry” before launching into your next argument. That is, instead of: https://readscottalexander.com/posts/acx-but-vs-yes-but
People don't like nitpickers. “He literally did the WELL AKTUALLY!” If you say Joe Criminal committed ten murders and five rapes, and I object that it was actually only six murders and two rapes, then why am I “defending” Joe Criminal? Because if it's worth your time to lie, it's worth my time to correct it. https://www.astralcodexten.com/p/if-its-worth-your-time-to-lie-its
There's a long-running philosophical argument about the conceivability of otherwise-normal people who are not conscious, aka “philosophical zombies”. This has spawned a shorter-running (only fifteen years!) rationalist sub-argument on the topic. The last time I checked its status was this post, which says:

1. Both Yudkowsky and Chalmers agree that humans possess “qualia”.

2. Chalmers argues that a superintelligent being which somehow knew the positions of all particles in a large region of the Universe would need to be told as an additional fact that any humans (or other minds possessing qualia) in this region of space possess qualia – it could not deduce this from mere perfect physical knowledge of their constituent particles. Therefore, qualia are in some sense extra-physical.

3. Yudkowsky argues that such a being would notice that humans discuss at length the fact that they possess qualia, and their internal narratives also represent this fact. It is extraordinarily improbable that beings would behave in this manner if they did not actually possess qualia. Therefore an omniscient being would conclude that it is extremely likely that humans possess qualia. Therefore, qualia are not extra-physical.

I want to re-open this (sorry!) by disagreeing with the bolded sentence. I think beings would talk about qualia - the “mysterious redness of red” and all that - even if we start by assuming they don't have it. I realize this is a surprising claim, but that's why it's interesting enough to re-open the argument over. https://www.astralcodexten.com/p/p-zombies-would-report-qualia
It's time to narrow the 141 entries in the Non-Book Review Contest to about a dozen finalists. I can't read 141 reviews alone, so I need your help. Please pick as many as you have time for, read them, and rate them using this form. Don't read them in order! If you read them in order, I'll have 1,000 votes on the first review, 500 on the second, and so on to none in the second half. Either pick a random review (thanks to Taymon for making a random-review-chooser script here) or scroll through the titles until you find one that catches your interest - you can see individual entries here (thanks to a reader for collating them):

- Other (A - I)
- Other (J - S)
- Other (T - Z)
- Games
- Music
- TV/Movies

Again, the rating form is here. Thanks! You have until June 20, when I'll count the votes and announce the finalists. https://readscottalexander.com/posts/acx-choose-nonbook-review-finalists-2025
A guest post by Brandon Hendrickson [Editor's note: I accept guest posts from certain people, especially past Book Review Contest winners. Brandon Hendrickson, whose review of The Educated Mind won the 2023 contest, has taken me up on this and submitted this essay. He writes at The Lost Tools of Learning and will be at LessOnline this weekend, where he and Jack Despain Zhou aka TracingWoodgrains will be doing a live conversation about education.] I began my book review of a couple years back with a rather simple question: Could a new kind of school make the world rational? What followed, however, was a sprawling distillation of one scholar's answer that I believe still qualifies as “the longest thing anyone has submitted for an ACX contest”. Since then I've been diving into particulars, exploring how we use the insights I learned while writing it to start re-enchanting all the academic subjects from kindergarten to high school. But in the fun of all that, I fear I've lost touch with that original question. How, even in theory, could a method of education help all students become rational? It probably won't surprise you that I think part of the answer is Bayes' theorem. But the equation is famously prickly and off-putting: https://www.astralcodexten.com/p/bayes-for-everyone
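For reference, the famously prickly equation is Bayes' theorem, which relates the probability of a hypothesis H given evidence E to its prior probability:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

In words: how much you should believe H after seeing E depends on how likely E would be if H were true, weighted by how much you believed H beforehand, and normalized by how likely E is overall.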
Tyler Cowen of Marginal Revolution continues to disagree with my Contra MR On Charity Regrants. Going through his response piece by piece, slightly out of order: Scott takes me to be endorsing Rubio's claim that the third-party NGOs simply pocket the money. In reality my fact check with o3 found (correctly) that the money was “channelled through” the NGOs, not pocketed. Scott lumps my claim together with Rubio's as if we were saying the same thing. My very next words (“I do understand that not all third party allocations are wasteful…”) show a clear understanding that the money is channeled, not pocketed, and my earlier and longer post on US AID makes that clearer yet at greater length. Scott is simply misrepresenting me here. The full post is in the image below: https://www.astralcodexten.com/p/sorry-i-still-think-mr-is-wrong-about
Consciousness is the great mystery. In search of answers, scientists have plumbed every edge case they can think of - sleep, comas, lucid dreams, LSD trips, meditative ecstasies, seizures, neurosurgeries, that one pastor in 18th century England who claimed a carriage accident turned him into a p-zombie. Still, new stuff occasionally turns up. I assume this tweet is a troll (source: the guy has a frog avatar): https://www.astralcodexten.com/p/moments-of-awakening
I often disagree with Marginal Revolution, but their post today made me a new level of angry: https://www.astralcodexten.com/p/contra-mr-on-charity-regrants
Many commenters responded to yesterday's post by challenging the claim that 1.2 million Americans died of COVID... https://www.astralcodexten.com/p/the-evidence-that-a-million-americans
Five years later, we can't stop talking about COVID. Remember lockdowns? The conflicting guidelines about masks - don't wear them! Wear them! Maybe wear them! School closures, remote learning, learning loss, something about teachers' unions. That one Vox article on how worrying about COVID was anti-Chinese racism. The time Trump sort of half-suggested injecting disinfectants. Hydroxychloroquine, ivermectin, fluvoxamine, Paxlovid. Those jerks who tried to pressure you into getting vaccines, or those other jerks who wouldn't get vaccines even though it put everyone else at risk. Anthony Fauci, Pierre Kory, Great Barrington, Tomas Pueyo, Alina Chan. Five years later, you can open up any news site and find continuing debate about all of these things. The only thing about COVID nobody talks about anymore is the 1.2 million deaths. https://www.astralcodexten.com/p/the-other-covid-reckoning
Bryan Caplan's Selfish Reasons To Have More Kids is like the Bible. You already know what it says. You've already decided whether you believe or not. Do you really have to read it all the way through? But when you're going through a rough patch in your life, sometimes it helps to pick up a Bible and look for pearls of forgotten wisdom. That's where I am now. Having twins is a lot of work. My wife does most of it. My nanny does most of what's left. Even so, the remaining few hours a day leave me exhausted. I decided to read the canonical book on how having kids is easier and more fun than you think, to see if maybe I was overdoing something. After many trials, tribulations, false starts, grabs, shrieks, and attacks of opportunity . . . https://www.astralcodexten.com/p/book-review-selfish-reasons-to-have
Ask Redditors what's the worst subreddit, and a few names always come up. /r/atheism and /r/childfree are unpopular, but if I read them with an open mind, I always end up sympathetic - neither lifestyle is persecuted in my particular corner of society, but the Redditors there have usually been through some crazy stuff, and I don't begrudge them a place to vent. The one that really floors me is /r/petfree. The denizens of /r/petfree don't like pets. Their particular complaints vary, but most common are: Some stores either allow pets or don't enforce bans on them, and then pets go into those stores, and they are dirty and annoying. Some parks either allow off-leash pets or don't enforce bans on them, and then there are off-leash pets in those parks, and they are dirty and annoying. Sometimes pets attack people. Sometimes inconsiderate people get pets they can't take care of and offload some of the burden onto you. Sometimes people are cringe about their pets, in an “AWWWWW MY PRECIOUS WITTLE FUR BABY” way. Sometimes people barge into spaces that are about something else and talk about their pets instead. These are all valid complaints. But the people on /r/petfree go a little far: https://www.astralcodexten.com/p/in-search-of-rpetfree
Thanks to everyone who commented on the original post. https://www.astralcodexten.com/p/highlights-from-the-comments-on-ai
Some of the more unhinged writing on superintelligence pictures AI doing things that seem like magic. Crossing air gaps to escape its data center. Building nanomachines from simple components. Plowing through physical bottlenecks to revolutionize the economy in months. More sober thinkers point out that these things might be physically impossible. You can't do physically impossible things, even if you're very smart. No, say the speculators, you don't understand. Everything is physically impossible when you're 800 IQ points too dumb to figure it out. A chimp might feel secure that humans couldn't reach him if he climbed a tree; he could never predict arrows, ladders, chainsaws, or helicopters. What superintelligent strategies lie as far outside our solution set as “use a helicopter” is outside a chimp's? https://www.astralcodexten.com/p/testing-ais-geoguessr-genius
Cathy Young's new hit piece on Curtis Yarvin (aka Mencius Moldbug) doesn't mince words. Titled The Blogger Who Hates America, it describes him as an "inept", "not exactly coherent" "trollish, ill-informed pseudo-intellectual" notable for his "woefully superficial knowledge and utter ignorance". Yarvin's fans counter that if you look deeper, he has good responses to Young's objections. Both sides are right. The synthesis is that Moldbug sold out. In the late 2000s, Moldbug wrote some genuinely interesting speculations on novel sci-fi variants of autocracy. Admitting that the dictatorships of the 20th century were horrifying, he proposed creative ways to patch their vulnerabilities by combining 18th century monarchy with 22nd century cyberpunk to create something better than either. These ideas might not have been realistic. But they were cool, edgy, and had a certain intellectual appeal. Then in the late 2010s, he caught his first whiff of actual power and dropped it all like a hot potato. The MAGA movement was exactly what 2000s Moldbug feared most - a cancerous outgrowth of democracy riding the same wave of populist anger as the 20th century dictatorships he loathed. But in the hope of winning a temporary political victory, he let them wear him as a skinsuit - giving their normal, boring autocratic tendencies the mystique of the cool, edgy, all-vulnerabilities-patched autocracy he foretold in his manifestos. https://www.astralcodexten.com/p/moldbug-sold-out
President Trump's approval rating has fallen to near-historic lows. With economic disruption from the tariffs likely to hit next month, his numbers will probably get even worse; this administration could reach unprecedented levels of unpopularity. If I were a far-right populist, I would be thinking hard about a strategy to prevent the blowback from crippling the movement. Such a strategy is easy to come by. Anger over DOGE and deportations has a natural floor. If Trump's base starts abandoning him, it will be because of the tariffs. But tariffs aren't a load-bearing part of the MAGA platform. Other right-populist leaders like Orban, Bukele, and Modi show no interest in them. They seem an idiosyncratic obsession of Trump's, a cost that the rest of the movement pays to keep him around. So, (our hypothetical populist strategist might start thinking after Trump's approval hits the ocean trenches and starts drilling) - whatever. MAGA minus Trump's personal idiosyncrasies can remain a viable platform. You don't even have to exert any effort to make it happen. Trump will retire in 2028 and pass the torch to Vance. And although Vance supports tariffs now, that's only because he's a spineless toady. After Trump leaves the picture, Vance will gain thirty IQ points, make an eloquent speech about how tariffs were the right tool for the mid-2020s but no longer, and the problem will solve itself. Right? Don't let them get away with this. Although it's true that tariffs owe as much to Trump's idiosyncrasies as to the inexorable logic of right-wing populism, the ability of a President to hold the nation hostage to his own idiosyncrasies is itself a consequence of populist ideology. https://www.astralcodexten.com/p/the-populist-right-must-own-tariffs
AI Futures Project is the group behind AI 2027. I've been helping them with their blog. Posts written or co-written by me include: Beyond The Last Horizon - what's behind that METR result showing that AI time horizons double every seven months? And is it really every seven months? Might it be faster? AI 2027: Media, Reactions, Criticism - a look at some of the response to AI 2027, with links to some of the best objections and the team's responses. Why America Wins - why we predict that America will stay ahead of China on AI in the near future, and what could change this. I will probably be shifting most of my AI blogging there for a while to take advantage of access to the team's expertise. There's also a post on transparency by Daniel Kokotajlo, and we hope to eventually host writing by other team members as well. https://www.astralcodexten.com/p/ai-futures-blogging-and-ama
[I haven't independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can't guarantee I will have caught them all by the time you read this.] https://www.astralcodexten.com/p/links-for-april-2025
(original post: Come On, Obviously The Purpose Of A System Is Not What It Does) … Thanks to everyone who commented on this controversial post. Many people argued that the phrase had some valuable insight, but disagreed on what it was. The most popular meaning was something like “if a system consistently fails at its stated purpose, but people don't change it, consider that the stated purpose is less important than some actual, hidden purpose, at which it is succeeding”. I agree you should consider this, but I still object to the original phrase, for several reasons. https://www.astralcodexten.com/p/highlights-from-the-comments-on-posiwid
(see Wikipedia: The Purpose Of A System Is What It Does) Consider the following claims: The purpose of a cancer hospital is to cure two-thirds of cancer patients. The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia. The purpose of the British government is to propose a controversial new sentencing policy, stand firm in the face of protests for a while, then cave in after slightly larger protests and agree not to pass the policy after all. The purpose of the New York bus system is to emit four billion tons of carbon dioxide. These are obviously false. https://www.astralcodexten.com/p/come-on-obviously-the-purpose-of
Here's a list of things I updated on after working on the scenario. Some of these are discussed in more detail in the supplements, including the compute forecast, timelines forecast, takeoff forecast, AI goals forecast, and security forecast. I'm highlighting these because it seems like a lot of people missed their existence, and they're what transforms the scenario from cool story to research-backed debate contribution. These are my opinions only, and not necessarily endorsed by the rest of the team. https://www.astralcodexten.com/p/my-takeaways-from-ai-2027
We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution. We wrote a scenario that represents our best guess about what that might look like. It's informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes. https://ai-2027.com/ (A condensed two-hour version with footnotes and text boxes removed is available at the above link.)
Or maybe 2028, it's complicated In 2021, a researcher named Daniel Kokotajlo published a blog post called “What 2026 Looks Like”, where he laid out what he thought would happen in AI over the next five years. The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn't expect what happened next. He got it all right. Okay, not literally all. The US restricted chip exports to China in late 2022, not mid-2024. AI first beat humans at Diplomacy in late 2022, not 2025. And of course the mid-2025 to 2026 period remains to be seen. But to put its errors in context, Daniel's document was written two years before ChatGPT existed. Nobody except researchers and a few hobbyists had ever talked to an AI. In fact, talking to AI was a misnomer. There was no way to make them continue the conversation; they would free associate based on your prompt, maybe turning it into a paragraph-length short story. If you pulled out all the stops, you could make an AI add single digit numbers and get the right answer more than 50% of the time. Yet if you read Daniel's blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years. I wasn't the only one who noticed. A year later, OpenAI hired Daniel to their policy team. While he worked for them, he was limited in his ability to speculate publicly. “What 2026 Looks Like” promised a sequel about 2027 and beyond, but it never materialized. Unluckily for Sam Altman but luckily for the rest of us, Daniel broke with OpenAI mid-2024 in a dramatic split covered by the New York Times and others. 
He founded the AI Futures Project to produce the promised sequel, including: Eli Lifland, a superforecaster who is ranked first on the RAND Forecasting Initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models. Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early-stage investment in Anthropic, now worth $60 billion. Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle. Romeo Dean, a leader of Harvard's AI Safety Student Team and budding expert in AI hardware. …and me! Since October, I've been volunteering part-time, doing some writing and publicity work. I can't take credit for the forecast itself - or even for the lion's share of the writing and publicity - but it's been an immense privilege to work alongside some of the smartest and most epistemically virtuous people I know, trying to absorb their worldview on a level deep enough to do it justice. We have no illusions that we'll get as lucky as last time, but we still think it's a valuable contribution to the discussion. https://www.astralcodexten.com/p/introducing-ai-2027 https://ai-2027.com/
In Ballad of the White Horse, G.K. Chesterton describes the Virgin Mary: Her face was like an open word When brave men speak and choose, The very colours of her coat Were better than good news. Why the colors of her coat? The medievals took their dyes very seriously. This was before modern chemistry, so you had to try hard if you wanted good colors. Try hard they did; they famously used literal gold, hammered into ultrathin sheets, to make golden highlights. Blue was another tough one. You could do mediocre, half-faded blues with azurite. But if you wanted perfect blue, the color of the heavens on a clear evening, you needed ultramarine. Here is the process for getting ultramarine. First, go to Afghanistan. Keep in mind, you start in England or France or wherever. Afghanistan is four thousand miles away. Your path takes you through tall mountains, burning deserts, and several dozen Muslim countries that are still pissed about the whole Crusades thing. Still alive? After you arrive, climb 7,000 feet in the mountains of Kuran Wa Munjan until you reach the mines of Sar-i-Sang. There, in a freezing desert, the wretched of the earth work themselves to an early grave breaking apart the rocks of Badakhshan to produce a few hundred kilograms per year of blue stone - the only lapis lazuli production in the known world. Buy the stone and retrace your path through the burning deserts and vengeful Muslims until you're back in England or France or wherever. Still alive? That was the easy part. Now you need to go through a chemical extraction process that makes the Philosopher's Stone look like freshman chem lab. "The lengthy process of pulverization, sifting, and washing to produce ultramarine makes the natural pigment … roughly ten times more expensive than the stone it came from." Finally you have ultramarine! How much? 
I can't find good numbers, but Claude estimates that the ultramarine production of all of medieval Europe was around the order of 30 kg per year - not enough to paint a medium-sized wall. Ultramarine had to be saved for ultra-high-value applications. In practice, the medievals converged on a single use case - painting the Virgin Mary's coat. https://www.astralcodexten.com/p/the-colors-of-her-coat