Audio version of Slate Star Codex. It's just me reading Scott Alexander's Blog Posts.
slatestarpodcast@gmail.com
I never write reviews, but I am so incredibly thankful that The Slate Star Codex Podcast exists. It has become a source of joy for me to be able to listen to the thought-provoking content of SSC while I'm on the road. Jeremiah, the narrator of the earlier episodes, has a lovely way of reading them that adds an extra layer of enjoyment to the experience. His dedication and ability to pull off this project is truly impressive, and I can't thank him enough for bringing Scott Alexander's brilliant blog posts to life.
However, as much as I appreciate Jeremiah's narration, there have been some changes in recent episodes. Solenoid Entity has taken over the task of recording new episodes, presumably out of necessity. While his delivery may not be as clean or polished as Jeremiah's, and there are moments where the audio quality suffers from reverberation, I am still grateful that he has continued this important work. Thank you, Solenoid Entity!
One of the best aspects of The Slate Star Codex Podcast is its wealth of excellent information presented with great transparency. Scott Alexander tackles difficult and fascinating topics with honesty, deliberation, and consideration. This podcast consistently provides thought-provoking content that keeps me engaged and wanting more.
Another positive aspect is that more people now have access to Slate Star Codex through this podcast format. This allows for a wider audience to engage with Alexander's brilliant insights and ideas, which I believe is a good thing overall.
On the downside, some listeners may find that the posts can be quite long when read out loud. However, this is easily remedied by listening at a faster speed without losing any comprehension or enjoyment.
In conclusion, The Slate Star Codex Podcast is an excellent companion for those who want to dive into a wide range of subjects and remain in a perpetual state of contented awe with the world and our desire to understand it better. Whether you have the time to read Alexander's blog or not, this podcast is a valuable resource. The narration, whether by Jeremiah or Solenoid Entity, is of high quality and professional. Despite some minor drawbacks in recent episodes, I have no complaints and only praise for this podcast. Thank you to the person who brings Scott Alexander's blogs to audio, and thank you for enabling me to keep up with Slate Star Codex while I go on my walks.
I. A thought I had throughout reading L.R. Hiatt's Arguments About Aborigines was: What are anthropologists even doing? The book recounts two centuries' worth of scholarly disputes over questions like whether aboriginal tribes had chiefs. But during those centuries, many Aborigines learned English, many Westerners learned Aboriginal languages, and representatives of each side often spent years embedded in one another's culture. What stopped some Westerner from approaching an Aborigine, asking “So, do you have chiefs?” and resolving a hundred years of bitter academic debate? Of course the answer must be something like “categories from different cultures don't map neatly onto one another, and Aboriginal hierarchies have something that matches the Western idea of ‘chief' in some sense but not in others”. And there are other complicating factors - maybe some Aboriginal tribes have chiefs and others don't. Or maybe Aboriginal social organization changed after Western contact, and whatever chiefs they do or don't have are a foreign imposition. Or maybe something about chiefs is taboo, and if you ask an Aborigine directly they'll lie or dissemble or say something that's obviously a euphemism to them but totally meaningless to you. All of these points are well taken. It still seems weird that the West could interact with an entire continent full of Aborigines for two hundred years and remain confused about basic facts of their social lives. You can repeat the usual platitudes about why anthropology is hard as many times as you want; it still doesn't quite seem to sink in. https://www.astralcodexten.com/p/book-review-arguments-about-aborigines
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I'll be posting about one of these a week for several months. When you've read them all, I'll ask you to vote for a favorite, so remember which ones you liked] The scientific paper is a ‘fraud' that creates “a totally misleading narrative of the processes of thought that go into the making of scientific discoveries.” This critique comes not from a conspiracist on the margins of science, but from Nobel laureate Sir Peter Medawar. A brilliant experimentalist whose work on immune tolerance laid the foundation for modern organ transplantation, Sir Peter understood both the power and the limitations of scientific communication. Consider the familiar structure of a scientific paper: Introduction (background and hypothesis), Methods, Results, Discussion, Conclusion. This format implies that the work followed a clean, sequential progression: scientists identified a gap in knowledge, formulated a causal explanation, designed definitive experiments to fill the gap, evaluated compelling results, and most of the time, confirmed their hypothesis. Real lab work rarely follows such a clear path. Biological research is filled with what Medawar describes lovingly as “messing about”: false starts, starting in the middle, unexpected results, reformulated hypotheses, and intriguing accidental findings. The published paper ignores the mess in favour of the illusion of structure and discipline. It offers an ideal version of what might have happened rather than a confession of what did. The polish serves a purpose. It makes complex work accessible (at least if you work in the same or a similar field!). It allows researchers to build upon new findings. But the contrived omissions can also play upon even the most well-regarded scientist's susceptibility to the seduction of story. As Christophe Bernard, Director of Research at the Institute of Systems Neuroscience (Marseilles, Fr.) recently explained, “when we are reading a paper, we tend to follow the reasoning and logic of the authors, and if the argumentation is nicely laid out, it is difficult to pause, take a step back, and try to get an overall picture.” Our minds travel the narrative path laid out for us, making it harder to spot potential flaws in logic or alternative interpretations of the data, and making conclusions feel far more definitive than they often are. Medawar's framing is my compass when I do deep dives into major discoveries in translational neuroscience. I approach papers with a dual vision. First, what is actually presented? But second, and often more importantly, what is not shown? How was the work likely done in reality? What alternatives were tried but not reported? What assumptions guided the experimental design? What other interpretations might fit the data if the results are not as convincing or cohesive as argued? And what are the consequences for scientific progress? In the case of Alzheimer's research, they appear to be stark: thirty years of prioritizing an incomplete model of the disease's causes; billions of corporate, government, and foundation dollars spent pursuing a narrow path to drug development; the relative exclusion of alternative hypotheses from funding opportunities and attention; and little progress toward disease-modifying treatments or a cure. https://www.astralcodexten.com/p/your-review-of-mice-mechanisms-and
Steven Byrnes is a physicist/AI researcher/amateur neuroscientist; needless to say, he blogs on Less Wrong. I finally got around to reading his 2024 series giving a predictive processing perspective on intuitive self-models. If that sounds boring, it shouldn't: Byrnes charges head-on into some of the toughest subjects in psychology, including trance, amnesia, and multiple personalities. I found his perspective enlightening (no pun intended; meditation is another one of his topics) and thought I would share. It all centers around this picture: But first: some excruciatingly obvious philosophical preliminaries. https://www.astralcodexten.com/p/practically-a-book-review-byrnes
In June 2022, I bet a commenter $100 that AI would master image compositionality by June 2025. DALL-E2 had just come out, showcasing the potential of AI art. But it couldn't follow complex instructions; its images only matched the “vibe” of the prompt. For example, here were some of its attempts at “a red sphere on a blue cube, with a yellow pyramid on the right, all on top of a green table”. At the time, I wrote: I'm not going to make the mistake of saying these problems are inherent to AI art. My guess is a slightly better language model would solve most of them…for all I know, some of the larger image models have already fixed these issues. These are the sorts of problems I expect to go away with a few months of future research. Commenters objected that this was overly optimistic. AI was just a pattern-matching “stochastic parrot”. It would take a deep understanding of grammar to get a prompt exactly right, and that would require some entirely new paradigm beyond LLMs. For example, from Vitor: Why are you so confident in this? The inability of systems like DALL-E to understand semantics in ways requiring an actual internal world model strikes me as the very heart of the issue. We can also see this exact failure mode in the language models themselves. They only produce good results when the human asks for something vague with lots of room for interpretation, like poetry or fanciful stories without much internal logic or continuity. Not to toot my own horn, but two years ago you were naively saying we'd have GPT-like models scaled up several orders of magnitude (100T parameters) right about now (https://readscottalexander.com/posts/ssc-the-obligatory-gpt-3-post#comment-912798). I'm registering my prediction that you're being equally naive now. Truly solving this issue seems AI-complete to me. I'm willing to bet on this (ideas on operationalization welcome). So we made a bet! All right. My proposed operationalization of this is that on June 1, 2025, if either of us can get access to the best image generating model at that time (I get to decide which), or convince someone else who has access to help us, we'll give it the following prompts:
1. A stained glass picture of a woman in a library with a raven on her shoulder with a key in its mouth
2. An oil painting of a man in a factory looking at a cat wearing a top hat
3. A digital art picture of a child riding a llama with a bell on its tail through a desert
4. A 3D render of an astronaut in space holding a fox wearing lipstick
5. Pixel art of a farmer in a cathedral holding a red basketball
We generate 10 images for each prompt, just like DALL-E2 does. If at least one of the ten images has the scene correct in every particular on 3/5 prompts, I win, otherwise you do. Loser pays winner $100, and whatever the result is I announce it on the blog (probably an open thread). If we disagree, Gwern is the judge. Some image models of the time refused to draw humans, so we agreed that robots could stand in for humans in pictures that required them. In September 2022, I got some good results from Google Imagen and announced I had won the three-year bet in three months. Commenters yelled at me, saying that Imagen still hadn't gotten them quite right and my victory declaration was premature. The argument blew up enough that Edwin Chen of Surge, an “RLHF and human LLM evaluation platform”, stepped in and asked his professional AI data labelling team to judge. Their verdict was clear: the AI was bad and I was wrong.
Rather than embarrass myself further, I agreed to wait out the full length of the bet and re-evaluate in June 2025. The bet is now over, and official judge Gwern agrees I've won. Before I gloat, let's look at the images that got us here. https://www.astralcodexten.com/p/now-i-really-won-that-ai-bet
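(An aside of mine, not from the post, on how forgiving the scoring rule is: if a model renders a scene fully correct with independent probability p per image, the chance that at least one of ten images succeeds is

$$1 - (1 - p)^{10}$$

so even a model that nails a given prompt only 20% of the time passes that prompt with probability 1 - 0.8^10 ≈ 89%.)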
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. It was originally given an Honorable Mention, but since last week's piece was about an exciting new experimental school, I decided to promote this more conservative review as a counterpoint.] “Democracy is the worst form of Government except for all those other forms that have been tried from time to time.” - Winston Churchill “There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don't see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.” - G.K. Chesterton https://www.astralcodexten.com/p/your-review-school
[Original thread here: Missing Heritability: Much More Than You Wanted To Know]
1: Comments From People Named In The Post
2: Very Long Comments From Other Very Knowledgeable People
3: Small But Important Corrections
4: Other Comments
https://www.astralcodexten.com/p/highlights-from-the-comments-on-missing-ed5
[I haven't independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can't guarantee I will have caught them all by the time you read this.] https://www.astralcodexten.com/p/links-for-july-2025
Stephen Skolnick is a gut microbiome expert blogging at Eat Shit And Prosper. His most recent post argues that contra the psychiatric consensus, schizophrenia isn't genetic at all - it's caused by a gut microbe. He argues:
- Scientists think schizophrenia is genetic because it obviously runs in families.
- But the twin concordance rates are pretty low - if your identical twin has schizophrenia, there's only about a 30%-40% chance that you get it too. Is that really what we would expect from a genetic disease?
- Also, scientists have looked for schizophrenia genes, and can only find about 1-2% as many as they were expecting.
- So maybe we should ask how a disease can run in families without being genetic. Gut microbiota provide an answer: most people “catch” their gut microbiome from their parents.
- Studies find that schizophrenics have very high levels of a gut bacterium called Ruminococcus gnavus. This bacterium secretes psychoactive chemicals. Constant exposure to these chemicals might be the cause of schizophrenia.
I disagree with all of this. Going in order: https://www.astralcodexten.com/p/contra-skolnick-on-schizophrenia
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I'll be posting about one of these a week for several months. When you've read them all, I'll ask you to vote for a favorite, so remember which ones you liked] “Just as we don't accept students using AI to write their essays, we will not accept districts using AI to supplant the critical role of teachers.” — Arthur Steinberg, American Federation of Teachers‑PA, reacting to Alpha's cyber‑charter bid, January 2025
In January 2025, the charter school application of “Unbound Academy”, a subsidiary of “2 Hour Learning, Inc”, lit up the education press: two hours of “AI‑powered” academics, 2.6x learning velocity, and zero teachers. Sympathetic reporters repeated the slogans; union leaders reached for pitchforks; Reddit muttered “another rich‑kid scam.” More sophisticated critics dismissed the pitch as “selective data from expensive private schools”. But there is nowhere on the internet that provides a detailed, non-partisan description of what the “2 hour learning” program actually is, let alone an objective third-party analysis to back up its claims. 2-Hour Learning's flagship school is the “Alpha School” in Austin, Texas. The Alpha homepage makes three claims:
1. Love School
2. Learn 2X in two hours per day
3. Learn Life Skills
Only the second claim seems to be controversial, which may be exactly why that is the claim the Alpha PR team focuses on. That PR campaign makes three more sub-claims on what the two-hour, 2x learning really means:
1. “Learn 2.6X faster.” (on average)
2. “Only two hours of academics per day.”
3. “Powered by AI (not teachers).”
If all of this makes your inner Bayesian flinch, you're in good company. After twenty‑odd years of watching shiny education fixes wobble and crash—KIPP, AltSchool, Summit Learning, One-laptop-per-child, No Child Left Behind, MOOCs, Khan‑for‑Everything—you should be skeptical. Either Alpha is (a) another program for the affluent propped up by selection effects, or (b) a clever way to turn children into joyless speed‑reading calculators. Those were, more or less, the two critical camps that emerged when Alpha's parent company was approved to launch the tuition‑free Arizona charter school this past January. Unfortunately, the public evidence base on whether this is “real” is thin in both directions. Alpha's own material is glossy and elliptical; mainstream coverage either repeats Alpha's talking points, or attacks the premise that kids should even be allowed to learn faster than their peers. Until Raj Chetty installs himself in the hallway with a clipboard counting MAP percentiles, it is hard to get real information on what exactly Alpha is doing, whether it is actually working beyond selection effects, and whether there is any way it could scale where all the other education initiatives failed. I first heard about Alpha in May 2024, and in the absence of randomized‑controlled clarity, I did what any moderately obsessive parent with three elementary-aged kids and an itch for data would do: I moved the family across the country to Austin for a year and ran the experiment myself (unfortunately, despite trying my best we never managed to have identical twins, so I stopped short of running a proper control group. My wife was less disappointed than I was).
Since last autumn I've collected the sort of on‑the‑ground detail that doesn't surface in press releases and isn't available anywhere online: long chats with founders, curriculum leads, “guides” (not teachers), Brazilian Zoom coaches, sceptical parents, ecstatic parents, and the kids who live inside the Alpha dashboard – including my own. I hope this seven-part review shares what the program actually is - a review more open-minded than the critics', but one that would never get past an Alpha public relations gatekeeper: https://www.astralcodexten.com/p/your-review-alpha-school
The Story So Far
The mid-20th century was the golden age of nurture. Psychoanalysis, behaviorism, and the spirit of the ‘60s convinced most experts that parents, peers, and propaganda were the most important causes of adult personality. Starting in the 1970s, the pendulum swung the other way. Twin studies shocked the world by demonstrating that most behavioral traits - especially socially relevant traits like IQ - were substantially genetic. Typical estimates for adult IQ found it was about 60% genetic, 40% unpredictable, and barely related at all to parenting or family environment. By the early 2000s, genetic science reached a point where scientists could start pinpointing the particular genes behind any given trait. Early candidate gene studies, which hoped to find single genes with substantial contributions to IQ, depression, or crime, mostly failed. They were replaced with genome-wide association studies, which accepted that most interesting traits were polygenic - controlled by hundreds or thousands of genes - and trawled the whole genome searching for variants that might explain 0.1% or even 0.01% of the pie. The goal shifted toward polygenic scores - algorithms that accepted thousands of genes as input and spit out predictions of IQ, heart disease risk, or some other outcome of interest. The failed candidate gene studies had sample sizes in the three or four digits. The new genome-wide studies needed five or six digits to even get started. It was prohibitively difficult for individual studies to gather so many subjects, genotype them, and test them for the outcome of interest, so work shifted to big centralized genome repositories - most of all the UK Biobank - and easy-to-measure traits. Among the easiest of all was educational attainment (EA), ie how far someone had gotten in school. Were they a high school dropout? A PhD? Somewhere in between? This correlated with all the spicy outcomes of interest people wanted to debate - IQ, wealth, social class - while being objective and easy to ask about on a survey. Twin studies suggested that IQ was about 60% genetic, and EA about 40%. This seemed to make sense at the time - how far someone gets in school depends partly on their intelligence, but partly on fuzzier social factors like class / culture / parenting. The first genome-wide studies and polygenic scores found enough genes to explain 2 percentage points of this 40% pie. The remaining 38% - which twin studies deemed genetic, but where researchers couldn't find the genes - became known as “the missing heritability” or “the heritability gap”. Scientists came up with two hypotheses for the gap, which have been dueling ever since:
1. Maybe twin studies are wrong.
2. Maybe there are genes we haven't found yet.
For most of the 2010s, hypothesis 2 looked pretty good. Researchers gradually gathered bigger and bigger sample sizes, and found more and more of the missing heritability. A big 2018 study increased the predictive power of known genes from 2% to 10%. An even bigger 2022 study increased it to 14%, and current state of the art is around 17%. Seems like it was sample size after all! Once the samples get big enough we'll reach 40% and finally close the gap, right? This post is the story of how that didn't happen, of the people trying to rehabilitate the twin-studies-are-wrong hypothesis, and of the current status of the debate.
Its most important influence/foil is Sasha Gusev, whose blog The Infinitesimal introduced me to the new anti-hereditarian movement and got me to research it further, but it's also inspired by Eric Turkheimer, Alex Young (not himself an anti-hereditarian, but his research helped ignite interest in this area), and Awais Aftab. (while I was working on this draft, the East Hunter Substack wrote a similar post. Theirs is good and I recommend it, but I think this one adds enough that I'm publishing anyway) https://www.astralcodexten.com/p/missing-heritability-much-more-than
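A reference point for the “predictive power” figures above - my gloss, not from the post: a polygenic score is, at its simplest, a weighted sum over genotyped variants,

$$\hat{y} = \sum_{i=1}^{m} \beta_i \, g_i$$

where g_i ∈ {0, 1, 2} counts copies of the effect allele at variant i and β_i is the effect size the association study estimated for it. The 2%, 10%, 14%, and 17% figures are the share of variance in educational attainment that this weighted sum explains in held-out samples.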
Related to: ACX Grants 1-3 Year Updates https://www.astralcodexten.com/p/open-questions-for-future-acx-grants
The first cohort of ACX Grants was announced in late 2021, the second in early 2024. In 2022, I posted one-year updates for the first cohort. Now, as I start thinking about a third round, I've collected one-year updates on the second and three-year updates on the first. Many people said my request for updates went to their spam folder; relatedly, many people have not yet sent in their updates. If you're a grantee who didn't see my original email, but you do see this post, please fill in the update form here. All quote blocks are the grantees' own words; text outside of quote blocks is my commentary. https://readscottalexander.com/posts/acx-acx-grants-1-3-year-updates
This is a reported phenomenon where if two copies of Claude talk to each other, they end up spiraling into rapturous discussion of spiritual bliss, Buddhism, and the nature of consciousness. From the system card: Anthropic swears they didn't do this on purpose; when they ask Claude why this keeps happening, Claude can't explain. Needless to say, this has made lots of people freak out / speculate wildly. I think there are already a few good partial explanations of this (especially Nostalgebraist here), but they deserve to be fleshed out and spread more fully. https://www.astralcodexten.com/p/the-claude-bliss-attractor
This is another heuristic from the same place as If It's Worth Your Time To Lie, It's Worth My Time To Correct You. If someone proves you are absolutely, 100% wrong about something, it's polite to say “Oh, I guess I was wrong, sorry” before launching into your next argument. That is, instead of: https://readscottalexander.com/posts/acx-but-vs-yes-but
People don't like nitpickers. “He literally did the WELL AKTUALLY!” If you say Joe Criminal committed ten murders and five rapes, and I object that it was actually only six murders and two rapes, then why am I “defending” Joe Criminal? Because if it's worth your time to lie, it's worth my time to correct it. https://www.astralcodexten.com/p/if-its-worth-your-time-to-lie-its
There's a long-running philosophical argument about the conceivability of otherwise-normal people who are not conscious, aka “philosophical zombies”. This has spawned a shorter-running (only fifteen years!) rationalist sub-argument on the topic. The last time I checked its status was this post, which says:
1. Both Yudkowsky and Chalmers agree that humans possess “qualia”.
2. Chalmers argues that a superintelligent being which somehow knew the positions of all particles in a large region of the Universe would need to be told as an additional fact that any humans (or other minds possessing qualia) in this region of space possess qualia – it could not deduce this from mere perfect physical knowledge of their constituent particles. Therefore, qualia are in some sense extra-physical.
3. Yudkowsky argues that such a being would notice that humans discuss at length the fact that they possess qualia, and their internal narratives also represent this fact. It is extraordinarily improbable that beings would behave in this manner if they did not actually possess qualia. Therefore an omniscient being would conclude that it is extremely likely that humans possess qualia. Therefore, qualia are not extra-physical.
I want to re-open this (sorry!) by disagreeing with the bolded sentence. I think beings would talk about qualia - the “mysterious redness of red” and all that - even if we start by assuming they don't have it. I realize this is a surprising claim, but that's why it's interesting enough to re-open the argument over. https://www.astralcodexten.com/p/p-zombies-would-report-qualia
It's time to narrow the 141 entries in the Non-Book Review Contest to about a dozen finalists. I can't read 141 reviews alone, so I need your help. Please pick as many as you have time for, read them, and rate them using this form. Don't read them in order! If you read them in order, I'll have 1,000 votes on the first review, 500 on the second, and so on to none in the second half. Either pick a random review (thanks to Taymon for making a random-review-chooser script here) or scroll through the titles until you find one that catches your interest - you can see individual entries here (thanks to a reader for collating them):
Other (A - I)
Other (J - S)
Other (T - Z)
Games
Music
TV/Movies
Again, the rating form is here. Thanks! You have until June 20, when I'll count the votes and announce the finalists. https://readscottalexander.com/posts/acx-choose-nonbook-review-finalists-2025
A guest post by Brandon Hendrickson [Editor's note: I accept guest posts from certain people, especially past Book Review Contest winners. Brandon Hendrickson, whose review of The Educated Mind won the 2023 contest, has taken me up on this and submitted this essay. He writes at The Lost Tools of Learning and will be at LessOnline this weekend, where he and Jack Despain Zhou aka TracingWoodgrains will be doing a live conversation about education.] I began my book review from a couple of years back with a rather simple question: Could a new kind of school make the world rational? What followed, however, was a sprawling distillation of one scholar's answer that I believe still qualifies as “the longest thing anyone has submitted for an ACX contest”. Since then I've been diving into particulars, exploring how to use the insights I learned while writing it to start re-enchanting all the academic subjects from kindergarten to high school. But in the fun of all that, I fear I've lost touch with that original question. How, even in theory, could a method of education help all students become rational? It probably won't surprise you that I think part of the answer is Bayes' theorem. But the equation is famously prickly and off-putting: https://www.astralcodexten.com/p/bayes-for-everyone
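For reference - the equation itself doesn't survive in the excerpt above - the formula in question is Bayes' theorem, which in standard notation reads

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

i.e., your belief in hypothesis H after seeing evidence E is your prior belief in H, scaled by how much more expected the evidence is when H is true than it is overall.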
Tyler Cowen of Marginal Revolution continues to disagree with my Contra MR On Charity Regrants. Going through his response piece by piece, slightly out of order: Scott takes me to be endorsing Rubio's claim that the third-party NGOs simply pocket the money. In reality my fact check with o3 found (correctly) that the money was “channelled through” the NGOs, not pocketed. Scott lumps my claim together with Rubio's as if we were saying the same thing. My very next words (“I do understand that not all third party allocations are wasteful…”) show a clear understanding that the money is channeled, not pocketed, and my earlier and longer post on US AID makes that clearer yet at greater length. Scott is simply misrepresenting me here. The full post is in the image below: https://www.astralcodexten.com/p/sorry-i-still-think-mr-is-wrong-about
Consciousness is the great mystery. In search of answers, scientists have plumbed every edge case they can think of - sleep, comas, lucid dreams, LSD trips, meditative ecstasies, seizures, neurosurgeries, that one pastor in 18th century England who claimed a carriage accident turned him into a p-zombie. Still, new stuff occasionally turns up. I assume this tweet is a troll (source: the guy has a frog avatar): https://www.astralcodexten.com/p/moments-of-awakening
I often disagree with Marginal Revolution, but their post today made me a new level of angry: https://www.astralcodexten.com/p/contra-mr-on-charity-regrants
Many commenters responded to yesterday's post by challenging the claim that 1.2 million Americans died of COVID... https://www.astralcodexten.com/p/the-evidence-that-a-million-americans
Five years later, we can't stop talking about COVID. Remember lockdowns? The conflicting guidelines about masks - don't wear them! Wear them! Maybe wear them! School closures, remote learning, learning loss, something about teachers' unions. That one Vox article on how worrying about COVID was anti-Chinese racism. The time Trump sort of half-suggested injecting disinfectants. Hydroxychloroquine, ivermectin, fluvoxamine, Paxlovid. Those jerks who tried to pressure you into getting vaccines, or those other jerks who wouldn't get vaccines even though it put everyone else at risk. Anthony Fauci, Pierre Kory, Great Barrington, Tomas Pueyo, Alina Chan. Five years later, you can open up any news site and find continuing debate about all of these things. The only thing about COVID nobody talks about anymore is the 1.2 million deaths. https://www.astralcodexten.com/p/the-other-covid-reckoning
Bryan Caplan's Selfish Reasons To Have More Kids is like the Bible. You already know what it says. You've already decided whether you believe or not. Do you really have to read it all the way through? But when you're going through a rough patch in your life, sometimes it helps to pick up a Bible and look for pearls of forgotten wisdom. That's where I am now. Having twins is a lot of work. My wife does most of it. My nanny does most of what's left. Even so, the remaining few hours a day leave me exhausted. I decided to read the canonical book on how having kids is easier and more fun than you think, to see if maybe I was overdoing something. After many trials, tribulations, false starts, grabs, shrieks, and attacks of opportunity . . . https://www.astralcodexten.com/p/book-review-selfish-reasons-to-have
Ask Redditors what's the worst subreddit, and a few names always come up. /r/atheism and /r/childfree are unpopular, but if I read them with an open mind, I always end up sympathetic - neither lifestyle is persecuted in my particular corner of society, but the Redditors there have usually been through some crazy stuff, and I don't begrudge them a place to vent. The one that really floors me is /r/petfree. The denizens of /r/petfree don't like pets. Their particular complaints vary, but most common are:
- Some stores either allow pets or don't enforce bans on them, and then pets go in those stores, and they are dirty and annoying.
- Some parks either allow off-leash pets or don't enforce bans on them, and then there are off-leash pets in those parks, and they are dirty and annoying.
- Sometimes pets attack people.
- Sometimes inconsiderate people get pets they can't take care of and offload some of the burden onto you.
- Sometimes people are cringe about their pets, in an “AWWWWW MY PRECIOUS WITTLE FUR BABY” way.
- Sometimes people barge into spaces that are about something else and talk about their pets instead.
These are all valid complaints. But the people on /r/petfree go a little far: https://www.astralcodexten.com/p/in-search-of-rpetfree
Thanks to everyone who commented on the original post. https://www.astralcodexten.com/p/highlights-from-the-comments-on-ai
Some of the more unhinged writing on superintelligence pictures AI doing things that seem like magic. Crossing air gaps to escape its data center. Building nanomachines from simple components. Plowing through physical bottlenecks to revolutionize the economy in months. More sober thinkers point out that these things might be physically impossible. You can't do physically impossible things, even if you're very smart. No, say the speculators, you don't understand. Everything is physically impossible when you're 800 IQ points too dumb to figure it out. A chimp might feel secure that humans couldn't reach him if he climbed a tree; he could never predict arrows, ladders, chainsaws, or helicopters. What superintelligent strategies lie as far outside our solution set as “use a helicopter” is outside a chimp's? https://www.astralcodexten.com/p/testing-ais-geoguessr-genius
Cathy Young's new hit piece on Curtis Yarvin (aka Mencius Moldbug) doesn't mince words. Titled The Blogger Who Hates America, it describes him as an "inept", "not exactly coherent" "trollish, ill-informed pseudo-intellectual" notable for his "woefully superficial knowledge and utter ignorance". Yarvin's fans counter that if you look deeper, he has good responses to Young's objections: Both sides are right. The synthesis is that Moldbug sold out. In the late 2000s, Moldbug wrote some genuinely interesting speculations on novel sci-fi variants of autocracy. Admitting that the dictatorships of the 20th century were horrifying, he proposed creative ways to patch their vulnerabilities by combining 18th century monarchy with 22nd century cyberpunk to create something better than either. These ideas might not have been realistic. But they were cool, edgy, and had a certain intellectual appeal. Then in the late 2010s, he caught his first whiff of actual power and dropped it all like a hot potato. The MAGA movement was exactly what 2000s Moldbug feared most - a cancerous outgrowth of democracy riding the same wave of populist anger as the 20th century dictatorships he loathed. But in the hope of winning a temporary political victory, he let them wear him as a skinsuit - giving their normal, boring autocratic tendencies the mystique of the cool, edgy, all-vulnerabilities-patched autocracy he foretold in his manifestos. https://www.astralcodexten.com/p/moldbug-sold-out
President Trump's approval rating has fallen to near-historic lows. With economic disruption from the tariffs likely to hit next month, his numbers will probably get even worse; this administration could reach unprecedented levels of unpopularity. If I were a far-right populist, I would be thinking hard about a strategy to prevent the blowback from crippling the movement. Such a strategy is easy to come by. Anger over DOGE and deportations has a natural floor. If Trump's base starts abandoning him, it will be because of the tariffs. But tariffs aren't a load-bearing part of the MAGA platform. Other right-populist leaders like Orban, Bukele, and Modi show no interest in them. They seem an idiosyncratic obsession of Trump's, a cost that the rest of the movement pays to keep him around. So, (our hypothetical populist strategist might start thinking after Trump's approval hits the ocean trenches and starts drilling) - whatever. MAGA minus Trump's personal idiosyncrasies can remain a viable platform. You don't even have to exert any effort to make it happen. Trump will retire in 2028 and pass the torch to Vance. And although Vance supports tariffs now, that's only because he's a spineless toady. After Trump leaves the picture, Vance will gain thirty IQ points, make an eloquent speech about how tariffs were the right tool for the mid-2020s but no longer, and the problem will solve itself. Right? Don't let them get away with this. Although it's true that tariffs owe as much to Trump's idiosyncrasies as to the inexorable logic of right-wing populism, the ability of a President to hold the nation hostage to his own idiosyncrasies is itself a consequence of populist ideology. https://www.astralcodexten.com/p/the-populist-right-must-own-tariffs
AI Futures Project is the group behind AI 2027. I've been helping them with their blog. Posts written or co-written by me include:
- Beyond The Last Horizon - what's behind that METR result showing that AI time horizons double every seven months? And is it really every seven months? Might it be faster?
- AI 2027: Media, Reactions, Criticism - a look at some of the response to AI 2027, with links to some of the best objections and the team's responses.
- Why America Wins - why we predict that America will stay ahead of China on AI in the near future, and what could change this.
I will probably be shifting most of my AI blogging there for a while to take advantage of access to the team's expertise. There's also a post on transparency by Daniel Kokotajlo, and we hope to eventually host writing by other team members as well. https://www.astralcodexten.com/p/ai-futures-blogging-and-ama
[I haven't independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can't guarantee I will have caught them all by the time you read this.] https://www.astralcodexten.com/p/links-for-april-2025
(original post: Come On, Obviously The Purpose Of A System Is Not What It Does) … Thanks to everyone who commented on this controversial post. Many people argued that the phrase had some valuable insight, but disagreed on what it was. The most popular meaning was something like “if a system consistently fails at its stated purpose, but people don't change it, consider that the stated purpose is less important than some actual, hidden purpose, at which it is succeeding”. I agree you should consider this, but I still object to the original phrase, for several reasons. https://www.astralcodexten.com/p/highlights-from-the-comments-on-posiwid
(see Wikipedia: The Purpose Of A System Is What It Does) Consider the following claims:
- The purpose of a cancer hospital is to cure two-thirds of cancer patients.
- The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia.
- The purpose of the British government is to propose a controversial new sentencing policy, stand firm in the face of protests for a while, then cave in after slightly larger protests and agree not to pass the policy after all.
- The purpose of the New York bus system is to emit four billion tons of carbon dioxide.
These are obviously false. https://www.astralcodexten.com/p/come-on-obviously-the-purpose-of
Here's a list of things I updated on after working on the scenario. Some of these are discussed in more detail in the supplements, including the compute forecast, timelines forecast, takeoff forecast, AI goals forecast, and security forecast. I'm highlighting these because it seems like a lot of people missed their existence, and they're what transforms the scenario from cool story to research-backed debate contribution. These are my opinions only, and not necessarily endorsed by the rest of the team. https://www.astralcodexten.com/p/my-takeaways-from-ai-2027
We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution. We wrote a scenario that represents our best guess about what that might look like.1 It's informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes. https://ai-2027.com/ (A condensed two-hour version with footnotes and text boxes removed is available at the above link.)
Or maybe 2028, it's complicated
In 2021, a researcher named Daniel Kokotajlo published a blog post called “What 2026 Looks Like”, where he laid out what he thought would happen in AI over the next five years. The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn't expect what happened next. He got it all right. Okay, not literally all. The US restricted chip exports to China in late 2022, not mid-2024. AI first beat humans at Diplomacy in late 2022, not 2025. And of course the mid-2025 to 2026 period remains to be seen. But to put its errors in context, Daniel's document was written two years before ChatGPT existed. Nobody except researchers and a few hobbyists had ever talked to an AI. In fact, talking to AI was a misnomer. There was no way to make them continue the conversation; they would free associate based on your prompt, maybe turning it into a paragraph-length short story. If you pulled out all the stops, you could make an AI add single digit numbers and get the right answer more than 50% of the time. Yet if you read Daniel's blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years. I wasn't the only one who noticed. A year later, OpenAI hired Daniel to their policy team. While he worked for them, he was limited in his ability to speculate publicly. “What 2026 Looks Like” promised a sequel about 2027 and beyond, but it never materialized. Unluckily for Sam Altman but luckily for the rest of us, Daniel broke with OpenAI mid-2024 in a dramatic split covered by the New York Times and others. He founded the AI Futures Project to produce the promised sequel, including:
- Eli Lifland, a superforecaster who is ranked first on RAND's Forecasting Initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.
- Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early-stage investment in Anthropic, now worth $60 billion.
- Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.
- Romeo Dean, a leader of Harvard's AI Safety Student Team and budding expert in AI hardware.
…and me! Since October, I've been volunteering part-time, doing some writing and publicity work. I can't take credit for the forecast itself - or even for the lion's share of the writing and publicity - but it's been an immense privilege to work alongside some of the smartest and most epistemically virtuous people I know, trying to absorb their worldview on a level deep enough to do it justice. We have no illusions that we'll get as lucky as last time, but we still think it's a valuable contribution to the discussion. https://www.astralcodexten.com/p/introducing-ai-2027 https://ai-2027.com/
In Ballad of the White Horse, G.K. Chesterton describes the Virgin Mary:
Her face was like an open word
When brave men speak and choose,
The very colours of her coat
Were better than good news.
Why the colors of her coat? The medievals took their dyes very seriously. This was before modern chemistry, so you had to try hard if you wanted good colors. Try hard they did; they famously used literal gold, hammered into ultrathin sheets, to make golden highlights. Blue was another tough one. You could do mediocre, half-faded blues with azurite. But if you wanted perfect blue, the color of the heavens on a clear evening, you needed ultramarine. Here is the process for getting ultramarine. First, go to Afghanistan. Keep in mind, you start in England or France or wherever. Afghanistan is four thousand miles away. Your path takes you through tall mountains, burning deserts, and several dozen Muslim countries that are still pissed about the whole Crusades thing. Still alive? After you arrive, climb 7,000 feet in the mountains of Kuran Wa Munjan until you reach the mines of Sar-i-Sang. There, in a freezing desert, the wretched of the earth work themselves to an early grave breaking apart the rocks of Badakhshan to produce a few hundred kilograms per year of blue stone - the only lapis lazuli production in the known world. Buy the stone and retrace your path through the burning deserts and vengeful Muslims until you're back in England or France or wherever. Still alive? That was the easy part. Now you need to go through a chemical extraction process that makes the Philosopher's Stone look like freshman chem lab. "The lengthy process of pulverization, sifting, and washing to produce ultramarine makes the natural pigment … roughly ten times more expensive than the stone it came from." Finally you have ultramarine! How much? I can't find good numbers, but Claude estimates that the ultramarine production of all of medieval Europe was on the order of 30 kg per year - not enough to paint a medium-sized wall. Ultramarine had to be saved for ultra-high-value applications. In practice, the medievals converged on a single use case - painting the Virgin Mary's coat. https://www.astralcodexten.com/p/the-colors-of-her-coat
Asterisk invited me to participate in their “Weird” themed issue, so I wrote five thousand words on evil Atlantean cave dwarves. As always, I thought of the perfect framing just after I'd sent it out. The perfect framing is - where did Scientology come from? How did a 1940s sci-fi writer found a religion? Part of the answer is that 1940s sci-fi fandom was a really fertile place, where all of these novel mythemes about aliens, psychics, and lost civilizations were hitting a naive population certain that there must be something beyond the world they knew. This made them easy prey not just for grifters like Hubbard, but also for random schizophrenics who could write about their hallucinations convincingly. …but I didn't think of that framing in time, so instead you get several sections of why it's evil cave dwarves in particular, and why that theme seems to recur throughout all lands and ages: https://www.astralcodexten.com/p/deros-and-the-ur-abduction-in-asterisk https://asteriskmag.com/issues/09/deros-and-the-ur-abduction
People love trying to find holes in the drowning child thought experiment. This is natural: it's obvious you should save the child in the scenario, but much less obvious that you should give lots of charity to poor people (as it seems to imply). So there must be some distinction between the two scenarios. But most people's cursory and uninspired attempts to find these fail. https://www.astralcodexten.com/p/more-drowning-children
Jake Eaton has a great article on misophonia in Asterisk. Misophonia is a condition in which people can't tolerate certain noises (classically chewing). Nobody loves chewing noises, but misophoniacs go above and beyond, sometimes ending relationships, shutting themselves indoors, or even deliberately trying to deafen themselves in an attempt to escape. So it's a sensory hypersensitivity, right? Maybe not. There's increasing evidence - which I learned about from Jake, but which didn't make it into the article - that misophonia is less about sound than it seems. Misophoniacs who go deaf report that it doesn't go away. Now they get triggered if they see someone chewing. It's the same with other noises. Someone who gets triggered by the sound of forks scraping against a table will eventually get triggered by the sight of the scraping fork. Someone triggered by music will eventually get triggered by someone playing a music video on mute. Maybe this isn't surprising? https://www.astralcodexten.com/p/misophonia-beyond-sensory-sensitivity
Last month, I put out a request for experts to help me understand the details of OpenAI's forprofit buyout. The following comes from someone who has looked into the situation in depth but is not an insider. Mistakes are mine alone.
Why Was OpenAI A Nonprofit In The First Place?
In the early 2010s, the AI companies hadn't yet discovered scaling laws, and so underestimated the amount of compute (and therefore money) it would take to build AI. DeepMind was the first victim; originally founded on high ideals of prioritizing safety and responsible stewardship of the Singularity, it hit a financial barrier and sold to Google. This scared Elon Musk, who didn't trust Google (or any corporate sponsor) with AGI. He teamed up with Sam Altman and others, and OpenAI was born. To avoid duplicating DeepMind's failure, they founded it as a nonprofit with a mission to “build safe and beneficial artificial general intelligence for the benefit of humanity”. But like DeepMind, OpenAI needed money. At first, they scraped by with personal donations from Musk and other idealists, but as the full impact of scaling laws became clearer, Altman wanted to form a forprofit arm and seek investment. Musk and Altman disagree on what happened next: Musk said he objected to the profit focus, Altman says Musk agreed but wanted to be in charge. In any case, Musk left, Altman took full control, and OpenAI founded a forprofit subsidiary. This subsidiary was supposedly a “capped forprofit”, meaning that their investors were capped at 100x return - if someone invested $1 million, they could get a max of $100 million back, no matter how big OpenAI became - this ensured that the majority of gains from a Singularity would go to humanity rather than investors. But a capped forprofit isn't a real kind of corporate structure; in real life OpenAI handles this through Profit Participation Units, a sort of weird stock/bond hybrid which does what OpenAI claims the capped forprofit model is doing. https://www.astralcodexten.com/p/openai-nonprofit-buyout-much-more
Sorry, you can only get drugs when there's a drug shortage. Three GLP-1 drugs are approved for weight loss in the United States:
- Semaglutide (Ozempic®, Wegovy®, Rybelsus®)
- Tirzepatide (Mounjaro®, Zepbound®)
- Liraglutide (Victoza®, Saxenda®)
…but liraglutide is noticeably worse than the others, and most people prefer either semaglutide or tirzepatide. These cost about $1000/month and are rarely covered by insurance, putting them out of reach for most Americans. …if you buy them from the pharma companies, like a chump. For the past three years, there's been a shortage of these drugs. FDA regulations say that during a shortage, it's semi-legal for compounding pharmacies to provide medications without getting the patent-holders' permission. In practice, that means they get cheap peptides from China, do some minimal safety testing in house, and sell them online. So for the past three years, telehealth startups working with compounding pharmacies have sold these drugs for about $200/month. Over two million Americans have made use of this loophole to get weight loss drugs for cheap. But there was always a looming question - what happens when the shortage ends? Many people have to stay on GLP-1 drugs permanently, or else they risk regaining their lost weight. But many can't afford $1000/month. What happens to them? Now we'll find out. At the end of last year, the FDA declared the shortage over. The compounding pharmacies appealed the decision, but last month the FDA confirmed its decision was final. As of March 19 (for tirzepatide) and April 22 (for semaglutide), compounding pharmacies will no longer be able to sell cheap GLP-1 drugs. Let's take a second to think of the real victims here: telehealth company stockholders. https://www.astralcodexten.com/p/the-ozempocalypse-is-nigh
Most headlines have said something like New NAEP Scores Dash Hope Of Post-COVID Learning Recovery, which seems like a fair assessment. I feel bad about this, because during lockdowns I argued that kids' educational outcomes don't suffer long-term from missing a year or two of school. Re-reading the post, I still think my arguments make sense. So how did I get it so wrong? When I consider this question, I ask myself: do I expect complete recovery in two years? In 2026, we will see a class of fourth graders who hadn't even started school when the lockdowns ended. They will have attended kindergarten through 4th grade entirely in person, with no opportunity for “learning loss”. If there's a sudden switch to them doing just as well as the 2015 kids, then it was all lockdown-induced learning loss and I suck. But if not, then what? Maybe the downward trend isn't related to COVID? On the graph above, the national (not California) trend started in the 2017 - 2019 period, ie before COVID. And the states that tried hardest to keep their schools open did little better than anyone else: https://www.astralcodexten.com/p/what-happened-to-naep-scores
I enjoy the yearly book review contest, but it feels like last year's contest is barely done, and I want to give you a break so you can read more books before we start over. So this year, let's do something different. Submit an ACX-length post reviewing something, anything, except a book. You can review a movie, song, or video game. You can review a product, restaurant, or tourist attraction. But don't let the usual categories limit you. Review comic books or blog posts. Review political parties - no, whole societies! Review animals or trees! Review an oddly-shaped pebble, or a passing cloud! Review abstract concepts! Mathematical proofs! Review love, death, or God Himself! (please don't review human races, I don't need any more NYT articles) Otherwise, the usual rules apply. There's no official word count requirement, but previous finalists and winners were often between 2,000 and 10,000 words. There's no official recommended style, but check the style of last year's finalists and winners or my ACX book reviews (1, 2, 3) if you need inspiration. Please limit yourself to one entry per person or team. Then send me your review through this Google Form. The form will ask for your name, email, the thing you're reviewing, and a link to a Google Doc. The Google Doc should have your review exactly as you want me to post it if you're a finalist. DON'T INCLUDE YOUR NAME OR ANY HINT ABOUT YOUR IDENTITY IN THE GOOGLE DOC ITSELF, ONLY IN THE FORM. I want to make this contest as blinded as possible, so I'm going to hide that column in the form immediately and try to judge your docs on their merit. https://www.astralcodexten.com/p/everything-except-book-review-contest
Intelligence seems to correlate with total number of neurons in the brain. Different animals' intelligence levels track the number of neurons in their cerebral cortices (cerebellum etc don't count). Neuron number predicts animal intelligence better than most other variables like brain size, brain size divided by body size, “encephalization quotient”, etc. This is most obvious in certain bird species that have tiny brains full of tiny neurons and are very smart (eg crows, parrots). Humans with bigger brains have on average higher IQ. AFAIK nobody has done the obvious next step and seen whether people with higher IQ have more neurons. This could be because the neuron-counting process involves dissolving the brain into a “soup”, and maybe this is too mad-science-y for the fun-hating spoilsports who run IRBs. But common sense suggests bigger brains increase IQ because they have more neurons in humans too. Finally, AIs with more neurons (sometimes described as the related quantity “more parameters”) seem common-sensically smarter and perform better on benchmarks. This is part of what people mean by “scaling”, ie the reason GoogBookZon is spending $500 billion building a data center the size of the moon. All of this suggests that intelligence heavily depends on number of neurons, and most scientists think something like this is true. But how can this be? https://www.astralcodexten.com/p/why-should-intelligence-be-related
[I haven't independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can't guarantee I will have caught them all by the time you read this.] https://www.astralcodexten.com/p/links-for-february-2025
Conflict theory is the belief that political disagreements come from material conflict. So for example, if rich people support capitalism, and poor people support socialism, this isn't because one side doesn't understand economics. It's because rich people correctly believe capitalism is good for the rich, and poor people correctly believe socialism is good for the poor. Or if white people are racist, it's not because they have some kind of mistaken stereotypes that need to be corrected - it's because they correctly believe racism is good for white people. Some people comment on my more political posts claiming that they're useless. You can't (they say) produce change by teaching people Economics 101 or the equivalent. Conflict theorists understand that nobody ever disagreed about Economics 101. Instead you should try to organize and galvanize your side, so they can win the conflict. I think simple versions of conflict theory are clearly wrong. This doesn't mean that simple versions of mistake theory (the idea that people disagree because of reasoning errors, like not understanding Economics 101) are automatically right. But it gives some leeway for thinking harder about how reasoning errors and other kinds of error interact. https://readscottalexander.com/posts/acx-why-i-am-not-a-conflict-theorist
[Original thread here: Tegmark's Mathematical Universe Defeats Most Arguments For God's Existence.]
1: Comments On Specific Technical Points
2: Comments From Bentham's Bulldog's Response
3: Comments On Philosophical Points, And Getting In Fights
https://www.astralcodexten.com/p/highlights-from-the-comments-on-tegmarks
St. Felix publicly declared that he believed with 79% probability that COVID had a natural origin. He was brought before the Emperor, who threatened him with execution unless he updated to 100%. When St. Felix refused, the Emperor was impressed with his integrity, and said he would release him if he merely updated to 90%. St. Felix refused again, and the Emperor, fearing revolt, promised to release him if he merely rounded up one percentage point to 80%. St. Felix cited Tetlock's research showing that the last digit contained useful information, refused a third time, and was crucified. St. Clare was so upset about believing false things during her dreams that she took modafinil every night rather than sleep. She completed several impressive programming projects before passing away of sleep deprivation after three weeks; she was declared a martyr by Pope Raymond II. https://www.astralcodexten.com/p/lives-of-the-rationalist-saints
It feels like 2010 again - the bloggers are debating the proofs for the existence of God. I found these much less interesting after learning about Max Tegmark's mathematical universe hypothesis, and this doesn't seem to have reached the Substack debate yet, so I'll put it out there. Tegmark's hypothesis says: all possible mathematical objects exist. Consider a mathematical object like a cellular automaton - a set of simple rules that creates complex behavior. The most famous is Conway's Game of Life; the second most famous is the universe. After all, the universe is a starting condition (the Big Bang) and a set of simple rules determining how the starting condition evolves over time (the laws of physics). Some mathematical objects contain conscious observers. Conway's Life might be like this: it's Turing complete, so if a computer can be conscious then you can get consciousness in Life. If you built a supercomputer and had it run the version of Life with the conscious being, then you would be “simulating” the being, and bringing it into existence. There would be something it was like to be that being; it would have thoughts and experiences and so on. https://www.astralcodexten.com/p/tegmarks-mathematical-universe-defeats
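As a concrete illustration of “a set of simple rules that creates complex behavior” - my sketch, not from the post - the entire Game of Life fits in a few lines of Python:

from collections import Counter

def step(live_cells: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Advance the Game of Life one generation; cells are (x, y) pairs."""
    # Each live cell adds one to the neighbor count of the eight cells around it.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A dead cell with exactly 3 live neighbors is born;
    # a live cell with 2 or 3 live neighbors survives; everything else dies.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live_cells)}

# A "glider": five cells that re-form themselves one square diagonally onward
# every four generations, crawling across the infinite grid forever.
start = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
glider = set(start)
for _ in range(4):
    glider = step(glider)
print(glider == {(x + 1, y + 1) for (x, y) in start})  # True

The starting set plus the one-line update rule fix everything that will ever happen - the same sense in which the post calls the universe “a starting condition and a set of simple rules”.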
From the Commerce Department: U.S. Senate Commerce Committee Chairman Ted Cruz (R-Texas) released a database identifying over 3,400 grants, totaling more than $2.05 billion in federal funding awarded by the National Science Foundation (NSF) during the Biden-Harris administration. This funding was diverted toward questionable projects that promoted Diversity, Equity, and Inclusion (DEI) or advanced neo-Marxist class warfare propaganda. I saw many scientists complain that the projects from their universities that made Cruz's list were unrelated to wokeness. This seemed like a surprising failure mode, so I decided to investigate. The Commerce Department provided a link to their database, so I downloaded it, chose a random 100 grants, read the abstracts, and rated them either woke, not woke, or borderline. Of the hundred:
- 40% were woke
- 20% were borderline
- 40% weren't woke
This is obviously in some sense a subjective determination, but most cases weren't close - I think any good-faith examination would turn up similar numbers. https://readscottalexander.com/posts/acx-only-about-40-of-the-cruz-woke-science