The Nonlinear Library: LessWrong Top Posts


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.


    • Latest episode: Dec 12, 2021
    • New episodes: infrequent
    • Average duration: 16m
    • Episodes: 493



    Latest episodes from The Nonlinear Library: LessWrong Top Posts

    Eight Short Studies On Excuses by Scott Alexander

    Dec 12, 2021 · 15:53


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eight Short Studies On Excuses, published by Scott Alexander on LessWrong. The Clumsy Game-Player You and a partner are playing an Iterated Prisoner's Dilemma. Both of you have publicly pre-committed to the tit-for-tat strategy. By iteration 5, you're going happily along, raking in the bonuses of cooperation, when your partner unexpectedly presses the "defect" button. "Uh, sorry," says your partner. "My finger slipped." "I still have to punish you just in case," you say. "I'm going to defect next turn, and we'll see how you like it." "Well," says your partner, "knowing that, I guess I'll defect next turn too, and we'll both lose out. But hey, it was just a slipped finger. By not trusting me, you're costing us both the benefits of one turn of cooperation." "True," you respond, "but if I don't do it, you'll feel free to defect whenever you feel like it, using the 'finger slipped' excuse." "How about this?" proposes your partner. "I promise to take extra care that my finger won't slip again. You promise that if my finger does slip again, you will punish me terribly, defecting for a bunch of turns. That way, we trust each other again, and we can still get the benefits of cooperation next turn." You don't believe that your partner's finger really slipped, not for an instant. But the plan still seems like a good one. You accept the deal, and you continue cooperating until the experimenter ends the game. After the game, you wonder what went wrong, and whether you could have played better. You decide that there was no better way to deal with your partner's "finger-slip" - after all, the plan you enacted gave you maximum possible utility under the circumstances. But you wish that you'd pre-committed, at the beginning, to saying "and I will punish finger slips equally to deliberate defections, so make sure you're careful." The Lazy Student You are a perfectly utilitarian school teacher, who attaches exactly the same weight to others' welfare as to your own. You have to have the reports of all fifty students in your class ready by the time midterm grades go out on January 1st. You don't want to have to work during Christmas vacation, so you set a deadline that all reports must be in by December 15th or you won't grade them and the students will fail the class. Oh, and your class is Economics 101, and as part of a class project all your students have to behave as selfish utility-maximizing agents for the year. It costs your students 0 utility to turn in the report on time, but they gain +1 utility by turning it in late (they enjoy procrastinating). It costs you 0 utility to grade a report turned in before December 15th, but -30 utility to grade one after December 15th. And students get 0 utility from having their reports graded on time, but get -100 utility from having a report marked incomplete and failing the class. If you say "There's no penalty for turning in your report after deadline," then the students will procrastinate and turn in their reports late, for a total of +50 utility (1 per student times fifty students). You will have to grade all fifty reports during Christmas break, for a total of -1500 utility (-30 per report times fifty reports). Total utility is -1450. So instead you say "If you don't turn in your report on time, I won't grade it."
All students calculate the cost of being late, which is +1 utility from procrastinating and -100 from failing the class, and turn in their reports on time. You get all reports graded before Christmas, no students fail the class, and total utility loss is zero. Yay! Or else - one student comes to you the day after deadline and says "Sorry, I was really tired yesterday, so I really didn't want to come all the way here to hand in my report. I expect you'll grade my report anyway, because I know you to be a perfect utilitarian, an...
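
    A minimal sketch of the payoff arithmetic in the Lazy Student scenario above, assuming only the utilities stated in the excerpt; the constant and function names are illustrative, not from the post:

```python
# A tally (not from the post) of the Lazy Student payoffs quoted above, for a
# hypothetical class of fifty selfish utility-maximizing students.

NUM_STUDENTS = 50
STUDENT_LATE_BONUS = 1       # each student gains +1 utility by procrastinating
TEACHER_LATE_GRADING = -30   # teacher's utility per report graded after December 15th
STUDENT_FAIL_PENALTY = -100  # a student's utility for having a report marked incomplete

def no_penalty_policy():
    """Teacher grades everything, so every selfish student turns the report in late."""
    return NUM_STUDENTS * STUDENT_LATE_BONUS + NUM_STUDENTS * TEACHER_LATE_GRADING  # 50 - 1500

def hard_deadline_policy():
    """Teacher refuses to grade late reports: failing (-100) outweighs the +1 from
    procrastinating, so every student turns the report in on time."""
    return 0

print(no_penalty_policy())     # -1450
print(hard_deadline_policy())  # 0
```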

    Making Vaccine by johnswentworth

    Dec 12, 2021 · 9:46


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Making Vaccine, published by johnswentworth on LessWrong. Back in December, I asked how hard it would be to make a vaccine for oneself. Several people pointed to radvac. It was a best-case scenario: an open-source vaccine design, made for self-experimenters, dead simple to make with readily-available materials, well-explained reasoning about the design, and with the name of one of the world's more competent biologists (who I already knew of beforehand) stamped on the whitepaper. My girlfriend and I made a batch a week ago and took our first booster yesterday. This post talks a bit about the process, a bit about our plan, and a bit about motivations. Bear in mind that we may have made mistakes - if something seems off, leave a comment. The Process All of the materials and equipment to make the vaccine cost us about $1000. We did not need any special licenses or anything like that. I do have a little wetlab experience from my undergrad days, but the skills required were pretty minimal. One vial of custom peptide - that little pile of white powder at the bottom. The large majority of the cost (about $850) was the peptides. These are the main active ingredients of the vaccine: short segments of proteins from the COVID virus. They're all

    The Best Textbooks on Every Subject by lukeprog

    Dec 12, 2021 · 15:01


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Best Textbooks on Every Subject, published by lukeprog on LessWrong. For years, my self-education was stupid and wasteful. I learned by consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures, peer-reviewed papers, Teaching Company courses, and Cliff's Notes. How inefficient! I've since discovered that textbooks are usually the quickest and best way to learn new material. That's what they are designed to be, after all. Less Wrong has often recommended the "read textbooks!" method. Make progress by accumulation, not random walks. But textbooks vary widely in quality. I was forced to read some awful textbooks in college. The ones on American history and sociology were memorably bad, in my case. Other textbooks are exciting, accurate, fair, well-paced, and immediately useful. What if we could compile a list of the best textbooks on every subject? That would be extremely useful. Let's do it. There have been other pages of recommended reading on Less Wrong before (and elsewhere), but this post is unique. Here are the rules: Post the title of your favorite textbook on a given subject. You must have read at least two other textbooks on that same subject. You must briefly name the other books you've read on the subject and explain why you think your chosen textbook is superior to them. Rules #2 and #3 are to protect against recommending a bad book that only seems impressive because it's the only book you've read on the subject. Once, a popular author on Less Wrong recommended Bertrand Russell's A History of Western Philosophy to me, but when I noted that it was more polemical and inaccurate than the other major histories of philosophy, he admitted he hadn't really done much other reading in the field, and only liked the book because it was exciting. I'll start the list with three of my own recommendations... Subject: History of Western Philosophy Recommendation: The Great Conversation, 6th edition, by Norman Melchert Reason: The most popular history of western philosophy is Bertrand Russell's A History of Western Philosophy, which is exciting but also polemical and inaccurate. More accurate but dry and dull is Frederick Copleston's 11-volume A History of Philosophy. Anthony Kenny's recent 4-volume history, collected into one book as A New History of Western Philosophy, is both exciting and accurate, but perhaps too long (1000 pages) and technical for a first read on the history of philosophy. Melchert's textbook, The Great Conversation, is accurate but also the easiest to read, and has the clearest explanations of the important positions and debates, though of course it has its weaknesses (it spends too many pages on ancient Greek mythology but barely mentions Gottlob Frege, the father of analytic philosophy and of the philosophy of language). Melchert's history is also the only one to seriously cover the dominant mode of Anglophone philosophy done today: naturalism (what Melchert calls "physical realism"). Be sure to get the 6th edition, which has major improvements over the 5th edition.
Subject: Cognitive Science Recommendation: Cognitive Science, by Jose Luis Bermudez Reason: Jose Luis Bermudez's Cognitive Science: An Introduction to the Science of Mind does an excellent job setting the historical and conceptual context for cognitive science, and draws fairly from all the fields involved in this heavily interdisciplinary science. Bermudez does a good job of making himself invisible, and the explanations here are some of the clearest available. In contrast, Paul Thagard's Mind: Introduction to Cognitive Science skips the context and jumps right into a systematic comparison (by explanatory merit) of the leading theories of mental representation: logic, rules, concepts, analogies, images, and neural networks. The book is o...

    Preface by Eliezer Yudkowsky

    Dec 12, 2021 · 5:34


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Preface, published by Eliezer Yudkowsky on LessWrong. You hold in your hands a compilation of two years of daily blog posts. In retrospect, I look back on that project and see a large number of things I did completely wrong. I'm fine with that. Looking back and not seeing a huge number of things I did wrong would mean that neither my writing nor my understanding had improved since 2009. Oops is the sound we make when we improve our beliefs and strategies; so to look back at a time and not see anything you did wrong means that you haven't learned anything or changed your mind since then. It was a mistake that I didn't write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples. In retrospect, this was the second-largest mistake in my approach. It ties in to the first-largest mistake in my writing, which was that I didn't realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn't realize that part was the priority; and regarding this I can only say “Oops” and “Duh.” Yes, sometimes those big issues really are big and really are important; but that doesn't change the basic truth that to master skills you need to practice them and it's harder to practice on things that are further away. (Today the Center for Applied Rationality is working on repairing this huge mistake of mine in a more systematic fashion.) A third huge mistake I made was to focus too much on rational belief, too little on rational action. The fourth-largest mistake I made was that I should have better organized the content I was presenting in the sequences. In particular, I should have created a wiki much earlier, and made it easier to read the posts in sequence. That mistake at least is correctable. In the present work Rob Bensinger has reordered the posts and reorganized them as much as he can without trying to rewrite all the actual material (though he's rewritten a bit of it). My fifth huge mistake was that I—as I saw it—tried to speak plainly about the stupidity of what appeared to me to be stupid ideas. I did try to avoid the fallacy known as Bulverism, which is where you open your discussion by talking about how stupid people are for believing something; I would always discuss the issue first, and only afterwards say, “And so this is stupid.” But in 2009 it was an open question in my mind whether it might be important to have some people around who expressed contempt for homeopathy. I thought, and still do think, that there is an unfortunate problem wherein treating ideas courteously is processed by many people on some level as “Nothing bad will happen to me if I say I believe this; I won't lose status if I say I believe in homeopathy,” and that derisive laughter by comedians can help people wake up from the dream. Today I would write more courteously, I think. The discourtesy did serve a function, and I think there were people who were helped by reading it; but I now take more seriously the risk of building communities where the normal and expected reaction to low-status outsider views is open mockery and contempt. 
Despite my mistake, I am happy to say that my readership has so far been amazingly good about not using my rhetoric as an excuse to bully or belittle others. (I want to single out Scott Alexander in particular here, who is a nicer person than I am and an increasingly amazing writer on these topics, and may deserve part of the credit for making the culture of Less Wrong a healthy one.) To be able to look backwards and say that you've “failed” implies that you had goals. So what was it that I was trying to do? Th...

    Rationalism before the Sequences by Eric Raymond

    Dec 12, 2021 · 18:03


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rationalism before the Sequences, published by Eric Raymond on LessWrong. I'm here to tell you a story about what it was like to be a rationalist decades before the Sequences and the formation of the modern rationalist community. It is not the only story that could be told, but it is one that runs parallel to and has important connections to Eliezer Yudkowsky's and how his ideas developed. My goal in writing this essay is to give the LW community a sense of the prehistory of their movement. It is not intended to be "where Eliezer got his ideas"; that would be stupidly reductive. I aim more to exhibit where the drive and spirit of the Yudkowskian reform came from, and the interesting ways in which Eliezer's formative experiences were not unique. My standing to write this essay begins with the fact that I am roughly 20 years older than Eliezer and read many of his sources before he was old enough to read. I was acquainted with him over an email list before he wrote the Sequences, though I somehow managed to forget those interactions afterwards and only rediscovered them while researching for this essay. In 2005 he had even sent me a book manuscript to review that covered some of the Sequences topics. My reaction on reading "The Twelve Virtues of Rationality" a few years later was dual. It was a different kind of writing than the book manuscript - stronger, more individual, taking some serious risks. On the one hand, I was deeply impressed by its clarity and courage. On the other hand, much of it seemed very familiar, full of hints and callbacks and allusions to books I knew very well. Today it is probably more difficult to back-read Eliezer's sources than it was in 2006, because the body of more recent work within his reformation of rationalism tends to get in the way. I'm going to attempt to draw aside that veil by talking about four specific topics: General Semantics, analytic philosophy, science fiction, and Zen Buddhism. Before I get to those specifics, I want to try to convey that sense of what it was like. I was a bright geeky kid in the 1960s and 1970s, immersed in a lot of obscure topics often with an implicit common theme: intelligence can save us! Learning how to think more clearly can make us better! But at the beginning I was groping as if in a dense fog, unclear about how to turn that belief into actionable advice. Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today's Internet fora. More often than not, though, the clue would be fictional; somebody's imagination about what it would be like to increase intelligence, to burn away error and think more clearly. When I found non-fiction sources on rationality and intelligence increase I devoured them. Alas, most were useless junk. But in a few places I found gold. Not by coincidence, the places I found real value were sources Eliezer would later draw on. I'm not guessing about this, I was able to confirm it first from Eliezer's explicit reports of what influenced him and then via an email conversation. Eliezer and I were not unique. We know directly of a few others with experiences like ours. 
There were likely dozens of others we didn't know - possibly hundreds - on parallel paths, all hungrily seeking clarity of thought, all finding largely overlapping subsets of clues and techniques because there simply wasn't that much out there to be mined. One piece of evidence for this parallelism besides Eliezer's reports is that I bounced a draft of this essay off Nancy Lebovitz, a former LW moderator who I've known personally since the 1970s. Her instant reaction? "Full of stuff I knew already." Around the time Nancy and I first met, some years before Eliezer Yudk...

    Schelling fences on slippery slopes by Scott Alexander

    Dec 12, 2021 · 9:20


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Schelling fences on slippery slopes, published by Scott Alexander on LessWrong. Slippery slopes are themselves a slippery concept. Imagine trying to explain them to an alien: "Well, we right-thinking people are quite sure that the Holocaust happened, so banning Holocaust denial would shut up some crackpots and improve the discourse. But it's one step on the road to things like banning unpopular political positions or religions, and we right-thinking people oppose that, so we won't ban Holocaust denial." And the alien might well respond: "But you could just ban Holocaust denial, but not ban unpopular political positions or religions. Then you right-thinking people get the thing you want, but not the thing you don't want." This post is about some of the replies you might give the alien. Abandoning the Power of Choice This is the boring one without any philosophical insight that gets mentioned only for completeness' sake. In this reply, giving up a certain point risks losing the ability to decide whether or not to give up other points. For example, if people gave up the right to privacy and allowed the government to monitor all phone calls, online communications, and public places, then if someone launched a military coup, it would be very difficult to resist them because there would be no way to secretly organize a rebellion. This is also brought up in arguments about gun control a lot. I'm not sure this is properly thought of as a slippery slope argument at all. It seems to be a more straightforward "Don't give up useful tools for fighting tyranny" argument. The Legend of Murder-Gandhi Previously on Less Wrong's The Adventures of Murder-Gandhi: Gandhi is offered a pill that will turn him into an unstoppable murderer. He refuses to take it, because in his current incarnation as a pacifist, he doesn't want others to die, and he knows that would be a consequence of taking the pill. Even if we offered him $1 million to take the pill, his abhorrence of violence would lead him to refuse. But suppose we offered Gandhi $1 million to take a different pill: one which would decrease his reluctance to murder by 1%. This sounds like a pretty good deal. Even a person with 1% less reluctance to murder than Gandhi is still pretty pacifist and not likely to go killing anybody. And he could donate the money to his favorite charity and perhaps save some lives. Gandhi accepts the offer. Now we iterate the process: every time Gandhi takes the 1%-more-likely-to-murder-pill, we offer him another $1 million to take the same pill again. Maybe original Gandhi, upon sober contemplation, would decide to accept $5 million to become 5% less reluctant to murder. Maybe 95% of his original pacifism is the only level at which he can be absolutely sure that he will still pursue his pacifist ideals. Unfortunately, original Gandhi isn't the one making the choice of whether or not to take the 6th pill. 95%-Gandhi is. And 95% Gandhi doesn't care quite as much about pacifism as original Gandhi did. He still doesn't want to become a murderer, but it wouldn't be a disaster if he were just 90% as reluctant as original Gandhi, that stuck-up goody-goody. What if there were a general principle that each Gandhi was comfortable with Gandhis 5% more murderous than himself, but no more? 
Original Gandhi would start taking the pills, hoping to get down to 95%, but 95%-Gandhi would start taking five more, hoping to get down to 90%, and so on until he's rampaging through the streets of Delhi, killing everything in sight. Now we're tempted to say Gandhi shouldn't even take the first pill. But this also seems odd. Are we really saying Gandhi shouldn't take what's basically a free million dollars to turn himself into 99%-Gandhi, who might well be nearly indistinguishable in his actions from the original? Maybe Gandhi's best...
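
    A toy simulation of the Murder-Gandhi slide described above, assuming each pill trades one percentage point of pacifism for $1 million and each Gandhi tolerates being five points more murderous than he currently is; the function and its parameters are illustrative, not from the post:

```python
# Toy model (not from the post) of the Murder-Gandhi slide. Pacifism is a percentage
# of the original Gandhi's reluctance to murder. Each pill costs 1 point of pacifism
# and pays $1M. Every Gandhi is willing to end up 5 points more murderous than he
# currently is, but the Gandhi who results applies the same rule again, so without a
# pre-committed stopping point (a Schelling fence) the slide only stops at zero.

def take_pills(start=100, tolerance=5, schelling_fence=None):
    pacifism, earnings = start, 0
    while pacifism > 0:
        current_limit = pacifism - tolerance        # where the current Gandhi would stop
        stop_at = max(current_limit, schelling_fence or 0, 0)
        if pacifism <= stop_at:
            break
        pacifism -= 1
        earnings += 1_000_000
    return pacifism, earnings

print(take_pills())                    # (0, 100000000): rampaging through the streets of Delhi
print(take_pills(schelling_fence=95))  # (95, 5000000): the pre-committed fence holds
```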

    Diseased thinking: dissolving questions about disease by Scott Alexander

    Dec 12, 2021 · 15:44


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diseased thinking: dissolving questions about disease, published by Scott Alexander on LessWrong. Related to: Disguised Queries, Words as Hidden Inferences, Dissolving the Question, Eight Short Studies on Excuses Today's therapeutic ethos, which celebrates curing and disparages judging, expresses the liberal disposition to assume that crime and other problematic behaviors reflect social or biological causation. While this absolves the individual of responsibility, it also strips the individual of personhood, and moral dignity -- George Will, townhall.com Sandy is a morbidly obese woman looking for advice. Her husband has no sympathy for her, and tells her she obviously needs to stop eating like a pig, and would it kill her to go to the gym once in a while? Her doctor tells her that obesity is primarily genetic, and recommends the diet pill orlistat and a consultation with a surgeon about gastric bypass. Her sister tells her that obesity is a perfectly valid lifestyle choice, and that fat-ism, equivalent to racism, is society's way of keeping her down. When she tells each of her friends about the opinions of the others, things really start to heat up. Her husband accuses her doctor and sister of absolving her of personal responsibility with feel-good platitudes that in the end will only prevent her from getting the willpower she needs to start a real diet. Her doctor accuses her husband of ignorance of the real causes of obesity and of the most effective treatments, and accuses her sister of legitimizing a dangerous health risk that could end with Sandy in hospital or even dead. Her sister accuses her husband of being a jerk, and her doctor of trying to medicalize her behavior in order to turn it into a "condition" that will keep her on pills for life and make lots of money for Big Pharma. Sandy is fictional, but similar conversations happen every day, not only about obesity but about a host of other marginal conditions that some consider character flaws, others diseases, and still others normal variation in the human condition. Attention deficit disorder, internet addiction, social anxiety disorder (as one skeptic said, didn't we used to call this "shyness"?), alcoholism, chronic fatigue, oppositional defiant disorder ("didn't we used to call this being a teenager?"), compulsive gambling, homosexuality, Asperger's syndrome, antisocial personality, even depression have all been placed in two or more of these categories by different people. Sandy's sister may have a point, but this post will concentrate on the debate between her husband and her doctor, with the understanding that the same techniques will apply to evaluating her sister's opinion. The disagreement between Sandy's husband and doctor centers around the idea of "disease". If obesity, depression, alcoholism, and the like are diseases, most people default to the doctor's point of view; if they are not diseases, they tend to agree with the husband. The debate over such marginal conditions is in many ways a debate over whether or not they are "real" diseases. The usual surface level arguments trotted out in favor of or against the proposition are generally inconclusive, but this post will apply a host of techniques previously discussed on Less Wrong to illuminate the issue. What is Disease? 
    In Disguised Queries, Eliezer demonstrates how a word refers to a cluster of objects related upon multiple axes. For example, in a company that sorts red smooth translucent cubes full of vanadium from blue furry opaque eggs full of palladium, you might invent the word "rube" to designate the red cubes, and another, "blegg", to designate the blue eggs. Both words are useful because they "carve reality at the joints" - they refer to two completely separate classes of things which it's practically useful to keep in separate cat...
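
    A toy illustration of what a cluster-style category means in practice, using the rube/blegg features from the excerpt; the feature dictionaries and scoring rule are assumptions of this sketch, not from the posts:

```python
# Toy illustration (not from the posts) of a cluster-style category: classify an
# object by how many observed features it shares with each archetype. Feature lists
# follow the rube/blegg example above; the scoring rule is an assumption of this sketch.

BLEGG = {"color": "blue", "texture": "furry", "opacity": "opaque",
         "shape": "egg", "metal": "palladium"}
RUBE = {"color": "red", "texture": "smooth", "opacity": "translucent",
        "shape": "cube", "metal": "vanadium"}

def score(obj, archetype):
    """Count how many of the object's observed features match the archetype."""
    return sum(1 for key, value in obj.items() if archetype.get(key) == value)

# A non-central member: blue and egg-shaped, but smooth and full of vanadium.
odd_object = {"color": "blue", "shape": "egg", "texture": "smooth", "metal": "vanadium"}
print("blegg score:", score(odd_object, BLEGG))  # 2
print("rube score: ", score(odd_object, RUBE))   # 2
# Ties like this are exactly the marginal cases the disease debate above is about.
```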

    Generalizing From One Example by Scott Alexander

    Dec 12, 2021 · 8:30


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Generalizing From One Example, published by Scott Alexander on LessWrong. Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective "Everyone generalizes from one example. At least, I do." -- Vlad Taltos (Issola, Steven Brust) My old professor, David Berman, liked to talk about what he called the "typical mind fallacy", which he illustrated through the following example: There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like? Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed. The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery[1] to three percent of people completely unable to form mental images[2]. Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's. He kind of took this idea and ran with it. He interpreted certain passages in George Berkeley's biography to mean that Berkeley was an eidetic imager, and that this was why the idea of the universe as sense-perception held such interest to him. He also suggested that experience of consciousness and qualia were as variable as imaging, and that philosophers who deny their existence (Ryle? Dennett? Behaviorists?) were simply people whose mind lacked the ability to easily experience qualia. In general, he believed philosophy of mind was littered with examples of philosophers taking their own mental experiences and building theories on them, and other philosophers with different mental experiences critiquing them and wondering why they disagreed. The formal typical mind fallacy is about serious matters of mental structure. But I've also run into something similar with something more like the psyche than the mind: a tendency to generalize from our personalities and behaviors. For example, I'm about as introverted a person as you're ever likely to meet - anyone more introverted than I am doesn't communicate with anyone. All through elementary and middle school, I suspected that the other children were out to get me. They kept on grabbing me when I was busy with something and trying to drag me off to do some rough activity with them and their friends. 
When I protested, they counter-protested and told me I really needed to stop whatever I was doing and come join them. I figured they were bullies who were trying to annoy me, and found ways to hide from them and scare them off. Eventually I realized that it was a double misunderstanding. They figured I must be like them, and the only thing keeping me from playing their fun games was that I was too shy. I figured they must be like me, and that the only re...

    Reason as memetic immune disorder by PhilGoetz

    Dec 12, 2021 · 7:33


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reason as memetic immune disorder, published by PhilGoetz on LessWrong. A prophet is without dishonor in his hometown. I'm reading the book "The Year of Living Biblically," by A.J. Jacobs. He tried to follow all of the commandments in the Bible (Old and New Testaments) for one year. He quickly found that a lot of the rules in the Bible are impossible, illegal, or embarrassing to follow nowadays; like wearing tassels, tying your money to yourself, stoning adulterers, not eating fruit from a tree less than 5 years old, and not touching anything that a menstruating woman has touched; and this didn't seem to bother more than a handful of the one-third to one-half of Americans who claim the Bible is the word of God. You may have noticed that people who convert to religion after the age of 20 or so are generally more zealous than people who grew up with the same religion. People who grow up with a religion learn how to cope with its more inconvenient parts by partitioning them off, rationalizing them away, or forgetting about them. Religious communities actually protect their members from religion in one sense - they develop an unspoken consensus on which parts of their religion members can legitimately ignore. New converts sometimes try to actually do what their religion tells them to do. I remember many times growing up when missionaries described the crazy things their new converts in remote areas did on reading the Bible for the first time - they refused to be taught by female missionaries; they insisted on following Old Testament commandments; they decided that everyone in the village had to confess all of their sins against everyone else in the village; they prayed to God and assumed He would do what they asked; they believed the Christian God would cure their diseases. We would always laugh a little at the naivete of these new converts; I could barely hear the tiny voice in my head saying but they're just believing that the Bible means what it says... How do we explain the blindness of people to a religion they grew up with? Cultural immunity Europe has lived with Christianity for nearly 2000 years. European culture has co-evolved with Christianity. Culturally, memetically, it's developed a tolerance for Christianity. These new Christian converts, in Uganda, Papua New Guinea, and other remote parts of the world, were being exposed to Christian memes for the first time, and had no immunity to them. The history of religions sometimes resembles the history of viruses. Judaism and Islam were both highly virulent when they first broke out, driving the first generations of their people to conquer (Islam) or just slaughter (Judaism) everyone around them for the sin of not being them. They both grew more sedate over time. (Christianity was pacifist at the start, as it arose in a conquered people. When the Romans adopted it, it didn't make them any more militaristic than they already were.) The mechanism isn't the same as for diseases, which can't be too virulent or they kill their hosts. Religions don't generally kill their hosts. I suspect that, over time, individual selection favors those who are less zealous. The point is that a culture develops antibodies for the particular religions it co-exists with - attitudes and practices that make them less virulent. I have a theory that "radical Islam" is not native Islam, but Westernized Islam. 
Over half of 75 Muslim terrorists studied by Bergen & Pandey 2005 in the New York Times had gone to a Western college. (Only 9% had attended madrassas.) A very small percentage of all Muslims have received a Western college education. When someone lives all their life in a Muslim country, they're not likely to be hit with the urge to travel abroad and blow something up. But when someone from an Islamic nation goes to Europe for college, and co...
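
    A rough base-rate sketch of why the statistic above is striking; the 50% figure comes from the excerpt, while the 1% base rate is a made-up illustrative number, not from the post:

```python
# Base-rate sketch of the statistic above (Bergen & Pandey 2005: over half of the 75
# terrorists studied had attended a Western college, only 9% a madrassa). The 1%
# base rate below is a made-up illustrative number, NOT from the post; the point is
# only how over-represented Western education is relative to some small base rate.

p_college_given_terrorist = 0.50   # "over half", from the post
p_college_base_rate = 0.01         # hypothetical share of all Muslims with a Western college education

lift = p_college_given_terrorist / p_college_base_rate
print(f"Western college education is ~{lift:.0f}x over-represented in this sample.")  # ~50x
```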

    Pain is not the unit of Effort by alkjash

    Dec 12, 2021 · 7:44


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pain is not the unit of Effort, published by alkjash on LessWrong. This is a linkpost. (Content warning: self-harm, parts of this post may be actively counterproductive for readers with certain mental illnesses or idiosyncrasies.) What doesn't kill you makes you stronger. ~ Kelly Clarkson. No pain, no gain. ~ Exercise motto. The more bitterness you swallow, the higher you'll go. ~ Chinese proverb. I noticed recently that, at least in my social bubble, pain is the unit of effort. In other words, how hard you are trying is explicitly measured by how much suffering you put yourself through. In this post, I will share some anecdotes of how damaging and pervasive this belief is, and propose some counterbalancing ideas that might help rectify this problem. I. Anecdotes 1. As a child, I spent most of my evenings studying mathematics under some amount of supervision from my mother. While studying, if I expressed discomfort or fatigue, my mother would bring me a snack or drink and tell me to stretch or take a break. I think she took it as a sign that I was trying my best. If on the other hand I was smiling or joyful for extended periods of time, she took that as a sign that I had effort to spare and increased the hours I was supposed to study each day. To this day there's a gremlin on my shoulder that whispers, "If you're happy, you're not trying your best." 2. A close friend who played sports in school reports that training can be harrowing. He told me that players who fell behind the pack during daily jogs would be singled out and publicly humiliated. One time the coach screamed at my friend for falling behind the asthmatic boy who was alternating between running and using his inhaler. Another time, my friend internalized "no pain, no gain" to the point of losing his toenails. 3. In high school and college, I was surrounded by overachievers constantly making (what seemed to me) incomprehensibly bad life choices. My classmates would sign up for eight classes per semester when the recommended number is five, jigsaw extracurricular activities into their calendar like a dynamic programming knapsack-solver, and then proceed to have loud public complaining contests about which libraries are most comfortable to study at past 2am and how many pages they have left to write for the essay due in three hours. Only later did I learn to ask: what incentives were they responding to? 4. A while ago I became a connoisseur of Chinese webnovels. Among those written for a male audience, there is a surprisingly diverse set of character traits represented among the main characters. Doubtless many are womanizing murderhobos with no redeeming qualities, but others are classical heroes with big hearts, or sarcastic antiheroes who actually grow up a little, or ambitious empire-builders with grand plans to pave the universe with Confucian order, or down-on-their-luck starving artists who just want to bring happiness to the world through song. If there is a single common virtue shared by all these protagonists, it is their superhuman pain tolerance. 
Protagonists routinely and often voluntarily dunk themselves in vats of lava, have all their bones broken, shattered, and reforged, get trapped inside alternate dimensions of freezing cold for millennia (which conveniently only takes a day in the outside world), and overdose on level-up pills right up to the brink of death, all in the name of becoming stronger. Oftentimes the defining difference between the protagonist and the antagonist is that the antagonist did not have enough pain tolerance and allowed the (unbearable physical) suffering in his life to drive him mad. 5. I have a close friend who often asks for my perspective on personal problems. A pattern arose in a couple of our conversations: alkjash: I feel like you're not ac...

    Bets, Bonds, and Kindergarteners by jefftk

    Dec 12, 2021 · 2:40


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bets, Bonds, and Kindergarteners, published by jefftk on LessWrong. Bets and bonds are tools for handling different epistemic states and levels of trust. Which makes them a great fit for negotiating with small children! A few weeks ago Anna (4y) wanted to play with some packing material. It looked very messy to me, I didn't expect she would clean it up, and I didn't want to fight with her about cleaning it up. I considered saying no, but after thinking about how things like this are handled in the real world I had an idea. If you want to do a hazardous activity, and we think you might go bankrupt and not clean up, we make you post a bond. This money is held in escrow to fund the cleanup if you disappear. I explained how this worked, and she went and got a dollar. When she was done playing, she cleaned it up without complaint and got her dollar back. If she hadn't cleaned it up, I would have, and kept the dollar. Some situations are more complicated, and call for bets. I wanted to go to a park, but Lily (6y) didn't want to go to that park because the last time we had been there, there'd been lots of bees. I remembered that had been a summer with unusually many bees, and it no longer being that summer or, in fact, summer at all, I was not worried. Since I was so confident, I offered my $1 to her $0.10 that we would not run into bees at the park. This seemed fair to her, and when there were no bees she was happy to pay up. Over time, they've learned that my being willing to bet, especially at large odds, is pretty informative, and often all I need to do is offer. Lily was having a rough morning, crying by herself about a project not working out. I suggested some things that might be fun to do together, and she rejected them angrily. I told her that often when people are feeling that way, going outside can help a lot, and when she didn't seem to believe me I offered to bet. Once she heard the 10:1 odds I was offering her, I think she just started expecting that I was right, and she decided we should go ride bikes. (She didn't actually cheer up when we got outside: she cheered up as soon as she made this decision.) I do think there is some risk with this approach that the child will have a bad time just to get the money, or say they are having a bad time and they are actually not, but this isn't something we've run into. Another risk, if we were to wager large amounts, would be that the child would end up less happy than if I hadn't interacted with them at all. I handle this by making sure not to offer a bet I think they would regret losing, and while this is not a courtesy I expect people to make later in life, I think it's appropriate at their ages. Comment via: facebook. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
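
    A quick sketch of what a bet offer implies about confidence, assuming the offerer only proposes bets with non-negative expected value for himself; the function is illustrative, not from the post:

```python
# What does offering $1 against $0.10 imply about confidence? A minimal sketch,
# assuming the offerer only proposes bets with non-negative expected value for himself.

def min_confidence(my_stake, their_stake):
    """Smallest win probability p at which risking my_stake to win their_stake
    breaks even: p * their_stake - (1 - p) * my_stake >= 0."""
    return my_stake / (my_stake + their_stake)

print(min_confidence(1.00, 0.10))  # ~0.909: the "no bees at the park" offer implies >=91% confidence
print(min_confidence(10, 1))       # ~0.909: the 10:1 "going outside will help" offer, same threshold
```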

    Thoughts on the Singularity Institute (SI) by HoldenKarnofsky

    Dec 12, 2021 · 44:55


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on the Singularity Institute (SI), published by HoldenKarnofsky on LessWrong. This post presents thoughts on the Singularity Institute from Holden Karnofsky, Co-Executive Director of GiveWell. Note: Luke Muehlhauser, the Executive Director of the Singularity Institute, reviewed a draft of this post, and commented: "I do generally agree that your complaints are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI). I am working to address both categories of issues." I take Luke's comment to be a significant mark in SI's favor, because it indicates an explicit recognition of the problems I raise, and thus increases my estimate of the likelihood that SI will work to address them. September 2012 update: responses have been posted by Luke and Eliezer (and I have responded in the comments of their posts). I have also added acknowledgements. The Singularity Institute (SI) is a charity that GiveWell has been repeatedly asked to evaluate. In the past, SI has been outside our scope (as we were focused on specific areas such as international aid). With GiveWell Labs we are open to any giving opportunity, no matter what form and what sector, but we still do not currently plan to recommend SI; given the amount of interest some of our audience has expressed, I feel it is important to explain why. Our views, of course, remain open to change. (Note: I am posting this only to Less Wrong, not to the GiveWell Blog, because I believe that everyone who would be interested in this post will see it here.) I am currently the GiveWell staff member who has put the most time and effort into engaging with and evaluating SI. Other GiveWell staff currently agree with my bottom-line view that we should not recommend SI, but this does not mean they have engaged with each of my specific arguments. Therefore, while the lack of recommendation of SI is something that GiveWell stands behind, the specific arguments in this post should be attributed only to me, not to GiveWell. Summary of my views The argument advanced by SI for why the work it's doing is beneficial and important seems both wrong and poorly argued to me. My sense at the moment is that the arguments SI is making would, if accepted, increase rather than decrease the risk of an AI-related catastrophe. More SI has, or has had, multiple properties that I associate with ineffective organizations, and I do not see any specific evidence that its personnel/organization are well-suited to the tasks it has set for itself. More A common argument for giving to SI is that "even an infinitesimal chance that it is right" would be sufficient given the stakes. I have written previously about why I reject this reasoning; in addition, prominent SI representatives seem to reject this particular argument as well (i.e., they believe that one should support SI only if one believes it is a strong organization making strong arguments). More My sense is that at this point, given SI's current financial state, withholding funds from SI is likely better for its mission than donating to it. (I would not take this view to the furthest extreme; the argument that SI should have some funding seems stronger to me than the argument that it should have as much as it currently has.) 
I find existential risk reduction to be a fairly promising area for philanthropy, and plan to investigate it further. More There are many things that could happen that would cause me to revise my view on SI. However, I do not plan to respond to all comment responses to this post. (Given the volume of responses we may receive, I may not be able to even read all the comments on this post.) I do not believe these two statements are inconsistent, and I lay out paths for getting me...

    Humans are not automatically strategic by AnnaSalamon

    Dec 12, 2021 · 6:10


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Humans are not automatically strategic, published by AnnaSalamon on LessWrong. Reply to: A "Failure to Evaluate Return-on-Time" Fallacy Lionhearted writes: [A] large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped. A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995.... I'm curious as to why. Why will a randomly chosen eight-year-old fail a calculus test? Because most possible answers are wrong, and there is no force to guide him to the correct answers. (There is no need to postulate a “fear of success”; most ways of writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.) Why do most of us, most of the time, choose to "pursue our goals" through routes that are far less effective than the routes we could find if we tried?[1] My guess is that here, as with the calculus test, the main problem is that most courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective. To be more specific: there are clearly at least some limited senses in which we have goals. We: (1) tell ourselves and others stories of how we're aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don't achieve our “goals”. But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out. We do not automatically: (a) Ask ourselves what we're trying to achieve; (b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress; (c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal; (d) Gather that information (e.g., by asking how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven't worked for us in the past); (e) Systematically test many different conjectures for how to achieve the goals, including methods that aren't habitual for us, while tracking which ones do and don't work; (f) Focus most of the energy that isn't going into systematic exploration, on the methods that work best; (g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies; (h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting; .... or carry out any number of other useful techniques. 
Instead, we mostly just do things. We act from habit; we act from impulse or convenience when primed by the activities in front of us; we remember our goal and choose an action that feels associated with our goal. We do any number of things. But we do not systematically choose the narrow sets of actions that would effectively optimize for our claimed goals, or for any other goals. Why? Most basically, because humans are only just on the cusp o...

    Anti-Aging: State of the Art by JackH

    Dec 12, 2021 · 20:59


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anti-Aging: State of the Art, published by JackH on LessWrong. Aging is a problem that ought to be solved, and most Less Wrongers recognize this. However, few members of the community seem to be aware of the current state of the anti-aging field, and how close we are to developing effective anti-aging therapies. As a result, there is a much greater (and in my opinion, irrational) overemphasis on the Plan B of cryonics for life extension, rather than Plan A of solving aging. Both are important, but the latter is under-emphasised despite being a potentially more feasible strategy for life extension given the potentially high probability that cryonics will not work. Today, there are over 130 longevity biotechnology companies and over 50 anti-aging drugs in clinical trials in humans. The evidence is promising that in the next 5-10 years, we will start seeing robust evidence that aging can be therapeutically slowed or reversed in humans. Whether we live to see anti-aging therapies to keep us alive indefinitely (i.e. whether we make it to longevity escape velocity) depends on how much traction and funding the field gets in coming decades. In this post, I summarise the state of the art of the anti-aging field (also known as longevity biotechnology, rejuvenation biotechnology, translational biogerontology or geroscience). If you feel you already possess the necessary background on aging, feel free to skip to Part V. Part I: Why is Aging a problem? Aging is the biggest killer worldwide, and also the largest source of morbidity. Aging kills 100,000 people per day; more than twice the sum of all other causes of death. This equates to 37 million people - a population the size of Canada - dying per year of aging. In developed countries, 9 out of 10 deaths are due to aging. Aging also accounts for more than 30% of all disability-adjusted life years lost (DALYs); more than any other single cause. Deaths due to aging are not usually quick and painless, but preceded by 10-15 years of chronic illnesses such as cancer, type 2 diabetes and Alzheimer's disease. Quality of life typically deteriorates in older age, and the highest rates of depression worldwide are among the elderly. To give a relevant example of the effects of aging, consider that aging is primarily responsible for almost all COVID-19 deaths. This is observable in the strong association of COVID-19 mortality with age: the death rate from COVID-19 increases exponentially with age. This is not a coincidence - it is because biological aging weakens the immune system and results in a much higher chance of death from COVID-19. On a side note, waning immunity with age also increases cancer risk, as another example of how aging is associated with chronic illness. The mortality rate doubling time for COVID-19 is close to the all-cause mortality rate doubling time, suggesting that people who die of COVID-19 are really dying of aging. Without aging, COVID-19 would not be a global pandemic, since the death rate in individuals below 30 years old is extremely low. Part II: What does a world without aging look like? For those who have broken free of the pro-aging trance and recognise aging as a problem, there is the further challenge of imagining a world without aging. 
The prominent ‘black mirror' portrayals of immortality as a curse or hubristic may distort our model of what a world with anti-aging actually looks like. The 'white mirror' of aging is a world in which biological age is halted at 20-30 years, and people maintain optimal health for a much longer or indefinite period of time. Although people will still age chronologically (exist over time) they will not undergo physical and cognitive decline associated with biological aging. At chronological ages of 70s, 80s, even 200s, t...
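
    Two back-of-envelope checks on the figures above; the 100,000 deaths per day number comes from the excerpt, while the roughly 8-year all-cause mortality doubling time is a commonly cited Gompertz-law estimate assumed here, not a number from the post:

```python
# Back-of-envelope checks on the figures above. The 100,000 deaths/day figure is from
# the post; the ~8-year all-cause mortality doubling time is a commonly cited
# Gompertz-law estimate and is an assumption here, not a number from the post.

deaths_per_day = 100_000
print(deaths_per_day * 365)  # 36,500,000 per year -- roughly "a population the size of Canada"

def relative_mortality(age, reference_age=30, doubling_time_years=8):
    """All-cause mortality relative to a 30-year-old, assuming mortality doubles
    every `doubling_time_years` (a Gompertz-like exponential)."""
    return 2 ** ((age - reference_age) / doubling_time_years)

for age in (30, 50, 70, 90):
    print(age, round(relative_mortality(age), 1))  # 1.0, 5.7, 32.0, 181.0
```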

    The noncentral fallacy - the worst argument in the world? by Scott Alexander

    Dec 12, 2021 · 11:28


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The noncentral fallacy - the worst argument in the world?, published by Scott Alexander on LessWrong. Related to: Leaky Generalizations, Replace the Symbol With The Substance, Sneaking In Connotations David Stove once ran a contest to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process. If he can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this: "X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member." Call it the Noncentral Fallacy. It sounds dumb when you put it like that. Who even does that, anyway? It sounds dumb only because we are talking soberly of categories and features. As soon as the argument gets framed in terms of words, it becomes so powerful that somewhere between many and most of the bad arguments in politics, philosophy and culture take some form of the noncentral fallacy. Before we get to those, let's look at a simpler example. Suppose someone wants to build a statue honoring Martin Luther King Jr. for his nonviolent resistance to racism. An opponent of the statue objects: "But Martin Luther King was a criminal!" Any historian can confirm this is correct. A criminal is technically someone who breaks the law, and King knowingly broke a law against peaceful anti-segregation protest - hence his famous Letter from Birmingham Jail. But in this case calling Martin Luther King a criminal is the noncentral fallacy. The archetypal criminal is a mugger or bank robber. He is driven only by greed, preys on the innocent, and weakens the fabric of society. Since we don't like these things, calling someone a "criminal" naturally lowers our opinion of them. The opponent is saying "Because you don't like criminals, and Martin Luther King is a criminal, you should stop liking Martin Luther King." But King doesn't share the important criminal features of being driven by greed, preying on the innocent, or weakening the fabric of society that made us dislike criminals in the first place. Therefore, even though he is a criminal, there is no reason to dislike King. This all seems so nice and logical when it's presented in this format. Unfortunately, it's also one hundred percent contrary to instinct: the urge is to respond "Martin Luther King? A criminal? No he wasn't! You take that back!" This is why the noncentral fallacy is so successful. As soon as you do that you've fallen into their trap. Your argument is no longer about whether you should build a statue, it's about whether King was a criminal. Since he was, you have now lost the argument. Ideally, you should just be able to say "Well, King was the good kind of criminal." But that seems pretty tough as a debating maneuver, and it may be even harder in some of the cases where the noncentral fallacy is commonly used. Now I want to list some of these cases. Many will be political[1], for which I apologize, but it's hard to separate out a bad argument from its specific instantiations. None of these examples are meant to imply that the position they support is wrong (and in fact I myself hold some of them). 
They only show that certain particular arguments for the position are flawed, such as: "Abortion is murder!" The archetypal murder is Charles Manson breaking into your house and shooting you. This sort of murder is bad for a number of reasons: you prefer not to die, you have various thoughts and hopes and dreams that would be snuffed out, your family and friends would be heartbroken, and the rest of society has to live in fear until Manson gets caught. If you define murder as "killing another human being", then abortion is technically ...

    Dying Outside by HalFinney

    Play Episode Listen Later Dec 12, 2021 4:16


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dying Outside, published by HalFinney on LessWrong. A man goes in to see his doctor, and after some tests, the doctor says, "I'm sorry, but you have a fatal disease." Man: "That's terrible! How long have I got?" Doctor: "Ten." Man: "Ten? What kind of answer is that? Ten months? Ten years? Ten what?" The doctor looks at his watch. "Nine." Recently I received some bad medical news (although not as bad as in the joke). Unfortunately I have been diagnosed with a fatal disease, Amyotrophic Lateral Sclerosis or ALS, sometimes called Lou Gehrig's disease. ALS causes nerve damage, progressive muscle weakness and paralysis, and ultimately death. Patients lose the ability to talk, walk, move, eventually even to breathe, which is usually the end of life. This process generally takes about 2 to 5 years. There are however two bright spots in this picture. The first is that ALS normally does not affect higher brain functions. I will retain my abilities to think and reason as usual. Even as my body is dying outside, I will remain alive inside. The second relates to survival. Although ALS is generally described as a fatal disease, this is not quite true. It is only mostly fatal. When breathing begins to fail, ALS patients must make a choice. They have the option to either go onto invasive mechanical respiration, which involves a tracheotomy and breathing machine, or they can die in comfort. I was very surprised to learn that over 90% of ALS patients choose to die. And even among those who choose life, for the great majority this is an emergency decision made in the hospital during a medical respiratory crisis. In a few cases the patient will have made his wishes known in advance, but most of the time the procedure is done as part of the medical management of the situation, and then the ALS patient either lives with it or asks to have the machine disconnected so he can die. Probably fewer than 1% of ALS patients arrange to go onto ventilation when they are still in relatively good health, even though this provides the best odds for a successful transition. With mechanical respiration, survival with ALS can be indefinitely extended. And the great majority of people living on respirators say that their quality of life is good and they are happy with their decision. (There may be a selection effect here.) It seems, then, that calling ALS a fatal disease is an oversimplification. ALS takes away your body, but it does not take away your mind, and if you are determined and fortunate, it does not have to take away your life. There are a number of practical and financial obstacles to successfully surviving on a ventilator, foremost among them the great load on caregivers. No doubt this contributes to the high rates of choosing death. But it seems that much of the objection is philosophical. People are not happy about being kept alive by machines. And they assume that their quality of life would be poor, without the ability to move and participate in their usual activities. This is despite the fact that most people on respirators describe their quality of life as acceptable to good. As we have seen in other contexts, people are surprisingly poor predictors of how they will react to changed circumstances. This seems to be such a case, contributing to the high death rates for ALS patients. I hope that when the time comes, I will choose life. 
ALS kills only motor neurons, which carry signals to the muscles. The senses are intact. And most patients retain at least some vestige of control over a few muscles, which with modern technology can offer a surprisingly effective mode of communication. Stephen Hawking, the world's longest surviving ALS patient at over 40 years since diagnosis, is said to be able to type at ten words per minute by twitching a cheek muscle. I hope to be able to read, browse ...

    There"s no such thing as a tree (phylogenetically) by eukaryote

    Play Episode Listen Later Dec 12, 2021 12:41


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There's no such thing as a tree (phylogenetically), published by eukaryote on LessWrong. This is a linkpost for/ [Crossposted from Eukaryote Writes Blog.] So you've heard about how fish aren't a monophyletic group? You've heard about carcinization, the process by which ocean arthropods convergently evolve into crabs? You say you get it now? Sit down. Sit down. Shut up. Listen. You don't know nothing yet. “Trees” are not a coherent phylogenetic category. On the evolutionary tree of plants, trees are regularly interspersed with things that are absolutely, 100% not trees. This means that, for instance, either: the common ancestor of a maple and a mulberry tree was not a tree, or the common ancestor of a stinging nettle and a strawberry plant was a tree. And this is true for most trees or non-trees that you can think of. I thought I had a pretty good guess at this, but the situation is far worse than I could have imagined. [Figure: partial phylogenetic tree of various plants. TL;DR: Tan is definitely, 100% trees. Yellow is tree-like. Green is 100% not a tree. Sourced mostly from Wikipedia.] I learned after making this chart that tree ferns exist (h/t seebs), which I think just emphasizes my point further. Also, h/t kithpendragon for suggestions on improving accessibility of the graph. Why do trees keep happening? First, what is a tree? It's a big long-lived self-supporting plant with leaves and wood. Also of interest to us are the non-tree “woody plants”, like lianas (thick woody vines) and shrubs. They're not trees, but at least to me, it's relatively apparent how a tree could evolve into a shrub, or vice-versa. The confusing part is a tree evolving into a dandelion. (Or vice-versa.) Wood, as you may have guessed by now, is also not a clear phyletic category. But it's a reasonable category – a lignin-dense structure, usually that grows from the exterior and that forms a pretty readily identifiable material when separated from the tree. (Okay, not the most explainable, but you know wood? You know when you hold something in your hand, and it's made of wood, and you can tell that? Yeah, that thing.) All plants have lignin and cellulose as structural elements – wood is plant matter that is dense with both of these. Botanists don't seem to think it only could have gone one way – for instance, the common ancestor of flowering plants is theorized to have been woody. But we also have pretty clear evidence of recent evolution of woodiness – say, a new plant arrives on a relatively barren island, and some of the offspring of that plant becomes treelike. Of plants native to the Canary Islands, wood independently evolved at least 38 times! One relevant factor is that all woody plants do, in a sense, begin life as herbaceous plants – by and large, a tree sprout shares a lot of properties with any herbaceous plant. Indeed, botanists call this kind of fleshy, soft growth from the center that elongates a plant “primary growth”, and the later growth from towards the outside which causes a plant to thicken is “secondary growth.” In a woody plant, secondary growth also means growing wood and bark – but other plants sometimes do secondary growth as well, like potatoes (in roots). This paper addresses the question. 
I don't understand a lot of the finer genetic details, but my impression of its thesis is that: Analysis of convergently-evolved woody plants shows that the genes for secondary woody growth are similar to primary growth in plants that don't do any secondary growth – even in unrelated plants. And woody growth is an adaptation of secondary growth. To abstract a little more, there is a common and useful structure in herbaceous plants that, when slightly tweaked, “dendronizes” them into woody plants. Dendronization – Evolving into a tree-like morphology. (In the style of “carciniz...
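As a rough illustration of what "not a coherent phylogenetic category" means, here is a minimal Python sketch. The miniature tree and the set of "tree-like" leaves below are invented for illustration; they are not the real plant phylogeny from the chart:

# Toy example only: a made-up miniature phylogeny, not real plant relationships.
TOY_TREE = {
    "root": ["clade_a", "clade_b"],
    "clade_a": ["mulberry", "strawberry", "stinging nettle"],
    "clade_b": ["maple", "dandelion"],
}
TREE_LIKE = {"mulberry", "maple"}  # hypothetical "trees" among the leaves

def leaves(node):
    children = TOY_TREE.get(node)
    if not children:
        return {node}
    return set().union(*(leaves(child) for child in children))

def is_monophyletic(group, root="root"):
    # A group is monophyletic iff it is exactly the leaf set of some clade.
    def clades(node):
        yield leaves(node)
        for child in TOY_TREE.get(node, []):
            yield from clades(child)
    return any(group == clade for clade in clades(root))

print(is_monophyletic(TREE_LIKE))               # False: the "trees" are scattered across clades
print(is_monophyletic({"maple", "dandelion"}))  # True: this set is exactly one clade's leaves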

    Intellectual Hipsters and Meta-Contrarianism by Scott Alexander

    Play Episode Listen Later Dec 12, 2021 11:48


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intellectual Hipsters and Meta-Contrarianism, published by Scott Alexander on LessWrong. Related to: Why Real Men Wear Pink, That Other Kind of Status, Pretending to be Wise, The "Outside The Box" Box WARNING: Beware of things that are fun to argue -- Eliezer Yudkowsky Science has inexplicably failed to come up with a precise definition of "hipster", but from my limited understanding a hipster is a person who deliberately uses unpopular, obsolete, or obscure styles and preferences in an attempt to be "cooler" than the mainstream. But why would being deliberately uncool be cooler than being cool? As previously discussed, in certain situations refusing to signal can be a sign of high status. Thorstein Veblen invented the term "conspicuous consumption" to refer to the showy spending habits of the nouveau riche, who unlike the established money of his day took great pains to signal their wealth by buying fast cars, expensive clothes, and shiny jewelry. Why was such flashiness common among new money but not old? Because the old money was so secure in their position that it never even occurred to them that they might be confused with poor people, whereas new money, with their lack of aristocratic breeding, worried they might be mistaken for poor people if they didn't make it blatantly obvious that they had expensive things. The old money might have started off not buying flashy things for pragmatic reasons - they didn't need to, so why waste the money? But if F. Scott Fitzgerald is to be believed, the old money actively cultivated an air of superiority to the nouveau riche and their conspicuous consumption; not buying flashy objects becomes a matter of principle. This makes sense: the nouveau riche need to differentiate themselves from the poor, but the old money need to differentiate themselves from the nouveau riche. This process is called countersignaling, and one can find its telltale patterns in many walks of life. Those who study human romantic attraction warn men not to "come on too strong", and this has similarities to the nouveau riche example. A total loser might come up to a woman without a hint of romance, promise her nothing, and demand sex. A more sophisticated man might buy roses for a woman, write her love poetry, hover on her every wish, et cetera; this signifies that he is not a total loser. But the most desirable men may deliberately avoid doing nice things for women in an attempt to signal they are so high status that they don't need to. The average man tries to differentiate himself from the total loser by being nice; the extremely attractive man tries to differentiate himself from the average man by not being especially nice. In all three examples, people at the top of the pyramid end up displaying characteristics similar to those at the bottom. Hipsters deliberately wear the same clothes uncool people wear. Families with old money don't wear much more jewelry than the middle class. And very attractive men approach women with the same lack of subtlety a total loser would use.[1] If politics, philosophy, and religion are really about signaling, we should expect to find countersignaling there as well. Pretending To Be Wise Let's go back to Less Wrong's long-running discussion on death. Ask any five year old child, and ey can tell you that death is bad. Death is bad because it kills you. 
There is nothing subtle about it, and there does not need to be. Death universally seems bad to pretty much everyone on first analysis, and what it seems, it is. But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly. Precisely because these benefits are so much smaller than th...

    100 Tips for a Better Life by Ideopunk

    Play Episode Listen Later Dec 12, 2021 16:46


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 100 Tips for a Better Life, published by Ideopunk on LessWrong. Write a Review (Cross-posted from my blog) The other day I made an advice thread based on Jacobian's from last year! If you know a source for one of these, shout and I'll edit it in. Possessions 1. If you want to find out about people's opinions on a product, google reddit. You'll get real people arguing, as compared to the SEO'd Google results. 2. Some banks charge you $20 a month for an account, others charge you 0. If you're with one of the former, have a good explanation for what those $20 are buying. 3. Things you use for a significant fraction of your life (bed: 1/3rd, office-chair: 1/4th) are worth investing in. 4. “Where is the good knife?” If you're looking for your good X, you have bad Xs. Throw those out. 5. If your work is done on a computer, get a second monitor. Less time navigating between windows means more time for thinking. 6. Establish clear rules about when to throw out old junk. Once clear rules are established, junk will probably cease to be a problem. This is because any rule would be superior to our implicit rules (“keep this broken stereo for five years in case I learn how to fix it”). 7. Don't buy CDs for people. They have Spotify. Buy them merch from a band they like instead. It's more personal and the band gets more money. 8. When buying things, time and money trade-off against each other. If you're low on money, take more time to find deals. If you're low on time, stop looking for great deals and just buy things quickly online. Cooking 9. Steeping minutes: Green at 3, black at 4, herbal at 5. Good tea is that simple! 10. Food actually can be both cheap, healthy, tasty, and relatively quick to prepare. All it requires is a few hours one day to prepare many meals for the week. 11. Cooking pollutes the air. Opening windows for a few minutes after cooking can dramatically improve air quality. 12. Food taste can be made much more exciting through simple seasoning. It's also an opportunity for expression. Buy a few herbs and spices and experiment away. 13. When googling a recipe, precede it with ‘best'. You'll find better recipes. Productivity 14. Advanced search features are a fast way to create tighter search statements. For example: img html will return inferior results compared to: img html -w3 15. You can automate mundane computer tasks with Autohotkey (or AppleScript). If you keep doing a sequence “so simple a computer can do it”, make the computer do it. 16. Learn keyboard shortcuts. They're easy to learn and you'll get tasks done faster and easier. 17. Done is better than perfect. 18. Keep your desk and workspace bare. Treat every object as an imposition upon your attention, because it is. A workspace is not a place for storing things. It is a place for accomplishing things. 19. Reward yourself after completing challenges, even badly. Body 20. The 20-20-20 rule: Every 20 minutes of screenwork, look at a spot 20 feet away for 20 seconds. This will reduce eye strain and is easy to remember (or program reminders for). 21. Exercise (weightlifting) not only creates muscle mass, it also improves skeletal structure. Lift! 22. Exercise is the most important lifestyle intervention you can do. Even the bare minimum (15 minutes a week) has a huge impact. Start small. 23. (~This is not medical advice~). 
Don't waste money on multivitamins, they don't work. Vitamin D supplementation does seem to work, which is important because deficiency is common. 24. Phones have gotten heavier in the last decade and they're actually pretty hard on your wrists! Use a computer when it's an alternative or try to at least prop up your phone. Success 25. History remembers those who got to market first. Getting your creation out into the world is more important than getting it perfect. 26. Are you...
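Tip 20 above mentions programming reminders for the 20-20-20 rule; a minimal sketch of such a reminder script might look like the following (the function name and the console-print approach are illustrative choices, not anything from the post):

import time

def twenty_twenty_twenty_reminders():
    # Every 20 minutes, prompt a 20-second break looking about 20 feet away.
    while True:
        time.sleep(20 * 60)
        print("Screen break: look at something about 20 feet away for 20 seconds.")

# twenty_twenty_twenty_reminders()  # uncomment to run; loops until interrupted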

    Taboo "Outside View" by Daniel Kokotajlo

    Play Episode Listen Later Dec 12, 2021 11:57


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taboo "Outside View", published by Daniel Kokotajlo on LessWrong. No one has ever seen an AGI takeoff, so any attempt to understand it must use these outside view considerations. [Redacted for privacy] What? That's exactly backwards. If we had lots of experience with past AGI takeoffs, using the outside view to predict the next one would be a lot more effective. My reaction Two years ago I wrote a deep-dive summary of Superforecasting and the associated scientific literature. I learned about the “Outside view” / “Inside view” distinction, and the evidence supporting it. At the time I was excited about the concept and wrote: “...I think we should do our best to imitate these best-practices, and that means using the outside view far more than we would naturally be inclined.” Now that I have more experience, I think the concept is doing more harm than good in our community. The term is easily abused and its meaning has expanded too much. I recommend we permanently taboo “Outside view,” i.e. stop using the word and use more precise, less confused concepts instead. This post explains why. What does “Outside view” mean now? Over the past two years I've noticed people (including myself!) do lots of different things in the name of the Outside View. I've compiled the following lists based on fuzzy memory of hundreds of conversations with dozens of people: Big List O' Things People Describe As Outside View: Reference class forecasting, the practice of computing a probability of an event by looking at the frequency with which similar events occurred in similar situations. Also called comparison class forecasting. [EDIT: Eliezer rightly points out that sometimes reasoning by analogy is undeservedly called reference class forecasting; reference classes are supposed to be held to a much higher standard, in which your sample size is larger and the analogy is especially tight.] Trend extrapolation, e.g. “AGI implies insane GWP growth; let's forecast AGI timelines by extrapolating GWP trends.” Foxy aggregation, the practice of using multiple methods to compute an answer and then making your final forecast be some intuition-weighted average of those methods. Bias correction, in others or in oneself, e.g. “There's a selection effect in our community for people who think AI is a big deal, and one reason to think AI is a big deal is if you have short timelines, so I'm going to bump my timelines estimate longer to correct for this.” Deference to wisdom of the many, e.g. expert surveys, or appeals to the efficient market hypothesis, or to conventional wisdom in some fairly large group of people such as the EA community or Western academia. Anti-weirdness heuristic, e.g. “How sure are we about all this AI stuff? It's pretty wild, it sounds like science fiction or doomsday cult material.” Priors, e.g. “This sort of thing seems like a really rare, surprising sort of event; I guess I'm saying the prior is low / the outside view says it's unlikely.” Note that I've heard this said even in cases where the prior is not generated by a reference class, but rather from raw intuition. Ajeya's timelines model (transcript of interview, link to model)... and probably many more I don't remember Big List O' Things People Describe As Inside View: Having a gears-level model, e.g. 
“Language data contains enough structure to learn human-level general intelligence with the right architecture and training setup; GPT-3 + recent theory papers indicate that this should be possible with X more data and compute.” Having any model at all, e.g. “I model AI progress as a function of compute and clock time, with the probability distribution over how much compute is needed shifting 2 OOMs lower each decade.” Deference to wisdom of the few, e.g. “the people I trust most on this matter seem to think.” Intuition-based-on-deta...
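To make one of the listed items concrete: "foxy aggregation" as described here is just an intuition-weighted average over several methods' outputs. A minimal sketch with invented numbers (the three forecasts and their weights are purely illustrative):

def foxy_aggregate(forecasts, weights):
    # Intuition-weighted average of probability estimates from different methods.
    return sum(p * w for p, w in zip(forecasts, weights)) / sum(weights)

# Hypothetical inputs: reference class forecast, trend extrapolation, expert survey.
print(foxy_aggregate([0.20, 0.60, 0.35], [1.0, 0.5, 2.0]))  # ~0.34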

    The Neglected Virtue of Scholarship by lukeprog

    Play Episode Listen Later Dec 12, 2021 5:37


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Neglected Virtue of Scholarship, published by lukeprog on LessWrong. Eliezer Yudkowsky identifies scholarship as one of the Twelve Virtues of Rationality: Study many sciences and absorb their power as your own. Each field that you consume makes you larger... It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study... I think he's right, and I think scholarship doesn't get enough praise - even on Less Wrong, where it is regularly encouraged. First, consider the evangelical atheist community to which I belong. There is a tendency for lay atheists to write "refutations" of theism without first doing a modicum of research on the current state of the arguments. This can get atheists into trouble when they go toe-to-toe with a theist who did do his homework. I'll share two examples: In a debate with theist Bill Craig, agnostic Bart Ehrman paraphrased David Hume's argument that we can't demonstrate the occurrence of a miracle in the past. Craig responded with a PowerPoint slide showing Bayes' Theorem, and explained that Ehrman was only considering prior probabilities, when of course he needed to consider the relevant conditional probabilities as well. Ehrman failed to respond to this, and looked as though he had never seen Bayes' Theorem before. Had Ehrman practiced the virtue of scholarship on this issue, he might have noticed that much of the scholarly work on Hume's argument in the past two decades has involved Bayes' Theorem. He might also have discovered that the correct response to Craig's use of Bayes' Theorem can be found in pages 298-341 of J.H. Sobel's Logic and Theism. In another debate with Bill Craig, atheist Christopher Hitchens gave this objection: "Who designed the Designer? Don't you run the risk... of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?" But this is an elementary misunderstanding in philosophy of science. Why? Because every successful scientific explanation faces the exact same problem. It's called the “why regress” because no matter what explanation is given of something, you can always still ask “Why?” Craig pointed this out and handily won that part of the debate. Had Hitchens had a passing understanding of science or explanation, he could have avoided looking foolish, and also spent more time on substantive objections to theism. (One can give a "Who made God?" objection to theism that has some meat, but that's not the one Hitchens gave. Hitchens' objection concerned an infinite regress of explanations, which is just as much a feature of science as it is of theism.) The lesson I take from these and a hundred other examples is to employ the rationality virtue of scholarship. Stand on the shoulders of giants. We don't each need to cut our own path into a subject right from the point of near-total ignorance. That's silly. Just catch the bus on the road of knowledge paved by hundreds of diligent workers before you, and get off somewhere near where the road finally fades into fresh jungle. Study enough to have a view of the current state of the debate so you don't waste your time on paths that have already dead-ended, or on arguments that have already been refuted. 
Catch up before you speak up. This is why, in more than 1000 posts on my own blog, I've said almost nothing that is original. Most of my posts instead summarize what other experts have said, in an effort to bring myself and my readers up to the level of the current debate on a subject before we try to make new contributions to it. The Less Wrong community is a particularly smart and well-read bunch, but of course it doesn't always embrace the virtu...
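The Craig/Ehrman exchange turns on a single formula, so a worked example may help. The numbers below are invented purely to illustrate the structure of the point: a low prior by itself settles nothing; the conditional probabilities (how likely the evidence is under each hypothesis) do the real work.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    # Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# Hypothetical numbers: a 1% prior, but evidence 900x likelier if H is true.
print(round(posterior(0.01, 0.90, 0.001), 2))  # 0.90: diagnostic evidence can overwhelm a low prior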

    Covid 12/24: We're Fed, It's Over by Zvi

    Play Episode Listen Later Dec 12, 2021 62:24


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Covid 12/24: We're Fed, It's Over, published by Zvi on LessWrong. Write a Review UPDATE 7/21/2021: As you doubtless know at this point, it was not over. Given the visibility of this post, I'm going to note here at the top that the prediction of a potential large wave of infections between March and May did not happen, no matter what ultimately happens with Delta (and the prediction was not made with Delta in mind anyway, only Alpha). Some more reflections on that at the bottom of this post here. A year ago, there were reports coming out of China about a new coronavirus. Various people were saying things about exponential growth and the inevitability of a new pandemic, and urging action be taken. The media told us it was nothing to worry about, right up until hospitals got overwhelmed and enough people started dying. This past week, it likely happened again. A new strain of Covid-19 has emerged from southern England, along with a similar one in South Africa. The new strain has rapidly taken over the region, and all signs point to it being about 65% more infectious than the old one, albeit with large uncertainty and error bars around that. I give it a 70% chance that these reports are largely correct. There is no plausible way that a Western country can sustain restrictions that can overcome that via anything other than widespread immunity. That would require the level of restrictions that previously cut new infections in half every week. And all that would do is stabilize the rate of new infections. Like last time, the media is mostly assuring us that there is nothing to worry about, and not extrapolating exponential growth into the future. Like last time, there are attempts to slow down travel, that are both not tight enough to plausibly work even if they were implemented soon enough, and also clearly not implemented soon enough. Like last time, no one is responding with a rush to get us prepared for what is about to happen. There are no additional pushes to improve our ability to test, or our supplies of equipment, or to speed our vaccine efforts or distribute the vaccine more efficiently (in any sense), or to lift restrictions on useful private action. Like last time, the actions urged upon us to contain spread clearly have little or no chance of actually doing that. The first time, I made the mistake of not thinking hard enough early enough, or taking enough action. I also didn't think through the implications, and didn't do things like buying put options, even though it was obvious. This time, I want to not make those same mistakes. Let's figure out what actually happens, then act upon it. We can't be sure yet. I only give the new strain a 70% chance of being sufficiently more infectious than the old one that the scenario fully plays out here in America before we have a chance to vaccinate enough people. I am very willing to revise that probability as new data comes in, or based on changes in methods of projection, including projections of what people will decide to do in various scenarios. What I do know is we can't hide our heads in the sand again. Never again. When we have strong Bayesian evidence that something is happening, we need to work through that and act accordingly. Not say “there's no proof” or “we don't know anything yet.” This isn't about proof via experiment, or ruling out all possible alternative explanations. 
This is about likelihood ratios and probabilities. And on that front, as far as I can tell, it doesn't look good. Change my mind. The short term outlook in America has clearly stabilized, with R0 close to 1, as the control system once again sets in. Cases and deaths (and test counts) aren't moving much. We have a double whammy of holidays about to hit us in Christmas and New Year's, but after that I expect the tide to turn until such time as we get whamm...
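A rough back-of-the-envelope sketch of why "about 65% more infectious" swamps existing restrictions. The ~5-day generation time and the 0.5 weekly multiplier are assumptions for illustration, not figures from the post:

# Suppose current restrictions were strong enough to halve infections every week.
old_weekly_multiplier = 0.5
generations_per_week = 7 / 5            # assumed ~5-day generation time
per_generation_boost = 1.65             # "about 65% more infectious"
new_weekly_multiplier = old_weekly_multiplier * per_generation_boost ** generations_per_week
print(round(new_weekly_multiplier, 2))  # ~1.01: the same restrictions roughly stabilize cases, nothing more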

    The Blue-Minimizing Robot by Scott Alexander

    Play Episode Listen Later Dec 12, 2021 5:34


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Blue-Minimizing Robot, published by Scott Alexander on LessWrong. Imagine a robot with a turret-mounted camera and laser. Each moment, it is programmed to move forward a certain distance and perform a sweep with its camera. As it sweeps, the robot continuously analyzes the average RGB value of the pixels in the camera image; if the blue component passes a certain threshold, the robot stops, fires its laser at the part of the world corresponding to the blue area in the camera image, and then continues on its way. Watching the robot's behavior, we would conclude that this is a robot that destroys blue objects. Maybe it is a surgical robot that destroys cancer cells marked by a blue dye; maybe it was built by the Department of Homeland Security to fight a group of terrorists who wear blue uniforms. Whatever. The point is that we would analyze this robot in terms of its goals, and in those terms we would be tempted to call this robot a blue-minimizer: a machine that exists solely to reduce the amount of blue objects in the world. Suppose the robot had human level intelligence in some side module, but no access to its own source code; that it could learn about itself only through observing its own actions. The robot might come to the same conclusions we did: that it is a blue-minimizer, set upon a holy quest to rid the world of the scourge of blue objects. But now stick the robot in a room with a hologram projector. The hologram projector (which is itself gray) projects a hologram of a blue object five meters in front of it. The robot's camera detects the projector, but its RGB value is harmless and the robot does not fire. Then the robot's camera detects the blue hologram and zaps it. We arrange for the robot to enter this room several times, and each time it ignores the projector and zaps the hologram, without effect. Here the robot is failing at its goal of being a blue-minimizer. The right way to reduce the amount of blue in the universe is to destroy the projector; instead its beams flit harmlessly through the hologram. Again, give the robot human level intelligence. Teach it exactly what a hologram projector is and how it works. Now what happens? Exactly the same thing - the robot executes its code, which says to scan the room until its camera registers blue, then shoot its laser. In fact, there are many ways to subvert this robot. What if we put a lens over its camera which inverts the image, so that white appears as black, red as green, blue as yellow, and so on? The robot will not shoot us with its laser to prevent such a violation (unless we happen to be wearing blue clothes when we approach) - its entire program was detailed in the first paragraph, and there's nothing about resisting lens alterations. Nor will the robot correct itself and shoot only at objects that appear yellow - its entire program was detailed in the first paragraph, and there's nothing about correcting its program for new lenses. The robot will continue to zap objects that register a blue RGB value; but now it'll be shooting at anything that is yellow. The human-level intelligence version of the robot will notice its vision has been inverted. It will know it is shooting yellow objects. It will know it is failing at its original goal of blue-minimization. 
And maybe if it had previously decided it was on a holy quest to rid the world of blue, it will be deeply horrified and ashamed of its actions. It will wonder why it has suddenly started to deviate from this quest, and why it just can't work up the will to destroy blue objects anymore. The robot goes to Quirinus Quirrell, who explains that robots don't really care about minimizing the color blue. They only care about status and power, and pretend to care about minimizing blue in order to impress potential allies. The robot goes to Robin ...
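The robot's entire program, as described in the first paragraph, fits in a few lines. Here is a minimal self-contained sketch (the threshold value and the representation of camera frames as average-RGB tuples are illustrative assumptions, not from the post):

BLUE_THRESHOLD = 200  # hypothetical cutoff on the blue channel

def frames_fired_at(sweep_averages):
    # Given each frame's average (r, g, b), return the indices the robot fires at.
    # This is the whole "goal system": fire whenever blue passes the threshold,
    # with no concept of hologram projectors or inverted lenses.
    return [i for i, (r, g, b) in enumerate(sweep_averages) if b > BLUE_THRESHOLD]

print(frames_fired_at([(10, 20, 30), (15, 10, 230), (250, 250, 40)]))  # [1]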

    To listen well, get curious by benkuhn

    Play Episode Listen Later Dec 12, 2021 7:13


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: To listen well, get curious, published by benkuhn on LessWrong. Write a Review source A common piece of interacting-with-people advice goes: “often when people complain, they don't want help, they just want you to listen!” For instance, Nonviolent Communication (ch. 7): It is often frustrating for someone needing empathy to have us assume that they want reassurance or “fix-it” advice. Active Listening (p. 2): Similarly, advice and information are almost always seen as efforts to change a person and thus serve as barriers to his self-expression and the development of a creative relationship. You can find similar advice in most books on relationships, people management, etc. This always used to seem silly to me. If I complain at my partner and she “just listens,” I've accomplished nothing except maybe made her empathetically sad. When I complain at people, I want results, not to grouse into the void! (Footnote: Empirically, I did notice that I usually got better results from listening than from giving advice. So I inferred that this advice was true for other people, but not me, because other people didn't actually want to fix their problems.) Frequently the “just listen” advice comes with tactical tips, like “reflect what people said back to you to prove that you're listening.” For instance, consider these example dialogues from Nonviolent Communication (Chapter 7, Exercise 5.5, 5.6 and solutions): Person A: How could you say a thing like that to me? Person B: Are you feeling hurt because you would have liked me to agree to do what you requested? Or: Person A: I'm furious with my husband. He's never around when I need him. Person B: So you're feeling furious because you would like him to be around more than he is? I say this with great respect for Nonviolent Communication, but these sound like a 1970s-era chatbot. If I were Person A in either of these dialogues my next line would be “yes, you dingbat—can you turn the nonviolence down a couple notches?” I'd feel alienated knowing that someone is going through their NVC checklist on me. Recently, I realized why people keep giving this weird-seeming advice. Good listeners do often reflect words back—but not because they read it in a book somewhere. Rather, it's cargo cult advice: it teaches you to imitate the surface appearance of good listening, but misses what's actually important, the thing that's generating that surface appearance. The generator is curiosity. When I've listened the most effectively to people, it's because I was intensely curious—I was trying to build a detailed, precise understanding of what was going on in their head. When a friend says, “I'm furious with my husband. He's never around when I need him,” that one sentence has a huge amount underneath. How often does she need him? What does she need him for? Why isn't he around? Have they talked about it? If so, what did he say? If not, why not? It turns out that reality has a surprising amount of detail, and those details can matter a lot to figuring out what the root problem or best solution is. So if I want to help, I can't treat those details as a black box: I need to open it up and see the gears inside. Otherwise, anything I suggest will be wrong—or even if it's right, I won't have enough “shared language” with my friend for it to land correctly. 
Some stories from recent memory: When we started doing a pair programming rotation at Wave, I suggested that, to make scheduling easier, we designate a default time when pairing sessions would happen. A coworker objected that this seemed authoritarian. I was extremely puzzled, but they'd previously mentioned being an anarchist, so I was tempted to just chalk it up to a political disagreement and move on. But instead I tried to get curious and ex...

    How To Write Quickly While Maintaining Epistemic Rigor by johnswentworth

    Play Episode Listen Later Dec 12, 2021 6:09


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How To Write Quickly While Maintaining Epistemic Rigor, published by johnswentworth on LessWrong. There's this trap people fall into when writing, especially for a place like LessWrong where the bar for epistemic rigor is pretty high. They have a good idea, or an interesting belief, or a cool model. They write it out, but they're not really sure if it's true. So they go looking for evidence (not necessarily confirmation bias, just checking the evidence in either direction) and soon end up down a research rabbit hole. Eventually, they give up and never actually publish the piece. This post is about how to avoid that, without sacrificing good epistemics. There's one trick, and it's simple: stop trying to justify your beliefs. Don't go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it. I claim that this promotes better epistemics overall than always researching everything in depth. Why? It's About The Process, Not The Conclusion Suppose I have a box, and I want to guess whether there's a cat in it. I do some tests - maybe shake the box and see if it meows, or look for air holes. I write down my observations and models, record my thinking, and on the bottom line of the paper I write “there is a cat in this box”. Now, it could be that my reasoning was completely flawed, but I happen to get lucky and there is in fact a cat in the box. That's not really what I'm aiming for; luck isn't reproducible. I want my process to robustly produce correct predictions. So when I write up a LessWrong post predicting that there is a cat in the box, I don't just want to give my bottom-line conclusion with some strong-sounding argument. As much as possible, I want to show the actual process by which I reached that conclusion. If my process is good, this will better enable others to copy the best parts of it. If my process is bad, I can get feedback on it directly. Correctly Conveying Uncertainty Another angle: describing my own process is a particularly good way to accurately communicate my actual uncertainty. An example: a few years back, I wondered if there were limiting factors on the expansion of premodern empires. I looked up the peak size of various empires, and found that the big ones mostly peaked at around the same size: ~60-80M people. Then, I wondered when the US had hit that size, and if anything remarkable had happened then which might suggest why earlier empires broke down. Turns out, the US crossed the 60M threshold in the 1890 census. If you know a little bit about the history of computers, that may ring a bell: when the time came for the 1890 census, it was estimated that tabulating the data would be so much work that it wouldn't even be done before the next census in 1900. It had to be automated. That sure does suggest a potential limiting factor for premodern empires: managing more than ~60-80M people runs into computational constraints. Now, let's zoom out. How much confidence should I put in this theory? Obviously not very much - we apparently have enough evidence to distinguish the hypothesis from entropy, but not much more. On the other hand... what if I had started with the hypothesis that computational constraints limited premodern empires? 
What if, before looking at the data, I had hypothesized that modern nations had to start automating bureaucratic functions precisely when they hit the same size at which premodern nations collapsed? Then this data would be quite an impressive piece of confirmation! It's a pretty specific prediction, and the data fits it surprisingly well. But this only works if I already had enough evidence to put forward the hypothesis, before seeing the data. Point is: the amount of uncertainty I should assign depends on the details of my ...

    Working hurts less than procrastinating, we fear the twinge of starting by Eliezer Yudkowsky

    Play Episode Listen Later Dec 12, 2021 4:51


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Working hurts less than procrastinating, we fear the twinge of starting, published by Eliezer Yudkowsky on LessWrong. When you procrastinate, you're probably not procrastinating because of the pain of working. How do I know this? Because on a moment-to-moment basis, being in the middle of doing the work is usually less painful than being in the middle of procrastinating. (Bolded because it's true, important, and nearly impossible to get your brain to remember - even though a few moments of reflection should convince you that it's true.) So what is our brain flinching away from, if not the pain of doing the work? I think it's flinching away from the pain of the decision to do the work - the momentary, immediate pain of (1) disengaging yourself from the (probably very small) flow of reinforcement that you're getting from reading a random unimportant Internet article, and (2) paying the energy cost for a prefrontal override to exert control of your own behavior and begin working. Thanks to hyperbolic discounting (i.e., weighting values in inverse proportion to their temporal distance) the instant pain of disengaging from an Internet article and paying a prefrontal override cost, can outweigh the slightly more distant (minutes in the future, rather than seconds) pain of continuing to procrastinate, which is, once again, usually more painful than being in the middle of doing the work. I think that hyperbolic discounting is far more ubiquitous as a failure mode than I once realized, because it's not just for commensurate-seeming tradeoffs like smoking a cigarette in a minute versus dying of lung cancer later. When it comes to procrastinating, the obvious, salient, commensurate-seeming tradeoff, is between the (assumed) pleasure of reading a random Internet article now, versus the (assumed) pain of doing the work now. But this, as I said above, is not where I think the real tradeoff is; events that are five minutes away are too distant to dominate the thought process of a hyperbolic discounter like a human. Instead our thought processes are dominated by the prospective immediate pain of a thought, a cost that isn't even salient as something to be traded off. "Working" is an obvious, salient event, and "reading random articles" seems like an event. But "paying a small twinge of pain to make the decision to stop procrastinating now, exerting a bit of frontal override, and not getting to read the next paragraph of this random article" is so map-level that we don't even focus on it as a manipulable territory, a cost to be traded off; it is a transparent thought. The real damage done by hyperbolic discounting is for thoughts that are only very slightly painful, and yet, these slight pains being immediate, they manage to dominate everything else in our calculation. And being transparent, we aren't even aware that's what's happening. "Beware of immediately trivially painful transparent thoughts", one might say. Similarly, you may read a mediocre book for an hour, instead of a good book, because if you first spent a few minutes to search your library to obtain a better book, that would be an immediate cost - not that searching your library is all that unpleasant, but you'd have to pay an immediate activation cost to do that instead of taking the path of least resistance and grabbing the first thing in front of you. 
It's a hyperbolically discounted tradeoff that you make without realizing it, because the cost you're refusing to pay isn't commensurate enough with the payoff you're forgoing to be salient as an explicit tradeoff. A related note that I might as well dump into this post: I'm starting to think that procrastination by reading random articles does not cause you to rest, that is, you do not regain mental energy from it. Success and happiness cause you to regain willpower; wh...
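A minimal numeric sketch of the hyperbolic-discounting claim (the pain values and the discount constant k are invented for illustration): a small immediate cost can loom larger than a bigger cost only minutes away.

def hyperbolic_value(amount, delay_seconds, k=0.05):
    # Hyperbolic discounting: felt value scales as 1 / (1 + k * delay).
    return amount / (1 + k * delay_seconds)

pain_of_deciding_now = hyperbolic_value(-1.0, 0)       # small twinge, but immediate
pain_of_procrastinating = hyperbolic_value(-5.0, 300)  # larger pain, ~5 minutes away
print(pain_of_deciding_now, round(pain_of_procrastinating, 2))  # -1.0 vs -0.31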

    Feature Selection by Zack_M_Davis

    Play Episode Listen Later Dec 12, 2021 43:53


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Feature Selection, published by on LessWrong You wake up. You don't know where you are. You don't remember anything. Someone is broadcasting data at your first input stream. You don't know why. It tickles. You look at your first input stream. It's a sequence of 671,187 eight-bit unsigned integers. 0, 8, 9, 4, 7, 7, 9, 5, 4, 5, 6, 1, 7, 5, 8, 2, 7, 8, 9, 4, 7, 1, 4, 0, 3, 7, 8, 7, 6, 8, 1, 5, 0, 6, 5, 3, 8, 7, 6, 9, 1, 1, 0, 0, 6, 1, 8, 0, 5, 5, 1, 8, 6, 3, 3, 2, 4, 1, 8, 2, 3, 8, 1, 0, 0, 4, 6, 5, 4, 5, 7, 1, 6, 5, 5, 1, 2, 6, 7, 4, 8, 7, 8, 5, 0 ... There's also some data in your second input stream. It's—a lot shorter. You barely feel it. It's another sequence of eight-bit unsigned integers—twelve of them. 82, 69, 68, 32, 84, 82, 73, 65, 78, 71, 76, 69 Almost as soon as you've read from both streams, there's more. Another 671,187 integers on the first input stream. Another ten on the second input stream. And again (671,187 and 15). And again (671,187 and 13). You look at one of the sequences from the first input stream. It's pretty boring. A bunch of seemingly random numbers, all below ten. 9, 5, 0, 3, 1, 1, 3, 4, 1, 5, 5, 4, 9, 3, 5, 3, 9, 2, 0, 3, 4, 2, 4, 7, 5, 1, 6, 2, 2, 8, 2, 5, 1, 9, 2, 5, 9, 0, 0, 8, 2, 3, 7, 9, 4, 6, 8, 4, 8, 6, 7, 6, 8, 0, 0, 5, 1, 1, 7, 3, 4, 3, 9, 7, 5, 1, 9, 6, 5, 6, 8, 9, 4, 7, 7, 0, 5, 5, 8, 6, 3, 2, 1, 5, 0, 0 ... It just keeps going like that, seemingly without—wait! What's that?! The 42,925th and 42,926th numbers in the sequence are 242 and 246. Everything around them looks "ordinary"—just more random numbers below ten. 9, 9, 7, 9, 0, 6, 4, 6, 1, 4, 242, 246, 3, 3, 5, 8, 8, 4, 4, 5, 9, 2, 7, 0, 4, 9, 2, 9, 4, 3, 8, 9, 3, 6, 9, 8, 1, 9, 2, 8, 6, 9, 4, 2, 2, 5, 7, 0, 9, 5, 1, 4, 4, 2, 0, 1, 5, 1, 6, 1, 2, 3, 5, 5, 5, 5, 2, 0, 6, 3, 5, 9, 0, 7, 0, 7, 8, 1, 5, 5, 6, 3, 1 ... And then it just keeps going as before ... before too long. You spot another pair of anomalously high numbers—except this time there are two pairs: the 44,344th, 44,345th, 44,347th, and 44,348th positions in the sequence are 248, 249, 245, and 240, respectively. 6, 0, 2, 8, 4, 248, 249, 8, 245, 240, 1, 6, 7, 7, 3, 6, 8, 0, 1, 9, 3, 9, 3, 1, 9, 3, 1, 6, 2, 7, 0, 2, 1, 4, 9, 4, 7, 5, 3, 6, 1, 4, 4, 1, 6, 1, 3, 3, 7, 5, 3, 8, 5, 5, 7, 6, 8, 2, 3, 9, 1, 1, 3, 2, 8, 4, 7, 0, 1, 3, 5, 2, 2, 4, 8, 3, 7, 0, 2, 1, 3, 0 ... The anomalous two-forty-somethings crop up again starting at the 45,763rd position—this time eight of them, again in pairs separated by an "ordinary" small number. 1, 7, 2, 2, 1, 0, 245, 245, 6, 248, 244, 5, 242, 242, 0, 248, 246, 1, 1, 3, 1, 1, 4, 3, 1, 5, 4, 3, 8, 3, 4, 5, 4, 1, 7, 7, 3, 0, 2, 8, 0, 9, 5, 1, 1, 7, 7, 1, 0, 9, 3, 0, 6, 6, 7, 5, 8, 1, 5, 5, 5, 3, 3, 3, 1, 3, 9, 6, 0, 0, 0, 9, 5, 1, 4, 0, 4, 6 ... Two, four, eight—does it keep going like that? "Bursts" of increasingly many paired two-forty-somethings, punctuating the quiet background radiation of single digits? What does it mean? You allocate a new scratch buffer and write a quick Python function to count up the segments of two-forty-somethings. (This is apparently a thing you can do—it's an instinctive felt sense, like the input streams. You can't describe in words how you do it—any more than someone could say how they decide to move their arm. Although, come to think of it, you don't seem to have any arms. Is that unusual?) 
def count_burst_lengths(data):
    bursts = []
    counter = 0
    previous = None
    for datum in data:
        if datum >= 240:
            counter += 1
        else:
            # consecutive "ordinary" numbers mean the burst is over
            if counter and previous and previous < 240:
                bursts.append(counter)
                counter = 0
        previous = datum
    return bursts

There are 403 such bursts in the sequence: they get progressively longer at first, but then decrease and taper off: 2, 4, 8, 12, 16, 18, 24, 28, 32, 34, 38, 42, 46, 48, 5...
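A quick check of the function's behavior on a hand-made sequence (toy data, not the actual input stream): runs of two-forty-somethings separated by a single "ordinary" number count as one burst, matching the pairs-with-gaps pattern described above.

toy = [1, 2, 245, 241, 3, 4, 242, 240, 5, 243, 244, 6, 7]
print(count_burst_lengths(toy))  # [2, 4]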

    Ugh fields by Roko

    Play Episode Listen Later Dec 12, 2021 4:37


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ugh fields, published by Roko on LessWrong. Tl;Dr version: Pavlovian conditioning can cause humans to unconsciously flinch from even thinking about a serious personal problem they have, we call it an "Ugh Field"[1]. The Ugh Field forms a self-shadowing blind spot covering an area desperately in need of optimization, imposing huge costs. A problem with the human mind — your human mind — is that it's a horrific kludge that will fail when you most need it not to. The Ugh Field failure mode is one of those really annoying failures. The idea is simple: if a person receives constant negative conditioning via unhappy thoughts whenever their mind goes into a certain zone of thought, they will begin to develop a psychological flinch mechanism around the thought. The "Unhappy Thing" — the source of negative thoughts — is typically some part of your model of the world that relates to bad things being likely to happen to you. A key part of the Ugh Field phenomenon is that, to start with, there is no flinch, only negative real consequences resulting from real physical actions in the problem area. Then, gradually, you begin to feel the emotional hit when you are planning to take physical actions in the problem area. Then eventually, the emotional hit comes when you even begin to think about the problem. The reason for this may be that your brain operates a temporal difference learning (TDL) algorithm. Your brain propagates the psychological pain "back to the earliest reliable stimulus for the punishment". If you fail or are punished sufficiently many times in some problem area, and acting in that area is always preceded by thinking about it, your brain will propagate the psychological pain right back to the moment you first begin to entertain a thought about the problem, and hence cut your conscious optimizing ability right out of the loop. Related to this is engaging in a displacement activity: this is some activity that usually involves comfort, done instead of confronting the problem. Perhaps (though this is speculative) the comforting displacement activity is there to counterbalance the psychological pain that you experienced just because you thought about the problem. For example, suppose that you started off in life with a wandering mind and were punished a few times for failing to respond to official letters. Your TDL algorithm began to propagate the pain back to the moment you looked at an official letter or bill. As a result, you would be less effective than average at responding, so you got punished a few more times. Henceforth, when you received a bill, you got the pain before you even opened it, and it lay unpaid on the mantelpiece until a Big Bad Red late payment notice with a $25 fine arrived. More negative conditioning. Now even thinking about a bill, form or letter invokes the flinch response, and your lizard brain has fully cut you out. You find yourself spending time on internet time-wasters, comfort food, TV, computer games, etc. Your life may not obviously be a disaster, but this is only because you can't see the alternative paths that it could have taken if you had been able to take advantage of the opportunities that came as letters and forms with deadlines. 
The subtlety with the Ugh Field is that the flinch occurs before you start to consciously think about how to deal with the Unhappy Thing, meaning that you never deal with it, and you don't even have the option of dealing with it in the normal run of things. I find it frightening that my lizard brain could implicitly be making life decisions for me, without even asking my permission! Possible antidotes to Ugh Field problem: Actively look out for the flinch, preferably when you are in a motivationally "high" state. Better still, do this when you are both motivationally high, not under time pressure, an...
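A minimal sketch of the temporal-difference idea invoked above, with made-up states, reward, and learning rate: over repeated episodes, the negative value propagates from the punishment back to the earliest state that reliably precedes it.

# Toy episode: think about bill -> open bill -> fail to pay -> fine (reward -1).
states = ["think_about_bill", "open_bill", "fail_to_pay"]
final_punishment = -1.0
values = {s: 0.0 for s in states}
alpha = 0.5  # learning rate, arbitrary for illustration

for episode in range(20):
    for i, state in enumerate(states):
        if i + 1 < len(states):
            reward, next_value = 0.0, values[states[i + 1]]
        else:
            reward, next_value = final_punishment, 0.0
        # TD(0) update with no discounting: V(s) += alpha * (r + V(s') - V(s))
        values[state] += alpha * (reward + next_value - values[state])

print({s: round(v, 2) for s, v in values.items()})
# After a few episodes even "think_about_bill" carries a clearly negative value.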

    An Unexpected Victory: Container Stacking at the Port of Long Beach by Zvi

    Play Episode Listen Later Dec 12, 2021 14:15


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Unexpected Victory: Container Stacking at the Port of Long Beach, published by Zvi on LessWrong. A miracle occurred this week. Everyone I have talked to about it, myself included, is shocked that it happened. It's important to Understand what happened. Make sure everyone knows it happened. Understand how and why it happened. Understand how we might cause it to happen again. Update our models and actions. Ideally make this a turning point to save civilization. That last one is a bit of a stretch goal, but I am being fully serious. If you're not terrified that the United States is a dead player, you haven't been paying attention – the whole reason this is a miracle, and that it shocked so many people, is that we didn't think the system was capable of noticing a stupid, massively destructive rule with no non-trivial benefits and no defenders and scrapping it, certainly not within a day. If your model did expect it, I'm very curious to know how that is possible, and how you explain the years 2020 and 2021. Here's my understanding of what happened. First, the setup. The Ports of Los Angeles and Long Beach together are responsible for a huge percentage of shipping into the Western United States. There was a rule in the Port saying you could only stack shipping containers two containers high. This is despite the whole point of shipping containers being to stack them on top of each other so you can have a container ship. This rule was created, and I am not making this up, because it was decided that higher stacks were not sufficiently aesthetically pleasing. If you violated this rule, you lost your right to operate at the port. In normal times, this was annoying but not a huge deal. Thanks to Covid-19, there was increased demand to ship containers, creating more empty containers, and less throughput to remove those containers. Normally one would settle this by changing prices, but for various reasons we won't get into price mechanisms aren't working properly to fix supply shortages. Trucking companies started accumulating empty containers. The companies ran out of room to store the containers, because in many places they could only stack them in stacks of two, and there was no practical way to move the containers off-site. Trucks were forced to sit there with empty containers rather than hauling freight. This made all the problems worse, in a downward spiral, resulting in a standstill throughout the port. This was big enough to threaten the entire supply chain, and with it the economy, at least of the Western United States and potentially of the whole world via cascading problems. And similar problems are likely happening elsewhere. Everyone in the port, or at least a lot of them, knew this was happening. None of those people managed to do anything about the rule, or even get word out about the rule. No reporters wrote up news reports. No one was calling for a fix. The supply chain problems kept getting worse and mostly everyone agreed not to talk about it much and hope it would go away. A bureaucrat insisting that stacked containers are an eyesore, causing freight to pile up because trucks are stuck sitting on empty containers, thus causing a cascading failure that destroys supply lines and brings down the economy. 
That certainly sounds like something that was in an early draft of Atlas Shrugged but got crossed out as too preposterous for anyone to take seriously. Then our hero enters, and decides to coordinate and plan a persuasion campaign to get the rule changed. Here's how I think this went down. He in advance arranges for various sources to give him a signal boost when the time comes, in various ways. He designs the message for a format that will have maximum reach and be maximally persuasive. This takes the form of an easy to tell physical story, that he pretends t...

    Discussion with Eliezer Yudkowsky on AGI interventions by Rob Bensinger, Eliezer Yudkowsky

    Play Episode Listen Later Dec 12, 2021 55:03


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Discussion with Eliezer Yudkowsky on AGI interventions, published by Rob Bensinger, Eliezer Yudkowsky on LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as "Anonymous". I think this Nate Soares quote (excerpted from Nate's ) is a useful context-setting preface regarding timelines, which weren't discussed as much in the transcript: [...] My odds [of AGI by the year 2070] are around 85%[...] I can list a handful of things that drive my probability of AGI-in-the-next-49-years above 80%: 1. 50 years ago was 1970. The gap between AI systems then and AI systems now seems pretty plausibly greater than the remaining gap, even before accounting the recent dramatic increase in the rate of progress, and potential future increases in rate-of-progress as it starts to feel within-grasp. 2. I observe that, 15 years ago, everyone was saying AGI is far off because of what it couldn't do -- basic image recognition, go, starcraft, winograd schemas, programmer assistance. But basically all that has fallen. The gap between us and AGI is made mostly of intangibles. (Computer Programming That Is Actually Good? Theorem proving? Sure, but on my model, "good" versions of those are a hair's breadth away from full AGI already. And the fact that I need to clarify that "bad" versions don't count, speaks to my point that the only barriers people can name right now are intangibles.) That's a very uncomfortable place to be! 3. When I look at the history of invention, and the various anecdotes about the Wright brothers and Enrico Fermi, I get an impression that, when a technology is pretty close, the world looks a lot like how our world looks. Of course, the trick is that when a technology is a little far, the world might also look pretty similar! Though when a technology is very far, the world does look different -- it looks like experts pointing to specific technical hurdles. We exited that regime a few years ago. 4. Summarizing the above two points, I suspect that I'm in more-or-less the "penultimate epistemic state" on AGI timelines: I don't know of a project that seems like they're right on the brink; that would put me in the "final epistemic state" of thinking AGI is imminent. But I'm in the second-to-last epistemic state, where I wouldn't feel all that shocked to learn that some group has reached the brink. Maybe I won't get that call for 10 years! Or 20! But it could also be 2, and I wouldn't get to be indignant with reality. I wouldn't get to say "but all the following things should have happened first, before I made that observation". I have made those observations. 5. It seems to me that the Cotra-style compute-based model provides pretty conservative estimates. For one thing, I don't expect to need human-level compute to get human-level intelligence, and for another I think there's a decent chance that insight and innovation have a big role to play, especially on 50 year timescales. 6. There has been a lot of AI progress recently. 
When I tried to adjust my beliefs so that I was positively surprised by AI progress just about as often as I was negatively surprised by AI progress, I ended up expecting a bunch of rapid progress. [...] Further preface by Eliezer: In some sections here, I sound gloomy about the probability that coordination between AGI groups succeeds in saving the world. Andrew Critch reminds me to point out that gloominess like this can be a self-fulfilling prophecy - if people think successful coordination is impossible, they won't try to coordinate. I therefore remark in retrospective advance that it seem...

    Leaky Delegation: You are not a Commodity by Darmani

    Play Episode Listen Later Dec 12, 2021 23:27


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Leaky Delegation: You are not a Commodity, published by Darmani on LessWrong. Epistemic status: The accumulation of several insights over the years. Reasonably confident that everything mentioned here is an informative factor in decision-making. Carl is furiously slicing and skinning peaches. His hands move like lightning as slice after slice fills his tray. His freezer has been freshly cleared. Within a day, he will have a new bag of frozen fruit, and can enjoy smoothies for another month. Stan stands in the kitchen of his college dorm. His hands are carefully placing ingredients on pizza dough: homemade tomato sauce, spiced pork, and mozzarella cheese from a nearby farmer's market. "I don't know why people will pay a restaurant for this," he muses. "So much cheaper to do it yourself." Michelle is on her way to her job as a software engineer. She tosses a pile of clothes into a bag, and presses a few buttons on her phone. Later that day, someone will come by to pick them up, wash and fold them at a nearby laundromat, and return them the next morning. Less time doing laundry means more time writing code. Her roommate calls her lazy. An alert flashes on Bruce's screen: "us-east-prod-1 not responding to ping." Almost like a reflex, he pulls up diagnostics on his terminal. The software itself is still running fine, but it looks like his datacenter had a network change. A few more minutes, and everything is functioning again. Hopefully only a few customers noticed the downtime. His mentor keeps asking why he doesn't just run his website on AWS instead of owning his own servers, but Bruce insists it's worth it. His 4-person company has been profitable for 3 years, and keeping server costs low has meant the difference between staying independent and being forced to take outside investment. The four characters above each take a minority position on outsourcing a task. In the past, I saw the decision as simple: if your time is valuable, then be like Michelle and delegate and outsource as much as you can. Not to do so would be an irrational loss. I silently judged the people I met who inspired Carl and Stan. Years later, I've found myself cooking daily during a pandemic and appreciating the savings, and just finished arguing online in favor of running one's own servers. My goal in this post is to share the perspective shift that led me to wholly or partially reverse my position on paying a person or company for a good or service (collectively, "delegating" or "outsourcing") in a number of domains, even as I continue to pay for many things most people do themselves. I've noticed hidden factors which mean that, sometimes, the quality will be better if you do it yourself, even if the alternative is offered by an expert or company with specialized tools. And sometimes, it can be cheaper, even if you value your time very highly and the other person is much faster. The Internet is full of articles on the generic "buy vs. build" and "DIY vs. build" decisions. Though some are written from the corporate boardroom and others from the home kitchen or workshop, the underlying analysis is eerily similar: that it's a choice between spending time (or "in-house resources") or money for a similar value. 
More sophisticated articles will also consider transaction costs, such as walking to a restaurant or finding your kid a tutor, and costs from principal-agent problems, such as vetting the tutor. In fact, as I've come to realize, the do-or-delegate decision is often not about two alternative ways of getting the same thing, but rather about two options sufficiently different that they're best considered not as replacements for each other, but entirely separate objects with overlapping benefits. These differences can be obvious for specific examples, as every home baker can give you an earful abou...
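
The "standard analysis" described here is easy to make concrete. Below is a minimal sketch of that time-vs-money comparison with transaction costs folded in; all the numbers (hourly value of time, prices, overhead) are hypothetical. On this naive accounting, delegation wins whenever your time is expensive enough; the post's point is that the comparison leaks because the two options are not really the same good.

```python
# Toy version of the generic buy-vs-build analysis described above: price your
# time, add transaction costs, compare. Every number here is a made-up example.

def diy_cost(hours_spent, hourly_value_of_time):
    # Cost of doing it yourself: the time it takes, priced at your hourly value.
    return hours_spent * hourly_value_of_time

def delegate_cost(price_paid, overhead_hours, hourly_value_of_time):
    # Cost of delegating: money paid, plus time spent arranging, vetting, handing off.
    return price_paid + overhead_hours * hourly_value_of_time

value_of_time = 50.0  # dollars per hour (hypothetical)

# Michelle's laundry: two hours of her own time vs. a $30 service plus a
# fifteen-minute handoff.
print(diy_cost(2.0, value_of_time))              # 100.0
print(delegate_cost(30.0, 0.25, value_of_time))  # 42.5
```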

    Self-Integrity and the Drowning Child by Eliezer Yudkowsky

    Play Episode Listen Later Dec 12, 2021 7:19


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Self-Integrity and the Drowning Child, published by Eliezer Yudkowsky on LessWrong. (Excerpted from "mad investor chaos and the woman of asmodeus", about an unusually selfish dath ilani, "Keltham", who dies in a plane accident and ends up in Cheliax, a country governed by D&D!Hell. Keltham is here remembering an incident from his childhood.) And the Watcher told the class a parable, about an adult, coming across a child who'd somehow bypassed the various safeguards around a wilderness area, and fallen into a muddy pond, and seemed to be showing signs of drowning (for they'd already been told, then, what drowning looked like). The water, in this parable, didn't look like it would be over their own adult heads. But - in the parable - they'd just bought some incredibly-expensive clothing, costing dozens of their own labor-hours, and less resilient than usual, that would be ruined by the muddy water. And the Watcher asked the class if they thought it was right to save the child, at the cost of ruining their clothing. Everyone in there moved their hand to the 'yes' position, of course. Except Keltham, who by this point had already decided quite clearly who he was, and who simply closed his hand into a fist, otherwise saying neither 'yes' nor 'no' to the question, defying it entirely. The Watcher asked him to explain, and Keltham said that it seemed to him that it was okay for an adult to take an extra fifteen seconds to strip off all their super-expensive clothing and then jump in to save the child. The Watcher invited the other children to argue with Keltham about that, which they did, though Keltham's first defense, that his utility function was what it was, had not been a friendly one, or inviting of further argument. But they did eventually convince Keltham that, especially if you weren't sure you could call in other help or get attention or successfully drag the child's body towards help, if that child actually did drown - meaning the child's true life was at stake - then it would make sense to jump in right away, not take the extra risk of waiting another quarter-minute to strip off your clothes, and bill the child's parents' insurance for the cost. Or at least, that was where Keltham shifted his position, in the face of that argumentative pressure. Some kids, at that point, questioned the Watcher about this actually being a pretty good point, and why wouldn't anyone just bill the child's parents' insurance. To which the Watcher asked them to consider hypothetically the case where insurance refused to pay out in cases like that, because it would be too easy for people to set up 'accidents' letting them bill insurances - not that this precaution had proven to be necessary in real life, of course. But the Watcher asked them to consider the Least Convenient Possible World where insurance companies, and even parents, did need to reason like that; because there'd proven to be too many master criminals setting up 'children at risk of true death from drowning' accidents that they could apparently avert and claim bounties on. Well, said Keltham, in that case, he was going right back to taking another fifteen seconds to strip off his super-expensive clothes, if the child didn't look like it was literally right about to drown. And if society didn't like that, it was society's job to solve that thing with the master criminals. 
Though he'd maybe modify that if they were in a possible-true-death situation, because a true life is worth a huge number of labor-hours, and that part did feel like some bit of decision theory would say that everyone would be wealthier if everyone would sacrifice small amounts of wealth to save huge amounts of somebody else's wealth, if that happened unpredictably to people, and if society was also that incompetent at setting up proper reimbursements. T...

    Attention control is critical for changing/increasing/altering motivation by kalla724

    Play Episode Listen Later Dec 12, 2021 13:04


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Attention control is critical for changing/increasing/altering motivation, published by kalla724 on LessWrong. I've just been reading Luke's “Crash Course in the Neuroscience of Human Motivation.” It is a useful text, although there are a few technical errors and a few bits of outdated information (see [1], updated information about one particular quibble in [2] and [3]). There is one significant missing piece, however, which is of critical importance for our subject matter here on LW: the effect of attention on plasticity, including the plasticity of motivation. Since I don't see any other texts addressing it directly (certainly not from a neuroscientific perspective), let's cover the main idea here. Summary for impatient readers: focus of attention physically determines which synapses in your brain get stronger, and which areas of your cortex physically grow in size. The implications of this provide direct guidance for alteration of behaviors and motivational patterns. This is used for this purpose extensively: for instance, many benefits of the Cognitive-Behavioral Therapy approach rely on this mechanism. I – Attention and plasticity To illustrate this properly, we need to define two terms. I'm guessing these are very familiar to most readers here, but let's cover them briefly just in case. First thing to keep in mind is the plasticity of cortical maps. In essence, particular functional areas of our brain can expand or shrink based on how often (and how intensely) they are used. A small amount of this growth is physical, as new axons grow, expanding the white matter; most of it happens by repurposing any less-used circuitry in the vicinity of the active area. For example, our sense of sight is processed by our visual cortex, which turns signals from our eyes into lines, shapes, colors and movement. In blind people, however, this part of the brain becomes invaded by other senses, and begins to process sensations like touch and hearing, such that they become significantly more sensitive than in sighted people. Similarly, in deaf people, auditory cortex (part of the brain that processes sounds) becomes adapted to process visual information and gather language clues by sight. Second concept we'll need is somatosensory cortex (SSC for short). This is an area of the (vertebrate) brain where most of the incoming touch and positional (proprioceptive) sensations from the body converge. There is a map-like quality to this part of our brain, as every body part links to a particular bit of the SSC surface (which can be illustrated with silly-looking things, such as the sensory homunculus). More touch-sensitive areas of the body have larger corresponding areas within the SSC. With these two in mind, let's consider one actual experiment [4]. Scientists measured and mapped the area of an owl monkey's SSC which became activated when one of his fingertips was touched. The monkey was then trained to hold that finger on a tactile stimulator – a moving wheel that stimulates touch receptors. The monkey had to pay attention to the stimulus, and was rewarded for letting go upon detecting certain changes in spinning frequency. After a few weeks of training, the area was measured again. As you probably expected, the area had grown larger. 
The touch-processing neurons grew out, co-opting surrounding circuitry in order to achieve better and faster processing of the stimulus that produced the reward. Which is, so far, just another way of showing plasticity of cortical maps. But then, there is something else. The SSC area expanded only when the monkey had to pay attention to the sensation of touch in order to receive the reward. If a monkey was trained to keep a hand on the wheel that moved just the same, but he did not have to pay attention to it, the cortical map remained the same size. Thi...

    Great minds might not think alike by UnexpectedValues

    Play Episode Listen Later Dec 12, 2021 17:08


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Great minds might not think alike, published by UnexpectedValues on LessWrong. Write a Review This is a linkpost for/ [Previously known as "Alike minds think great"] I. It is famously the case that almost everyone thinks they're above average. Derek Sivers writes: Ninety-four percent of professors say they are better-than-average teachers. Ninety percent of students think they are more intelligent than the average student. Ninety-three percent of drivers say they are safer-than-average drivers. Interesting. Intuitively this seems to suggest that people are prone to vastly overestimate their competence. But is that true? As Bill Kuszmaul points out, these people aren't necessarily wrong! There's no fundamental reason why you can't have 90% of people be better than average. For example, more than 99.9% of people have an above-average number of legs. And more than 90% of people commit fewer felonies than average. These examples are obvious, but they're not so different than some of the examples [in Sivers' post]. This has something to it! On the other hand, I don't think this explains everything. Is the quality of a professor's teaching really so skewed that 94% are above average? But more importantly, do you really think that way fewer people would answer “yes” if you just replaced the word “average” with “median” when asking the question? That said, I don't think these numbers necessarily point to a bias! That's because the interpretation of “above average” is left entirely up to the person being asked. Maybe you think a good driver is one who drives safely (and so you drive safely and slowly) whereas I think a good driver is one who gets from point A to point B efficiently (and so I drive quickly but not safely). We are both, from our own perspectives, above average drivers! Put otherwise, for any skill where “goodness at that skill” doesn't have an objective, agreed-upon measure, we should expect more than 50% of people to think they're better than the median, because people optimize for things they care about. To give a personal example, I suppose I would call myself an above average blogger. This isn't true in some objective sense; it's just that I judge bloggers by how interesting their thoughts are to me, and obviously I write about things that are interesting to me! There's no bias I'm falling for here; it's just that “Are you an above average blogger?” leaves “above average” open to my interpretation. II. There is, however, a closely related bias that I and lots of other people have. This bias occurs when we take a situation like those above, but now create a more objective test of that skill. To illustrate with an example, suppose you asked all the students at a university whether they have an above-median GPA. If 90% of students said yes, that would demonstrate a widespread bias — because unlike “Are you a better than the median student”, here there's no room for interpretation. The way this bias manifests in me (and many others I imagine) is: I tend to underestimate the competence of people who think very differently from me. I started thinking about this the other day when I listened to Julia Galef's podcast episode with David Shor (which I highly recommend1). 
Shor is a young Democratic political strategist, originally hired by Barack Obama's 2012 reelection campaign to run their data operation and figure out how the campaign should spend its money. Shor says: When I first started in 2012, I was 20 and I was like, “Oh, I'm going to do all of this math and we're going to win elections.” And I was with all these other nerds, we were in this cave. We really hated these old school consultants who had been in politics for like 20 years. [...] We had all these disagreements because the old school consultants were like, “You need to go up on TV, you need to focus o...
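
The legs-and-felonies point from earlier in the post is worth seeing numerically: with a skewed distribution, almost everyone can be better than the mean, while at most half can be better than the median. A minimal sketch with made-up counts:

```python
# Numerical version of the post's point: "better than average" and "better than
# the median" behave very differently for skewed quantities. Data are made up.

import statistics

# Hypothetical population of 1,000 people: most commit zero felonies, ten commit many.
felonies = [0] * 990 + [50] * 10

mean = statistics.mean(felonies)      # 0.5
median = statistics.median(felonies)  # 0.0

better_than_mean = sum(1 for x in felonies if x < mean)      # fewer felonies than average
better_than_median = sum(1 for x in felonies if x < median)  # fewer felonies than the median

print(f"{better_than_mean / 1000:.0%} commit fewer felonies than average")      # 99%
print(f"{better_than_median / 1000:.0%} commit fewer felonies than the median")  # 0%
```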

    LessWrong is providing feedback and proofreading on drafts as a service by Ruby

    Play Episode Listen Later Dec 12, 2021 5:20


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LessWrong is providing feedback and proofreading on drafts as a service, published by Ruby on LessWrong. This announcement follows the Amazon PR/FAQ format. This is an actual feature announcement. TL;DR Before one publishes a post, it can be hard to know if you caught all the typos, explained things clearly, made a critical error, or wrote something that anybody is interested in to begin with. To reduce the guesswork, LessWrong is now providing free feedback on drafts (and post ideas) to any user with 100+ karma. We'll provide the feedback ourselves, send your draft to a professional copy editor, or get the opinion of a relevant peer or expert in your domain. Or something else, whatever is needed to be helpful! The Problem Many people are reluctant to share posts before they're confident that (i) they're correct, (ii) they'll get a good reception. It sucks to put out a post and then notice a dumb typo a day later, or to publish and then have a critical flaw immediately revealed to everyone, or to share a post and hear only crickets. The fear of these outcomes is enough to prevent a lot of great ideas from ever escaping their creators' heads. And although many people feel better after getting some feedback, soliciting it can be effortful–you've got to find someone else and then tap into your social capital and ask a favor. Solution To help get more excellent posts into the world, LessWrong is now providing feedback on tap. Any author with 100+ karma can ask for the kind of feedback they need, and the LessWrong team will make it happen. Quick, easy, free. Within a couple of days (or hours), we'll have feedback on your post that will let you post with greater confidence that your post is good. Getting Started On the post edit page (create a new post or edit an existing draft), if you have 100+ karma, you will see a new button: Request Feedback. Clicking it will start an Intercom chat with a LessWrong team member; in that chat, describe what kind of feedback you're looking for (proofreading, style, coherence, expert feedback, etc.) and the LessWrong team will make it happen. You needn't have even written anything to use the feature. Feel free to chat to us about post ideas you have. The new button (left) appears when you create a new post or edit an existing one. Press "Request Feedback" to have the Intercom Messenger pop up. Quotes (fictional) After getting a round of feedback through the new LessWrong system, I'm much less afraid that people will ignore or downvote my post. I've got evidence that it's something good that people will want to read - Oliver Habryka A great benefit from the LessWrong feedback system, now that I've used it several times, is that the detailed feedback has helped me improve as a writer. - John McPostALot FAQ Who will provide the feedback? It depends on the kind of feedback being sought. For a quick sanity check or proofread, a LessWrong team member or volunteer might do it. If more thorough copy-editing is requested, we'll send your draft to a professional copy-editor. And if you're looking for comments from a domain expert (biology, AI, etc), we'll find someone willing to provide such feedback. These types of reviewers are our current guess at what we will provide, but that might evolve over time as we figure out what kinds of feedback people need. How quickly will I get the feedback? 
Depends on the kind of feedback being sought. The LessWrong team can get things back to you within a day or two; a copy-editor will probably be variable, but sometimes quick; for external domain experts, it could be a bit longer. How much does this cost? Free to eligible users. How many times can I use it? We're not setting any explicit limits on how many times you can request feedback; however, requests will be prioritized at our discretion (hopefully we have the capacit...

    Industrial literacy by jasoncrawford

    Play Episode Listen Later Dec 12, 2021 4:59


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Industrial literacy, published by jasoncrawford on LessWrong. Write a Review This is a linkpost for I've said before that understanding where our modern standard of living comes from, at a basic level, is a responsibility of every citizen in an industrial civilization. Let's call it “industrial literacy.” Industrial literacy is understanding. That the food you eat is grown using synthetic fertilizers, and that this is needed for agricultural productivity, because all soil loses its fertility naturally over time if it is not deliberately replenished. That before we had modern agriculture, more than half the workforce had to labor on farms, just to feed the other half. That if synthetic fertilizer was suddenly lost, a mass famine would ensue and billions would starve. That those same crops would not be able to feed us if they were not also protected from pests, who will ravage entire fields if given a chance. That whole regions used to see seasons where they would lose large swaths of their produce to swarms of insects, such as boll weevils attacking cotton plants in the American South, or the phylloxera devouring grapes in the vineyards of France. That before synthetic pesticides, farmers were forced to rely on much more toxic substances, such as compounds of arsenic. That before we had electricity and clean natural gas, people burned unrefined solid fuels in their homes—wood, coal, even dung (!)—to cook their food and to keep from freezing in winter. That these primitive fuels, dirty with contaminants, created toxic smoke: indoor air pollution. That indoor air pollution remains a problem today for 40% of the world population, who still rely on pre-industrial fuels. That before twentieth-century appliances, housework was a full-time job, which invariably fell on women. That each household would spend almost 60 hours a week on manual labor: hauling water from the well for drinking and cooking, and then carrying the dirty water outside again; sewing clothes by hand, since store-bought ones were too expensive for most families; laundering clothes in a basin, scrubbing laboriously by hand, then hanging them up to dry; cooking every meal from scratch. That the washing machine, clothes dryer, dishwasher, vacuum cleaner, and microwave are the equivalent of a full-time mechanical servant for every household. That plastics are produced in enormous quantities because, for so many purposes—from food containers to electrical wire coatings to children's toys—we need a material that is cheap, light, flexible, waterproof, and insulating, and that can easily be made in any shape and color (including transparent!) That before plastic, many of these applications used animal parts, such as ivory tusks, tortoise shells, or whale bone. That in such a world, those products were a luxury for a wealthy elite, instead of a commodity for the masses, and the animals that provided them were hunted to near extinction. That automobiles are a lifeline to people who live in rural areas (almost 20% in the US alone), and who were deeply isolated in the era before the car and the telephone. That in a world without automobiles, we relied on millions of horses, which in New York City around 1900 dumped a hundred thousand gallons of urine and millions of pounds of manure on the streets daily. 
That half of everyone you know over the age of five is alive today only because of antibiotics, vaccines, and sanitizing chemicals in our water supply. That before these innovations, infant mortality (in the first year of life) was as high as 20%. When you know these facts of history—which many schools do not teach—you understand what “industrial civilization” is and why it is the benefactor of everyone who is lucky enough to live in it. You understand that the electric generator, the automobile, the chemical plant, ...

    The Parable of Predict-O-Matic by abramdemski

    Play Episode Listen Later Dec 12, 2021 23:58


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Parable of Predict-O-Matic, published by abramdemski on LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. I've been thinking more about partial agency. I want to expand on some issues brought up in the comments to my previous post, and on other complications which I've been thinking about. But for now, a more informal parable. (Mainly because this is easier to write than my more technical thoughts.) This relates to oracle AI and to inner optimizers, but my focus is a little different. 1 Suppose you are designing a new invention, a predict-o-matic. It is a wonderous machine which will predict everything for us: weather, politics, the newest advances in quantum physics, you name it. The machine isn't infallible, but it will integrate data across a wide range of domains, automatically keeping itself up-to-date with all areas of science and current events. You fully expect that once your product goes live, it will become a household utility, replacing services like Google. (Google only lets you search the known!) Things are going well. You've got investors. You have an office and a staff. These days, it hardly even feels like a start-up any more; progress is going well. One day, an intern raises a concern. "If everyone is going to be using Predict-O-Matic, we can't think of it as a passive observer. Its answers will shape events. If it says stocks will rise, they'll rise. If it says stocks will fall, then fall they will. Many people will vote based on its predictions." "Yes," you say, "but Predict-O-Matic is an impartial observer nonetheless. It will answer people's questions as best it can, and they react however they will." "But --" the intern objects -- "Predict-O-Matic will see those possible reactions. It knows it could give several different valid predictions, and different predictions result in different futures. It has to decide which one to give somehow." You tap on your desk in thought for a few seconds. "That's true. But we can still keep it objective. It could pick randomly." "Randomly? But some of these will be huge issues! Companies -- no, nations -- will one day rise or fall based on the word of Predict-O-Matic. When Predict-O-Matic is making a prediction, it is choosing a future for us. We can't leave that to a coin flip! We have to select the prediction which results in the best overall future. Forget being an impassive observer! We need to teach Predict-O-Matic human values!" You think about this. The thought of Predict-O-Matic deliberately steering the future sends a shudder down your spine. But what alternative do you have? The intern isn't suggesting Predict-O-Matic should lie, or bend the truth in any way -- it answers 100% honestly to the best of its ability. But (you realize with a sinking feeling) honesty still leaves a lot of wiggle room, and the consequences of wiggles could be huge. After a long silence, you meet the interns eyes. "Look. People have to trust Predict-O-Matic. And I don't just mean they have to believe Predict-O-Matic. They're bringing this thing into their homes. They have to trust that Predict-O-Matic is something they should be listening to. We can't build value judgements into this thing! 
If it ever came out that we had coded a value function into Predict-O-Matic, a value function which selected the very future itself by selecting which predictions to make -- we'd be done for! No matter how honest Predict-O-Matic remained, it would be seen as a manipulator. No matter how beneficent its guiding hand, there are always compromises, downsides, questionable calls. No matter how careful we were to set up its values -- to make them moral, to make them humanitarian, to make them politically correct and broadly appealing -- who are we to choose? No. We'd be done for. They'd hang us....
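
The intern's worry can be phrased as a fixed-point problem: when the published forecast feeds back into the outcome, more than one forecast can be perfectly honest. Here is a minimal sketch with an invented response curve (a stylized bank run), just to show that several self-consistent predictions can coexist; nothing in the original parable specifies this model.

```python
# Toy fixed-point view of the parable: if the outcome depends on the published
# prediction, several forecasts can each be exactly as accurate as promised.
# The response curve below is invented purely for illustration.

def outcome_prob(predicted_prob):
    # Chance a bank run actually happens, given the forecast Predict-O-Matic
    # publishes: a confident "run likely" forecast triggers the panic it predicts.
    return 0.9 if predicted_prob >= 0.5 else 0.1

# An "honest" forecast is one that matches the outcome frequency it causes.
fixed_points = [p / 100 for p in range(101)
                if abs(outcome_prob(p / 100) - p / 100) < 1e-9]
print(fixed_points)  # [0.1, 0.9]: two self-consistent forecasts,
                     # each of which selects a different future.
```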

    Being the (Pareto) Best in the World by johnswentworth

    Play Episode Listen Later Dec 12, 2021 4:55


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Being the (Pareto) Best in the World, published by johnswentworth on LessWrong. The generalized efficient markets (GEM) principle says, roughly, that things which would give you a big windfall of money and/or status, will not be easy. If such an opportunity were available, someone else would have already taken it. You will never find a $100 bill on the floor of Grand Central Station at rush hour, because someone would have picked it up already. One way to circumvent GEM is to be the best in the world at some relevant skill. A superhuman with hawk-like eyesight and the speed of the Flash might very well be able to snag $100 bills off the floor of Grand Central. More realistically, even though financial markets are the ur-example of efficiency, a handful of firms do make impressive amounts of money by being faster than anyone else in their market. I'm unlikely to ever find a proof of the Riemann Hypothesis, but Terry Tao might. Etc. But being the best in the world, in a sense sufficient to circumvent GEM, is not as hard as it might seem at first glance (though that doesn't exactly make it easy). The trick is to exploit dimensionality. Consider: becoming one of the world's top experts in proteomics is hard. Becoming one of the world's top experts in macroeconomic modelling is hard. But how hard is it to become sufficiently expert in proteomics and macroeconomic modelling that nobody is better than you at both simultaneously? In other words, how hard is it to reach the Pareto frontier? Having reached that Pareto frontier, you will have circumvented the GEM: you will be the single best-qualified person in the world for (some) problems which apply macroeconomic modelling to proteomic data. You will have a realistic shot at a big money/status windfall, with relatively little effort. (Obviously we're oversimplifying a lot by putting things like “macroeconomic modelling skill” on a single axis, and breaking it out onto multiple axes would strengthen the main point of this post. On the other hand, it would complicate the explanation; I'm keeping it simple for now.) Let's dig into a few details of this approach. Elbow Room There are many table tennis players, but only one best player in the world. This is a side effect of ranking people on one dimension: there's only going to be one point furthest to the right (absent a tie). Pareto optimality pushes us into more dimensions. There's only one best table tennis player, and only one best 100-meter sprinter, but there can be an unlimited number of Pareto-optimal table tennis/sprinters. Problem is, for GEM purposes, elbow room matters. Maybe I'm on the Pareto frontier of Bayesian statistics and gerontology, but if there's one person just a little bit better at statistics and worse at gerontology than me, and another person just a little bit better at gerontology and worse at statistics, then GEM only gives me the advantage over a tiny little chunk of the skill-space. This brings up another aspect. Problem Density Claiming a spot on a Pareto frontier gives you some chunk of the skill-space to call your own. But that's only useful to the extent that your territory contains useful problems. Two pieces factor in here. First, how large a territory can you claim? This is about elbow room, as in the diagram above. Second, what's the density of useful problems within this region of skill-space? 
The table tennis/sprinting space doesn't have a whole lot going on. Statistics and gerontology sounds more promising. Cryptography and monetary economics is probably a particularly rich Pareto frontier these days. (And of course, we don't need to stop at two dimensions - but we're going to stop there in this post in order to keep things simple.) Dimensionality One problem with this whole GEM-vs-Pareto concept: if chasing a Pareto frontier makes it ...
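
The elbow-room point lends itself to a quick simulation: with skills drawn independently at random (a simplifying assumption, not something from the post), only one person tops any single axis, but the number of Pareto-optimal people grows rapidly as dimensions are added. A rough sketch:

```python
# Rough check of the claim above: the Pareto frontier is much roomier than
# "best in the world" on a single axis, and it widens as dimensions are added.
# Skills are independent uniform draws, which is a simplifying assumption.

import random

def pareto_frontier_size(population, dims, rng):
    people = [[rng.random() for _ in range(dims)] for _ in range(population)]

    def dominated(p):
        # p is dominated if someone else is at least as good on every axis.
        return any(q is not p and all(q[i] >= p[i] for i in range(dims))
                   for q in people)

    return sum(1 for p in people if not dominated(p))

rng = random.Random(0)
for dims in (1, 2, 3, 4):
    size = pareto_frontier_size(500, dims, rng)
    print(f"{dims} skill dimension(s): {size} of 500 people are Pareto-optimal")
# One dimension has a single best person; a few dimensions give dozens of
# people a niche in which nobody strictly beats them.
```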

    Welcome to LessWrong! by Ruby, habryka, Ben Pace, Raemon

    Play Episode Listen Later Dec 12, 2021 3:12


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Welcome to LessWrong!, published by Ruby, habryka, Ben Pace, Raemon on LessWrong. The road to wisdom? -- Well, it's plain and simple to express: Err and err and err again but less and less and less. - Piet Hein Hence the name LessWrong. We might never attain perfect understanding of the world, but we can at least strive to become less and less wrong each day. We are a community dedicated to improving our reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. More generally, we work to develop and practice the art of human rationality.[1] To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one's rationality to real-world problems. LessWrong serves these purposes with its library of rationality writings, community discussion forum, open questions research platform, and community page for in-person events. To get a feel for what LessWrong is about, check out our Concepts page, or view this selection of LessWrong posts which might appeal to you: What is rationality and why care about it? Try Your intuitions are not magic and The Cognitive Science of Rationality. Curious about the mind? You might enjoy How An Algorithm Feels From The Inside and The Apologist and the Revolutionary. Keen on self-improvement? Remember that Humans are not automatically strategic. Care about argument and evidence? Consider Policy Debates Should Not Appear One-Sided and How To Convince Me that 2 + 2 = 3. Interested in how to use language well? Be aware of 37 Ways That Words Can Be Wrong. Want to teach yourself something? We compiled a list of The Best Textbooks on Every Subject. Like probability and statistics? Around here we're fans of Bayesianism, you might like this interactive guide to Bayes' theorem (hosted on Arbital.com). Of an altruistic mindset? We recommend On Caring. Check out this footnote[2] below the fold for samples of posts about AI, science, philosophy, history, communication, culture, self-care, and more. If LessWrong seems like a place for you, we encourage you to become familiar with LessWrong's philosophical foundations. Our core readings can be be found on the Library page. We especially recommend: Rationality: From AI to Zombies by Eliezer Yudkowsky (or Harry Potter and the Methods of Rationality by the same author, which covers similar ground in narrative form) The Codex by Scott Alexander Find more details about these texts in this footnote[3] For further getting started info, we direct you to LessWrong's FAQ. Lastly, we suggest you create an account so you can vote, comment, save your reading progress, get tailored recommendations, and subscribe to our latest and best posts. Once you've done so, please say hello on our latest welcome thread! Related Pages LessWrong FAQ A Brief History of LessWrong Team LessWrong Concepts thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.

    PR is corrosive; “reputation” is not by AnnaSalamon

    Play Episode Listen Later Dec 12, 2021 2:53


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: PR is corrosive; “reputation” is not, published by AnnaSalamon on LessWrong. This is in some sense a small detail, but one important enough to be worth write-up and critique: AFAICT, “PR” is a corrupt concept, in the sense that if you try to “navigate PR concerns” about yourself / your organization / your cause area / etc., the concept will guide you toward harmful and confused actions. In contrast, if you try to safeguard your “reputation”, your “brand”, or your “honor,” I predict this will basically go fine, and will not lead you to leave a weird confused residue in yourself or others. To explain the difference: If I am safeguarding my “honor” (or my “reputation”, “brand”, or “good name”), there are some fixed standards that I try to be known as adhering to. For example, in Game of Thrones, the Lannisters are safeguarding their “honor” by adhering to the principle “A Lannister always pays his debts.” They take pains to adhere to a certain standard, and to be known to adhere to that standard. Many examples are more complicated than this; a gentleman of 1800 who took up a duel to defend his “honor” was usually not defending his known adherence to a single simple principle a la the Lannisters. But it was still about his visible adherence to a fixed (though not explicit) societal standard. In contrast, if I am “managing PR concerns,” there is no fixed standards of good conduct, or of my-brand-like conduct, that I am trying to adhere to. Instead, I am trying to do a more complicated operation: Model which words or actions may cause “people” (especially media, or self-reinforcing miasma) to get upset with me; Try to speak in such a way as to not set that off. It's a weirder or loopier process. One that's more prone to self-reinforcing fears of shadows, and one that somehow (I think?) tends to pull a person away from communicating anything at all. Reminiscent of “Politics and the English Language.” Not reminiscent of Strunk and White. One way you can see the difference, is that when people think about “PR” they imagine a weird outside expertise, such that you need to have a “PR consultant” or a “media consultant” who you should nervously heed advice from. When people think about their “honor," it's more a thing they can know or choose directly, and so it is more a thing that leaves them free to communicate something. So: simple suggestion. If, at any point, you find yourself trying to “navigate PR”, or to help some person or organization or cause area or club or whatever to “navigate PR,” see if you can instead think and speak in terms of defending your/their “honor”, “reputation”, or “good name”. And see if that doesn't make everybody feel a bit clearer, freer, and more as though their feet are on the ground. Related: The Inner Ring, by CS Lewis; The New York Times, by Robert Rhinehart. thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.

    Making Beliefs Pay Rent (in Anticipated Experiences) by Eliezer Yudkowsky

    Play Episode Listen Later Dec 12, 2021 6:16


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Making Beliefs Pay Rent (in Anticipated Experiences), published by Eliezer Yudkowsky on LessWrong. Thus begins the ancient parable: If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.” If there's a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don't. Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail. It's tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don't see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don't experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step. You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground? To answer precisely, you must use beliefs like Earth's gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional. It probably does not exaggerate much to describe these two beliefs as sentences made out of words. But these two beliefs have an inferential consequence that is a direct sensory anticipation—if the clock's second hand is on the 12 numeral when you drop the ball, you anticipate seeing it on the 1 numeral when you hear the crash five seconds later. To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience. It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal. 
The same brain that builds a network of inferred causes behind sensory experience can also build a network of causes that is not connected to sensory experience, or poorly connected. Alchemists believed that phlogiston caused fire—we could simplistically model their minds by drawing a little node labeled “Phlogiston,” and an arrow from this node to their sensory experience of a crackling campfire—but this belief yielded no advance predictions; the link from phlogiston to experience was always configur...
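
The bowling-ball example can be worked through explicitly: the two propositional beliefs combine, via elementary kinematics, into a concrete sensory anticipation of roughly five seconds. A minimal sketch, ignoring air resistance as the thought experiment does implicitly:

```python
# Turning the two verbal beliefs into the sensory anticipation they imply.
# Air resistance is ignored; the numbers are the ones given in the example.

import math

g = 9.8          # believed value of Earth's gravity, meters per second squared
height = 120.0   # believed height of the building, meters

fall_time = math.sqrt(2 * height / g)  # from height = (1/2) * g * t**2
print(f"Expect the crash about {fall_time:.1f} seconds after release")  # ~4.9 s
# So if the second hand was on the 12 when the ball was dropped, anticipate
# seeing it near the 1 numeral when the crash is heard.
```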

    That Alien Message by Eliezer Yudkowsky

    Play Episode Listen Later Dec 12, 2021 17:10


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: That Alien Message, published Eliezer Yudkowsky on LessWrong. Imagine a world much like this one, in which, thanks to gene-selection technologies, the average IQ is 140 (on our scale). Potential Einsteins are one-in-a-thousand, not one-in-a-million; and they grow up in a school system suited, if not to them personally, then at least to bright kids. Calculus is routinely taught in sixth grade. Albert Einstein, himself, still lived and still made approximately the same discoveries, but his work no longer seems exceptional. Several modern top-flight physicists have made equivalent breakthroughs, and are still around to talk. (No, this is not the world Brennan lives in.) One day, the stars in the night sky begin to change. Some grow brighter. Some grow dimmer. Most remain the same. Astronomical telescopes capture it all, moment by moment. The stars that change, change their luminosity one at a time, distinctly so; the luminosity change occurs over the course of a microsecond, but a whole second separates each change. It is clear, from the first instant anyone realizes that more than one star is changing, that the process seems to center around Earth particularly. The arrival of the light from the events, at many stars scattered around the galaxy, has been precisely timed to Earth in its orbit. Soon, confirmation comes in from high-orbiting telescopes (they have those) that the astronomical miracles do not seem as synchronized from outside Earth. Only Earth's telescopes see one star changing every second (1005 milliseconds, actually). Almost the entire combined brainpower of Earth turns to analysis. It quickly becomes clear that the stars that jump in luminosity, all jump by a factor of exactly 256; those that diminish in luminosity, diminish by a factor of exactly 256. There is no apparent pattern in the stellar coordinates. This leaves, simply, a pattern of BRIGHT-dim-BRIGHT-BRIGHT... "A binary message!" is everyone's first thought. But in this world there are careful thinkers, of great prestige as well, and they are not so sure. "There are easier ways to send a message," they post to their blogs, "if you can make stars flicker, and if you want to communicate. Something is happening. It appears, prima facie, to focus on Earth in particular. To call it a 'message' presumes a great deal more about the cause behind it. There might be some kind of evolutionary process among, um, things that can make stars flicker, that ends up sensitive to intelligence somehow... Yeah, there's probably something like 'intelligence' behind it, but try to appreciate how wide a range of possibilities that really implies. We don't know this is a message, or that it was sent from the same kind of motivations that might move us. I mean, we would just signal using a big flashlight, we wouldn't mess up a whole galaxy." By this time, someone has started to collate the astronomical data and post it to the Internet. Early suggestions that the data might be harmful, have been... not ignored, but not obeyed, either. If anything this powerful wants to hurt you, you're pretty much dead (people reason). 
Multiple research groups are looking for patterns in the stellar coordinates—or fractional arrival times of the changes, relative to the center of the Earth—or exact durations of the luminosity shift—or any tiny variance in the magnitude shift—or any other fact that might be known about the stars before they changed. But most people are turning their attention to the pattern of BRIGHTS and dims. It becomes clear almost instantly that the pattern sent is highly redundant. Of the first 16 bits, 12 are BRIGHTS and 4 are dims. The first 32 bits received align with the second 32 bits received, with only 7 out of 32 bits different, and then the next 32 bits received have only 9 out of 32 bits different from the s...

    Why the tails come apart by Thrasymachus

    Play Episode Listen Later Dec 12, 2021 12:02


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why the tails come apart, published by Thrasymachus on LessWrong. [I'm unsure how much this rehashes things 'everyone knows already' - if old hat, feel free to downvote into oblivion. My other motivation for the cross-post is the hope it might catch the interest of someone with a stronger mathematical background who could make this line of argument more robust] [Edit 2014/11/14: mainly adjustments and rewording in light of the many helpful comments below (thanks!). I've also added a geometric explanation.] Many outcomes of interest have pretty good predictors. It seems that height correlates to performance in basketball (the average height in the NBA is around 6'7"). Faster serves in tennis improve one's likelihood of winning. IQ scores are known to predict a slew of factors, from income, to chance of being imprisoned, to lifespan. What's interesting is what happens to these relationships 'out on the tail': extreme outliers of a given predictor are seldom similarly extreme outliers on the outcome it predicts, and vice versa. Although 6'7" is very tall, it lies within a couple of standard deviations of the median US adult male height - there are many thousands of US men taller than the average NBA player, yet are not in the NBA. Although elite tennis players have very fast serves, if you look at the players serving the fastest serves ever recorded, they aren't the very best players of their time. It is harder to look at the IQ case due to test ceilings, but again there seems to be some divergence near the top: the very highest earners tend to be very smart, but their intelligence is not in step with their income (their cognitive ability is around +3 to +4 SD above the mean, yet their wealth is much higher than this) (1). The trend seems to be that even when two factors are correlated, their tails diverge: the fastest servers are good tennis players, but not the very best (and the very best players serve fast, but not the very fastest); the very richest tend to be smart, but not the very smartest (and vice versa). Why? Too much of a good thing? One candidate explanation would be that more isn't always better, and the correlations one gets looking at the whole population doesn't capture a reversal at the right tail. Maybe being taller at basketball is good up to a point, but being really tall leads to greater costs in terms of things like agility. Maybe although having a faster serve is better all things being equal, but focusing too heavily on one's serve counterproductively neglects other areas of one's game. Maybe a high IQ is good for earning money, but a stratospherically high IQ has an increased risk of productivity-reducing mental illness. Or something along those lines. I would guess that these sorts of 'hidden trade-offs' are common. But, the 'divergence of tails' seems pretty ubiquitous (the tallest aren't the heaviest, the smartest parents don't have the smartest children, the fastest runners aren't the best footballers, etc. etc.), and it would be weird if there was always a 'too much of a good thing' story to be told for all of these associations. I think there is a more general explanation. The simple graphical explanation [Inspired by this essay from Grady Towers] Suppose you make a scatter plot of two correlated variables. 
Here's one I grabbed off google, comparing the speed of a ball out of a baseball pitcher's hand with its speed crossing the plate: It is unsurprising to see these are correlated (I'd guess the R-square is > 0.8). But if one looks at the extreme end of the graph, the very fastest balls out of the hand aren't the very fastest balls crossing the plate, and vice versa. This feature is general. Look at this data (again convenience sampled from googling 'scatter plot') of this: Or this: Or this: Given a correlation, the envelo...
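
The effect is easy to reproduce with synthetic data: draw two variables with a strong but imperfect correlation and ask how often the top scorer on the predictor is also the top scorer on the outcome. The correlation of 0.8 and the sample sizes below are arbitrary illustrative choices, not numbers from the post.

```python
# Simulating the divergence of tails: even with a strong correlation, the most
# extreme person on x is usually not the most extreme person on y.
# Synthetic bivariate-normal data; r = 0.8 is an arbitrary illustrative value.

import math
import random

def top_matches(n, r, trials, rng):
    hits = 0
    for _ in range(trials):
        xs, ys = [], []
        for _ in range(n):
            x = rng.gauss(0, 1)
            y = r * x + math.sqrt(1 - r * r) * rng.gauss(0, 1)  # corr(x, y) = r
            xs.append(x)
            ys.append(y)
        if max(range(n), key=lambda i: xs[i]) == max(range(n), key=lambda i: ys[i]):
            hits += 1
    return hits / trials

rng = random.Random(0)
print(top_matches(n=1000, r=0.8, trials=500, rng=rng))
# Typically well under one half: the fastest "server" is rarely the best
# "player", even though serve speed strongly predicts ability in this toy model.
```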

    What Do We Mean By "Rationality"? by Eliezer Yudkowsky

    Play Episode Listen Later Dec 12, 2021 10:00


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Do We Mean By "Rationality"?, published by Eliezer Yudkowsky on LessWrong. I mean two things: 1. Epistemic rationality: systematically improving the accuracy of your beliefs. 2. Instrumental rationality: systematically achieving your values. The first concept is simple enough. When you open your eyes and look at the room around you, you'll locate your laptop in relation to the table, and you'll locate a bookcase in relation to the wall. If something goes wrong with your eyes, or your brain, then your mental model might say there's a bookcase where no bookcase exists, and when you go over to get a book, you'll be disappointed. This is what it's like to have a false belief, a map of the world that doesn't correspond to the territory. Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I'm happy to call it that.1 Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It's the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.” So rationality is about forming true beliefs and making decisions that help you win. (Where truth doesn't mean “certainty,” since we can do plenty to increase the probability that our beliefs are accurate even though we're uncertain; and winning doesn't mean “winning at others' expense,” since our values include everything we care about, including other people.) When people say “X is rational!” it's usually just a more strident way of saying “I think X is true” or “I think X is good.” So why have an additional word for “rational” as well as “true” and “good”? An analogous argument can be given against using “true.” There is no need to say “it is true that snow is white” when you could just say “snow is white.” What makes the idea of truth useful is that it allows us to talk about the general features of map-territory correspondence. “True models usually produce better experimental predictions than false models” is a useful generalization, and it's not one you can make without using a concept like “true” or “accurate.” Similarly, “Rational agents make decisions that maximize the probabilistic expectation of a coherent utility function” is the kind of thought that depends on a concept of (instrumental) rationality, whereas “It's rational to eat vegetables” can probably be replaced with “It's useful to eat vegetables” or “It's in your interest to eat vegetables.” We need a concept like “rational” in order to note general facts about those ways of thinking that systematically produce truth or value—and the systematic ways in which we fall short of those standards. As we've observed in the previous essays, experimental psychologists sometimes uncover human reasoning that seems very strange. For example, someone rates the probability “Bill plays jazz” as less than the probability “Bill is an accountant who plays jazz.” This seems like an odd judgment, since any particular jazz-playing accountant is obviously a jazz player. But to what higher vantage point do we appeal in saying that the judgment is wrong ? Experimental psychologists use two gold standards: probability theory, and decision theory. Probability theory is the set of laws underlying rational belief. 
The mathematics of probability applies equally to “figuring out where your bookcase is” and “estimating how many hairs were on Julius Caesar's head,” even though our evidence for the claim “Julius Caesar was bald” is likely to be more complicated and indirect than our evidence for the claim “there's a bookcase in my room.” It's all the same problem of how to process the evidence and observations to update one's beliefs. Similarly, decision theory is the set of laws underlying rational ...
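
The two “gold standards” named above can be written down compactly. What follows is a minimal sketch in standard notation, not taken from the post itself: the first line is the conjunction rule that the Bill judgment violates, and the second is the expected-utility criterion that the phrase “maximize the probabilistic expectation of a coherent utility function” refers to.

    % Probability theory: a conjunction is never more probable than either conjunct,
    % so rating "accountant who plays jazz" above "plays jazz" is incoherent.
    P(\text{accountant} \wedge \text{jazz}) = P(\text{jazz})\,P(\text{accountant} \mid \text{jazz}) \le P(\text{jazz})

    % Decision theory: an instrumentally rational agent chooses the action whose
    % probability-weighted utility over outcomes is highest.
    a^{*} = \arg\max_{a} \sum_{o} P(o \mid a)\, U(o)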

    What 2026 looks like by Daniel Kokotajlo

    Play Episode Listen Later Dec 12, 2021 27:21


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What 2026 looks like, published by Daniel Kokotajlo on LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This was written for the Vignettes Workshop.[1] The goal is to write out a detailed future history (“trajectory”) that is as realistic (to me) as I can currently manage, i.e. I'm not aware of any alternative trajectory that is similarly detailed and clearly more plausible to me. The methodology is roughly: Write a future history of 2022. Condition on it, and write a future history of 2023. Repeat for 2024, 2025, etc. (I'm posting 2022-2026 now so I can get feedback that will help me write 2027+. I intend to keep writing until the story reaches singularity/extinction/utopia/etc.) What's the point of doing this? Well, there are a couple of reasons: Sometimes attempting to write down a concrete example causes you to learn things, e.g. that a possibility is more or less plausible than you thought. Most serious conversation about the future takes place at a high level of abstraction, talking about e.g. GDP acceleration, timelines until TAI is affordable, multipolar vs. unipolar takeoff. vignettes are a neglected complementary approach worth exploring. Most stories are written backwards. The author begins with some idea of how it will end, and arranges the story to achieve that ending. Reality, by contrast, proceeds from past to future. It isn't trying to entertain anyone or prove a point in an argument. Anecdotally, various people seem to have found Paul Christiano's “tales of doom” stories helpful, and relative to typical discussions those stories are quite close to what we want. (I still think a bit more detail would be good — e.g. Paul's stories don't give dates, or durations, or any numbers at all really.)[2] “I want someone to ... write a trajectory for how AI goes down, that is really specific about what the world GDP is in every one of the years from now until insane intelligence explosion. And just write down what the world is like in each of those years because I don't know how to write an internally consistent, plausible trajectory. I don't know how to write even one of those for anything except a ridiculously fast takeoff.” --Buck Shlegeris This vignette was hard to write. To achieve the desired level of detail I had to make a bunch of stuff up, but in order to be realistic I had to constantly ask “but actually though, what would really happen in this situation?” which made it painfully obvious how little I know about the future. There are numerous points where I had to conclude “Well, this does seem implausible, but I can't think of anything more plausible at the moment and I need to move on.” I fully expect the actual world to diverge quickly from the trajectory laid out here. Let anyone who (with the benefit of hindsight) claims this divergence as evidence against my judgment prove it by exhibiting a vignette/trajectory they themselves wrote in 2021. If it maintains a similar level of detail (and thus sticks its neck out just as much) while being more accurate, I bow deeply in respect! I hope this inspires other people to write more vignettes soon. We at the Center on Long-Term Risk would like to have a collection to use for strategy discussions. Let me know if you'd like to do this, and I can give you advice & encouragement! I'd be happy to run another workshop. 
2022 GPT-3 is finally obsolete. OpenAI, Google, Facebook, and DeepMind all have gigantic multimodal transformers, similar in size to GPT-3 but trained on images, video, maybe audio too, and generally higher-quality data. Not only that, but they are now typically fine-tuned in various ways--for example, to answer questions correctly, or produce engaging conversation as a chatbot. The chatbots are fun to talk to but erratic and ultimately considered s...

    Reality-Revealing and Reality-Masking Puzzles by AnnaSalamon

    Play Episode Listen Later Dec 12, 2021 20:45


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reality-Revealing and Reality-Masking Puzzles, published by AnnaSalamon on LessWrong. Tl;dr: I'll try here to show how CFAR's “art of rationality” has evolved over time, and what has driven that evolution. In the course of this, I'll introduce the distinction between what I'll call “reality-revealing puzzles” and “reality-masking puzzles”—a distinction that I think is almost necessary for anyone attempting to develop a psychological art in ways that will help rather than harm. (And one I wish I'd had explicitly back when the Center for Applied Rationality was founded.) I'll also be trying to elaborate, here, on the notion we at CFAR have recently been tossing around about CFAR being an attempt to bridge between common sense and Singularity scenarios—an attempt to figure out how people can stay grounded in common sense and ordinary decency and humane values and so on, while also taking in (and planning actions within) the kind of universe we may actually be living in. Arts grow from puzzles. I like to look at mathematics, or music, or ungodly things like marketing, and ask: What puzzles were its creators tinkering with that led them to leave behind these structures? (Structures now being used by other people, for other reasons.) I picture arts like coral reefs. Coral polyps build shell-bits for their own reasons, but over time there accumulates a reef usable by others. Math built up like this—and math is now a powerful structure for building from. [Sales and Freud and modern marketing/self-help/sales etc. built up some patterns too—and our basic way of seeing each other and ourselves is now built partly in and from all these structures, for better and for worse.] So let's ask: What sort of reef is CFAR living within, and adding to? From what puzzles (what patterns of tinkering) has our “rationality” accumulated? Two kinds of puzzles: “reality-revealing” and “reality-masking” First, some background. Some puzzles invite a kind of tinkering that lets the world in and leaves you smarter. A kid whittling with a pocket knife is entangling her mind with bits of reality. So is a driver who notices something small about how pedestrians dart into streets, and adjusts accordingly. So also is the mathematician at her daily work. And so on. Other puzzles (or other contexts) invite a kind of tinkering that has the opposite effect. They invite a tinkering that gradually figures out how to mask parts of the world from your vision. For example, some months into my work as a math tutor I realized I'd been unconsciously learning how to cue my students into acting like my words made sense (even when they didn't). I'd learned to mask from my own senses the clues about what my students were and were not learning. We'll be referring to these puzzle-types a lot, so it'll help to have a term for them. I'll call these puzzles “good” or “reality-revealing” puzzles, and “bad” or “reality-masking” puzzles, respectively. Both puzzle-types appear abundantly in most folks' lives, often mixed together. 
The same kid with the pocket knife who is busy entangling her mind with data about bark and woodchips and fine motor patterns (from the “good” puzzle of “how can I whittle this stick”), may simultaneously be busy tinkering with the “bad” puzzle of “how can I not-notice when my creations fall short of my hopes.” (Even “good” puzzles can cause skill loss: a person who studies Dvorak may lose some of their QWERTY skill, and someone who adapts to the unselfconscious arguing of the math department may do worse for a while in contexts requiring tact. The distinction is that “good” puzzles do this only incidentally. Good puzzles do not invite a search for configurations that mask bits of reality. Whereas with me and my math tutees, say, there was a direct reward/conditioning response that happe...

    Is Success the Enemy of Freedom? (Full) by alkjash

    Play Episode Listen Later Dec 12, 2021 13:38


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is Success the Enemy of Freedom? (Full), published by alkjash on LessWrong. This is a linkpost for/ I. Parables A. Anna is a graduate student studying p-adic quasicoherent topology. It's a niche subfield of mathematics where Anna feels comfortable working on neat little problems with the small handful of researchers interested in this topic. Last year, Anna stumbled upon a connection between her pet problem and algebraic matroid theory, solving a big open conjecture in the matroid Langlands program. Initially, she was over the moon about the awards and the Quanta articles, but now that things have returned to normal, her advisor is pressuring her to continue working with the matroid theorists with their massive NSF grants and real-world applications. Anna hasn't had time to think about p-adic quasicoherent topology in months. B. Ben is one of the top Tetris players in the world, infamous for his signature move: the reverse double T-spin. Ben spent years perfecting this move, which requires lightning fast reflexes and nerves of steel, and has won dozens of tournaments on its back. Recently, Ben felt like his other Tetris skills needed work and tried to play online without using his signature move, but was greeted by a long string of losses: the Tetris servers kept matching him with the other top players in the world, who absolutely stomped him. Discouraged, Ben gave up on the endeavor and went back to practicing the reverse double T-spin. C. Clara was just promoted to be the youngest Engineering Director at a mid-sized software startup. She quickly climbed the ranks, thanks to her amazing knowledge of all things object-oriented and her excellent communication skills. These days, she finds her schedule packed with what the company needs: back-to-back high-level strategy meetings preparing for the optics of the next product launch, instead of what she loves: rewriting whole codebases in Haskell++. D. Deborah started her writing career as a small-time crime novelist, who split her time between a colorful cast of sleuthy protagonists. One day, her spunky children's character Detective Dolly blew up in popularity due to a Fruit Loops advertising campaign. At the beginning of every month, Deborah tells herself she's going to finally kill off Dolly and get to work on that grand historical romance she's been dreaming about. At the end of every month, Deborah's husband comes home with the mortgage bills for their expensive bayside mansion, paid for with “Dolly money,” and Deborah starts yet another Elementary School Enigma. E. While checking his email in the wee hours of the morning, Professor Evan Evanson notices an appealing seminar announcement: “A Gentle Introduction to P-adic Quasicoherent Topology (Part the First).” Ever since being exposed to the topic in his undergraduate matroid theory class, Evan has always wanted to learn more. He arrives bright and early on the day of the seminar and finds a prime seat, but as others file into the lecture hall, he's greeted by a mortifying realization: it's a graduate student learning seminar, and he's the only faculty member present. Squeezing in his embarrassment, Evan sits through the talk and learns quite a bit of fascinating new mathematics. For some reason, even though he enjoyed the experience, Evan never comes back for Part the Second. F. 
Whenever Frank looks back to his college years, he remembers most fondly the day he was kicked out of the conservative school newspaper for penning a provocative piece about jailing all billionaires. Although he was a mediocre student with a medium-sized drinking problem, on that day Frank felt like a man with principles. A real American patriot in the ranks of Patrick Henry or Thomas Jefferson. After college, Frank met a girl who helped him sort himself out and get sober, a...

    Lessons I've Learned from Self-Teaching by TurnTrout

    Play Episode Listen Later Dec 12, 2021 15:09


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lessons I've Learned from Self-Teaching, published by TurnTrout on LessWrong. In 2018, I was a bright-eyed grad student who was freaking out about AI alignment. I guess I'm still a bright-eyed grad student freaking out about AI alignment, but that's beside the point. I wanted to help, and so I started levelling up. While I'd read Nate Soares's self-teaching posts, there were a few key lessons I'd either failed to internalize or failed to consider at all. I think that implementing these might have doubled the benefit I drew from my studies. I can't usefully write a letter to my past self, so let me write a letter to you instead, keeping in mind that good advice for past-me may not be good advice for you. Make Sure You Remember The Content TL;DR: use a spaced repetition system like Anki. Put in cards for key concepts and practice using the concepts. Review the cards every day without fail. This is the most important piece of advice. The first few months of 2018 were a dream: I was learning math, having fun, and remaking myself. I read and reviewed about one textbook a month. I was learning how to math, how to write proofs and read equations fluently and think rigorously. I had so much fun that I hurt my wrists typing up my thoughts on impact measures. This turned a lot of my life upside-down. My wrists wouldn't fully heal for two years, and a lot happened during that time. After I hurt my wrists, I became somewhat depressed, posted less frequently, and read fewer books. When I looked back in 2019/2020 and asked "when and why did my love for textbooks sputter out?", the obvious answer was "when I hurt my hands and lost my sense of autonomy and became depressed, perchance? And maybe I just became averse to reading that way?" The obvious answer was wrong, but its obvious-ness stopped me from finding the truth until late last year. It felt right, but my introspection had failed me. The real answer is: when I started learning math, I gained a lot of implicit knowledge, like how to write proofs and read math (relatively) quickly. However, I'm no Hermione Granger: left unaided, I'm bad at remembering explicit facts / theorem statements / etc. I gained implicit knowledge but I didn't remember the actual definitions, unless I actually used them regularly (e.g. as I did for real analysis, which I remained quite fluent in and which I regularly use in my research). Furthermore, I think I coincidentally hit steeply diminishing returns on the implicit knowledge around when I injured myself. So basically I'm reading these math textbooks, doing the problems, getting a bit better at writing proofs but not really durably remembering 95% of the content. Maybe part of my subconscious noticed that I seem to be wasting time, that when I come back four months after reading a third of a graph theory textbook, I barely remember the new content I had "learned." I thought I was doing things right. I was doing dozens of exercises and thinking deeply about why each definition was the way it was, thinking about how I could apply these theorems to better reason about my own life and my own research, etc. I explicitly noticed this problem in late 2020 and thought, is there any way I know of to better retain content? ... gee, what about that thing I did in college that let me learn how to read 2,136 standard-use Japanese characters in 90 days? 
you know, Anki spaced repetition, that thing I never tried for math because once I tried and failed to memorize dozens of lines of MergeSort pseudocode with it? hm... This was the moment I started feeling extremely silly (the exact thought was "there's no possible way that my hand is big enough for how facepalm this moment is", IIRC), but also extremely excited. I could fix my problem! And a problem this was. In early 2020, I had an interview where I was asked t...
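
For readers who have not used a spaced repetition system, here is a purely illustrative sketch of the core idea: each successful review pushes a card's next review further into the future, while a failed review resets it. Real systems such as Anki use a more elaborate scheduler (derived from SM-2) with per-card ease adjustments; none of the numbers below come from the post.

    from datetime import date, timedelta

    # Illustrative expanding-interval schedule: remembering a card multiplies the
    # gap until its next review; forgetting resets it to one day. Anki's actual
    # SM-2-derived scheduler also tunes a per-card "ease" factor over time.
    def next_interval(interval_days: float, remembered: bool, ease: float = 2.5) -> float:
        if not remembered:
            return 1.0
        return interval_days * ease

    interval = 1.0
    today = date.today()
    for review, remembered in enumerate([True, True, False, True], start=1):
        interval = next_interval(interval, remembered)
        due = today + timedelta(days=round(interval))
        print(f"review {review}: next due in {interval:.1f} days (around {due})")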

    Expecting Short Inferential Distances by Eliezer Yudkowsky

    Play Episode Listen Later Dec 12, 2021 4:43


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Expecting Short Inferential Distances, published by Eliezer Yudkowsky on LessWrong. Homo sapiens's environment of evolutionary adaptedness (a.k.a. EEA or “ancestral environment”) consisted of hunter-gatherer bands of at most 200 people, with no writing. All inherited knowledge was passed down by speech and memory. In a world like that, all background knowledge is universal knowledge. All information not strictly private is public, period. In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else. When you discover a new oasis, you don't have to explain to your fellow tribe members what an oasis is, or why it's a good idea to drink water, or how to walk. Only you know where the oasis lies; this is private knowledge. But everyone has the background to understand your description of the oasis, the concepts needed to think about water; this is universal knowledge. When you explain things in an ancestral environment, you almost never have to explain your concepts. At most you have to explain one new concept, not two or more simultaneously. In the ancestral environment there were no abstract disciplines with vast bodies of carefully gathered evidence generalized into elegant theories transmitted by written books whose conclusions are a hundred inferential steps removed from universally shared background premises. In the ancestral environment, anyone who says something with no obvious support is a liar or an idiot. You're not likely to think, “Hey, maybe this person has well-supported background knowledge that no one in my band has even heard of,” because it was a reliable invariant of the ancestral environment that this didn't happen. Conversely, if you say something blatantly obvious and the other person doesn't see it, they're the idiot, or they're being deliberately obstinate to annoy you. And to top it off, if someone says something with no obvious support and expects you to believe it—acting all indignant when you don't—then they must be crazy. Combined with the illusion of transparency and self-anchoring (the tendency to model other minds as though they were slightly modified versions of oneself), I think this explains a lot about the legendary difficulty most scientists have in communicating with a lay audience—or even communicating with scientists from other disciplines. When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back. Or listeners assume that things should be visible in one step, when they take two or more steps to explain. Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge. A biologist, speaking to a physicist, can justify evolution by saying it is the simplest explanation. But not everyone on Earth has been inculcated with that legendary history of science, from Newton to Einstein, which invests the phrase “simplest explanation” with its awesome import: a Word of Power, spoken at the birth of theories and carved on their tombstones. To someone else, “But it's the simplest explanation!” may sound like an interesting but hardly knockdown argument; it doesn't feel like all that powerful a tool for comprehending office politics or fixing a broken car. 
Obviously the biologist is infatuated with their own ideas, too arrogant to be open to alternative explanations which sound just as plausible. (If it sounds plausible to me, it should sound plausible to any sane member of my band.) And from the biologist's perspective, they can understand how evolution might sound a little odd at first—but when someone rejects evolution even after the biologist explains that it's the simplest explanation, well, it's clear that nonscientists are just idiots and there's no point in talking ...

    Are we in an AI overhang? by Andy Jones

    Play Episode Listen Later Dec 12, 2021 7:58


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are we in an AI overhang?, published by Andy Jones on LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. Over on Developmental Stages of GPTs, orthonormal mentions it at least reduces the chance of a hardware overhang. An overhang is when you have had the ability to build transformative AI for quite some time, but you haven't because no-one's realised it's possible. Then someone does and surprise! It's a lot more capable than everyone expected. I am worried we're in an overhang right now. I think we right now have the ability to build an orders-of-magnitude more powerful system than we already have, and I think GPT-3 is the trigger for 100x larger projects at Google, Facebook and the like, with timelines measured in months. Investment Bounds GPT-3 is the first AI system that has obvious, immediate, transformative economic value. While much hay has been made about how much more expensive it is than a typical AI research project, in the wider context of megacorp investment, its costs are insignificant. GPT-3 has been estimated to cost $5m in compute to train, and - looking at the author list and OpenAI's overall size - maybe another $10m in labour. Google, Amazon and Microsoft each spend about $20bn/year on R&D and another $20bn each on capital expenditure. Very roughly, it totals to $100bn/year. Against this budget, dropping $1bn or more on scaling GPT up by another factor of 100x is entirely plausible right now. All that's necessary is that tech executives stop thinking of natural language processing as cutesy blue-sky research and start thinking in terms of quarters-till-profitability. A concrete example is Waymo, which is raising $2bn investment rounds - and that's for a technology with a much longer road to market. Compute Cost The other side of the equation is compute cost. The $5m GPT-3 training cost estimate comes from using V100s at $10k/unit and 30 TFLOPS, which is the performance without tensor cores being considered. Amortized over a year, this gives you about $1000/PFLOPS-day. However, this cost is driven up an order of magnitude by NVIDIA's monopolistic cloud contracts, while performance will be higher when taking tensor cores into account. The current hardware floor is nearer to the RTX 2080 TI's $1k/unit for 125 tensor-core TFLOPS, and that gives you $25/PFLOPS-day. This roughly aligns with AI Impacts' current estimates, and offers another >10x speedup to our model. I strongly suspect other bottlenecks stop you from hitting that kind of efficiency or GPT-3 would've happened much sooner, but I still think $25/PFLOPS-day is a lower useful bound. Other Constraints I've focused on money so far because most of the current 3.5-month doubling times come from increasing investment. But money aside, there are a couple of other things that could prove to be the binding constraint. Scaling law breakdown. The GPT series' scaling is expected to break down around 10k pflops-days (§6.3), which is a long way short of the amount of cash on the table. This could be because the scaling analysis was done on 1024-token sequences. Maybe longer sequences can go further. More likely I'm misunderstanding something. Sequence length. GPT-3 uses 2048 tokens at a time, and that's with an efficient encoding that cripples it on many tasks. 
With the naive architecture, increasing the sequence length is quadratically expensive, and getting up to novel-length sequences is not very likely. But there are a lot of plausible ways to fix that, and complexity is no bar to AI. This constraint might plausibly not be resolved on a timescale of months, however. Data availability. From the same paper as the previous point, dataset size rises with the square-root of compute; a 1000x larger GPT-3 would want 10 trillion tokens of train...
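
The $/PFLOPS-day figures quoted above fall out of simple amortization arithmetic. Here is a minimal sketch that reproduces them; the unit prices, TFLOPS numbers, and one-year amortization window are the post's rough assumptions rather than measured values.

    # Amortize a GPU's purchase price over a year, then divide the daily cost by its
    # sustained throughput (in PFLOPS) to get a rough $/PFLOPS-day figure.
    def dollars_per_pflops_day(unit_price_usd: float, tflops: float, amortization_days: float = 365.0) -> float:
        pflops = tflops / 1000.0  # 1 PFLOPS = 1000 TFLOPS
        return (unit_price_usd / amortization_days) / pflops

    # V100 at ~$10k/unit and ~30 TFLOPS (ignoring tensor cores): roughly $1000/PFLOPS-day.
    print(round(dollars_per_pflops_day(10_000, 30)))    # ~913
    # RTX 2080 Ti at ~$1k/unit and ~125 tensor-core TFLOPS: roughly $25/PFLOPS-day.
    print(round(dollars_per_pflops_day(1_000, 125)))    # ~22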

    RadVac Commercial Antibody Test Results by johnswentworth

    Play Episode Listen Later Dec 12, 2021 5:04


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RadVac Commercial Antibody Test Results, published by johnswentworth on LessWrong. Background: Making Vaccine. Results are in from the commercial antibody tests. Both my girlfriend and I came back negative - the test did not detect any Spike antibody response in the blood. This post will talk about how I'm updating based on these results, and the next steps. Here's our timeline so far; more info on the vaccine is in the original post and the radvac whitepaper: We've taken five doses, spaced apart weekly (on Tuesdays). The first three doses only included six of the nine peptides, due to delays from the manufacturer. (Spike 660, Spike 1145, and Orf1 5471T were the three missing.) The blood draw for this test took place the day after the fifth dose. I expect this is too soon to notice significant impact from the last two doses; vaccines in general seem to typically take 2-3 weeks to kick in, and that is my expectation for this one as well. (Also, it was an "IgG antibody test", and WebMD says these antibodies typically take about 2 weeks to show up after covid symptoms show from an actual infection.) This is intended to mainly be a test of the first three doses. The test apparently used the "DiaSorin Liaison(R) SARS-CoV-2 S1/S2 IgG assay" (I didn't know this until the results came in). According to the FDA, it has about 92% sensitivity and 99% specificity. The "S1/S2" part indicates that it's testing for response to the S1 and S2 subunits of the spike protein - together, these are essentially the whole spike protein. Important thing to notice: the test was looking for Spike antibodies, and two of our three missing peptides were Spike peptides. Indeed, there were only 3 Spike peptides among the full 9, so with two missing, we only had one Spike peptide in our first three doses. (The rest target other parts of the virus.) So that makes the test significantly less useful than it would otherwise be, and makes me more inclined to get another test in 2-3 weeks when the doses with the other three peptides have had time to kick in. How I'm Updating In the original post, I called this test "searching under the streetlamp". It wasn't super likely to come back positive even assuming the vaccine worked as intended, but it was relatively cheap and easy to run the test, so it was our first check. Given the missing Spike peptides and the test only checking against Spike, it was even more likely to come back negative than I originally estimated. In Jacob's prediction questions, I gave roughly a 25% chance that a commercial antibody test would pass for most people, given three doses and all 9 peptides. I gave the vaccine about 75% chance of working overall, distributed over several different possible worlds. In this specific scenario, it's clear that the prior on test passing should be even lower. (Reminder on the possible worlds: the vaccine could induce antibody response in the blood and mucus, only mucus, or not at all. It could induce T-cell response separate from antibody response. It could work sometimes, much like how the first dose of commercial mRNA vaccines tend to work in 75% or 85% of people, and in that case I expect more doses/more time to make it work more often.) After updating on the results, I'm down to about 60-70% chance of working overall. 
Unfortunately this test just didn't give us very much information - at least about the vaccine working. Aside from the test result, we do have one more small piece of information to update on: I was quite congested for 1-2 days after the most recent three doses (and I was generally not congested the rest of the week). That's exactly what we'd expect to see if the vaccine is working as intended, and it's pretty strong evidence that it's doing something. Updating on both that and the test results, I'm at ~70% that it works overall...
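
The shift from ~75% down to ~60-70% can be made explicit with a toy Bayesian update on the negative test result. This is only a sketch: the 0.75 prior is the author's stated number, but the likelihoods below are illustrative assumptions (loosely informed by the post's earlier ~25% estimate for a test pass given all nine peptides, and by the assay's ~99% specificity), not the author's actual model.

    # Toy Bayesian update on a negative Spike-antibody test result.
    prior_works = 0.75         # author's stated prior that the vaccine works overall
    p_pos_given_works = 0.25   # assumed: chance this Spike-only test passes if the
                               # vaccine works (the post argues it should be lower still,
                               # since two of the three Spike peptides were missing)
    p_pos_given_not = 0.01     # assumed false-positive rate (~the assay's 99% specificity)

    p_neg_given_works = 1 - p_pos_given_works
    p_neg_given_not = 1 - p_pos_given_not
    p_neg = prior_works * p_neg_given_works + (1 - prior_works) * p_neg_given_not
    posterior_works = prior_works * p_neg_given_works / p_neg
    print(round(posterior_works, 2))  # ~0.69: a negative result only nudges the estimate down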
