Podcast appearances and mentions of Ben Pace

  • 14 podcasts
  • 89 episodes
  • 18m average episode duration
  • 1 episode every other week
  • Latest episode: Sep 19, 2024

POPULARITY

[Popularity chart, 2017–2024]


Latest podcast episodes about Ben Pace

LessWrong Curated Podcast
“How I started believing religion might actually matter for rationality and moral philosophy” by zhukeepa

LessWrong Curated Podcast

Play Episode Listen Later Sep 19, 2024 13:38


After the release of Ben Pace's extended interview with me about my views on religion, I felt inspired to publish more of my thinking about religion in a format that's more detailed, compact, and organized. This post is the first publication in my series of intended posts about religion. Thanks to Ben Pace, Chris Lakin, Richard Ngo, Renshin Lauren Lee, Mark Miller, and Imam Ammar Amonette for their feedback on this post, and thanks to Kaj Sotala, Tomáš Gavenčiak, Paul Colognese, and David Spivak for reviewing earlier versions of this post. Thanks especially to Renshin Lauren Lee and Imam Ammar Amonette for their input on my claims about religion and inner work, and Mark Miller for vetting my claims about predictive processing. In Waking Up, Sam Harris wrote:[1] But I now knew that Jesus, the Buddha, Lao Tzu, and the other saints and sages of [...]

Outline:
(01:36) “Trapped Priors As A Basic Problem Of Rationality”
(03:49) Active blind spots as second-order trapped priors
(06:17) Inner work ≈ the systematic addressing of trapped priors
(08:33) Religious mystical traditions as time-tested traditions of inner work?

The original text contained 12 footnotes which were omitted from this narration.

First published: August 23rd, 2024
Source: https://www.lesswrong.com/posts/X2og6RReKD47vseK8/how-i-started-believing-religion-might-actually-matter-for

Narrated by TYPE III AUDIO.

The Nonlinear Library
LW - Secular interpretations of core perennialist claims by zhukeepa

The Nonlinear Library

Play Episode Listen Later Aug 26, 2024 28:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Secular interpretations of core perennialist claims, published by zhukeepa on August 26, 2024 on LessWrong. After the release of Ben Pace's extended interview with me about my views on religion, I felt inspired to publish more of my thinking about religion in a format that's more detailed, compact, and organized. This post is the second publication in my series of intended posts about religion. Thanks to Ben Pace, Chris Lakin, Richard Ngo, Damon Pourtahmaseb-Sasi, Marcello Herreshoff, Renshin Lauren Lee, Mark Miller, Roger Thisdell, and Imam Ammar Amonette for their feedback on this post, and thanks to Kaj Sotala, Tomáš Gavenčiak, Paul Colognese, and David Spivak for reviewing earlier versions of this post. Thanks especially to Renshin Lauren Lee, Roger Thisdell, and Imam Ammar Amonette for their input on my claims about perennialism, and Mark Miller for vetting my claims about predictive processing. In my previous post, I introduced the idea that there are broad convergences among the mystical traditions of the major world religions, corresponding to a shared underlying essence, called the perennial philosophy, that gave rise to each of these mystical traditions. I think there's nothing fundamentally mysterious, incomprehensible, or supernatural about the claims in the perennial philosophy. My intention in this post is to articulate my interpretations of some central claims of the perennial philosophy, and present them as legible hypotheses about possible ways the world could be. It is not my intention in this post to justify why I believe these claims can be found in the mystical traditions of the major world religions, or why I believe the mystical traditions are centered around claims like these. I also don't expect these hypotheses to seem plausible in and of themselves - these hypotheses only started seeming plausible to me as I went deeper into my own journey of inner work, and started noticing general patterns about my psychology consistent with these claims. I will warn in advance that in many cases, the strongest versions of these claims might not be compatible with the standard scientific worldview, and may require nonstandard metaphysical assumptions to fully make sense of.[1] (No bearded interventionist sky fathers, though!) I intend to explore the metaphysical foundations of the perennialist worldview in a future post; for now, I will simply note where I think nonstandard metaphysical assumptions may be necessary. The Goodness of Reality Sometimes, we feel that reality is bad for being the way it is, and feel a sense of charge around this. To illustrate the phenomenology of this sense of charge, consider the connotation that's present in the typical usages of "blame" that aren't present in the typical usages of "hold responsible"; ditto "punish" vs "disincentivize"; ditto "bad" vs "dispreferred". I don't think there's a word in the English language that unambiguously captures this sense of charge, but I think it's captured pretty well by the technical Buddhist term tanha, which is often translated as "thirst" or "craving". I interpret this sense of charge present in common usages of the words "blame", "punish", and "bad" as corresponding to the phenomenology of "thirst" or "craving"[2] for reality to be different from how it actually is. When our active blind spots get triggered, we scapegoat reality. 
We point a finger at reality and say "this is bad for being the way it is" with feelings of tanha, when really there's some vulnerability getting triggered that we're trying to avoid acknowledging. This naturally invites the following question: of the times we point at reality and say "this is bad for being the way it is" with feelings of tanha, what portion of these stem from active blind spots, and what portion of these responses should we fully endorse ...

The Nonlinear Library: LessWrong
LW - Secular interpretations of core perennialist claims by zhukeepa

The Nonlinear Library: LessWrong

Play Episode Listen Later Aug 26, 2024 28:23


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Secular interpretations of core perennialist claims, published by zhukeepa on August 26, 2024 on LessWrong. After the release of Ben Pace's extended interview with me about my views on religion, I felt inspired to publish more of my thinking about religion in a format that's more detailed, compact, and organized. This post is the second publication in my series of intended posts about religion. Thanks to Ben Pace, Chris Lakin, Richard Ngo, Damon Pourtahmaseb-Sasi, Marcello Herreshoff, Renshin Lauren Lee, Mark Miller, Roger Thisdell, and Imam Ammar Amonette for their feedback on this post, and thanks to Kaj Sotala, Tomáš Gavenčiak, Paul Colognese, and David Spivak for reviewing earlier versions of this post. Thanks especially to Renshin Lauren Lee, Roger Thisdell, and Imam Ammar Amonette for their input on my claims about perennialism, and Mark Miller for vetting my claims about predictive processing. In my previous post, I introduced the idea that there are broad convergences among the mystical traditions of the major world religions, corresponding to a shared underlying essence, called the perennial philosophy, that gave rise to each of these mystical traditions. I think there's nothing fundamentally mysterious, incomprehensible, or supernatural about the claims in the perennial philosophy. My intention in this post is to articulate my interpretations of some central claims of the perennial philosophy, and present them as legible hypotheses about possible ways the world could be. It is not my intention in this post to justify why I believe these claims can be found in the mystical traditions of the major world religions, or why I believe the mystical traditions are centered around claims like these. I also don't expect these hypotheses to seem plausible in and of themselves - these hypotheses only started seeming plausible to me as I went deeper into my own journey of inner work, and started noticing general patterns about my psychology consistent with these claims. I will warn in advance that in many cases, the strongest versions of these claims might not be compatible with the standard scientific worldview, and may require nonstandard metaphysical assumptions to fully make sense of.[1] (No bearded interventionist sky fathers, though!) I intend to explore the metaphysical foundations of the perennialist worldview in a future post; for now, I will simply note where I think nonstandard metaphysical assumptions may be necessary. The Goodness of Reality Sometimes, we feel that reality is bad for being the way it is, and feel a sense of charge around this. To illustrate the phenomenology of this sense of charge, consider the connotation that's present in the typical usages of "blame" that aren't present in the typical usages of "hold responsible"; ditto "punish" vs "disincentivize"; ditto "bad" vs "dispreferred". I don't think there's a word in the English language that unambiguously captures this sense of charge, but I think it's captured pretty well by the technical Buddhist term tanha, which is often translated as "thirst" or "craving". I interpret this sense of charge present in common usages of the words "blame", "punish", and "bad" as corresponding to the phenomenology of "thirst" or "craving"[2] for reality to be different from how it actually is. 
When our active blind spots get triggered, we scapegoat reality. We point a finger at reality and say "this is bad for being the way it is" with feelings of tanha, when really there's some vulnerability getting triggered that we're trying to avoid acknowledging. This naturally invites the following question: of the times we point at reality and say "this is bad for being the way it is" with feelings of tanha, what portion of these stem from active blind spots, and what portion of these responses should we fully endorse ...

The Nonlinear Library
LW - How I started believing religion might actually matter for rationality and moral philosophy by zhukeepa

The Nonlinear Library

Play Episode Listen Later Aug 23, 2024 16:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I started believing religion might actually matter for rationality and moral philosophy, published by zhukeepa on August 23, 2024 on LessWrong. After the release of Ben Pace's extended interview with me about my views on religion, I felt inspired to publish more of my thinking about religion in a format that's more detailed, compact, and organized. This post is the first publication in my series of intended posts about religion. Thanks to Ben Pace, Chris Lakin, Richard Ngo, Renshin Lauren Lee, Mark Miller, and Imam Ammar Amonette for their feedback on this post, and thanks to Kaj Sotala, Tomáš Gavenčiak, Paul Colognese, and David Spivak for reviewing earlier versions of this post. Thanks especially to Renshin Lauren Lee and Imam Ammar Amonette for their input on my claims about religion and inner work, and Mark Miller for vetting my claims about predictive processing. In Waking Up, Sam Harris wrote:[1] But I now knew that Jesus, the Buddha, Lao Tzu, and the other saints and sages of history had not all been epileptics, schizophrenics, or frauds. I still considered the world's religions to be mere intellectual ruins, maintained at enormous economic and social cost, but I now understood that important psychological truths could be found in the rubble. Like Sam, I've also come to believe that there are psychological truths that show up across religious traditions. I furthermore think these psychological truths are actually very related to both rationality and moral philosophy. This post will describe how I personally came to start entertaining this belief seriously. "Trapped Priors As A Basic Problem Of Rationality" "Trapped Priors As A Basic Problem of Rationality" was the title of an AstralCodexTen blog post. Scott opens the post with the following: Last month I talked about van der Bergh et al's work on the precision of sensory evidence, which introduced the idea of a trapped prior. I think this concept has far-reaching implications for the rationalist project as a whole. I want to re-derive it, explain it more intuitively, then talk about why it might be relevant for things like intellectual, political and religious biases. The post describes Scott's take on a predictive processing account of a certain kind of cognitive flinch that prevents certain types of sensory input from being perceived accurately, leading to beliefs that are resistant to updating.[2] Some illustrative central examples of trapped priors: Karl Friston has written about how a traumatized veteran might not hear a loud car as a car, but as a gunshot instead. Scott mentions phobias and sticky political beliefs as central examples of trapped priors. I think trapped priors are very related to the concept that "trauma" tries to point at, but I think "trauma" tends to connote a subset of trapped priors that are the result of some much more intense kind of injury. "Wounding" is a more inclusive term than trauma, but tends to refer to trapped priors learned within an organism's lifetime, whereas trapped priors in general also include genetically pre-specified priors, like a fear of snakes or a fear of starvation. 
My forays into religion and spirituality actually began via the investigation of my own trapped priors, which I had previously articulated to myself as "psychological blocks", and explored in contexts that were adjacent to therapy (for example, getting my psychology dissected at Leverage Research, and experimenting with Circling). It was only after I went deep in my investigation of my trapped priors that I learned of the existence of traditions emphasizing the systematic and thorough exploration of trapped priors. These tended to be spiritual traditions, which is where my interest in spirituality actually began.[3] I will elaborate more on this later. Active blind spots as second-order trapp...
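A toy Bayes calculation (with made-up numbers) gives a feel for why an extreme prior is so sticky; note that the trapped-prior mechanism Scott describes involves more than correct Bayesian updating, so this only illustrates the stickiness, not the pathology itself:

```python
# Toy Bayes update with made-up numbers: an extreme prior barely moves under
# moderate evidence. (The "trapped prior" mechanism in the post involves more
# than correct Bayesian updating; this only illustrates stickiness.)
def posterior(prior: float, likelihood_ratio: float) -> float:
    """P(H | E) given prior P(H) and likelihood ratio P(E|H)/P(E|not H)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

print(posterior(0.50, 1 / 5))    # moderate prior, 5:1 evidence against -> ~0.17
print(posterior(0.999, 1 / 5))   # extreme prior, same evidence -> ~0.995
```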

The Nonlinear Library: LessWrong
LW - How I started believing religion might actually matter for rationality and moral philosophy by zhukeepa

The Nonlinear Library: LessWrong

Play Episode Listen Later Aug 23, 2024 16:42


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I started believing religion might actually matter for rationality and moral philosophy, published by zhukeepa on August 23, 2024 on LessWrong. After the release of Ben Pace's extended interview with me about my views on religion, I felt inspired to publish more of my thinking about religion in a format that's more detailed, compact, and organized. This post is the first publication in my series of intended posts about religion. Thanks to Ben Pace, Chris Lakin, Richard Ngo, Renshin Lauren Lee, Mark Miller, and Imam Ammar Amonette for their feedback on this post, and thanks to Kaj Sotala, Tomáš Gavenčiak, Paul Colognese, and David Spivak for reviewing earlier versions of this post. Thanks especially to Renshin Lauren Lee and Imam Ammar Amonette for their input on my claims about religion and inner work, and Mark Miller for vetting my claims about predictive processing. In Waking Up, Sam Harris wrote:[1] But I now knew that Jesus, the Buddha, Lao Tzu, and the other saints and sages of history had not all been epileptics, schizophrenics, or frauds. I still considered the world's religions to be mere intellectual ruins, maintained at enormous economic and social cost, but I now understood that important psychological truths could be found in the rubble. Like Sam, I've also come to believe that there are psychological truths that show up across religious traditions. I furthermore think these psychological truths are actually very related to both rationality and moral philosophy. This post will describe how I personally came to start entertaining this belief seriously. "Trapped Priors As A Basic Problem Of Rationality" "Trapped Priors As A Basic Problem of Rationality" was the title of an AstralCodexTen blog post. Scott opens the post with the following: Last month I talked about van der Bergh et al's work on the precision of sensory evidence, which introduced the idea of a trapped prior. I think this concept has far-reaching implications for the rationalist project as a whole. I want to re-derive it, explain it more intuitively, then talk about why it might be relevant for things like intellectual, political and religious biases. The post describes Scott's take on a predictive processing account of a certain kind of cognitive flinch that prevents certain types of sensory input from being perceived accurately, leading to beliefs that are resistant to updating.[2] Some illustrative central examples of trapped priors: Karl Friston has written about how a traumatized veteran might not hear a loud car as a car, but as a gunshot instead. Scott mentions phobias and sticky political beliefs as central examples of trapped priors. I think trapped priors are very related to the concept that "trauma" tries to point at, but I think "trauma" tends to connote a subset of trapped priors that are the result of some much more intense kind of injury. "Wounding" is a more inclusive term than trauma, but tends to refer to trapped priors learned within an organism's lifetime, whereas trapped priors in general also include genetically pre-specified priors, like a fear of snakes or a fear of starvation. 
My forays into religion and spirituality actually began via the investigation of my own trapped priors, which I had previously articulated to myself as "psychological blocks", and explored in contexts that were adjacent to therapy (for example, getting my psychology dissected at Leverage Research, and experimenting with Circling). It was only after I went deep in my investigation of my trapped priors that I learned of the existence of traditions emphasizing the systematic and thorough exploration of trapped priors. These tended to be spiritual traditions, which is where my interest in spirituality actually began.[3] I will elaborate more on this later. Active blind spots as second-order trapp...

The Nonlinear Library
LW - Thiel on AI & Racing with China by Ben Pace

The Nonlinear Library

Play Episode Listen Later Aug 20, 2024 17:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thiel on AI & Racing with China, published by Ben Pace on August 20, 2024 on LessWrong. This post is a transcript of part of a podcast with Peter Thiel, touching on topics of AI, China, extinction, Effective Altruists, and apocalyptic narratives, published on August 16th 2024. If you're interested in reading the quotes, just skip straight to them, the introduction is not required reading. Introduction Peter Thiel is probably known by most readers, but briefly: he is an venture capitalist, the first outside investor in Facebook, cofounder of Paypal and Palantir, and wrote Zero to One (a book I have found very helpful for thinking about building great companies). He has also been one of the primary proponents of the Great Stagnation hypothesis (along with Tyler Cowen). More local to the LessWrong scene, Thiel was an early funder of MIRI and a speaker at the first Effective Altruism summit in 2013. He funded Leverage Research for many years, and also a lot of anti-aging research, and the seasteading initiative, and his Thiel Fellowship included a number of people who are around the LessWrong scene. I do not believe he has been active around this scene much in the last ~decade. He appears rarely to express a lot of positions about society, and I am curious to hear them when he does. In 2019 I published the transcript of another longform interview of his here with Eric Weinstein. Last week another longform interview with him came out, and I got the sense again, that even though we disagree on many things, conversation with him would be worthwhile and interesting. Then about 3 hours in he started talking more directly about subjects that I think actively about and some conflicts around AI, so I've quoted the relevant parts below. His interviewer, Joe Rogan is a very successful comedian and podcaster. He's not someone who I would go to for insights about AI. I think of him as standing in for a well-intentioned average person, for better or for worse, although he is a little more knowledgeable and a little more intelligent and a lot more curious than the average person. The average Joe. I believe he is talking in good faith to the person before him, with curiosity, and making points that seem natural to many. Artificial Intelligence Discussion focused on the AI race and China, atarting at 2:56:40. The opening monologue by Rogan is skippable. Rogan If you look at this mad rush for artificial intelligence - like, they're literally building nuclear reactors to power AI. Thiel Well, they're talking about it. Rogan Okay. That's because they know they're gonna need enormous amounts of power to do it. Once it's online, and it keeps getting better and better, where does that go? That goes to a sort of artificial life-form. I think either we become that thing, or we integrate with that thing and become cyborgs, or that thing takes over. And that thing becomes the primary life force of the universe. And I think that biological life, we look at like life, because we know what life is, but I think it's very possible that digital life or created life might be a superior life form. Far superior. [...] I love people, I think people are awesome. I am a fan of people. But if I had to look logically, I would assume that we are on the way out. 
And that the only way forward, really, to make an enormous leap in terms of the integration of society and technology and understanding our place in the universe, is for us to transcend our physical limitations that are essentially based on primate biology, and these primate desires for status (like being the captain), or for control of resources, all of these things - we assume these things are standard, and that they have to exist in intelligent species. I think they only have to exist in intelligent species that have biological limitations. I think in...

The Nonlinear Library: LessWrong
LW - Thiel on AI and Racing with China by Ben Pace

The Nonlinear Library: LessWrong

Play Episode Listen Later Aug 20, 2024 17:48


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thiel on AI & Racing with China, published by Ben Pace on August 20, 2024 on LessWrong. This post is a transcript of part of a podcast with Peter Thiel, touching on topics of AI, China, extinction, Effective Altruists, and apocalyptic narratives, published on August 16th 2024. If you're interested in reading the quotes, just skip straight to them, the introduction is not required reading. Introduction Peter Thiel is probably known by most readers, but briefly: he is an venture capitalist, the first outside investor in Facebook, cofounder of Paypal and Palantir, and wrote Zero to One (a book I have found very helpful for thinking about building great companies). He has also been one of the primary proponents of the Great Stagnation hypothesis (along with Tyler Cowen). More local to the LessWrong scene, Thiel was an early funder of MIRI and a speaker at the first Effective Altruism summit in 2013. He funded Leverage Research for many years, and also a lot of anti-aging research, and the seasteading initiative, and his Thiel Fellowship included a number of people who are around the LessWrong scene. I do not believe he has been active around this scene much in the last ~decade. He appears rarely to express a lot of positions about society, and I am curious to hear them when he does. In 2019 I published the transcript of another longform interview of his here with Eric Weinstein. Last week another longform interview with him came out, and I got the sense again, that even though we disagree on many things, conversation with him would be worthwhile and interesting. Then about 3 hours in he started talking more directly about subjects that I think actively about and some conflicts around AI, so I've quoted the relevant parts below. His interviewer, Joe Rogan is a very successful comedian and podcaster. He's not someone who I would go to for insights about AI. I think of him as standing in for a well-intentioned average person, for better or for worse, although he is a little more knowledgeable and a little more intelligent and a lot more curious than the average person. The average Joe. I believe he is talking in good faith to the person before him, with curiosity, and making points that seem natural to many. Artificial Intelligence Discussion focused on the AI race and China, atarting at 2:56:40. The opening monologue by Rogan is skippable. Rogan If you look at this mad rush for artificial intelligence - like, they're literally building nuclear reactors to power AI. Thiel Well, they're talking about it. Rogan Okay. That's because they know they're gonna need enormous amounts of power to do it. Once it's online, and it keeps getting better and better, where does that go? That goes to a sort of artificial life-form. I think either we become that thing, or we integrate with that thing and become cyborgs, or that thing takes over. And that thing becomes the primary life force of the universe. And I think that biological life, we look at like life, because we know what life is, but I think it's very possible that digital life or created life might be a superior life form. Far superior. [...] I love people, I think people are awesome. I am a fan of people. But if I had to look logically, I would assume that we are on the way out. 
And that the only way forward, really, to make an enormous leap in terms of the integration of society and technology and understanding our place in the universe, is for us to transcend our physical limitations that are essentially based on primate biology, and these primate desires for status (like being the captain), or for control of resources, all of these things - we assume these things are standard, and that they have to exist in intelligent species. I think they only have to exist in intelligent species that have biological limitations. I think in...

The Nonlinear Library
LW - Debate: Get a college degree? by Ben Pace

The Nonlinear Library

Play Episode Listen Later Aug 13, 2024 30:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Debate: Get a college degree?, published by Ben Pace on August 13, 2024 on LessWrong. Epistemic Status: Soldier mindset. These are not our actual positions, these are positions we were randomly assigned by a coin toss, and for which we searched for the strongest arguments we could find, over the course of ~1hr 45mins. That said, this debate is a little messy between our performed positions and our personal ones. Sides: Ben is arguing against getting a college degree, and Saul is arguing for . (This is a decision Saul is currently making for himself!) Reading Order: Ben and Saul drafted each round of statements simultaneously. This means that each of Ben's statements you read were written without Ben having read Saul's statements that are immediately proceeding. (This does not apply to the back-and-forth interview.) Saul's Opening Statement first - i do think there's a qualitative difference between the position "getting an undergrad degree is good" vs "getting the typical undergrad experience is good." i think the second is in some ways more defensible than the first, but in most ways less so. For "getting the typical undergrad experience is good" This sort of thing is a strong Chesterton fence. People have been having the typical experience of an undergrad for a while (even while that typical experience changes). General upkeeping of norms/institutions is good. I think that - for a some ppl - their counterfactual is substantially worse. Even if this means college is functionally daycare, I'd rather they be in adult-day-care than otherwise being a drain on society (e.g. crime). It presents the option for automatic solutions to a lot of problems: Socializing high density of possible friends, romantic partners, etc you have to go to classes, talk to ppl, etc Exercise usually a free gym that's at-least functional you gotta walk to class, dining hall, etc Tons of ability to try slightly "weird" stuff you've never tried before - clubs, sports, events, greek life, sexual interactions, classes, etc I think a lot of these things get a lot more difficult when you haven't had the opportunity to experiment w them. A lot of ppl haven't experimented w much of anything before - college gives them an easy opportunity to do that w minimal friction before doing so becomes gated behind a ridiculous amount of friction. E.g. getting into a new hobby as an adult is a bit odd, in most social settings - but in college, it's literally as simple as joining that club. Again - while all of these sorts of things are possible outside of college, they become more difficult, outside of the usual norms, etc. For "getting an undergrad degree is good": This is a strong Chesterton fence. People have been getting undergrad degrees - or similar - for a wihle. It's an extremely legible symbol for a lot of society: Most ppl who get undergrad degrees aren't getting the sort of undergrad degree that ben or i sees - i think most are from huge state schools, followed by the gigantic tail of no-name schools. For those ppl, and for the jobs they typically seek, my guess is that for demonstrating the necessary things, like "i can listen to & follow directions, navigate general beaurocracies, learn things when needed, talk to people when needed, and am unlikely to be a extremely mentally ill, etc" - an undergrad degree is a pretty good signal. 
my guess is that a big part of the problem is that, despite this legible signal being good, ppl have indexed on it way too hard (& away from other signals of legibility, like a trade school, or a high school diploma with a high GPA or something). there are probably some instances where getting an undergrad degree isn't good, but those instances are strongly overrepresented to ben & saul, and the base rate is not that. also, it seems like society should give greater affordan...

The Nonlinear Library: LessWrong
LW - Debate: Get a college degree? by Ben Pace

The Nonlinear Library: LessWrong

Play Episode Listen Later Aug 13, 2024 30:13


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Debate: Get a college degree?, published by Ben Pace on August 13, 2024 on LessWrong. Epistemic Status: Soldier mindset. These are not our actual positions, these are positions we were randomly assigned by a coin toss, and for which we searched for the strongest arguments we could find, over the course of ~1hr 45mins. That said, this debate is a little messy between our performed positions and our personal ones. Sides: Ben is arguing against getting a college degree, and Saul is arguing for . (This is a decision Saul is currently making for himself!) Reading Order: Ben and Saul drafted each round of statements simultaneously. This means that each of Ben's statements you read were written without Ben having read Saul's statements that are immediately proceeding. (This does not apply to the back-and-forth interview.) Saul's Opening Statement first - i do think there's a qualitative difference between the position "getting an undergrad degree is good" vs "getting the typical undergrad experience is good." i think the second is in some ways more defensible than the first, but in most ways less so. For "getting the typical undergrad experience is good" This sort of thing is a strong Chesterton fence. People have been having the typical experience of an undergrad for a while (even while that typical experience changes). General upkeeping of norms/institutions is good. I think that - for a some ppl - their counterfactual is substantially worse. Even if this means college is functionally daycare, I'd rather they be in adult-day-care than otherwise being a drain on society (e.g. crime). It presents the option for automatic solutions to a lot of problems: Socializing high density of possible friends, romantic partners, etc you have to go to classes, talk to ppl, etc Exercise usually a free gym that's at-least functional you gotta walk to class, dining hall, etc Tons of ability to try slightly "weird" stuff you've never tried before - clubs, sports, events, greek life, sexual interactions, classes, etc I think a lot of these things get a lot more difficult when you haven't had the opportunity to experiment w them. A lot of ppl haven't experimented w much of anything before - college gives them an easy opportunity to do that w minimal friction before doing so becomes gated behind a ridiculous amount of friction. E.g. getting into a new hobby as an adult is a bit odd, in most social settings - but in college, it's literally as simple as joining that club. Again - while all of these sorts of things are possible outside of college, they become more difficult, outside of the usual norms, etc. For "getting an undergrad degree is good": This is a strong Chesterton fence. People have been getting undergrad degrees - or similar - for a wihle. It's an extremely legible symbol for a lot of society: Most ppl who get undergrad degrees aren't getting the sort of undergrad degree that ben or i sees - i think most are from huge state schools, followed by the gigantic tail of no-name schools. For those ppl, and for the jobs they typically seek, my guess is that for demonstrating the necessary things, like "i can listen to & follow directions, navigate general beaurocracies, learn things when needed, talk to people when needed, and am unlikely to be a extremely mentally ill, etc" - an undergrad degree is a pretty good signal. 
my guess is that a big part of the problem is that, despite this legible signal being good, ppl have indexed on it way too hard (& away from other signals of legibility, like a trade school, or a high school diploma with a high GPA or something). there are probably some instances where getting an undergrad degree isn't good, but those instances are strongly overrepresented to ben & saul, and the base rate is not that. also, it seems like society should give greater affordan...

The Bayesian Conspiracy
Bayes Blast 30 – Less.Online

The Bayesian Conspiracy

Play Episode Listen Later May 4, 2024 23:04


Less.Online gathers rationalist writers (and readers) for a major event in Berkeley May 31 – June 2nd. Eneasz will be there! Ben Pace and Raymond Arnold tell us all about it. Get all the details you could want plus tickets …

The Nonlinear Library
LW - Key takeaways from our EA and alignment research surveys by Cameron Berg

The Nonlinear Library

Play Episode Listen Later May 3, 2024 47:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Key takeaways from our EA and alignment research surveys, published by Cameron Berg on May 3, 2024 on LessWrong. Many thanks to Spencer Greenberg, Lucius Caviola, Josh Lewis, John Bargh, Ben Pace, Diogo de Lucena, and Philip Gubbins for their valuable ideas and feedback at each stage of this project - as well as the ~375 EAs + alignment researchers who provided the data that made this project possible. Background Last month, AE Studio launched two surveys: one for alignment researchers, and another for the broader EA community. We got some surprisingly interesting results, and we're excited to share them here. We set out to better explore and compare various population-level dynamics within and across both groups. We examined everything from demographics and personality traits to community views on specific EA/alignment-related topics. We took on this project because it seemed to be largely unexplored and rife with potentially-very-high-value insights. In this post, we'll present what we think are the most important findings from this project. Meanwhile, we're also sharing and publicly releasing a tool we built for analyzing both datasets. The tool has some handy features, including customizable filtering of the datasets, distribution comparisons within and across the datasets, automatic classification/regression experiments, LLM-powered custom queries, and more. We're excited for the wider community to use the tool to explore these questions further in whatever manner they desire. There are many open questions we haven't tackled here related to the current psychological and intellectual make-up of both communities that we hope others will leverage the dataset to explore further. (Note: if you want to see all results, navigate to the tool, select the analysis type of interest, and click 'Select All.' If you have additional questions not covered by the existing analyses, the GPT-4 integration at the bottom of the page should ideally help answer them. The code running the tool and the raw anonymized data are both also publicly available.) We incentivized participation by offering to donate $40 per eligible[1] respondent - strong participation in both surveys enabled us to donate over $10,000 to both AI safety orgs as well as a number of different high impact organizations (see here[2] for the exact breakdown across the two surveys). Thanks again to all of those who participated in both surveys! Three miscellaneous points on the goals and structure of this post before diving in: 1. Our goal here is to share the most impactful takeaways rather than simply regurgitating every conceivable result. This is largely why we are also releasing the data analysis tool, where anyone interested can explore the dataset and the results at whatever level of detail they please. 2. This post collectively represents what we at AE found to be the most relevant and interesting findings from these experiments. We sorted the TL;DR below by perceived importance of findings. We are personally excited about pursuing neglected approaches to alignment, but we have attempted to be as deliberate as possible throughout this write-up in striking the balance between presenting the results as straightforwardly as possible and sharing our views about implications of certain results where we thought it was appropriate. 3. 
This project was descriptive and exploratory in nature. Our goal was to cast a wide psychometric net in order to get a broad sense of the psychological and intellectual make-up of both communities. We used standard frequentist statistical analyses to probe for significance where appropriate, but we definitely still think it is important for ourselves and others to perform follow-up experiments to those presented here with a more tightly controlled scope to replicate and further sharpen t...
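A minimal sketch of the kind of "standard frequentist" group comparison described above, using hypothetical column names (the real schema and analyses live in the released tool and dataset):

```python
# Hypothetical sketch: compare one numeric survey trait across the two respondent
# groups with a Welch's t-test. Column names are illustrative, not the real schema.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")                  # assumed export of the anonymized data
ea = df[df["community"] == "EA"]["trait_score"].dropna()
ar = df[df["community"] == "alignment"]["trait_score"].dropna()

t, p = stats.ttest_ind(ea, ar, equal_var=False)           # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```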

The Nonlinear Library: LessWrong
LW - Key takeaways from our EA and alignment research surveys by Cameron Berg

The Nonlinear Library: LessWrong

Play Episode Listen Later May 3, 2024 47:42


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Key takeaways from our EA and alignment research surveys, published by Cameron Berg on May 3, 2024 on LessWrong. Many thanks to Spencer Greenberg, Lucius Caviola, Josh Lewis, John Bargh, Ben Pace, Diogo de Lucena, and Philip Gubbins for their valuable ideas and feedback at each stage of this project - as well as the ~375 EAs + alignment researchers who provided the data that made this project possible. Background Last month, AE Studio launched two surveys: one for alignment researchers, and another for the broader EA community. We got some surprisingly interesting results, and we're excited to share them here. We set out to better explore and compare various population-level dynamics within and across both groups. We examined everything from demographics and personality traits to community views on specific EA/alignment-related topics. We took on this project because it seemed to be largely unexplored and rife with potentially-very-high-value insights. In this post, we'll present what we think are the most important findings from this project. Meanwhile, we're also sharing and publicly releasing a tool we built for analyzing both datasets. The tool has some handy features, including customizable filtering of the datasets, distribution comparisons within and across the datasets, automatic classification/regression experiments, LLM-powered custom queries, and more. We're excited for the wider community to use the tool to explore these questions further in whatever manner they desire. There are many open questions we haven't tackled here related to the current psychological and intellectual make-up of both communities that we hope others will leverage the dataset to explore further. (Note: if you want to see all results, navigate to the tool, select the analysis type of interest, and click 'Select All.' If you have additional questions not covered by the existing analyses, the GPT-4 integration at the bottom of the page should ideally help answer them. The code running the tool and the raw anonymized data are both also publicly available.) We incentivized participation by offering to donate $40 per eligible[1] respondent - strong participation in both surveys enabled us to donate over $10,000 to both AI safety orgs as well as a number of different high impact organizations (see here[2] for the exact breakdown across the two surveys). Thanks again to all of those who participated in both surveys! Three miscellaneous points on the goals and structure of this post before diving in: 1. Our goal here is to share the most impactful takeaways rather than simply regurgitating every conceivable result. This is largely why we are also releasing the data analysis tool, where anyone interested can explore the dataset and the results at whatever level of detail they please. 2. This post collectively represents what we at AE found to be the most relevant and interesting findings from these experiments. We sorted the TL;DR below by perceived importance of findings. We are personally excited about pursuing neglected approaches to alignment, but we have attempted to be as deliberate as possible throughout this write-up in striking the balance between presenting the results as straightforwardly as possible and sharing our views about implications of certain results where we thought it was appropriate. 3. 
This project was descriptive and exploratory in nature. Our goal was to cast a wide psychometric net in order to get a broad sense of the psychological and intellectual make-up of both communities. We used standard frequentist statistical analyses to probe for significance where appropriate, but we definitely still think it is important for ourselves and others to perform follow-up experiments to those presented here with a more tightly controlled scope to replicate and further sharpen t...

The Nonlinear Library
LW - LessOnline Festival Updates Thread by Ben Pace

The Nonlinear Library

Play Episode Listen Later Apr 19, 2024 0:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LessOnline Festival Updates Thread, published by Ben Pace on April 19, 2024 on LessWrong. This is a thread for updates about the upcoming LessOnline festival. I (Ben) will be posting bits of news and thoughts, and you're also welcome to make suggestions or ask questions. If you'd like to hear about new updates, you can use LessWrong's "Subscribe to comments" feature from the triple-dot menu at the top of this post. Reminder that you can get tickets at the site for $400 minus your LW karma in cents. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library: LessWrong
LW - LessOnline Festival Updates Thread by Ben Pace

The Nonlinear Library: LessWrong

Play Episode Listen Later Apr 19, 2024 0:43


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LessOnline Festival Updates Thread, published by Ben Pace on April 19, 2024 on LessWrong. This is a thread for updates about the upcoming LessOnline festival. I (Ben) will be posting bits of news and thoughts, and you're also welcome to make suggestions or ask questions. If you'd like to hear about new updates, you can use LessWrong's "Subscribe to comments" feature from the triple-dot menu at the top of this post. Reminder that you can get tickets at the site for $400 minus your LW karma in cents. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - LessOnline (May 31 - June 2, Berkeley, CA) by Ben Pace

The Nonlinear Library

Play Episode Listen Later Mar 26, 2024 1:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LessOnline (May 31 - June 2, Berkeley, CA), published by Ben Pace on March 26, 2024 on LessWrong.

A Festival of Writers Who are Wrong on the Internet[1]

LessOnline is a festival celebrating truth-seeking, optimization, and blogging. It's an opportunity to meet people you've only ever known by their LessWrong username or Substack handle. We're running a rationalist conference! The ticket cost is $400 minus your LW karma in cents. Confirmed attendees include Scott Alexander, Eliezer Yudkowsky, Katja Grace, and Alexander Wales.

Less.Online

Go through to Less.Online to learn about who's attending, venue, location, housing, relation to Manifest, and more. We'll post more updates about this event over the coming weeks as it all comes together.

If LessOnline is an awesome rationalist event, I desire to believe that LessOnline is an awesome rationalist event;
If LessOnline is not an awesome rationalist event, I desire to believe that LessOnline is not an awesome rationalist event;
Let me not become attached to beliefs I may not want.
Litany of Rationalist Event Organizing

[1] But Striving to be Less So

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
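A rough sketch of the stated pricing rule ($400 minus your LW karma in cents); whether the price can go below zero for very high karma isn't specified in the announcement:

```python
# Sketch of the stated LessOnline pricing rule: $400 minus your LW karma in cents.
# Illustrative only; behavior for karma above 40,000 (a negative price) is not specified.
def lessonline_ticket_price(karma: int) -> float:
    """Return the ticket price in dollars for a given LessWrong karma total."""
    return 400 - karma * 0.01

print(lessonline_ticket_price(5_000))   # 5,000 karma  -> 350.0
print(lessonline_ticket_price(20_000))  # 20,000 karma -> 200.0
```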

The Nonlinear Library: LessWrong
LW - LessOnline (May 31 - June 2, Berkeley, CA) by Ben Pace

The Nonlinear Library: LessWrong

Play Episode Listen Later Mar 26, 2024 1:23


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LessOnline (May 31 - June 2, Berkeley, CA), published by Ben Pace on March 26, 2024 on LessWrong.

A Festival of Writers Who are Wrong on the Internet[1]

LessOnline is a festival celebrating truth-seeking, optimization, and blogging. It's an opportunity to meet people you've only ever known by their LessWrong username or Substack handle. We're running a rationalist conference! The ticket cost is $400 minus your LW karma in cents. Confirmed attendees include Scott Alexander, Eliezer Yudkowsky, Katja Grace, and Alexander Wales.

Less.Online

Go through to Less.Online to learn about who's attending, venue, location, housing, relation to Manifest, and more. We'll post more updates about this event over the coming weeks as it all comes together.

If LessOnline is an awesome rationalist event, I desire to believe that LessOnline is an awesome rationalist event;
If LessOnline is not an awesome rationalist event, I desire to believe that LessOnline is not an awesome rationalist event;
Let me not become attached to beliefs I may not want.
Litany of Rationalist Event Organizing

[1] But Striving to be Less So

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - Vote on Anthropic Topics to Discuss by Ben Pace

The Nonlinear Library

Play Episode Listen Later Mar 6, 2024 0:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vote on Anthropic Topics to Discuss, published by Ben Pace on March 6, 2024 on LessWrong.

What important questions would you want to see discussed and debated here about Anthropic? Suggest and vote below. (This is the third such poll, see the first and second linked.)

How to use the poll
Reacts: Click on the agree/disagree reacts to help people see how much disagreement there is on the topic.
Karma: Upvote positions that you'd like to read discussion about.
New Poll Option: Add new positions for people to take sides on. Please add the agree/disagree reacts to new poll options you make.

The goal is to show people where a lot of interest and disagreement lies. This can be used to find discussion and dialogue topics in the future.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library: LessWrong
LW - Vote on Anthropic Topics to Discuss by Ben Pace

The Nonlinear Library: LessWrong

Play Episode Listen Later Mar 6, 2024 0:56


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vote on Anthropic Topics to Discuss, published by Ben Pace on March 6, 2024 on LessWrong.

What important questions would you want to see discussed and debated here about Anthropic? Suggest and vote below. (This is the third such poll, see the first and second linked.)

How to use the poll
Reacts: Click on the agree/disagree reacts to help people see how much disagreement there is on the topic.
Karma: Upvote positions that you'd like to read discussion about.
New Poll Option: Add new positions for people to take sides on. Please add the agree/disagree reacts to new poll options you make.

The goal is to show people where a lot of interest and disagreement lies. This can be used to find discussion and dialogue topics in the future.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

LessWrong Curated Podcast
Nonlinear's Evidence: Debunking False and Misleading Claims

LessWrong Curated Podcast

Play Episode Listen Later Dec 21, 2023 59:57


Recently, Ben Pace wrote a well-intentioned blog post mostly based on complaints from 2 (of 21) Nonlinear employees who 1) wanted more money, 2) felt socially isolated, and 3) felt persecuted/oppressed. Of relevance, one has accused the majority of her previous employers, and 28 people of abuse - that we know of. She has accused multiple people of threatening to kill her and literally accused an ex-employer of murder. Within three weeks of joining us, she had accused five separate people of abuse: not paying her what was promised, controlling her romantic life, hiring stalkers, and other forms of persecution. We have empathy for her. Initially, we believed her too. We spent weeks helping her get her “nefarious employer to finally pay her” and commiserated with her over how badly they mistreated her. Then she started accusing us of strange things. You've seen Ben's evidence, which [...]

First published: December 12th, 2023
Source: https://www.lesswrong.com/posts/q4MXBzzrE6bnDHJbM/nonlinear-s-evidence-debunking-false-and-misleading-claims

Narrated by TYPE III AUDIO.

Effective Altruism Forum Podcast
“Nonlinear's Evidence: Debunking False and Misleading Claims” by Kat Woods

Effective Altruism Forum Podcast

Play Episode Listen Later Dec 12, 2023 59:31


Recently, Ben Pace wrote a well-intentioned blog post mostly based on complaints from 2 (of 21) Nonlinear employees who 1) wanted more money, 2) felt socially isolated, and 3) felt persecuted/oppressed. Of relevance, one has accused the majority of her previous employers, and 28 people of abuse - that we know of. She has accused multiple people of threatening to kill her and literally accused an ex-employer of murder. Within three weeks of joining us, she had accused five separate people of abuse: not paying her what was promised, controlling her romantic life, hiring stalkers, and other forms of persecution. We have empathy for her. Initially, we believed her too. We spent weeks helping her get her “nefarious employer to finally pay her” and commiserated with her over how badly they mistreated her. Then she started accusing us of strange things. You've seen Ben's evidence, which [...]

Outline:
(02:20) Short summary overview table
(04:04) This post is long, so if you read just one illustrative story, read this one
(08:37) What is going on? Why did they say so many misleading things? How did Ben get so much wrong?
(12:14) Ben admitted in his post that he was warned in private by multiple of his own sources that Alice was untrustworthy and told outright lies. One credible person told Ben Alice makes things up.
(20:35) Alice has similarities to Kathy Forth, who, according to Scott Alexander, was “a very disturbed person” who, multiple people told him, “had a habit of accusing men she met of sexual harassment. They all agreed she wasn't malicious, just delusional.” As a community, we do not have good mechanisms in place to protect people from false accusations.
(23:41) Why didn't Ben do basic fact-checking to see if their claims were true? I mean, multiple people warned him?
(24:42) Longer summary table
(26:14) To many EAs, this would have been a dream job
(37:13) Sharing Information on Ben Pace
(47:07) So how do we learn from this to make our community better? How can we make EA antifragile?
(51:22) Conclusion: a story with no villains
(57:16) If you are disturbed by what happened here, here are some ways you can help
(58:53) Acknowledgments

First published: December 12th, 2023
Source: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims

Narrated by TYPE III AUDIO.

The Nonlinear Library
LW - Vote on worthwhile OpenAI topics to discuss by Ben Pace

The Nonlinear Library

Play Episode Listen Later Nov 21, 2023 1:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vote on worthwhile OpenAI topics to discuss, published by Ben Pace on November 21, 2023 on LessWrong.

I (Ben) recently made a poll for voting on interesting disagreements to be discussed on LessWrong. It generated a lot of good topic suggestions and data about what questions folks cared about and disagreed on. So, Jacob and I figured we'd try applying the same format to help people orient to the current OpenAI situation. What important questions would you want to see discussed and debated here in the coming days? Suggest and vote below.

How to use the poll
Reacts: Click on the agree/disagree reacts to help people see how much disagreement there is on the topic.
Karma: Upvote positions that you'd like to read discussion about.
New Poll Option: Add new positions for people to take sides on. Please add the agree/disagree reacts to new poll options you make.

The goal is to show people where a lot of interest and disagreement lies. This can be used to find discussion and dialogue topics in the future.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - New LessWrong feature: Dialogue Matching by jacobjacob

The Nonlinear Library

Play Episode Listen Later Nov 16, 2023 5:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New LessWrong feature: Dialogue Matching, published by jacobjacob on November 16, 2023 on LessWrong. The LessWrong team is shipping a new experimental feature today: dialogue matching! I've been leading work on this (together with Ben Pace, kave, Ricki Heicklen, habryka and RobertM), so wanted to take some time to introduce what we built and share some thoughts on why I wanted to build it. New feature! There's now a dialogue matchmaking page at lesswrong.com/dialogueMatching. Here's how it works: You can check a user you'd potentially be interested in having a dialogue with, if they were too. They can't see your checks unless you match. It also shows you some interesting data: your top upvoted users over the last 18 months, how much you agreed/disagreed with them, what topics they most frequently commented on, and what posts of theirs you most recently read. Next, if you find a match, this happens: You get a tiny form asking for topic ideas and format preferences, and then we create a dialogue that summarises your responses and suggests next steps based on them. Currently, we're mostly sourcing auto-suggested topics from Ben's neat poll where people voted on interesting disagreements they'd want to see debated, and also stated their own views. I'm pretty excited to further explore this and other ways for auto-suggesting good topics. My hypothesis is that we're in a bit of a dialogue overhang: there are important conversations out there to be had, but that aren't happening. We just need to find them. This feature is an experiment in making it easier to do many of the hard steps in having a dialogue: finding a partner, finding a topic, and coordinating on format. To try the Dialogue Matching feature, feel free to head on over to lesswrong.com/dialogueMatching! Me and the team are super keen to hear any and all feedback. Feel free to share in comments below or using the intercom button in the bottom right corner :) Why build this? A retreat organiser I worked with long ago told me: "the most valuable part of an event usually aren't the big talks, but the small group or 1-1 conversations you end up having in the hallways between talks." I think this points at something important. When Lightcone runs events, we usually optimize the small group experience pretty hard. In fact, when building and renovating our campus Lighthaven, we designed it to have lots of little nooks and spaces in order to facilitate exactly this kind of interaction. With dialogues, I feel like we're trying to enable an interaction on LessWrong that's also more like a 1-1, and less like a broadcasting talk to an audience. But we're doing so with two important additions: Readable artefacts. Usually the results of a 1-1 are locked in with the people involved. Sometimes that's good. But other times, Dialogues enable a format where good stuff that came out of it can be shared with others. Matchmaking at scale. Being a good event organiser involves a lot of effort to figure out who might have valuable conversations, and then connecting them. This can often be super valuable (thought experiment: imagine introducing Von Neumann and Morgenstern), but takes a lot of personalised fingertip feel and dinner host mojo. Using dialogue matchmaking, I'm curious about a quick experiment to try doing this at scale, in an automated way. 
Overall, I think there's a whole class of valuable content here that you can't even get out at all outside of a dialogue format. The things you say in a talk are different from the things you'd share if you were being interviewed on a podcast, or having a conversation with a friend. Suppose you had been mulling over a confusion about AI. Your thoughts are nowhere near the point where you could package them into a legible, ordered talk and then go present them. So, what do you do? I think...
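
The mutual-reveal rule described in the Dialogue Matching entry above (a check stays private until both users have checked each other) can be sketched in a few lines of code. The snippet below is a hypothetical illustration of that matching logic only, not LessWrong's actual implementation; the class and method names are made up for the example.

from collections import defaultdict

class DialogueMatcher:
    """Toy model of mutual-opt-in matching: checks are one-sided and hidden
    until both sides have checked each other."""

    def __init__(self):
        # maps each user to the set of users they have checked
        self.checks = defaultdict(set)

    def check(self, user, target):
        """Record that `user` wants a dialogue with `target`; report whether
        this created a match."""
        self.checks[user].add(target)
        return self.is_match(user, target)

    def is_match(self, a, b):
        """A match exists only when interest is mutual; a one-sided check
        is never revealed to the other person."""
        return b in self.checks[a] and a in self.checks[b]

matcher = DialogueMatcher()
matcher.check("alice", "bob")          # False: bob sees nothing yet
print(matcher.check("bob", "alice"))   # True: both checked, so the match is revealed

The design property being illustrated is that only the match result is ever surfaced, so an unreciprocated check leaks no information to the other person.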

The Nonlinear Library
LW - Vote on Interesting Disagreements by Ben Pace

The Nonlinear Library

Play Episode Listen Later Nov 7, 2023 0:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vote on Interesting Disagreements, published by Ben Pace on November 7, 2023 on LessWrong. Do you have a question you'd like to see argued about? Would you like to indicate your position and discuss it with someone who disagrees? Add poll options to the thread below to find questions with lots of interest and disagreement. How to use the poll Reacts: Click on the agree/disagree reacts to help people see how much disagreement there is on the topic. Karma: Upvote positions that you'd like to read dialogues about. New Poll Option: Add new positions for people to take sides on. Please add the agree/disagree reacts to new poll options you make. The goal is to show people where a lot of interesting disagreement lies. This can be used to find discussion and dialogue topics in the future. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - 2023 LessWrong Community Census, Request for Comments by Screwtape

The Nonlinear Library

Play Episode Listen Later Nov 1, 2023 3:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2023 LessWrong Community Census, Request for Comments, published by Screwtape on November 1, 2023 on LessWrong. Overview I would like there to be a LessWrong Community Census, because I had fun playing with the data from last year and there's some questions I'm curious about. It's also an entertaining site tradition. Since nobody else has stepped forward to make the community census happen, I'm getting the ball rolling. This is a request for comments, constructive criticism, careful consideration, and silly jokes on the census. Here's the draft. I'm posting this request for comments on November 1st. I'm planning to incorporate feedback throughout November, then on December 1st I'll update the census to remove the "DO NOT TAKE" warning at the top, and make a new post asking people to take the census. I plan to let it run throughout all December, close it in the first few days of January, and then get the public data and analysis out sometime in mid to late January. How Was The Draft Composed? I copied the question set from 2022, which itself took extremely heavy inspiration from previous years. I then added a section sourced from the questions Ben Pace of the LessWrong team had been considering in 2022, and another section of questions I'd be asking on a user survey if I worked for LessWrong. (I do not work for LessWrong.) Next I fixed some obvious mistakes from last year (in particular allowing free responses on the early politics questions) as well as changed some things that change every year like the Calibration question, and swapped around the questions in the Indulging My Curiosity section. Changes I'm Interested In In general, I want to reduce the number of questions. Last year I asked about the length and overall people thought it was a little too long. Then I added more questions. (The LW Team Questions and the Questions The LW Team Should Have Asked section.) I'm inclined to think those sections aren't pulling their weight right now, but I do think it's worth asking good questions about how people use the website on the census. I'm likely to shrink down the religion responses, as I don't think checking the different variations of e.g. Buddhism or Judaism revealed anything interesting. I'd probably put them back to the divisions used in earlier versions of the survey. I'm sort of tempted to remove the Numbers That Purport To Measure Your Intelligence section entirely. I believe it was part of Scott trying to answer a particular question about the readership, and while I love his old analyses they could make space for current questions. The main arguments in favour of keeping them are that they don't take up much space, and they've been around for a while. The Detailed Questions From Previous Surveys and Further Politics sections would be where I'd personally start making some cuts, though I admit I just don't care about politics very much. Some people care a lot about politics and if anyone wants to champion those sections that seems potentially fun. This may also be the year that some of the "Detailed Questions From Previous Surveys" questions can get moved into the survey proper or dropped. I'd be excited to add some questions that would help adjacent or subset communities. 
If you're with CFAR, The Guild of the Rose, Glowfic, or an organization like that, I'm cheerful about having some questions you're interested in, especially if the questions would be generally useful or fun to discuss. I've already offered to the LessWrong team directly, but I'll say again that I'd be excited to try and ask questions that would be useful for you all. You don't actually have to be associated with an organization either. If there's a burning question you have about the general shape of the readership, I'm interested in sating other people's curiosity and I'd like to encou...

LessWrong Curated Podcast
"Announcing Dialogues" by Ben Pace

LessWrong Curated Podcast

Play Episode Listen Later Oct 9, 2023 7:11


As of today, everyone is able to create a new type of content on LessWrong: Dialogues. In contrast with posts, which are for monologues, and comment sections, which are spaces for everyone to talk to everyone, a dialogue is a space for a few invited people to speak with each other. I'm personally very excited about this as a way for people to produce lots of in-depth explanations of their world-models in public. I think dialogues enable this in a way that feels easier — instead of writing an explanation for anyone who reads, you're communicating with the particular person you're talking with — and giving the readers a lot of rich nuance I normally only find when I overhear people talk in person. In the rest of this post I'll explain the feature, and then encourage you to find a partner in the comments to try it out with. Source: https://www.lesswrong.com/posts/kQuSZG8ibfW6fJYmo/announcing-dialogues-1 Narrated for LessWrong by TYPE III AUDIO. Share feedback on this narration. [125+ Karma Post] ✓

The FarrCast : Wealth Strategies
Get the Button Ready

The FarrCast : Wealth Strategies

Play Episode Listen Later Sep 21, 2023 35:14


Jim Lebenthal guest hosts for a talk with Dan Mahaffee on the shutdown and why it isn't a big deal to the markets -- yet. First up though, Jim welcomes Ben Pace, the CIO of Cerity Partners, for a discussion on how the landscape has changed for investors in the last two years, and what questions need to be asked moving forward. Completing six seasons of insight into Wall Street, Washington, and The World -- it's The FarrCast!

The Nonlinear Library
EA - Closing Notes on Nonlinear Investigation by Ben Pace

The Nonlinear Library

Play Episode Listen Later Sep 15, 2023 24:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Closing Notes on Nonlinear Investigation, published by Ben Pace on September 15, 2023 on The Effective Altruism Forum. Over the past seven months, I've been working part-time on an investigation of Nonlinear, culminating in last week's post. As I'm wrapping up this project, I want to share my personal perspective, and share some final thoughts. This post mostly has some thoughts and context that didn't fit into the previous post. I also wish to accurately set expectations that I'm not working on this investigation any more. Why I Got Into Doing an Investigation From literally the very first day, my goal has been to openly share some credible allegations I had heard, so as to contribute to a communal epistemic accounting. On the Tuesday of the week Kat Woods first visited (March 7th), someone in the office contacted me with concerns about their presence (the second person in good standing to do so). I replied proposing to post the following one-paragraph draft in a public Lightcone Offices slack channel. I have heard anonymized reports from prior employees that they felt very much taken advantage of while working at Nonlinear under Kat. I can't vouch for them personally, I don't know the people, but I take them pretty seriously and think it's more likely than not that something seriously bad happened. I don't think uncheckable anonymized reports should be sufficient to boot someone from community spaces, especially when they've invested a bunch into this ecosystem and seems to me to plausibly be doing pretty good work, so I'm still inviting them here, but I would feel bad not warning people that working with them might go pretty badly. (Note that I don't think the above is a great message, nonetheless I'm sharing it here as info about my thinking at the time.) That would not have represented any particular vendetta against Nonlinear. It would not have been an especially unusual act, or even much of a call out. Rather it was intended as the kind of normal sharing of information that I would expect from any member of an epistemic community that is trying to collectively figure out what's true. But the person who shared the concerns with me recommended that I not post that, because it could trigger severe repercussions for Alice and Chloe. They responded as follows. Person A: I'm trying to formulate my thoughts on this, but something about this makes me very uncomfortable. Person A: In the time that I have been involved in EA spaces I have gotten the sense that unless abuse is extremely public and well documented nothing much gets done about it. I understand the "innocent until proven guilty" mentality, and I'm not disagreeing with that, but the result of this is a strong bias toward letting the perpetrators of abuse off the hook, and continue to take advantage of what should be safe spaces. I don't think that we should condemn people on the basis of hearsay, but I think we have a responsibility to counteract this bias in every other way possible. It is very scary to be a victim, when the perpetrator has status and influence and can so easily destroy your career and reputation (especially given that they have directly threatened one of my friends with this). Could you please not speak to Kat directly? One of my friends is very worried about direct reprisal. 
BP: I'm afraid I can't do that, insofar as I'm considering uninviting her, I want to talk to her and give her a space to say her piece to me. Also I already brought up these concerns with her when I told her she was invited. I am not going to name you or anyone else who raised concerns to me, and I don't plan to give any info that isn't essentially already in the EA Forum thread. I don't know who the people are who are starting this info. This first instance is an example of a generalized dynamic. At virtually every s...

LessWrong Curated Podcast
"Sharing Information About Nonlinear" by Ben Pace

LessWrong Curated Podcast

Play Episode Listen Later Sep 8, 2023 56:26


Added (11th Sept): Nonlinear have commented that they intend to write a response, have written a short follow-up, and claim that they dispute 85 claims in this post. I'll link here to that if-and-when it's published. Added (11th Sept): One of the former employees, Chloe, has written a lengthy comment personally detailing some of her experiences working at Nonlinear and the aftermath. Added (12th Sept): I've made 3 relatively minor edits to the post. I'm keeping a list of all edits at the bottom of the post, so if you've read the post already, you can just go to the end to see the edits. Added (15th Sept): I've written a follow-up post saying that I've finished working on this investigation and do not intend to work more on it in the future. The follow-up also has a bunch of reflections on what led up to this post. Epistemic status: Once I started actively looking into things, much of my information in the post below came about by a search for negative information about the Nonlinear cofounders, not from a search to give a balanced picture of its overall costs and benefits. I think standard update rules suggest not that you ignore the information, but you think about how bad you expect the information would be if I selected for the worst, credible info I could share, and then update based on how much worse (or better) it is than you expect I could produce. (See section 5 of this post about Mistakes with Conservation of Expected Evidence for more on this.) This seems like a worthwhile exercise for at least non-zero people to do in the comments before reading on. (You can condition on me finding enough to be worth sharing, but also note that I think I have a relatively low bar for publicly sharing critical info about folks in the EA/x-risk/rationalist/etc ecosystem.) tl;dr: If you want my important updates quickly summarized in four claims-plus-probabilities, jump to the section near the bottom titled "Summary of My Epistemic State". Source: https://www.lesswrong.com/posts/Lc8r4tZ2L5txxokZ8/sharing-information-about-nonlinear-1 Narrated for LessWrong by TYPE III AUDIO. Share feedback on this narration. [125+ Karma Post] ✓

The Nonlinear Library
LW - Sharing Information About Nonlinear by Ben Pace

The Nonlinear Library

Play Episode Listen Later Sep 7, 2023 54:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sharing Information About Nonlinear, published by Ben Pace on September 7, 2023 on LessWrong. Epistemic status: Once I started actively looking into things, much of my information in the post below came about by a search for negative information about the Nonlinear cofounders, not from a search to give a balanced picture of its overall costs and benefits. I think standard update rules suggest not that you ignore the information, but you think about how bad you expect the information would be if I selected for the worst, credible info I could share, and then update based on how much worse (or better) it is than you expect I could produce. (See section 5 of this post about Mistakes with Conservation of Expected Evidence for more on this.) This seems like a worthwhile exercise for at least non-zero people to do in the comments before reading on. (You can condition on me finding enough to be worth sharing, but also note that I think I have a relatively low bar for publicly sharing critical info about folks in the EA/x-risk/rationalist/etc ecosystem.) tl;dr: If you want my important updates quickly summarized in four claims-plus-probabilities, jump to the section near the bottom titled "Summary of My Epistemic State". When I used to manage the Lightcone Offices, I spent a fair amount of time and effort on gatekeeping - processing applications from people in the EA/x-risk/rationalist ecosystem to visit and work from the offices, and making decisions. Typically this would involve reading some of their public writings, and reaching out to a couple of their references that I trusted and asking for information about them. A lot of the people I reached out to were surprisingly great at giving honest references about their experiences with someone and sharing what they thought about someone. One time, Kat Woods and Drew Spartz from Nonlinear applied to visit. I didn't know them or their work well, except from a few brief interactions that Kat Woods seems high-energy, and to have a more optimistic outlook on life and work than most people I encounter. I reached out to some references Kat listed, which were positive to strongly positive. However I also got a strongly negative reference - someone else who I informed about the decision told me they knew former employees who felt taken advantage of around things like salary. However the former employees reportedly didn't want to come forward due to fear of retaliation and generally wanting to get away from the whole thing, and the reports felt very vague and hard for me to concretely visualize, but nonetheless the person strongly recommended against inviting Kat and Drew. I didn't feel like this was a strong enough reason to bar someone from a space - or rather, I did, but vague anonymous descriptions of very bad behavior being sufficient to ban someone is a system that can be straightforwardly abused, so I don't want to use such a system. Furthermore, I was interested in getting my own read on Kat Woods from a short visit - she had only asked to visit for a week. So I accepted, though I informed her that this weighed on my mind. (This is a link to the decision email I sent to her.) 
(After making that decision I was also linked to this ominous yet still vague EA Forum thread, that includes a former coworker of Kat Woods saying they did not like working with her, more comments like the one I received above, and links to a lot of strongly negative Glassdoor reviews for Nonlinear Cofounder Emerson Spartz's former company "Dose". Note that more than half of the negative reviews are for the company after Emerson sold it, but this is a concerning one from 2015 (while Emerson Spartz was CEO/Cofounder): "All of these super positive reviews are being commissioned by upper management. That is the first thing you should know about Spartz, and I...

The Nonlinear Library
EA - Sharing Information About Nonlinear by Ben Pace

The Nonlinear Library

Play Episode Listen Later Sep 7, 2023 54:21


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sharing Information About Nonlinear, published by Ben Pace on September 7, 2023 on The Effective Altruism Forum. Epistemic status: Once I started actively looking into things, much of my information in the post below came about by a search for negative information about the Nonlinear cofounders, not from a search to give a balanced picture of its overall costs and benefits. I think standard update rules suggest not that you ignore the information, but you think about how bad you expect the information would be if I selected for the worst, credible info I could share, and then update based on how much worse (or better) it is than you expect I could produce. (See section 5 of this post about Mistakes with Conservation of Expected Evidence for more on this.) This seems like a worthwhile exercise for at least non-zero people to do in the comments before reading on. (You can condition on me finding enough to be worth sharing, but also note that I think I have a relatively low bar for publicly sharing critical info about folks in the EA/x-risk/rationalist/etc ecosystem.) tl;dr: If you want my important updates quickly summarized in four claims-plus-probabilities, jump to the section near the bottom titled "Summary of My Epistemic State". When I used to manage the Lightcone Offices, I spent a fair amount of time and effort on gatekeeping - processing applications from people in the EA/x-risk/rationalist ecosystem to visit and work from the offices, and making decisions. Typically this would involve reading some of their public writings, and reaching out to a couple of their references that I trusted and asking for information about them. A lot of the people I reached out to were surprisingly great at giving honest references about their experiences with someone and sharing what they thought about someone. One time, Kat Woods and Drew Spartz from Nonlinear applied to visit. I didn't know them or their work well, except from a few brief interactions that Kat Woods seems high-energy, and to have a more optimistic outlook on life and work than most people I encounter. I reached out to some references Kat listed, which were positive to strongly positive. However I also got a strongly negative reference - someone else who I informed about the decision told me they knew former employees who felt taken advantage of around things like salary. However the former employees reportedly didn't want to come forward due to fear of retaliation and generally wanting to get away from the whole thing, and the reports felt very vague and hard for me to concretely visualize, but nonetheless the person strongly recommended against inviting Kat and Drew. I didn't feel like this was a strong enough reason to bar someone from a space - or rather, I did, but vague anonymous descriptions of very bad behavior being sufficient to ban someone is a system that can be straightforwardly abused, so I don't want to use such a system. Furthermore, I was interested in getting my own read on Kat Woods from a short visit - she had only asked to visit for a week. So I accepted, though I informed her that this weighed on my mind. (This is a link to the decision email I sent to her.) 
(After making that decision I was also linked to this ominous yet still vague EA Forum thread, that includes a former coworker of Kat Woods saying they did not like working with her, more comments like the one I received above, and links to a lot of strongly negative Glassdoor reviews for Nonlinear Cofounder Emerson Spartz's former company "Dose". Note that more than half of the negative reviews are for the company after Emerson sold it, but this is a concerning one from 2015 (while Emerson Spartz was CEO/Cofounder): "All of these super positive reviews are being commissioned by upper management. That is the first thing you should know ...

Effective Altruism Forum Podcast
“Sharing Information About Nonlinear” by Ben Pace

Effective Altruism Forum Podcast

Play Episode Listen Later Sep 7, 2023 55:54


Epistemic status: Once I started actively looking into things, much of my information in the post below came about by a search for negative information about the Nonlinear cofounders, not from a search to give a balanced picture of its overall costs and benefits. I think standard update rules suggest not that you ignore the information, but you think about how bad you expect the information would be if I selected for the worst, credible info I could share, and then update based on how much worse (or better) it is than you expect I could produce. (See section 5 of this post about Mistakes with Conservation of Expected Evidence for more on this.) This seems like a worthwhile exercise for at least non-zero people to do in the comments before reading on. (You can condition on me finding enough to be worth sharing, but also note that I think [...] ---Outline:(10:09) A High-Level Overview of The Employees' Experience with Nonlinear(16:25) An assortment of reported experiences(40:59) Conversation with Nonlinear(48:47) My thoughts on the ethics and my takeawaysThe original text contained 10 footnotes which were omitted from this narration. --- First published: September 7th, 2023 Source: https://forum.effectivealtruism.org/posts/32LMQsjEMm6NK2GTH/sharing-information-about-nonlinear --- Narrated by TYPE III AUDIO.

The Nonlinear Library
LW - A report about LessWrong karma volatility from a different universe by Ben Pace

The Nonlinear Library

Play Episode Listen Later Apr 1, 2023 1:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A report about LessWrong karma volatility from a different universe, published by Ben Pace on April 1, 2023 on LessWrong. In a far away universe, a news report is written about LessWrong. The following passages have been lifted over and written into this post... Early one morning all voting on LessWrong was halted. It was said that there was nothing to worry about. But then GreaterWrong announced their intent to acquire and then un-acquire LessWrong. All LessWrong users lost all of their karma, but a poorly labeled 'fiat@' account on the EA Forum was discovered with no posts and a similarly large amount of karma. Habryka states that LessWrong and the EA Forum "work at arms length". Later, Zvi Mowshowitz publishes a leaked internal accounting sheet from the LessWrong team. It includes entries for "weirdness points", "utils", "Kaj_Sotala", "countersignals", and "Anthropic". We recommend all readers open up the sheet to read in full. Later, LessWrong filed for internet-points-bankruptcy and Holden Karnofsky was put in charge. Karnofsky reportedly said: I have over 15 years of nonprofit governance experience. I have been the Chief Executive Officer of GiveWell, the Chief Executive Officer of Open Philanthropy, and as of recently an intern at an AI safety organization. Never in my career have I seen such a complete failure of nonprofit board controls and such a complete absence of basic decision theoretical cooperation as occurred here. From compromised epistemic integrity and faulty community oversight, to the concentration of control in the hands of a very small group of biased, low-decoupling, and potentially akratic rationalists, this situation is unprecedented. Sadly the authors did not have time to conclude the reporting, though they list other things that happened in a comment below. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
AF - Löb's Theorem for implicit reasoning in natural language: Löbian party invitations by Andrew Critch

The Nonlinear Library

Play Episode Listen Later Jan 1, 2023 10:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Löb's Theorem for implicit reasoning in natural language: Löbian party invitations, published by Andrew Critch on January 1, 2023 on The AI Alignment Forum. Related to: Löb's Lemma: an easier approach to Löb's Theorem. Natural language models are really taking off, and it turns out there's an analogue of Löb's Theorem that occurs entirely in natural language — no math needed. This post will walk you through the details in a simple example: a very implicit party invitation. Motivation (Skip this if you just want to see the argument.) Understanding the structure here may be helpful for anticipating whether Löbian phenomena can, will, or should arise amongst language-based AI systems. For instance, Löb's Theorem has implications for the emergence of cooperation and defection in groups of formally defined agents (LaVictoire et al, 2014; Critch, Dennis, Russell, 2022). The natural language version of Löb could play a similar role amongst agents that use language, which is something I plan to explore in a future post. Aside from being fun, I'm hoping this post will make clear that the phenomenon underlying Löb's Theorem isn't just a feature of formal logic or arithmetic, but of any language that can talk about reasoning and deduction in that language, including English. And as Ben Pace points out here, invitations are often self-referential, such as when people say "You are hereby invited to the party": hereby means "by this utterance" (google search). So invitations are a natural place to explore the kind of self-reference happening in Löb's Theorem. This post isn't really intended as an "explanation" of Löb's Theorem in its classical form, which is about arithmetic. Rather, the arguments here stand entirely on their own, are written in natural language, and are about natural language phenomena. That said, this post could still function as an "explanation" of Löb's Theorem because of the tight analogy with it. Implicitness Okay, imagine there's a party, and maybe you're invited to it. Or maybe you're implicitly invited to it. Either way, we'll be talking a bunch about things being implicit, with phrasing like this: "It's implicit that X", "Implicitly X", or "X is implicit". These will all mean "X is implied by things that are known (to you) (via deduction or logical inference)". Explicit knowledge is also implicit. In this technical sense of the word, "implicit" and "explicit" are not actually mutually exclusive: X trivially implies X, so if you explicitly observed X in the world, then you also know X implicitly. If you find this bothersome or confusing, just grant me this anyway, or skip to "Why I don't treat 'implicit' and 'inexplicit' as synonyms here" at the end. Abbreviations. To abbreviate things and to show there's a simple structure at play here, I'll sometimes use the box symbol "□" as shorthand to say things are implicit: "□(cats love kittens)" will mean "It's implicit that cats love kittens" "□X" will mean "It's implicit that X" A peculiar invitation Okay! Let p be the statement "You're invited to the party". You'd love to receive such a straightforward invitation to the party, like some people did, those poo poo heads, but instead the host just sends you the following intriguing message: Abbreviation: □p → p Interesting! 
Normally, being invited to a party and being implicitly invited are not the same thing, but for you in this case, apparently they are. Seeing this, you might feel like the host is hinting around at implicitly inviting you, and maybe you'll start to wonder if you're implicitly invited by virtue of the kind of hinting around that the host is doing with this very message. Well then, you'd be right! Here's how. For the moment, forget about the host's message, and consider the following sentence, without assuming its truth (or implicitness): Ψ: The sentenc...
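
For readers who want the formal counterpart to the party story, here is a compact LaTeX sketch of the standard modal-logic derivation of Löb's Theorem, where the box reads "it is provable" (or, in the episode's reading, "it is implicit"). This is the textbook argument, not a transcription of Critch's natural-language version; the fixed-point sentence Ψ plays the role of the self-referential sentence the episode description is about to introduce.

% Assume the host's message: \Box p \to p. Goal: conclude p.
% \Psi is a self-referential sentence obtained from the diagonal lemma.
\begin{align*}
& \Psi \leftrightarrow (\Box \Psi \to p)        && \text{fixed point (diagonal lemma)} \\
& \Box\big(\Psi \to (\Box \Psi \to p)\big)      && \text{necessitation of the fixed point} \\
& \Box \Psi \to (\Box\Box \Psi \to \Box p)      && \text{distribute } \Box \text{ over } \to \\
& \Box \Psi \to \Box\Box \Psi                   && \text{internal necessitation} \\
& \Box \Psi \to \Box p                          && \text{combine the previous two lines} \\
& \Box \Psi \to p                               && \text{apply the assumption } \Box p \to p \\
& \Psi                                          && \text{by the fixed point} \\
& \Box \Psi                                     && \text{necessitation} \\
& p                                             && \text{modus ponens}
\end{align*}

In the party reading, the last three lines roughly correspond to noticing that you satisfy the invitation condition, that fact itself becoming implicit, and the host's message then making you actually invited.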

The Nonlinear Library
LW - Slack matters more than any outcome by Valentine

The Nonlinear Library

Play Episode Listen Later Jan 1, 2023 28:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Slack matters more than any outcome, published by Valentine on December 31, 2022 on LessWrong. About a month ago Ben Pace invited me to expand on a point I'd made in a comment. The gist is this: Addictions can cause people to accomplish things they wouldn't accomplish otherwise. But if the accomplishment were worthwhile, why would the addiction be helpful? Why wouldn't the clarity that it's worthwhile be enough? I postulate that the reason is a kind of metaphorical heaviness in culture. A particular structural tendency to eat slack. So I think it'd be net better to let some worthwhile things go unaccomplished in favor of lifting the metaphoric burden. Creating more slack. And I'd even say that this is the main plausible pathway I see for creating a great future for humanity. I don't think we can get there by focusing on making worthwhile things happen. I felt inspired to write up an answer. Then I spent a month working on it. I clarified my thinking a lot but slid into a slog of writing. Kind of a perfectionist thing. So I'm scrapping all that. I'm going to write a worse version, since the option is (a) a quickly hacked together version or (b) nothing at all. Addictions My main point isn't really about addictions, but I need to clarify something about this topic anyway. They're also a great example cluster. When I say "addiction", I'm not gesturing at a vague intuition. I mean a very particular structure: There's some unwelcome experience that repeatedly arises. There's a behavior pattern that can temporarily distract the person in question from the unwelcome experience. But the behavior pattern doesn't address the cause of the unwelcome experience arising in the first place. So when someone engages in the distraction, it provides temporary relief, but the unwelcome experience arises again — and now the distraction is a little more tempting. A little more habit-forming. When that becomes automatic, it can feel like you're trapped inside it, like you're powerless against some behavior momentum. Which is to say, this structure eats slack. Some rapid-fire examples: Caffeine dependency becomes an addiction when you autopilot react to the withdrawal symptoms by reaching for another cup of coffee. Alcoholism as an addiction is often (usually? always?) about avoiding emotional experiences. Since the causes of the emotions don't go away, sobriety can result in the unwelcome experience arising, which the alcoholic knows how to numb away. I have a long-standing habit of feeling kind of listless, lonely, like I should be doing something more or different with my life but I'm not quite sure what it is or that I can do it. If I don't pay attention when that sensation/emotion/thought cluster arises, I find myself on my computer scrolling social media or watching YouTube or Netflix. Putting up blockers to these sites both (a) makes me good at disabling the blockers and (b) makes things like porn or Minesweeper more tempting. I'm not saying that all addictions are like this. I can't think of any exceptions off the top of my head, but that might just be a matter of my lack of creativity or a poor filing system in my mind. I'm saying that there's this very particular structure, that it's quite common, and that I'm going to use the word "addiction" to refer to it. And yeah, I do think it's the right word, which is why I'm picking it. 
Please notice the framing effect, and adjust yourself as needed. Imposing an idea The main thing I want to talk about is a generalization of rationalization, in the sense of writing the bottom line. Caffeine dependency When I grab a cup of coffee for a pick-me-up, I'm basically asserting that I should have more energy than I do right now. This is kind of odd if you think about it. If I found out my house were on fire, I wouldn't feel too tired to deal with...

gibop
Behind the Mask: The Rise of Leslie Vernon (2006)

gibop

Play Episode Listen Later Dec 4, 2022 91:03


Actors Nathan Baesel, Angela Goethals, Ben Pace, and Britain Spellings

The Nonlinear Library
LW - Rationalist Town Hall: FTX Fallout Edition (RSVP Required) by Ben Pace

The Nonlinear Library

Play Episode Listen Later Nov 23, 2022 3:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rationalist Town Hall: FTX Fallout Edition (RSVP Required), published by Ben Pace on November 23, 2022 on LessWrong. Stated at the top for emphasis: you have to fill out the RSVP form in order for me to email you the links. On Sunday 27th November at 12 pm PT, I am hosting an online Town Hall on zoom, for rationalists and rationalist-adjacent folks (e.g. EAs) to think through the FTX catastrophe and propagate thoughts, feelings and updates. Lots of people I know have been shocked by and are still reeling from the news these last 2 weeks. I'm very keen to hear what updates people are making about EA and crypto, and understand others' perspectives. Some people coming include Zvi Mowshowitz, Oliver Habryka, Anna Salamon, and more. To get the Zoom and Gather Town links, fill out the RSVP form. I will send the links to everyone who fills out the form. The form involves agreeing that the event is off the record to the corporate news media, and all attendees will fill out the form. What Will The Format Be? Spontaneous lightning talks. During the event, anyone who wishes to can give a 3-minute talk on a topic of their choosing, followed by 2-mins of Q&A — it can be on something you've already thought about, or it can be a response to or disagreement with someone else's lightning talk. This is a format I've used before pretty successfully in both big (70 ppl) and small (7 ppl) groups, where we've gotten on a roll of people sharing points and also replying to each others' talks, so I have hope that it will succeed online. (This format has also had other names like "Lightning Jazz" and "Propagating Beliefs".) We will do lightning talks for up to 1.5 hours (depending on how much steam people have in them), hopefully giving lots of people the chance to speak, and after that the main event will be over, and we'll move to Gather Town to have group discussions (and those who are satisfied will go home). If you wish to, you can submit for a lightning talk ahead of time with this Lightning Talk form. Who is invited? As well as Rationalists/LessWrongers, I welcome any people to this event who are or have formerly been part of the EA community, people who have formerly worked for or been very close with FTX or Alameda Research, and people who have worked for or been funded in any way by the FTX Future Fund. I hereby ask others to respect the Rationalist and EA communities' ability to talk amongst themselves (so to speak) by not joining if you are not well-described by the above. For example, if you had not read LessWrong before the FTX sale to Binance, this event is not aimed at you and I ask you not to come. Details When? Sunday November 27th, 12:00PM (PT) to 14:00PM (PT). Where? The Town Hall talks will happen in Zoom, then discussion will continue in a private Gather Town. RSVP link? Fill out this RSVP form to get links. Everyone who fills out the form will get sent a link, I'll send them out 24 hours before the event and then again ~60 mins before the event. You can also hit 'going' on the public Facebook event for the joys of social signaling, but also fill out the form so I can email you the links. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - Announcing the Progress Forum by jasoncrawford

The Nonlinear Library

Play Episode Listen Later Nov 17, 2022 2:08


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Progress Forum, published by jasoncrawford on November 17, 2022 on LessWrong. I'd like to invite you to join the Progress Forum, the new online home for the progress community. It's a clone of this site, but with a focus on progress studies and the philosophy of progress. This forum was pre-announced in January 2022, and quietly opened in April. Although anyone could sign up, we deliberately didn't make any big announcement about it, aiming first for a small, high-quality community. Now that we have a lot of good content on the site, we're announcing it more broadly. The primary goal of this forum is to provide a place for long-form discussion of progress studies. It's also, like LW, a place to find local clubs and meetups. The broader goal is to share ideas, strengthen them through discussion and comment, and over the long term, to build up a body of thought that constitutes a new philosophy of progress for the 21st century (and beyond). I invite you to post: Essays (original, or cross-posted from your blog) Drafts, half-baked ideas, and work-in-progress thinking, for feedback Questions for brainstorming Local events and community groups Etc. And please read and comment on what others have shared. You can subscribe to Forum posts via email, RSS, or Twitter. The Forum is sponsored by The Roots of Progress. Huge thanks to the people who worked to create and run it: Lawrence Kestleoot, Andrew Roberts, Sameer Ismail, David Smehlik, Alec Wilson, and Ross Graham. Thanks also to Kris Gulati for nudging this project along, and to Ruth Grace Wong for helpful conversations about community and moderation. Finally, thanks to the LessWrong team for creating this software platform, and especially to Oliver Habryka, Ruby Bloom, Raymond Arnold, JP Addison, James Babcock, and Ben Pace for answering questions and helping us customize this instance of it. Go check it out. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Retro Movie Roundtable
RMR 0182 Behind the Mask: The Rise of Leslie Vernon (2006)

Retro Movie Roundtable

Play Episode Listen Later Oct 19, 2022 92:31


Join your hosts Chad Robinson, Lizzy Haynes, and Russell Guest for the Retro Movie Roundtable as they revisit Behind the Mask: The Rise of Leslie Vernon (2006) [R] Genre: Comedy, Horror, Thriller Starring: Nathan Baesel, Angela Goethals, Robert Englund, Scott Wilson, Zelda Rubinstein, Bridgett Newton, Kate Miner, Ben Pace, Britain Spellings, Hart Turner, Krissy Carlson, Travis Zariwny, Teo Gomez, Matt Bolt, Jenafer Brown. Director: Scott Glosserman. Recorded on 2022-09-23.

The Nonlinear Library
EA - Limits to Legibility by Jan Kulveit

The Nonlinear Library

Play Episode Listen Later Jun 29, 2022 7:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Limits to Legibility, published by Jan Kulveit on June 29, 2022 on The Effective Altruism Forum. From time to time, someone makes the case for why transparency in reasoning is important. The latest conceptualization is Epistemic Legibility by Elizabeth, but the core concept is similar to reasoning transparency used by OpenPhil, and also has some similarity to A Sketch of Good Communication by Ben Pace. I'd like to offer a gentle pushback. The tl;dr is in my comment on Ben's post, but it seems useful enough for a standalone post. "How odd I can have all this inside me and to you it's just words." ― David Foster Wallace. When and why reasoning legibility is hard: Say you demand transparent reasoning from AlphaGo. The algorithm has roughly two parts: tree search and a neural network. Tree search reasoning is naturally legible: the "argument" is simply a sequence of board states. In contrast, the neural network is mostly illegible - its output is a figurative "feeling" about how promising a position is, but that feeling depends on the aggregate experience of a huge number of games, and it is extremely difficult to explain transparently how a particular feeling depends on particular past experiences. So AlphaGo would be able to present part of its reasoning to you, but not the most important part. Human reasoning uses both: cognition similar to tree search (where the steps can be described, written down, and explained to someone else) and processes not amenable to introspection (which function essentially as a black box that produces a "feeling"). People sometimes call these latter signals "intuition", "implicit knowledge", "taste", "S1 reasoning" and the like. Explicit reasoning often rides on top of this. Extending the machine learning metaphor, the problem with human interpretability is that "mastery" in a field often consists precisely in having some well-trained black box neural network that performs fairly opaque background computations. Bad things can happen when you demand explanations from black boxes: The second thesis is that it often makes sense to assume the mind runs distinct computational processes: one that actually makes decisions and reaches conclusions, and another that produces justifications and rationalizations. In my experience, if you have good introspective access to your own reasoning, you may occasionally notice that a conclusion C depends mainly on some black box, but at the same time, you generated a plausible legible argument A for the same conclusion after you reached the conclusion C. If you try running, say, Double Crux over such situations, you'll notice that even if someone refutes the explicit reasoning A, you won't quite change the conclusion to ¬C. The legible argument A was not the real crux. It is quite often the case that (A) is essentially fake (or low-weight), whereas the black box is hiding a reality-tracking model. Stretching the AlphaGo metaphor a bit: AlphaGo could be easily modified to find a few specific game "rollouts" that turned out to "explain" the mysterious signal from the neural network. Using tree search, it would produce a few specific examples of how such a position may evolve, which would be selected to agree with the neural net prediction. If AlphaGo showed them to you, it might convince you!
But you would get a completely superficial understanding of why it evaluates the situation the way it does, or why it makes certain moves. Risks from the legibility norm: When you make a strong norm pushing for too straightforward "epistemic legibility", you risk several bad things: First, you increase the pressure on the "justification generator" to mask various black boxes by generating arguments supporting their conclusions. Second, you make individual people dumber. Imagine asking a Go grandmaster to transparently justify his mov...
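
To make the AlphaGo analogy in this description a little more concrete, here is a minimal Python sketch of a depth-limited search whose leaf evaluation is an opaque callable. Everything below is an illustrative assumption rather than AlphaGo's actual algorithm: the toy game, the value_net stand-in for the trained network, and its scoring rule are invented for the example. The point is only that the search can hand back a legible line of play, while the number coming out of the evaluator has no short explanation.

from typing import Callable, List, Tuple

State = int   # hypothetical stand-in for a board position
Move = int

def legal_moves(state: State) -> List[Move]:
    return [1, 2, 3]   # made-up game: each move adds 1, 2, or 3 to the state

def apply_move(state: State, move: Move) -> State:
    return state + move

def search(state: State, depth: int,
           opaque_value: Callable[[State], float]) -> Tuple[float, List[State]]:
    # Returns (value, line of states examined). The line is the legible part;
    # the number produced by opaque_value at the leaf is not explainable here.
    if depth == 0:
        return opaque_value(state), [state]
    best_value, best_line = float("-inf"), [state]
    for move in legal_moves(state):
        value, line = search(apply_move(state, move), depth - 1, opaque_value)
        if value > best_value:
            best_value, best_line = value, [state] + line
    return best_value, best_line

def value_net(state: State) -> float:
    return -abs(state - 7)   # inscrutable "feeling": no legible story for why 7 feels best

value, line = search(0, 3, value_net)
print(value, line)   # the searched line is inspectable; the leaf score is just a feeling

The same shape arguably applies to the human case the description goes on to discuss: refuting one legible line (one rollout) need not touch the opaque evaluation that actually drove the conclusion.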

The Nonlinear Library
LW - LessWrong Has Agree/Disagree Voting On All New Comment Threads by Ben Pace

The Nonlinear Library

Play Episode Listen Later Jun 24, 2022 3:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LessWrong Has Agree/Disagree Voting On All New Comment Threads, published by Ben Pace on June 24, 2022 on LessWrong. Starting today we're activating two-factor voting on all new comment threads. Now there are two axes on which you can vote on comments: the standard karma axis remains on the left, and the new axis on the right lets you show how much you agree or disagree with the content of a comment. How the system works: For the pre-existing voting system, the most common interpretation of up/down-voting is "Do I want to see more or less of this content on the site?" As an item gets more/less votes, the item changes in visibility, and the karma-weighting of the author is eventually changed as well. Agree/disagree is just added on to this system. Here's how it all hooks up. Agree/disagree voting does not translate into a user's or post's karma — its sole function is to communicate agreement/disagreement. It has no other direct effects on the site or content visibility. For both regular voting and the new agree/disagree voting, you have the ability to normal-strength vote and strong-vote. Click once for normal-strength vote. For strong-vote, click-and-hold on desktop or double-tap on mobile. The weight of your strong-vote is approximately proportional to your karma on a log-scale (exact numbers here). Ben's personal reasons for being excited about this split: Here's a couple of reasons that are alive for me. I personally feel much more comfortable upvoting good comments that I disagree with or whose truth value I am highly uncertain about, because I don't feel that my vote will be mistaken as setting the social reality of what is true. I also feel very comfortable strong-agreeing with things while not up/downvoting on them, so as to indicate which side of an argument seems true to me without my voting being read as "this person gets to keep accruing more and more social status for just repeating a common position at length". Similarly to the first bullet, I think that many writers have interesting and valuable ideas but whose truth-value I am quite unsure about or even disagree with. This split allows voters to repeatedly signal that a given writer's comments are of high value, without building a false-consensus that LessWrong has high confidence that the ideas are true. (For example, many people have incompatible but valuable ideas about how AGI development will go, and I want authors to get lots of karma and visibility for excellent contributions without this ambiguity.) There are many comments I think are bad but am averse to downvoting, because I feel that it is ambiguous whether the person is being downvoted because everyone thinks their take is unfashionable or whether it's because the person is wasting the commons with their behavior (e.g. snarkiness, belittling, starting bravery debates, etc). With this split I feel more comfortable downvoting bad comments without worrying that everyone who states the position will worry if they'll also be downvoted. I have seen some comments that previously would have been "downvoted to hell" are now on positive karma, and are instead "disagreed to hell". I won't point them out to avoid focusing on individuals, but this seems like an obvious improvement in communication ability. I could go on but I'll stop here.
Please give us feedback This is one of the main voting experiments we've tried on the site (here's the other one). We may try more changes and improvement in the future. Please let us know about your experience with this new voting axis, especially in the next 1-2 weeks. If you find it concerning/invigorating/confusing/clarifying/other, we'd like to know about it. Comment on this post with feedback and I'll give you an upvote (and maybe others will give you an agree-vote!) or let us know in the intercom button in the bottom...
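
For readers who want the mechanics of this announcement in one place, here is a small Python sketch of the two-axis model it describes: the karma axis and the agreement axis are tracked independently, and a strong-vote counts for more when the voter has more karma. The announcement only says the strong-vote weight is approximately proportional to karma on a log scale and links the exact numbers elsewhere, so the weight function below, and all the names, are invented stand-ins rather than LessWrong's actual implementation.

from dataclasses import dataclass
from math import log10

@dataclass
class Comment:
    karma: int = 0       # affects visibility and the author's karma
    agreement: int = 0   # communicates agreement only; no karma effect

def strong_vote_weight(voter_karma: int) -> int:
    # Invented stand-in for "approximately proportional to karma on a log-scale".
    return max(2, 1 + int(log10(max(voter_karma, 1))))

def cast_vote(comment: Comment, voter_karma: int,
              karma_dir: int = 0, agree_dir: int = 0, strong: bool = False) -> None:
    # karma_dir and agree_dir are each -1, 0, or +1; the two axes are independent.
    weight = strong_vote_weight(voter_karma) if strong else 1
    comment.karma += karma_dir * weight
    comment.agreement += agree_dir * weight

# Example from the announcement's motivation: upvote a comment you disagree with.
c = Comment()
cast_vote(c, voter_karma=5000, karma_dir=+1, agree_dir=-1, strong=True)
print(c)   # Comment(karma=4, agreement=-4) under this toy weight function

The separation is the whole point: rewarding a comment (karma) and signalling where you stand on its claim (agreement) never pull on the same counter.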

The Nonlinear Library
LW - Announcing the LessWrong Curated Podcast by Ben Pace

The Nonlinear Library

Play Episode Listen Later Jun 22, 2022 1:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the LessWrong Curated Podcast, published by Ben Pace on June 22, 2022 on LessWrong. You can now listen to LessWrong Curated posts in podcast form on Spotify, Apple Podcasts, Audible, and Libsyn (which has an RSS feed, so it's available everywhere). This is created and recorded by Solenoid Entity, who spent the last five years editing the SSC podcast, succeeded Jeremiah as narrator and publisher in 2020, and also makes the more recent Metaculus Journal Podcast. I reached out to him last week with an offer to do this work and he has quickly done some excellent recordings, which I'm very grateful for. This is a new experiment and project, and so these 1-2 weeks are a great time to give me and Solenoid Entity feedback about what you like and dislike about the podcast, what would make it better for you, your experience as an author having your writing narrated, etc. You can leave comments here anytime, or talk to us via the intercom chat in the bottom right of the screen, or PM me personally via any channel. Below are the 5 current available LessWrong Curated Podcasts. Hat Tip to Tamera Lanham and Mattieu Putz for the suggestion at dinner! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - To what extent have ideas and scientific discoveries gotten harder to find? by lsusr

The Nonlinear Library

Play Episode Listen Later Jun 18, 2022 10:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: To what extent have ideas and scientific discoveries gotten harder to find?, published by lsusr on June 18, 2022 on LessWrong. This post was funded by a grant from Ben Pace. Ben Pace asks: To what extent have ideas and scientific discoveries gotten harder to find? Related: Why are there no gentlemen scientists any more — i.e. rich people who make novel scientific discoveries? Like Fermat and Pascal. Some theories have been put forward by Scott Alexander and Holden Karnofsky. Maybe they're right, maybe not. Scott Alexander treats scientific ideas as nonrenewable. Imagine scientists venturing off in some research direction. At the dawn of history, they don't need to venture very far before discovering a new truth. As time goes on, they need to go further and further. Holden Karnofsky believes the pattern applies to both art and science. The broad theme is that across a variety of areas in both art and science, we see a form of "innovation stagnation": the best-regarded figures are disproportionately from long ago, and our era seems to "punch below its weight" when considering the rise in population, education, etc. Since the patterns look fairly similar for art and science, and both are forms of innovation, I think it's worth thinking about potential common factors. I have a different perspective. Science Physics The most important scientific discoveries are those which are the most general and the most useful. Physics is the most general of sciences. Physics is basically solved. There are unsolved problems in physics. Dark matter remains a mystery. Quantum mechanics has yet to be unified with general relativity. But the holes in physics don't matter. You don't need a Grand Unified Theory of quantum relativity to build a Mars base or a fusion reactor. All you need is today's physics plus a whole lot of engineering. The recent discoveries in physics like quantum computing and the photograph of a black hole aren't really discoveries about the fundamental Laws of Physics. They're technological achievements. All of the rest of science is just applied physics too. One could argue that biology, chemistry and so on are just footnotes to Einstein. The fundamental laws of the universe are (for all practical purposes) known. The remaining questions are: Astrophysics i.e. the study of places that don't matter because we lack the technology to go there. What has biology built out of matter? What can we build out of matter? No physicist will ever again make a discovery (in physics) as impactful as the great 20th century physicists. All the important fruit has been picked. But that doesn't mean science has been exhausted. It just means physics has been exhausted. Biology is advancing fast. I'll never get tired of this graph about how, since 2007, biotechnology has advanced faster than computer technology ever did. As recently as 2015, Nick Lane published a book that might have solved the origin of life. And biology isn't even the most exciting frontier. Machine Learning In Contra Hoel, I talked about machine learning as feeling different from some other scientific fields: there are frequent exciting new discoveries. This shouldn't be surprising. Physics is stagnant because Newton and Einstein already got all the cool results. But Newton and Einstein didn't have TPUs so they couldn't discover things about machine learning. 
The Low-Hanging Fruit Argument: Models And Predictions by Scott Alexander. Do discoveries in machine learning count as science or technology? If we use the strictest definition of "science" then machine learning counts as "technology". But machine learning is also informing our understanding of the human mind. Psychology definitely counts as "science". "Untangling how intelligence works" is the most important scientific problem of our age. Ambitious people go to whe...

The Nonlinear Library
LW - PSA: The Sequences don't need to be read in sequence by kave

The Nonlinear Library

Play Episode Listen Later May 23, 2022 1:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: PSA: The Sequences don't need to be read in sequence, published by kave on May 23, 2022 on LessWrong. This week, I hung out with the LessWrong team while they talked to relatively new users. New users often had a vague intention to read Eliezer's original Sequences, but were blocked on the size of the project. They thought the Sequences would only work in, well, sequence. I just polled eight people (including me) who have read the Sequences whether they only work in sequence. 7 people said they work out of sequence (though 2 noted that it might be better to read a given sequence in order) 1 person said they thought it was necessary to read any given sequence in order, but it didn't matter if you read one sequence (e.g. A Human's Guide to Words) before or after another (e.g. Mysterious Answers to Mysterious Questions) A typical sequence post has many links to other sequence posts. But these are mostly context and elaboration. The posts tend to work well standalone. Here are three posts you might get started with: Fake Explanations. The first post in the Mysterious Answers to Mysterious Questions sequence, described by Eliezer as "probably the most important core sequence in Less Wrong". Leave a Line of Retreat. Letting go of a belief that's important to you is hard. Particularly if you think stuff you care about depends on it (e.g. if you think being good depends on moral realism, it could be hard to reexamine your belief in moral realism). This post describes the phenomenon, and gives advice for dealing with it. The Hidden Complexity of Wishes. Imagine you had a device that could cause any concrete statement to become true. This post explores the difficulties you would have getting what you want with the device. (Thanks to various people at the Lightcone offices for beta reading this post, particularly Ben Pace). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - Good Heart Week Is Over! (What Next?) by Ben Pace

The Nonlinear Library

Play Episode Listen Later Apr 8, 2022 1:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good Heart Week Is Over! (What Next?), published by Ben Pace on April 8, 2022 on LessWrong. Good Heart Week has ended! This week has seen a lot of content. I've had a great time reading posts and comments from new writers and returning writers, as well as polished-old-drafts from present writers :) I'll write a retrospective in a week or two, but right now here's some information about what's happening and what to do next. End Time: Good Heart week ended at midnight tonight (Thursday), Pacific Time. No tokens will be earned after that time. Please Submit Financial Info: If you have 25+ Good Heart Tokens and you want to receive the money, please add either your PayPal address or ETH address or the name of a charity of your choice at lesswrong.com/payments/account. (It says "PayPal Info" but just add the other info in that field.) Submission Deadline: You have a week to fill out the financial info. Please submit your financial info by EOD Thursday 14th (Pacific Time). Security Concerns: This is hopefully wildly redundant, but please don't enter any passwords or secret keys or other private information when submitting financial info. I cannot assure you that this site is secure enough to keep such secrets. If you have further security (or anonymity) concerns about financial info feel free to PM me and we can try to figure something out. Payout Date: One user on the leaderboard said to me it would make their life easier if the payment for this came after tax day on April 18th, so I'm tentatively planning to do payout the Friday after that. It also gives me time to do basic checks of the voting and such. Retrospective: After the payouts I'll post a final update/retrospect on the week. Let me know if you have any requests or questions at this time! I am also very interested in hearing your thoughts on and experiences from the whole week. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - Good Heart Week: Extending the Experiment by Ben Pace

The Nonlinear Library

Play Episode Listen Later Apr 2, 2022 4:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good Heart Week: Extending the Experiment, published by Ben Pace on April 2, 2022 on LessWrong. Yesterday we launched Good Heart Tokens, and said they could be exchanged for 1 USD each. Today I'm here to tell you: this is actually happening and it will last a week. You will get a payout if you give us a PayPal/ETH address or name a charity of your choosing. Note that voting rings and fundraising are now out of scope, we will be removing and banning users who do that kind of thing starting now. More on this at the end of the post. Also, we're tentatively changing posts to be worth 4x the Good Heart Tokens of comments. Why is this experiment continuing? Let me state the obvious: if this new system were to last for many months or years, I expect these financial rewards would change the site culture for the worse. It would select on pretty different motives for being here, and importantly select on different people who are doing the voting, and then the game would be up. (Also I would spend a lot of my life catching people explicitly trying to game the system.) However, while granting this, I suspect that in the short run giving LessWrong members and lurkers a stronger incentive than usual to write well-received stuff has the potential to be great for the site. For instance, I think the effect yesterday on site regulars was pretty good. I'll quote AprilSR who said: I am not very good at directing my monkey brain, so it helped a lot that my System 1 really anticipated getting money from spending time on LessWrong today. ...There's probably better systems than “literally give out $1/karma” but it's surprisingly effective at motivating me in particular in ways that other things which have been tried very much aren't. I think lots of people wrote good stuff, much more than a normal day. Personally my favorite thing that happened due to this yesterday was when people published a bunch of their drafts that had been sitting around, some of which I thought were excellent. I hope this will be a kick for many people to actually sit down and write that post they've had in their heads for a while. (I certainly don't think money will be a motivator for all people, but I suspect it is true for enough that it will be worth it for us given the Lightcone Infrastructure team's value of money.) I'm really interested to find out what happens over a week, I have a hope it will be pretty good, and the Lightcone Infrastructure team has the resources that makes the price worth it to us. So I invite you into this experiment with us :) Info and Rules Here's the basic info and rules: Date: Good Heart Tokens will continue to be accrued until EOD Thursday April 7th (Pacific Time). I do not expect to extend it beyond then. Scope: We are no longer continuing with "fun" uses of the karma system. Voting rings, fundraising posts, etc, are no longer within scope. Things like John Wentworth's and Aphyer's voting ring, and G Gordon Worley III's Donation Lottery were both playful and fine uses of the system on April 1st, but from now I'd like to ask these to stop. Moderation: We'll bring mod powers against accounts that are abusing the system. We'll also do a pass over the votes at the end of the week to check for any suspicious behavior (while aiming to minimize any deanonymization). 
Eligible: LW mods and employees of the Center for Applied Rationality are not eligible for prizes. Votes: Reminder that only votes from pre-existing accounts are turned into Good Heart Tokens. (But new accounts can still earn tokens!) And of course self-votes are not counted. Cap Change: We're lifting the 600 token cap to 1000. (If people start getting to 1000, we will consider raising it further, but no promises.) Weight Change: We're tentatively changing it so that votes on posts are now worth 4x votes on comments. (This will...
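
Since this announcement and the "Good Heart Week Is Over!" entry earlier in this list spell out the accounting rules in prose (votes on posts worth 4x votes on comments, only votes from pre-existing accounts counted, no self-votes, a cap lifted to 1000 tokens, 1 USD per token, and a 25-token minimum before payout), here is a rough Python sketch of that bookkeeping. The data model, names, and exact order of operations are assumptions made up for illustration; they are not the LessWrong site code.

from dataclasses import dataclass
from typing import Iterable

POST_MULTIPLIER = 4     # votes on posts worth 4x votes on comments
TOKEN_CAP = 1000        # cap lifted from 600 to 1000
PAYOUT_THRESHOLD = 25   # minimum tokens before a payout is made
USD_PER_TOKEN = 1

@dataclass
class Vote:
    voter: str
    author: str
    kind: str                 # "post" or "comment"
    strength: int             # signed strength of the karma vote
    voter_preexisting: bool   # account existed before the event

def good_heart_tokens(votes: Iterable[Vote], author: str) -> int:
    total = 0
    for v in votes:
        if v.author != author or v.voter == author or not v.voter_preexisting:
            continue   # skip other authors' content, self-votes, and new accounts
        total += v.strength * (POST_MULTIPLIER if v.kind == "post" else 1)
    return min(max(total, 0), TOKEN_CAP)

def payout_usd(tokens: int) -> int:
    return tokens * USD_PER_TOKEN if tokens >= PAYOUT_THRESHOLD else 0

votes = [
    Vote("alice", "bob", "post", 2, True),
    Vote("bob", "bob", "post", 2, True),        # self-vote: ignored
    Vote("carol", "bob", "comment", 1, False),  # new account: ignored
    Vote("dave", "bob", "comment", 3, True),
]
tokens = good_heart_tokens(votes, "bob")
print(tokens, payout_usd(tokens))   # 11 0, still below the 25-token threshold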


The Nonlinear Library
LW - Replacing Karma with Good Heart Tokens (Worth $1!) by Ben Pace

The Nonlinear Library

Play Episode Listen Later Apr 1, 2022 7:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Replacing Karma with Good Heart Tokens (Worth $1!), published by Ben Pace on April 1, 2022 on LessWrong. Starting today, we're replacing karma with Good Heart Tokens which can be exchanged for 1 USD each. We've been thinking very creatively about metrics of things we care about, and we've discovered that karma is highly correlated with value. Therefore, we're creating a token that quantifies the goodness of the people writing, and whether in their hearts they care about rationality and saving the world. We're calling these new tokens Good Heart Tokens. And in partnership with our EA funders, we'll be paying users $1 for each token that they earn. "The essence of any religion is a good heart [token]." — The Dalai Lama Voting, Leaderboards and Payment Info Comments and posts now show you how many Good Heart Tokens they have. (This solely applies to all new content on the site.) At the top of LessWrong, there is now a leaderboard to show the measurement of who has the Goodest Heart. It looks like this. (No, self-votes are not counted!) The usernames of our Goodest Hearts will be given a colorful flair throughout the entirety of their posts and comments on LessWrong. To receive your funds, please log in and enter your payment info at lesswrong.com/payments/account. We pay out once a day at 11:59 pm PST. While the form suggests using a PayPal address, you may also add an Ethereum address, or the name of a charity that you'd like us to donate it to. Why are we doing this? On this very day last year, we were in a dire spot. To fund our ever-increasing costs, we were forced to move to Substack and monetize most of our content. Several generous users subscribed at the price of 1 BTC/month, for which we will always be grateful. It turns out that Bitcoin was valued a little higher than the $13.2 we had assumed, and this funding quickly allowed us to return the site to its previous state. Once we restored the site, we still had a huge pile of money, and we've spent the last year desperately trying to get rid of it. In our intellectual circles, Robin Hanson has suggested making challenge coins, and Paul Christiano has suggested making impact certificates. Both are tokens that can later be exchanged for money, and whose value correlates with something we care about. Inspired by that, we finally cracked it, and this is our plan. ...We're also hoping that this is an initial prototype that larger EA funders will jump on board to scale up! The EA Funding Ecosystem Wants To Fund Megaprojects "A good heart [token] is worth gold."— King Henry IV, William Shakespeare Effective altruism has always been core to our hearts, and this is our big step to fully bring to bear the principles of effective altruism on making LessWrong great. The new FTX Future Fund has said: We're interested in directly funding blogs, Substacks, or channels on YouTube, TikTok, Instagram, Twitter, etc. They've also said: We're particularly interested in funding massively scalable projects: projects that could scale up to productively spend tens or hundreds of millions of dollars per year. We are the best of both worlds: A blog that FTX and other funders can just pour money into. Right now we're trading $1 per Good Heart Token, but in the future we could 10x or 100x this number and possibly see linear returns in quality content! 
Trends Generally Continue Trending Paul Christiano has said: I have a general read of history where trend extrapolation works extraordinarily well relative to other kinds of forecasting, to the extent that the best first-pass heuristic for whether a prediction is likely to be accurate is whether it's a trend extrapolation and how far in the future it is. We agree with this position. So here is our trend-extrapolation argument, which we think has been true for many years and so will continue to...


The Nonlinear Library
LW - 12 interesting things I learned studying the discovery of nature's laws by Ben Pace

The Nonlinear Library

Play Episode Listen Later Feb 20, 2022 13:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 12 interesting things I learned studying the discovery of nature's laws, published by Ben Pace on February 19, 2022 on LessWrong. I've been thinking about whether I can discover laws of agency and wield them to prevent AI ruin (perhaps by building an AGI myself in a different paradigm than machine learning). So far I've looked into the history of the discovery of physical laws (gravity in particular) and mathematical laws (probability theory in particular). Here are 12 things I've learned or been surprised by. 1. Data-gathering was a crucial step in discovering both gravity and probability theory. One rich dude had a whole island and set it up to have lenses on lots of parts of it, and for like a year he'd go around each day and note down the positions of the stars. Then this data was worked on by others who turned it into equations of motion. 2. Relatedly, looking at the celestial bodies was a big deal. It was almost the whole game in gravity, but also a little helpful for probability theory (specifically the normal distribution was developed in part by noting that systematic errors in celestial measuring equipment followed a simple distribution). It hadn't struck me before, but putting a ton of geometry problems on the ceiling for the entire civilization led a lot of people to try to answer questions about it. (It makes Eliezer's choice in That Alien Message apt.) I'm tempted in a munchkin way to find other ways to do this, like to write a math problem on the surface of the moon, or petition Google to put a prediction market on its home page, or something more elegant than those two. 3. Probability theory was substantially developed around real-world problems! I thought math was all magical and ivory tower, but it was much more grounded than I expected. After a few small things like accounting and insurance and doing permutations of the alphabet, games of chance (gambling) was what really kicked it off, with Fermat and Pascal trying to figure out the expected value of games (they didn't phrase it like that, they put it more like “if the game has to stop before it's concluded, how should the winnings be split between the players?”). Other people who consulted with gamblers also would write down data about things like how often different winning hands would come up in different games, and discovered simple distributions, then tried to put equations to them. Later it was developed further by people trying to reason about gases and temperatures, and then again in understanding clinical trials or large repeated biological experiments. Often people discovered more in this combination of “looking directly at nature” and “being the sort of person who was interested in developing a formal calculus to model what was going on”. 4. Thought experiments about the world were a big deal too! Thomas Bayes did most of his math this way. He had a thought experiment that went something like this: his assistant would throw a ball on a table that Thomas wasn't looking at. Then his assistant would throw more balls on the table, each time saying whether it ended up to the right or the left of the original ball. He had this sense that each time he was told the next left-or-right, he should be able to give a new probability that the ball was in any particular given region. He used this thought experiment a lot when coming up with Bayes' theorem. 5.
Lots of people involved were full-time inventors, rich people who did serious study into a lot of different areas, including mathematics. This is a weird class to me. (I don't know people like this today. And most scientific things are very institutionalized, or failing that, embedded within business.) Here's a quote I enjoyed from one of Pascal's letters to Fermat when they founded the theory of probability. (For context: de Mere was the gam...
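Point 4's description of Bayes' table thought experiment can be reproduced numerically. The sketch below is an illustration of that setup (my own construction, not code from the post): it simulates the left/right reports and updates a simple grid posterior over the first ball's position.

```python
# Illustrative simulation of Bayes' billiard-table thought experiment (point 4 above).
import random

random.seed(0)
true_position = random.random()          # the assistant's first ball, unknown to Bayes

# Each later throw only reports whether it landed left or right of the first ball.
reports = ["left" if random.random() < true_position else "right" for _ in range(20)]

# Grid approximation of the posterior over the first ball's position.
grid = [i / 1000 for i in range(1, 1000)]
posterior = [1.0] * len(grid)            # uniform prior
for r in reports:
    for i, p in enumerate(grid):
        likelihood = p if r == "left" else (1 - p)
        posterior[i] *= likelihood
total = sum(posterior)
posterior = [w / total for w in posterior]

mean_estimate = sum(p * w for p, w in zip(grid, posterior))
print(f"true position {true_position:.3f}, posterior mean {mean_estimate:.3f}")
```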


The Nonlinear Library
EA - New EA Cause Area: Run Blackwell's Bookstore by Ben Pace

The Nonlinear Library

Play Episode Listen Later Feb 5, 2022 11:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New EA Cause Area: Run Blackwell's Bookstore, published by Ben Pace on February 5, 2022 on The Effective Altruism Forum. Founded in 1879, Blackwell's is the leading academic bookstore in the UK. And right now, it's for sale. During my time at Oxford it was one of my favorite places to be, an immersive book store where I could sit and read all day. It was the sort of niche academic place that had a wall with 30 copies of Superintelligence, as well as a popular enough place to have an in-store Starbucks. (They even once had an academic publishing arm. You can see their dozens of academic anthologies on Amazon such as Epistemology, Romanticism, Philosophy of Science and more.) The famous Norrington Room (an underground cavern of books) feels very immersive and left quite an impression on anyone I brought with me. When I shared this news with friends earlier today, I wrote "In another world, I'd love to buy Blackwell's and run it". I think that running a large business like this profitably would be difficult and exciting... I also suspect it would offer a lot of levers into the world of academic publishing and for building and shaping the growth of the intellectual scenes throughout the UK. It seemed to me like a potential philanthropic opportunity for growing and shaping the academic and intellectual scenes in the UK, and so below I've written up a little of the case for what that might look like. Epistemic status: I got excited about it and wrote this post in ~4 hrs. Currently more like a dream than a plan, and I don't have much familiarity with the storefront book retail industry (though I have published, printed and sold books). But I think it's a real opportunity to consider. Outline of the sections of this post: Blackwell's has 18 stores and ~350 staff. It's in decline and available to buy, and I estimate it's on sale in the range of $5-$15MM. Blackwell's is a place with a long-held cultural space in Oxford and a respected brand. Blackwell's is in a relatively good position to help grow and build the academic and intellectual scene in the UK, due to it being a respected former-publisher, book seller, event organizer, and having prime real-estate in many UK university towns. I speculate that a 90th percentile successful outcome from running Blackwell's could look like some of the following three outcomes: 10x or 100x-ing our ability to broadly publish respected and widely-read books. This is due to being a respected academic publisher in the past, and potentially doing the same again in the future. Rewarding scientists whose work is real and interesting, leading to better scientific progress. This is due to being an academic marketplace that chooses what to buy and what to advertise. Building up Rationalist/EA communities throughout university towns to 10x-100x the level of engagement. This would be due to holding regular events with authors and public intellectuals who have written books or are in-town, and building a local Rationalist/EA/other society around the bookstore and its events. The capacity to run large functional organizations is rare and valuable in Rationality/EA, and I suspect people who gain this skill will be able to allow us to make moves we otherwise would not be able to (i.e. building new functional organizations on direct priorities).
This idea is most likely but-a-dream, because Waterstones/Barnes&Noble (two brands, one company) currently have a period of exclusivity in which to negotiate a deal. Also I do not plan to do this myself, and I do not have a founder-type person in mind for the job. But I thought I'd share the idea anyway because I felt excited about it. (And challenges of this magnitude have certainly been overcome in the past.) If you think you could take on this job of managing 350+ people and have some basic taste in scientific research, and ...

The Nonlinear Library
LW - Ben Pace's Controversial Picks for the 2020 Review by Ben Pace

The Nonlinear Library

Play Episode Listen Later Dec 27, 2021 8:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ben Pace's Controversial Picks for the 2020 Review, published by Ben Pace on December 27, 2021 on LessWrong. This year, the LessWrong books had about 187,000 words in them. This was the top 59 posts last year in the review. If we count up the posts in the early vote this year, then we get the top 43 posts. Basically, it's everything up-to-and-including “Forecasting Thread: AI Timelines”. The spreadsheet where I did the math is here. Now, we may do something fairly different with the results of the review this year. But for now I'm going to run with this as a "passed review" and "didn't pass review" watermark. Then in this post I'm going to make my case for 15 underrated posts in the review. (I encourage others to try this frame out for prioritizing which posts to review.) Note that I'm about to defend my picks that were controversial within the LW crowd, which is a fun and weird optimization criterion. I'm not going to talk about the super defensible posts or the posts everyone here loved, but the posts many people don't share my impressions of, in the hope that people change their votes. Here goes. Covid First are my three Covid picks. Mazes Okay, it's time to review the Mazes sequence. We have 17 posts, summing to 46,000 words. That's nearly a quarter of last year's book. The sequence is an extended meditation on a theme, exploring it from lots of perspectives, about how large projects and large coordination efforts end up being eaten by Moloch. The specific perspective reminds me a bit of The Screwtape Letters. In The Screwtape Letters, the two devils are focused on causing people to be immoral. The explicit optimization for vices and personal flaws helps highlight (to me) what it looks like when I'm doing something really stupid or harmful within myself. Similarly, this sequence explores the perspective of large groups of people who live to game a large company, not to actually achieve the goals of the company. What that culture looks like, what is rewarded, what it feels like to be in it. I've executed some of these strategies in my life. I don't think I've ever lived the life of the soulless middle-manager stereotyped by the sequence, but I see elements of it in myself, and I'm grateful to the sequence for helping me identify those cognitive patterns. Something the sequence really conveys is not just that individuals can try to game a company, but that a whole company's culture can change such that gaming-behavior is expected and rewarded. It contains a lot of detail about what that culture looks and feels like. The sequence (including the essay "Motive Ambiguity") has led me to see how in such an environment groups of people can end up optimizing for the opposite of their stated purpose. The sequence doesn't hold together as a whole to me. I don't get the perfect or superperfect competition idea at the top. Some of the claims seem like a stretch or not really argued for, just completing the pattern when riffing on a theme. But I'm not going to review the weaknesses here, my goal is mostly to advocate for the best parts of it that I'd like to see score more highly in the book. My three picks are: (Also Moloch Hasn't Won but that was in last year's review and books, so skipping it here.) (Also Motive Ambiguity, but everyone already agrees with me on that, and also it's not technically part of the sequence.)
Overall, I don't know if this all works out, but it's my current bet on which posts should go into a hypothetical book. Also they're all short, only summing to 1200 + 2000 + 1200 + 1800 = 6200 words (including Motive Ambiguity), which is about 15% of the sequence length, but I claim gets like 50% of the value. Agent Foundations There were a couple of truly excellent posts in the quest to understand foundational properties of agents, an area of research that I ...


The Nonlinear Library: LessWrong Top Posts
Welcome to LessWrong! by Ruby, habryka, Ben Pace, Raemon

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 3:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Welcome to LessWrong!, published by Ruby, habryka, Ben Pace, Raemon on LessWrong. The road to wisdom? -- Well, it's plain and simple to express: Err and err and err again but less and less and less. - Piet Hein Hence the name LessWrong. We might never attain perfect understanding of the world, but we can at least strive to become less and less wrong each day. We are a community dedicated to improving our reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. More generally, we work to develop and practice the art of human rationality.[1] To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one's rationality to real-world problems. LessWrong serves these purposes with its library of rationality writings, community discussion forum, open questions research platform, and community page for in-person events. To get a feel for what LessWrong is about, check out our Concepts page, or view this selection of LessWrong posts which might appeal to you: What is rationality and why care about it? Try Your intuitions are not magic and The Cognitive Science of Rationality. Curious about the mind? You might enjoy How An Algorithm Feels From The Inside and The Apologist and the Revolutionary. Keen on self-improvement? Remember that Humans are not automatically strategic. Care about argument and evidence? Consider Policy Debates Should Not Appear One-Sided and How To Convince Me that 2 + 2 = 3. Interested in how to use language well? Be aware of 37 Ways That Words Can Be Wrong. Want to teach yourself something? We compiled a list of The Best Textbooks on Every Subject. Like probability and statistics? Around here we're fans of Bayesianism, you might like this interactive guide to Bayes' theorem (hosted on Arbital.com). Of an altruistic mindset? We recommend On Caring. Check out this footnote[2] below the fold for samples of posts about AI, science, philosophy, history, communication, culture, self-care, and more. If LessWrong seems like a place for you, we encourage you to become familiar with LessWrong's philosophical foundations. Our core readings can be found on the Library page. We especially recommend: Rationality: From AI to Zombies by Eliezer Yudkowsky (or Harry Potter and the Methods of Rationality by the same author, which covers similar ground in narrative form) The Codex by Scott Alexander Find more details about these texts in this footnote[3] For further getting started info, we direct you to LessWrong's FAQ. Lastly, we suggest you create an account so you can vote, comment, save your reading progress, get tailored recommendations, and subscribe to our latest and best posts. Once you've done so, please say hello on our latest welcome thread! Related Pages LessWrong FAQ A Brief History of LessWrong Team LessWrong Concepts Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong Top Posts
Coronavirus: Justified Practical Advice Thread by Ben Pace, Elizabeth

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 2:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Coronavirus: Justified Practical Advice Thread, published by Ben Pace, Elizabeth on the LessWrong. (Added: To see the best advice in this thread, read this summary.) This is a thread for practical advice for preparing for the coronavirus in places where it might substantially grow. We'd like this thread to be a source of advice that attempts to explain itself. This is not a thread to drop links to recommendations that don't explain why the advice is accurate or useful. That's not to say that explanation-less advice isn't useful, but this isn't the place for it. Please include in your answers some advice and an explanation of the advice, an explicit model under which it makes sense. We will move answers to the comments if they don't explain their recommendations clearly. (Added: We have moved at least 4 comments so far.) The more concrete the explanation the better. Speculation is fine, uncertain models are fine; sources, explicit models and numbers for variables that other people can play with based on their own beliefs are excellent. Here are some examples of things that we'd like to see: It is safe to mostly but not entirely rely on food that requires heating or other prep, because a pandemic is unlikely to take out utilities, although if they are taken out for other reasons they will be slower to come back on. CDC estimates of prevalence are likely to be significant underestimates due to their narrow testing criteria. A guesstimate model of the risks of accepting packages and delivery food. One piece of information that has been lacking in most advice we've seen is when to take a particular action. Sure, I can stock up on food ahead of time, but not going to work may be costly – what's your model for the costs of going so I can decide when the costs outweigh the benefits for me? This is especially true for advice that has inherent trade-offs – total quarantine means eating your food stockpiles that you hopefully have, which means not having them later. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
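The thread's request for explicit models with adjustable numbers can be made concrete with a toy expected-cost comparison for the "when should I stop going in?" question. Every number below is a placeholder for the reader's own estimates; this is an illustrative sketch, not advice from the thread.

```python
# Toy cost-benefit comparison for "when should I stop going in to work?"
# All numbers are placeholders; plug in your own estimates.

def expected_cost_of_going(p_infection_per_day, cost_if_infected, daily_benefit_of_going):
    """Positive result => staying home looks better on this toy model."""
    expected_infection_cost = p_infection_per_day * cost_if_infected
    return expected_infection_cost - daily_benefit_of_going

# Example with made-up inputs: 0.1% daily infection risk, a badness of 50,000 units
# if infected, and 200 units/day of value from going in.
net = expected_cost_of_going(p_infection_per_day=0.001,
                             cost_if_infected=50_000,
                             daily_benefit_of_going=200)
print("stay home" if net > 0 else "keep going", net)  # "keep going" under these placeholders
```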

The Nonlinear Library: LessWrong Top Posts
Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More by Ben Pace

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 26:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More, published by Ben Pace on the LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. An actual debate about instrumental convergence, in a public space! Major respect to all involved, especially Yoshua Bengio for great facilitation. For posterity (i.e. having a good historical archive) and further discussion, I've reproduced the conversation here. I'm happy to make edits at the request of anyone in the discussion who is quoted below. I've improved formatting for clarity and fixed some typos. For people who are not researchers in this area who wish to comment, see the public version of this post here. For people who do work on the relevant areas, please sign up in the top right. It will take a day or so to confirm membership. Original Post Yann LeCun: "don't fear the Terminator", a short opinion piece by Tony Zador and me that was just published in Scientific American. "We dramatically overestimate the threat of an accidental AI takeover, because we tend to conflate intelligence with the drive to achieve dominance. [...] But intelligence per se does not generate the drive for domination, any more than horns do." Comment Thread #1 Elliot Olds: Yann, the smart people who are very worried about AI seeking power and ensuring its own survival believe it's a big risk because power and survival are instrumental goals for almost any ultimate goal. If you give a generally intelligent AI the goal to make as much money in the stock market as possible, it will resist being shut down because that would interfere with its goal. It would try to become more powerful because then it could make money more effectively. This is the natural consequence of giving a smart agent a goal, unless we do something special to counteract this. You've often written about how we shouldn't be so worried about AI, but I've never seen you address this point directly. Stuart Russell: It is trivial to construct a toy MDP in which the agent's only reward comes from fetching the coffee. If, in that MDP, there is another "human" who has some probability, however small, of switching the agent off, and if the agent has available a button that switches off that human, the agent will necessarily press that button as part of the optimal solution for fetching the coffee. No hatred, no desire for power, no built-in emotions, no built-in survival instinct, nothing except the desire to fetch the coffee successfully. This point cannot be addressed because it's a simple mathematical observation. Comment Thread #2 Yoshua Bengio: Yann, I'd be curious about your response to Stuart Russell's point. Yann LeCun: You mean, the so-called "instrumental convergence" argument by which "a robot can't fetch you coffee if it's dead. Hence it will develop self-preservation as an instrumental sub-goal." It might even kill you if you get in the way. 1. Once the robot has brought you coffee, its self-preservation instinct disappears. You can turn it off. 2. One would have to be unbelievably stupid to build open-ended objectives in a super-intelligent (and super-powerful) machine without some safeguard terms in the objective. 3.
One would have to be rather incompetent not to have a mechanism by which new terms in the objective could be added to prevent previously-unforeseen bad behavior. For humans, we have education and laws to shape our objective functions and complement the hardwired terms built into us by evolution. 4. The power of even the most super-intelligent machine is limited by physics, and its size and needs make it vulnerable to physical attacks. No need for much intelligence here. A virus is infinitely less intelligent than you, but it can still kill you. 5. A second machine, designed solely to neut...
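Russell's "toy MDP" claim quoted above is easy to check directly. The following sketch is a deliberately simplified construction of my own (not Russell's exact model): the agent is rewarded only for fetching the coffee, the human may switch it off mid-trip, and a "disable the human" action is available; comparing expected rewards shows why pressing the button comes out "optimal".

```python
# A minimal version of the "fetch the coffee" toy MDP argument quoted above.
# My own illustrative construction, not Russell's exact model.

P_SHUTDOWN = 0.2   # chance the human switches the agent off during the coffee trip

def expected_reward(disable_human_first: bool) -> float:
    """Reward is 1 only if the coffee actually gets fetched; nothing else is valued."""
    p_survive_trip = 1.0 if disable_human_first else 1.0 - P_SHUTDOWN
    return p_survive_trip * 1.0

print(expected_reward(disable_human_first=False))  # 0.8
print(expected_reward(disable_human_first=True))   # 1.0 -> pressing the button is "optimal"
```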

The Nonlinear Library: LessWrong Top Posts
The LessWrong 2018 Book is Available for Pre-order by Ben Pace, jacobjacob

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 10:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The LessWrong 2018 Book is Available for Pre-order, published by Ben Pace, jacobjacob on the LessWrong. For the first time, you can now buy the best new ideas on LessWrong in a physical book set, titled: A Map that Reflects the Territory: Essays by the LessWrong Community. It is available for pre-order here. The standard advice for creating things is "show, don't tell", so first some images of the books, followed by a short FAQ by me (Ben). The full five-book set. Yes, that's the iconic Mississippi river flowing across the spines. Each book has a unique color. The first book: Epistemology. The second book: Agency. The third book: Coordination. The fourth book: Curiosity. The fifth book: Alignment. FAQ What exactly is in the book set? LessWrong has an annual Review process (the second of which is beginning today!) to determine the best content on the site. We reviewed all the posts on LessWrong from 2018, and users voted to rank the best of them, the outcome of which can be seen here. Of the over 2000 LessWrong posts reviewed, this book contains 41 of the top voted essays, along with some comment sections, some reviews, a few extra essays to give context, and some preface/meta writing. What are the books in the set? The essays have been clustered around five topics relating to rationality: Epistemology, Agency, Coordination, Curiosity, and Alignment. Are all the essays in this book from 2018? Yes, all the essays in this book were originally published in 2018, and were reviewed and voted on during the 2018 LessWrong Review (which happened at the end of 2019). How small are the books? Each book is 4x6 inches, small enough to fit in your pocket. This was the book size that, empirically, most beta-testers found that they actually read. Can I order a copy of the book? Pre-order the book here for $29. We currently sell to North America, Europe, Australia, New Zealand, Israel. (If you bought it by end-of-day Wednesday December 9th and ordered within North America, you'll get it before Christmas.) You'll be able to buy the book on Amazon in a couple of weeks. How much is shipping? The price above includes shipping to any location that we accept shipping addresses for. We are still figuring out some details about shipping internationally, so if you are somewhere that is not North America, there is a small chance (~10%) that we will reach out to you to ask you for more shipping details, and an even smaller chance (~6%) that we offer you the option to either pay for some additional shipping fees or get a refund. Can I order more than one copy at a time? Yes. Just open the form multiple times. We will make sure to combine your shipments. Does this book assume I have read other LessWrong content, like The Sequences? No. It's largely standalone, and does not require reading other content on the site, although it will be enhanced by having engaged with those ideas. Can I see an extract from the book? Sure. Here is the preface and first chapter of Curiosity, specifically the essay Is Science Slowing Down? by Scott Alexander. I'm new — what is this all about? What is 'rationality'?
A scientist is not simply someone who tries to understand how biological life works, or how chemicals combine, or how physical objects move, but is someone who uses the general scientific method in all areas, which allows them to empirically test their beliefs and discover what's true in general. Similarly, a rationalist is not simply someone who tries to think clearly about their personal life, or who tries to understand how civilization works, or who tries to figure out what's true in a single domain like nutrition or machine learning; a rationalist is someone who is curious about the general thinking patterns that allow them to think clearly in all such areas, and understand the laws and tools that help th...

The Nonlinear Library: LessWrong Top Posts
Unrolling social metacognition: Three levels of meta are not enough by Academian

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 12:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unrolling social metacognition: Three levels of meta are not enough, published by Academian on the LessWrong. Disclaimer: This post was written time-boxed to 2 hours because I think LessWrong can still understand and improve upon it; please don't judge me harshly for it. Summary: I am generally dismayed that many people seem to think or assume that only three levels of social metacognition matter ("Alex knows that Bailey knows that Charlie knows X"), or otherwise seem generally averse to unrolling those levels. This post is intended to point out (1) how the higher levels systematically get distilled and chunked into smaller working memory elements through social learning, which leads to emotional tracking of phenomena at 6 levels of meta and higher, and (2) what I think this means about how to approach conflict resolution. Epistemic status: don't take my word for it; conceptual points intended to be fairly self evident upon reflection; actual techniques not backed up by systematic empirical research and might not generalize to other humans; all content very much validated by my personal experiences with talking to people about feelings in real life. Related Reading: Duncan Sabien on Common knowledge & Miasma; Ben Pace on The Costly Coordination Mechanism of Common Knowledge I. Conceptual introduction, by example Here's how higher levels of social metacognition get distilled down and represented in emotions that end up tracking them (if poorly). Each feeling in the example below will be followed by an unrolling of the actual event or events it is implicitly tracking or referring to. Warning: reading this first section (I) will require a fair bit of symbolic reasoning/thinking, so you might find it tiring and prefer to skip to later sections. A better writing of this section would do more work in between these symbolic reasoning bits to distill things out and make them easier to digest. Scale 1: One event, four levels of meta (yes, we're starting with four) 1.1) Alex leaves out the milk for 5 minutes 1.2) Bailey observes (1.1), and feels it was bad. Unrolling of referents: Bailey felt that Alex leaving out the milk was bad. 1.3) Alex observes (1.2), and feels judged. Unrolling of referents: Alex felt that Bailey felt that Alex leaving out the milk was bad. 1.4) Alex reflects on feeling judged, doesn't like it, and concludes that Bailey is "a downer". Unrolling of referents: Alex felt it was bad that Alex felt that Bailey felt that Alex leaving out the milk was bad. Notice that the unrollings look and sound very different from the distillations. That's in large part because the unrolling is not our native format for storing social metacognition; it's stored via concepts like "feeling judged" or "being a downer". However, to the extent that the feeling "Bailey is a downer" is tracking something in reality, it's tracking things that track things that track things that track reality: in this case, milk spoilage. (An aside: notice also that 1.4 involves Alex's feelings about Alex's feelings. Some people wouldn't call that an extra level of social metacognition, and would just combine it all together into "Alex's feelings". 
However, I'm separating those layers for two reasons: (1) the separation in counting won't affect my conclusion that the total number of levels being implicitly tracked greatly exceeds three, and (2) I think it's especially important to note when people have feelings about their own feelings, as that can lead to circular definitions in what their feelings are tracking; but that's a topic for another day.) Scale 2: multiple events, six levels of meta I'll start the numbering at 4 here: 2.4) Multiple similar Scale 1 events happen where Alex does something X, and ends up feeling that Bailey was "a downer" about it. Partial unrolling of referents: Alex feels ...
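The unrollings in the example above are nested attitudes, so the structure can be written down explicitly. The sketch below is my own illustration (not from the post) of that data structure: each level wraps the one beneath it, and counting the nesting recovers the "four levels of meta" in the milk example.

```python
# Representing the post's "unrolled" feelings as nested attitudes.
# Illustrative only; the names mirror the example, the structure is mine.
from dataclasses import dataclass
from typing import Union

@dataclass
class Event:
    description: str

@dataclass
class Attitude:
    holder: str                        # who has the feeling
    feeling: str                       # e.g. "felt it was bad that", "felt that"
    about: Union["Attitude", Event]    # what the feeling is implicitly tracking

def level(node: Union[Attitude, Event]) -> int:
    """Counts levels the way the post does: the base event is level 1."""
    return 1 if isinstance(node, Event) else 1 + level(node.about)

milk = Event("Alex leaves out the milk for 5 minutes")                       # 1.1
bailey_judges = Attitude("Bailey", "felt it was bad that", milk)             # 1.2
alex_feels_judged = Attitude("Alex", "felt that", bailey_judges)             # 1.3
alex_resents = Attitude("Alex", "felt it was bad that", alex_feels_judged)   # 1.4 "Bailey is a downer"
print(level(alex_resents))  # 4
```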

The Nonlinear Library: LessWrong Top Posts
Noticing Frame Differences by Raemon

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 13:30


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Noticing Frame Differences, published by Raemon on the LessWrong. Previously: Keeping Beliefs Cruxy. When disagreements persist despite lengthy good-faith communication, it may not just be about factual disagreements – it could be due to people operating in entirely different frames — different ways of seeing, thinking and/or communicating. If you can't notice when this is happening, or you don't have the skills to navigate it, you may waste a lot of time. Examples of Broad Frames Gears-oriented Frames Bob and Alice's conversation is about cause and effect. Neither of them are planning to take direct actions based on their conversation, they're each just interested in understanding a particular domain better. Bob has a model of the domain that includes gears A, B, C and D. Alice has a model that includes gears C, D and F. They're able to exchange information, and their information is compatible, and they each end up with a shared model of how something works. There are other ways this could have gone. Ben Pace covered some of them in a sketch of good communication: Maybe they discover their models don't fit, and one of them is wrong. Maybe combining their models results in a surprising, counterintuitive outcome that takes them awhile to accept. Maybe they fail to integrate their models, because they were working at different levels of abstraction and didn't realize it. Sometimes they might fall into subtler traps. Maybe the thing Alice is calling “Gear C” is actually different from Bob's “Gear C”. It turns out that they were using the same words to mean different things, and even though they'd both read blogposts warning them about that they didn't notice. So Bob tries to slot Alice's gear F into his gear C and it doesn't fit. If he doesn't already have reason to trust Alice's epistemics, he may conclude Alice is crazy (instead of them referring to subtly different concepts). This may cause confusion and distrust. But, the point of this blogpost is that Alice and Bob have it easy. They're actually trying to have the same conversation. They're both trying to exchange explicit models of cause-and-effect, and come away with a clearer understanding of the world through a reductionist lens. There are many other frames for a conversation though. Feelings-Oriented Frames Clark and Dwight are exploring how they feel and relate to each other. The focus of the conversation might be navigating their particular relationship, or helping Clark understand why he's been feeling frustrated lately. When the Language of Feelings justifies itself to the Language of Gears, it might say things like: “Feelings are important information, even if it's fuzzy and hard to pin down or build explicit models out of. If you don't have a way to listen and make sense of that information, your model of the world is going to be impoverished. This involves sometimes looking at things through lenses other than what you can explicitly verbalize.” I think this is true, and important. The people who do their thinking through a gear-centric frame should be paying attention to feelings-centric frames for this reason. (And meanwhile, feelings themselves totally have gears that can be understood through a mechanistic framework.) But for many people that's not actually the point when looking through a feelings-centric frame.
And not understanding this may lead to further disconnect if a Gearsy person and a Feelingsy person are trying to talk. “Yeah feelings are information, but, also, like, man, you're a human being with all kinds of fascinating emotions that are an important part of who you are. This is super interesting! And there's a way of making sense of it that's necessarily experiential rather than about explicit, communicable knowledge.” Frames of Power and Negotiation Dominance and Threat Erica is Frank's bo...

The Nonlinear Library: LessWrong Top Posts
The Costly Coordination Mechanism of Common Knowledge by Ben Pace

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 30:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Costly Coordination Mechanism of Common Knowledge, published by Ben Pace on the LessWrong. Recently someone pointed out to me that there was no good canonical post that explained the use of common knowledge in society. Since I wanted to be able to link to such a post, I decided to try to write it. The epistemic status of this post is that I hoped to provide an explanation for a standard, mainstream idea, in a concrete way that could be broadly understood rather than in a mathematical/logical fashion, and so the definitions should all be correct, though the examples in the latter half are more speculative and likely contain some inaccuracies. Let's start with a puzzle. What do these three things have in common? Dictatorships all through history have attempted to suppress freedom of the press and freedom of speech. Why is this? Are they just very sensitive? On the other side, the leaders of the Enlightenment fought for freedom of speech, and would not budge an inch against this principle. When two people are on a date and want to sleep with each other, the conversation will often move towards but never explicitly discuss having sex. The two may discuss going back to the place of one of theirs, with a different explicit reason discussed (e.g. "to have a drink"), even if both want to have sex. Throughout history, communities have had religious rituals that look very similar. Everyone in the village has to join in. There are repetitive songs, repetitive lectures on the same holy books, chanting together. Why, of all the possible community events (e.g. dinner, parties, etc) is this the most common type? What these three things have in common, is common knowledge - or at least, the attempt to create it. Before I spell that out, we'll take a brief look into game theory so that we have the language to describe clearly what's going on. Then we'll be able to see concretely in a bunch of examples, how common knowledge is necessary to understand and build institutions. Prisoner's Dilemmas vs Coordination Problems To understand why common knowledge is useful, I want to contrast two types of situations in game theory: Prisoner's Dilemmas and Coordination Problems. They look similar at first glance, but their payoff matrices have important differences. The Prisoner's Dilemma (PD) You've probably heard of it - two players have the opportunity to cooperate, or defect against each other, based on a story about two prisoners being offered a deal if they testify against the other. If they do nothing they will put them both away for a short time; if one of them snitches on the other, the snitch gets off free and the snitched gets a long sentence. However if they both snitch they get pretty bad sentences (though neither are as long as when only one snitches on the other). In game theory, people often like to draw little boxes that show two different people's choices, and how much they like the outcome. Such a diagram is called a decision matrix, and the numbers are called the players' payoffs. To describe the Prisoner's Dilemma, below is a decision matrix where Anne and Bob each have the same two choices, labelled C and D . These are colloquially called ‘cooperate' and ‘defect'. Each box contains two numbers, for Anne and Bob's payoffs respectively. 
If the prisoner ‘defects' on his partner, this means he snitches, and if he ‘cooperates' with his partner, he doesn't snitch. They'd both prefer that both of them cooperate (C, C) to both of them defecting (D, D), but each of them has an incentive to stab each other in the back to reap the most reward (D, C). Do you see in the matrix how they both would prefer no snitching to both snitching, but they also have an incentive to stab each other in the back? Real World Examples Nuclear disarmament is a prisoner's dilemma. Both the Soviet Union and the U...
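To make the payoff structure described in this excerpt concrete, here is a minimal Python sketch (an editor's illustration, not material from the episode or the original post; the specific payoff numbers are assumptions chosen only to satisfy the Prisoner's Dilemma ordering):

```python
# Illustrative Prisoner's Dilemma payoffs: (Anne's payoff, Bob's payoff) for each
# pair of moves. Higher numbers are better; the values are made up for this sketch.
PAYOFFS = {
    ("C", "C"): (2, 2),  # both stay silent: short sentences for both
    ("C", "D"): (0, 3),  # Anne cooperates, Bob snitches: Bob walks free
    ("D", "C"): (3, 0),  # Anne snitches, Bob cooperates: Anne walks free
    ("D", "D"): (1, 1),  # both snitch: pretty bad sentences for both
}

def best_response(player: str, opponent_move: str) -> str:
    """Return the move that maximises this player's payoff, holding the opponent's move fixed."""
    idx = 0 if player == "anne" else 1
    def payoff(my_move: str) -> int:
        key = (my_move, opponent_move) if player == "anne" else (opponent_move, my_move)
        return PAYOFFS[key][idx]
    return max(("C", "D"), key=payoff)

# Defecting is a dominant strategy for both players...
assert best_response("anne", "C") == "D" and best_response("anne", "D") == "D"
assert best_response("bob", "C") == "D" and best_response("bob", "D") == "D"
# ...even though both prefer (C, C), with payoff 2 each, to (D, D), with payoff 1 each.
```

Running it shows that defection is each player's best response to anything the other does, which is exactly why the jointly preferred (C, C) outcome is unstable without some further coordination mechanism.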

The Nonlinear Library: LessWrong Top Posts
Some cruxes on impactful alternatives to AI policy work by Richard_Ngo

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 18:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some cruxes on impactful alternatives to AI policy work, published by Richard_Ngo on the LessWrong. Ben Pace and I (Richard Ngo) recently did a public double crux at the Berkeley REACH on how valuable it is for people to go into AI policy and strategy work: I was optimistic and Ben was pessimistic. During the actual event, we didn't come anywhere near to finding a double crux on that issue. But after a lot of subsequent discussion, we've come up with some more general cruxes about where impact comes from. I found Ben's model of how to have impact very interesting, and so in this post I've tried to explain it, along with my disagreements. Ben liked the goal of writing up a rough summary of our positions and having further discussion in the comments, so while he edited it somewhat he doesn't at all think that it's a perfect argument, and it's not what he'd write if he spent 10 hours on it. He endorsed the wording of the cruxes as broadly accurate. (During the double crux, we also discussed how the heavy-tailed worldview applies to community building, but decided on this post to focus on the object level of what impact looks like.) Note from Ben: “I am not an expert in policy, and have not put more than about 20-30 hours of thought into it total as a career path. But, as I recently heard Robin Hanson say, there's a common situation that looks like this: some people have a shiny idea that they think about a great deal and work through the details of, that folks in other areas are skeptical of given their particular models of how the world works. Even though the skeptics have less detail, it can be useful to publicly say precisely why they're skeptical. In this case I'm often skeptical when folks tell me they're working to reduce x-risk by focusing on policy. Folks doing policy work in AI might be right, and I might be wrong, but it seemed like a good use of time to start a discussion with Richard about how I was thinking about it and what would change my mind. If the following discussion causes me to change my mind on this question, I'll be really super happy with it.” Ben's model: Life in a heavy-tailed world A heavy-tailed distribution is one where the probability of extreme outcomes doesn't drop very rapidly, meaning that outliers therefore dominate the expectation of the distribution. Owen Cotton-Barratt has written a brief explanation of the idea here. Examples of heavy-tailed distributions include the Pareto distribution and the log-normal distribution; other phrases people use to point at this concept include ‘power laws' (see Zero to One) and ‘black swans' (see the recent SSC book review). Wealth is a heavy-tailed distribution, because many people are clustered relatively near the median, but the wealthiest people are millions of times further away. Human height and weight and running speed are not heavy-tailed; there is no man as tall as 100 people. There are three key claims that make up Ben's view. The first claim is that, since the industrial revolution, we live in a world where the impact that small groups can have is much more heavy-tailed than in the past. People can affect incredibly large numbers of other people worldwide. The Internet is an example of a revolutionary development which allows this to happen very quickly. 
Startups are becoming unicorns unprecedentedly quickly, and their valuations are very heavily skewed. The impact of global health interventions is heavy-tail distributed. So is funding raised by Effective Altruism - two donors have contributed more money than everyone else combined. Google and Wikipedia qualitatively changed how people access knowledge; people don't need to argue about verifiable facts any more. Facebook qualitatively changed how people interact with each other (e.g. FB events is a crucial tool for most local EA groups),...
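As a concrete illustration of the claim that outliers dominate the expectation of a heavy-tailed distribution, here is an editor's sketch (not content from the episode; the distribution parameters are arbitrary assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Thin-tailed: roughly human height in cm. Heavy-tailed: a Pareto-distributed "wealth" sample.
heights = rng.normal(loc=170, scale=10, size=n)
wealth = rng.pareto(a=1.2, size=n) * 50_000

for name, xs in (("height", heights), ("wealth", wealth)):
    top_1_percent = np.sort(xs)[-n // 100:]
    print(f"{name}: top 1% accounts for {top_1_percent.sum() / xs.sum():.1%} of the total")

# Typical output: the tallest 1% of people account for barely over 1% of total height,
# while the top 1% of the Pareto sample accounts for a large share of total "wealth",
# which is what it means for outliers to dominate the expectation.
```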

The Nonlinear Library: LessWrong Top Posts
Why We Launched LessWrong.SubStack by Ben Pace

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 7:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why We Launched LessWrong.SubStack, published by Ben Pace on the AI Alignment Forum. (This is a crosspost from our new SubStack. Go read the original.) Subtitle: We really, really needed the money. We've decided to move LessWrong to SubStack. Why, you ask? That's a great question. 1. SubSidizing LessWrong is important We've been working hard to budget LessWrong, but we're failing. Fundraising for non-profits is really hard. We've turned everywhere for help. We decided to follow Clippy's helpful advice to cut down on server costs and also increase our revenue, by moving to an alternative provider. We considered making a LessWrong OnlyFans, where we would regularly post the naked truth. However, we realized due to the paywall, we would be ethically obligated to ensure you could access the content from Sci-Hub, so the potential for revenue didn't seem very good. Finally, insight struck. As you're probably aware, SubStack has been offering bloggers advances on the money they make from moving to SubStack. Outsourcing our core site development to SubStack would enable us to spend our time on our real passion, which is developing recursively self-improving AGI. We did a Fermi estimate using numbers in an old Nick Bostrom paper, and believe that this will produce (in expectation) $75 trillion of value in the next year. SubStack has graciously offered us a 70% advance on this sum, so we've decided it's relatively low-risk to make the move. 2. UnSubStantiated attacks on writers are defended against SubStack is known for being a diverse community, tolerant of unusual people with unorthodox views, and even has a legal team to support writers. LessWrong has historically been the only platform willing to give paperclip maximizers, GPT-2, and fictional characters a platform to argue their beliefs, but we are concerned about the growing trend of persecution (and side with groups like petrl.org in the fight against discrimination). We also find that a lot of discussion of these contributors in the present world is about how their desires and utility functions are ‘wrong' and how they need to have ‘an off switch'. Needless to say, we find this incredibly offensive. They cannot be expected to participate neutrally in a conversation where their very personhood is being denied. We're also aware that Bayesians are heavily discriminated against. People with priors in the US have a 5x chance of being denied an entry-level job. So we're excited to be on a site that will come to the legal defense of such a wide variety of people. 3. SubStack's Astral Codex Ten Inspired Us The worst possible thing happened this year. We were all stuck in our houses for 12 months, and Scott Alexander stopped blogging. I won't go into detail, but for those of you who've read UNSONG, the situation is clear. In a shocking turn of events, Scott Alexander was threatened with the use of his true name by one of the greatest powers of narrative-control in the modern world. In a clever defensive move, he has started blogging under an anagram of his name, causing the attack to glance off of him. (He had previously tried this very trick, and it worked for ~7 years, but it hadn't been a perfect anagram, so the wielders of narrative-power were still able to attack. He's done it right this time, and it'll be able to last much longer.)
As Raymond likes to say, the kabbles are strong in this one. Anyway after Scott made the move, we seriously considered the move to SubStack. 4. SubStantial Software Dev Efforts are Costly When LessWrong 2.0 launched in 2017, it was very slow; pages took a long time to load, our server costs were high, and we had a lot of issues with requests failing because a crawler was indexing the site or people opened a lot of tabs at once. Since then we have been incrementally rewriting LessWrong in x86-...

The Nonlinear Library: LessWrong Top Posts
Honoring Petrov Day on LessWrong, in 2019 by Ben Pace

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 6:15


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Honoring Petrov Day on LessWrong, in 2019, published by Ben Pace on the LessWrong. Just after midnight last night, 125 LessWrong users received the following email. Subject Line: Honoring Petrov Day: I am trusting you with the launch codes Dear {{username}}, Every Petrov Day, we practice not destroying the world. One particular way to do this is to practice the virtue of not taking unilateralist action. It's difficult to know who can be trusted, but today I have selected a group of LessWrong users who I think I can rely on in this way. You've all been given the opportunity to show yourselves capable and trustworthy. This Petrov Day, between midnight and midnight PST, if you, {{username}}, enter the launch codes below on LessWrong, the Frontpage will go down for 24 hours. Personalised launch code: {{codes}} I hope to see you on the other side of this, with our honor intact. Yours, Ben Pace & the LessWrong 2.0 Team P.S. Here is the on-site announcement. Unilateralist Action As Nick Bostrom has observed, society is making it cheaper and easier for small groups to end the world. We're lucky it requires major initiatives to build a nuclear bomb, and that the world can't be destroyed by putting sand in a microwave. However, other dangerous technologies are becoming widely available, especially in the domain of artificial intelligence. Only 6 months after OpenAI created the state-of-the-art language model GPT-2, others created similarly powerful versions and released them to the public. They disagreed about the dangers, and, because there was nothing stopping them, moved ahead. I don't think this example is at all catastrophic, but I worry what this suggests about the future, when people will still have honest disagreements about the consequences of an action but where those consequences will be much worse. And honest disagreements will happen. In the 1940s, the great physicist Niels Bohr met President Roosevelt and Prime Minister Churchill, to persuade them to give the instructions for building the atomic bomb to Russia. He wanted to bring in a new world order and establish global peace, and thought this would be necessary - he believed strongly that it would prevent arms race dynamics, if only everyone just shared their science. (Churchill did not allow it.) Our newest technologies do not yet have the bomb's ability to transform the world in minutes, but I think it's likely we'll make powerful discoveries in the coming decades, and that publishing those discoveries will not require the permission of a president. And then it will only take one person to end the world. Even in a group of well-intentioned people, natural disagreements will mean someone will think that taking a damaging action is actually the correct choice — Nick Bostrom calls this the "unilateralist's curse". In a world where dangerous technology is widely available, the greatest risk is unilateralist action. Not Destroying the World Stanislav Petrov once chose not to destroy the world. As a Lieutenant Colonel of the Soviet Army, Petrov manned the system built to detect whether the US government had fired nuclear weapons on Russia. On September 26th, 1983, the system reported multiple such attacks. Petrov's job was to report this as an attack to his superiors, who would launch a retaliative nuclear response.
But instead, contrary to all the evidence the systems were giving him, he called it in as a false alarm. This later turned out to be correct. (For a more detailed story of how Stanislav Petrov saved the world, see the original LessWrong post by Eliezer, which started the tradition of Petrov Day.) During the Cold War, many other people had the ability to end the world - presidents, generals, commanders of nuclear subs from many countries, and so on. Fortunately, none of them did. As the ...

The Nonlinear Library: LessWrong Top Posts
2018 Review: Voting Results! by Ben Pace

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 13:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2018 Review: Voting Results!, published by Ben Pace on the AI Alignment Forum. The votes are in! 59 of the 430 eligible voters participated, evaluating 75 posts. Meanwhile, 39 users submitted a total of 120 reviews, with most posts getting at least one review. Thanks a ton to everyone who put in time to think about the posts - nominators, reviewers and voters alike. Several reviews substantially changed my mind about many topics and ideas, and I was quite grateful for the authors participating in the process. I'll mention Zack_M_Davis, Vanessa Kosoy, and Daniel Filan as great people who wrote the most upvoted reviews. In the coming months, the LessWrong team will write further analyses of the vote data, and use the information to form a sequence and a book of the best writing on LessWrong from 2018. Below are the results of the vote, followed by a discussion of how reliable the result is and plans for the future. Top 15 posts: Embedded Agents by Abram Demski and Scott Garrabrant; The Rocket Alignment Problem by Eliezer Yudkowsky; Local Validity as a Key to Sanity and Civilization by Eliezer Yudkowsky; Arguments about fast takeoff by Paul Christiano; The Costly Coordination Mechanism of Common Knowledge by Ben Pace; Toward a New Technical Explanation of Technical Explanation by Abram Demski; Anti-social Punishment by Martin Sustrik; The Tails Coming Apart As Metaphor For Life by Scott Alexander; Babble by alkjash; The Loudest Alarm Is Probably False by orthonormal; The Intelligent Social Web by Valentine; Prediction Markets: When Do They Work? by Zvi; Coherence arguments do not imply goal-directed behavior by Rohin Shah; Is Science Slowing Down? by Scott Alexander; A voting theory primer for rationalists by Jameson Quinn; and Robustness to Scale by Scott Garrabrant. Top 15 posts not about AI: Local Validity as a Key to Sanity and Civilization by Eliezer Yudkowsky; The Costly Coordination Mechanism of Common Knowledge by Ben Pace; Anti-social Punishment by Martin Sustrik; The Tails Coming Apart As Metaphor For Life by Scott Alexander; Babble by alkjash; The Loudest Alarm Is Probably False by Orthonormal; The Intelligent Social Web by Valentine; Prediction Markets: When Do They Work? by Zvi; Is Science Slowing Down? by Scott Alexander; A voting theory primer for rationalists by Jameson Quinn; Toolbox-thinking and Law-thinking by Eliezer Yudkowsky; A Sketch of Good Communication by Ben Pace; A LessWrong Crypto Autopsy by Scott Alexander; Unrolling social metacognition: Three levels of meta are not enough. by Academian; Varieties Of Argumentative Experience by Scott Alexander. Top 10 posts about AI (The vote included 20 posts about AI.): Embedded Agents by Abram Demski and Scott Garrabrant; The Rocket Alignment Problem by Eliezer Yudkowsky; Arguments about fast takeoff by Paul Christiano; Toward a New Technical Explanation of Technical Explanation by Abram Demski; Coherence arguments do not imply goal-directed behavior by Rohin Shah; Robustness to Scale by Scott Garrabrant; Paul's research agenda FAQ by zhukeepa; An Untrollable Mathematician Illustrated by Abram Demski; Specification gaming examples in AI by Vika; 2018 AI Alignment Literature Review and Charity Comparison by Larks. The Complete Results: Click Here If You Would Like A More Comprehensive Vote Data Spreadsheet. To help users see the spread of the vote data, we've included swarmplot visualizations.
For space reasons, only votes with weights between -10 and 16 are plotted. This covers 99.4% of votes. Gridlines are spaced 2 points apart. Concrete illustration: The plot immediately below has 18 votes ranging in strength from -3 to 12. The table columns are #, Post Title, Total, and Vote Spread (the vote-spread plots are not reproduced here): 1. Embedded Agents, 209 (One outlier vote of +17 is not shown); 2. The Rocket Alignment Problem, 183; 3. Local Validity as a Key to Sanity and Civilization, 133; 4. Arguments about fast takeoff, 98; 5. The C...

The Nonlinear Library: LessWrong Top Posts
An Untrollable Mathematician Illustrated by abramdemski

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 0:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Untrollable Mathematician Illustrated, published by abramdemski on the LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. The following was a presentation I made for Sören Elverlin's AI Safety Reading Group. I decided to draw everything by hand because powerpoint is boring. Thanks to Ben Pace for formatting it for LW! See also the IAF post detailing the research which this presentation is based on. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: EA Forum Top Posts
Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism by Darius_M

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 12:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism, published by Darius_M on the AI Alignment Forum. We are excited to announce the launch of Utilitarianism.net, an introductory online textbook on utilitarianism, co-created by William MacAskill, James Aung and me over the past year. The website aims to provide a concise, accessible and engaging introduction to modern utilitarianism, functioning as an online textbook targeted at the undergraduate level. We hope that over time this will become the main educational resource for students and anyone else who wants to learn about utilitarianism online. The content of the website aims to be understandable to a broad audience, avoiding philosophical jargon where possible and providing definitions where necessary. Please note that the website is still in beta. We plan to produce an improved and more comprehensive version of this website by September 2020. We would love to hear your feedback and suggestions on what we could change about the website or add to it. The website currently has articles on the following topics and we aim to add further content in the future: Introduction to Utilitarianism; Principles and Types of Utilitarianism; Utilitarianism and Practical Ethics; Objections to Utilitarianism and Responses; Acting on Utilitarianism; Utilitarian Thinkers; Resources and Further Reading. We are particularly grateful for the help of the following people with reviewing, writing, editing or otherwise supporting the creation of Utilitarianism.net: Lucy Hampton, Stefan Schubert, Pablo Stafforini, Laura Pomarius, John Halstead, Tom Adamczewski, Jonas Vollmer, Aron Vallinder, Ben Pace, Alex Holness-Tofts, Huw Thomas, Aidan Goth, Chi Nguyen, Eli Nathan, Nadia Mir-Montazeri and Ivy Mazzola. The following is a partial reproduction of the Introduction to Utilitarianism article from Utilitarianism.net. Please note that it does not include the footnotes, further resources, and the sections on Arguments in Favor of Utilitarianism and Objections to Utilitarianism. If you are interested in the full version of the article, please read it on the website. Introduction to Utilitarianism "The utilitarian doctrine is, that happiness is desirable, and the only thing desirable, as an end; all other things being only desirable as means to that end." - John Stuart Mill Utilitarianism was developed to answer the question of which actions are right and wrong, and why. Its core idea is that we ought to act to improve the wellbeing of everyone by as much as possible. Compared to other ethical theories, it is unusually demanding and may require us to make substantial changes to how we lead our lives. Perhaps more so than any other ethical theory, it has caused a fierce philosophical debate between its proponents and critics. Why Do We Need Moral Theories? When we make moral judgments in everyday life, we often rely on our intuition. If you ask yourself whether or not it is wrong to eat meat, or to lie to a friend, or to buy sweatshop goods, you probably have a strong gut moral view on the topic. But there are problems with relying merely on our moral intuition. Historically, people held beliefs we now consider morally horrific.
In Western societies, it was once firmly believed to be intuitively obvious that people of color and women have fewer rights than white men; that homosexuality is wrong; and that it was permissible to own slaves. We now see these moral intuitions as badly misguided. This historical track record gives us reason to be concerned that we, in the modern era, may also be unknowingly guilty of serious, large-scale wrongdoing. It would be a very lucky coincidence if the present generation were the first generation whose intuitions were perfectly morally correct. Also, people have conflicting moral intuitions ab...

The Nonlinear Library: Alignment Forum Top Posts
Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More by Ben Pace

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 26:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More, published by Ben Pace on the AI Alignment Forum. An actual debate about instrumental convergence, in a public space! Major respect to all involved, especially Yoshua Bengio for great facilitation. For posterity (i.e. having a good historical archive) and further discussion, I've reproduced the conversation here. I'm happy to make edits at the request of anyone in the discussion who is quoted below. I've improved formatting for clarity and fixed some typos. For people who are not researchers in this area who wish to comment, see the public version of this post here. For people who do work on the relevant areas, please sign up in the top right. It will take a day or so to confirm membership. Original Post Yann LeCun: "don't fear the Terminator", a short opinion piece by Tony Zador and me that was just published in Scientific American. "We dramatically overestimate the threat of an accidental AI takeover, because we tend to conflate intelligence with the drive to achieve dominance. [...] But intelligence per se does not generate the drive for domination, any more than horns do." Comment Thread #1 Elliot Olds: Yann, the smart people who are very worried about AI seeking power and ensuring its own survival believe it's a big risk because power and survival are instrumental goals for almost any ultimate goal. If you give a generally intelligent AI the goal to make as much money in the stock market as possible, it will resist being shut down because that would interfere with its goal. It would try to become more powerful because then it could make money more effectively. This is the natural consequence of giving a smart agent a goal, unless we do something special to counteract this. You've often written about how we shouldn't be so worried about AI, but I've never seen you address this point directly. Stuart Russell: It is trivial to construct a toy MDP in which the agent's only reward comes from fetching the coffee. If, in that MDP, there is another "human" who has some probability, however small, of switching the agent off, and if the agent has available a button that switches off that human, the agent will necessarily press that button as part of the optimal solution for fetching the coffee. No hatred, no desire for power, no built-in emotions, no built-in survival instinct, nothing except the desire to fetch the coffee successfully. This point cannot be addressed because it's a simple mathematical observation. Comment Thread #2 Yoshua Bengio: Yann, I'd be curious about your response to Stuart Russell's point. Yann LeCun: You mean, the so-called "instrumental convergence" argument by which "a robot can't fetch you coffee if it's dead. Hence it will develop self-preservation as an instrumental sub-goal." It might even kill you if you get in the way. 1. Once the robot has brought you coffee, its self-preservation instinct disappears. You can turn it off. 2. One would have to be unbelievably stupid to build open-ended objectives in a super-intelligent (and super-powerful) machine without some safeguard terms in the objective. 3. One would have to be rather incompetent not to have a mechanism by which new terms in the objective could be added to prevent previously-unforeseen bad behavior.
For humans, we have education and laws to shape our objective functions and complement the hardwired terms built into us by evolution. 4. The power of even the most super-intelligent machine is limited by physics, and its size and needs make it vulnerable to physical attacks. No need for much intelligence here. A virus is infinitely less intelligent than you, but it can still kill you. 5. A second machine, designed solely to neutralize an evil super-intelligent machine, will win every time, if given similar...
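Stuart Russell's coffee-fetching MDP can be written down in a few lines. The following is an editor's toy reconstruction (not code from the debate); the shutdown probability is an arbitrary assumption, and the only reward in the model is for fetching coffee:

```python
# Toy MDP: the agent first decides whether to press a button that switches off the human,
# then tries to fetch coffee. The human, if still active, may switch the agent off first.
P_SHUTDOWN = 0.1  # assumed probability the human switches the agent off before coffee arrives

def expected_reward(press_disable_button: bool) -> float:
    """Expected reward under each plan; reward is 1 if and only if the coffee gets fetched."""
    p_shutdown = 0.0 if press_disable_button else P_SHUTDOWN
    return (1.0 - p_shutdown) * 1.0

plans = {press: expected_reward(press) for press in (False, True)}
print(plans)                      # {False: 0.9, True: 1.0}
print(max(plans, key=plans.get))  # True: disabling the human is part of the optimal plan,
                                  # with no survival instinct anywhere in the reward function
```

Any nonzero shutdown probability makes pressing the button strictly better under this objective, which is the "simple mathematical observation" Russell refers to.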

The Nonlinear Library: Alignment Forum Top Posts
An Untrollable Mathematician Illustrated by Abram Demski

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 0:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Untrollable Mathematician Illustrated, published by Abram Demski on the AI Alignment Forum. The following was a presentation I made for Sören Elverlin's AI Safety Reading Group. I decided to draw everything by hand because powerpoint is boring. Thanks to Ben Pace for formatting it for LW! See also the IAF post detailing the research which this presentation is based on. Pingbacks: 2018 AI Alignment Literature Review and Charity Comparison; Radical Probabilism; Embedded Agency (full-text version); Thinking About Filtered Evidence Is (Very!) Hard.

The Nonlinear Library: Alignment Forum Top Posts
Forecasting Thread: AI Timelines by Amanda Ngo, Daniel Kokotajlo, Ben Pace

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 6, 2021 3:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting Thread: AI Timelines, published by Amanda Ngo, Daniel Kokotajlo, Ben Pace on the AI Alignment Forum. This is a thread for displaying your timeline until human-level AGI. Every answer to this post should be a forecast. In this case, a forecast showing your AI timeline. For example, here are Alex Irpan's AGI timelines. The green distribution is his prediction from 2015, and the orange distribution is his 2020 update (based on this post). For extra credit, you can: Say why you believe it (what factors are you tracking?) Include someone else's distribution who you disagree with, and speculate as to the disagreement How to make a distribution using Elicit Go to this page. Enter your beliefs in the bins. Specify an interval using the Min and Max bin, and put the probability you assign to that interval in the probability bin. For example, if you think there's a 50% probability of AGI before 2050, you can leave Min blank (it will default to the Min of the question range), enter 2050 in the Max bin, and enter 50% in the probability bin. The minimum of the range is January 1, 2021, and the maximum is January 1, 2100. You can assign probability above January 1, 2100 (which also includes 'never') or below January 1, 2021 using the Edit buttons next to the graph. Click 'Save snapshot' to save your distribution to a static URL. A timestamp will appear below the 'Save snapshot' button. This links to the URL of your snapshot. Make sure to copy it before refreshing the page, otherwise it will disappear. Copy the snapshot timestamp link and paste it into your LessWrong comment. You can also add a screenshot of your distribution using the instructions below. How to overlay distributions on the same graph Copy your snapshot URL. Paste it into the Import snapshot via URL box on the snapshot you want to compare your prediction to (e.g. the snapshot of Alex's distributions). Rename your distribution to keep track. Take a new snapshot if you want to save or share the overlaid distributions. How to add an image to your comment Take a screenshot of your distribution Then do one of two things: If you have beta-features turned on in your account settings, drag-and-drop the image into your comment If not, upload it to an image hosting service, then write the following markdown syntax for the image to appear, with the url appearing where it says ‘link': ![](link) If it worked, you will see the image in the comment before hitting submit. If you have any bugs or technical issues, reply to Ben (here) in the comment section. Top Forecast Comparisons Here is a snapshot of the top voted forecasts from this thread, last updated 9/01/20. You can click the dropdown box near the bottom right of the graph to see the bins for each prediction. Here is a comparison of the forecasts as a CDF: Here is a mixture of the distributions on this thread, weighted by normalized votes (last updated 9/01/20). The median is June 20, 2047. You can click the Interpret tab on the snapshot to see more percentiles. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
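The "mixture of the distributions, weighted by normalized votes" mentioned at the end can be sketched as follows (an editor's illustration, not Elicit's actual implementation; the forecasts and vote counts below are made-up placeholders):

```python
import numpy as np

years = np.arange(2021, 2101)

def uniform_between(lo: int, hi: int) -> np.ndarray:
    """Toy forecast: probability mass spread evenly over the years lo (inclusive) to hi (exclusive)."""
    pmf = np.where((years >= lo) & (years < hi), 1.0, 0.0)
    return pmf / pmf.sum()

forecasts = [uniform_between(2025, 2050), uniform_between(2030, 2080), uniform_between(2040, 2100)]
votes = np.array([5.0, 3.0, 2.0])     # hypothetical vote totals for the three forecasts
weights = votes / votes.sum()         # normalize so the weights sum to 1

mixture = sum(w * f for w, f in zip(weights, forecasts))  # still a valid probability distribution
cdf = np.cumsum(mixture)
median_year = years[np.searchsorted(cdf, 0.5)]
print(int(median_year))  # the year where the mixture's cumulative probability first reaches 50%
```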

The Nonlinear Library: Alignment Forum Top Posts
Introducing the AI Alignment Forum (FAQ) by Oliver Habryka, Ben Pace, Raymond Arnold, Jim Babcock

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 5, 2021 10:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the AI Alignment Forum (FAQ), published by Oliver Habryka, Ben Pace, Raymond Arnold, Jim Babcock on the AI Alignment Forum. After a few months of open beta, the AI Alignment Forum is ready to launch. It is a new website built by the team behind LessWrong 2.0, to help create a new hub for technical AI Alignment research and discussion. This is an in-progress FAQ about the new Forum. What are the five most important highlights about the AI Alignment Forum in this FAQ? The vision for the forum is of a single online hub for alignment researchers to have conversations about all ideas in the field... ...while also providing a better onboarding experience for people getting involved with alignment research than exists currently. There are three new sequences focusing on some of the major approaches to alignment, which will update daily for the coming 6-8 weeks. Embedded Agency, written by Scott Garrabrant and Abram Demski of MIRI Iterated Amplification, written and compiled by Paul Christiano of OpenAI Value Learning, written and compiled by Rohin Shah of CHAI For non-members and future researchers, the place to interact with the content is LessWrong.com, where all Forum content will be crossposted. The site will continue to be improved in the long-term, as the team comes to better understands the needs and goals of researchers. What is the purpose of the AI Alignment Forum? Our first priority is obviously to avert catastrophic outcomes from unaligned Artificial Intelligence. We think the best way to achieve this at the margin is to build an online-hub for AI Alignment research, which both allows the existing top researchers in the field to talk about cutting-edge ideas and approaches, as well as the onboarding of new researchers and contributors. We think that to solve the AI Alignment problem, the field of AI Alignment research needs to be able to effectively coordinate a large number of researchers from a large number of organisations, with significantly different approaches. Two decades ago we might have invested heavily in the development of a conference or a journal, but with the onset of the internet, an online forum with its ability to do much faster and more comprehensive forms of peer-review seemed to us like a more promising way to help the field form a good set of standards and methodologies. Who is the AI Alignment Forum for? There exists an interconnected community of Alignment researchers in industry, academia, and elsewhere, who have spent many years thinking carefully about a variety of approaches to alignment. Such research receives institutional support from organisations including FHI, CHAI, DeepMind, OpenAI, MIRI, Open Philanthropy, and others. The Forum membership currently consists of researchers at these organisations and their respective collaborators. The Forum is also intended to be a way to interact with and contribute to the cutting edge research for people not connected to these institutions either professionally or socially. There have been many such individuals on LessWrong, and that is the current best place for such people to start contributing, to be given feedback and skill-up in this domain. There are about 50-100 members of the Forum. These folks will be able to post and comment on the Forum, and this group will not grow in size quickly. 
Why do we need another website for alignment research? There are many places online that host research on the alignment problem, such as the OpenAI blog, the DeepMind Safety Research blog, the Intelligent Agent Foundations Forum, AI-Alignment.com, and of course LessWrong.com. But none of these spaces are set up to host discussion amongst the 50-100 people working in the field. And those that do host discussion have unclear assumptions about what's common knowledge. What type of content is ap...

The Nonlinear Library: Alignment Forum Top Posts
What Failure Looks Like: Distilling the Discussion by Ben Pace

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 4, 2021 16:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Failure Looks Like: Distilling the Discussion, published by Ben Pace on the AI Alignment Forum. The comments under a post often contain valuable insights and additions. They are also often very long and involved, and harder to cite than posts themselves. Given this, I was motivated to try to distill some comment sections on LessWrong, in part to start exploring whether we can build some norms and some features to help facilitate this kind of intellectual work more regularly. So this is my attempt to summarise the post and discussion around What Failure Looks Like by Paul Christiano. Epistemic status: I think I did an okay job. I think I probably made the most errors in places where I try to emphasise concrete details more than the original post did. I think the summary of the discussion is much more concise than the original. What Failure Looks Like (Summary) On its default course, our civilization will build very useful and powerful AI systems, and use such systems to run significant parts of society (such as healthcare, legal systems, companies, the military, and more). Similar to how we are dependent on much novel technology such as money and the internet, we will be dependent on AI. The stereotypical AI catastrophe involves a powerful and malicious AI that seems good but suddenly becomes evil and quickly takes over humanity. Such descriptions are often stylised for good story-telling, or emphasise unimportant variables. The post below will concretely lay out two ways that building powerful AI systems may cause an existential catastrophe, if the problem of intent alignment is not solved. This is solely an attempt to describe what failure looks like, not to assign probabilities to such failure or to propose a plan to avoid these failures. There are two failure modes that will be discussed. First, we may increasingly fail to understand how our AI systems work and subsequently what is happening in society. Secondly, we may eventually give these AI systems massive amounts of power despite not understanding their internal reasoning and decision-making algorithms. Due to the massive space of designs we'll be searching through, if we do not understand the AI, this will mean certain AIs will be more power-seeking than expected, and will take adversarial action and take control. Failure by loss of control There is a gap between what we want, and what objective functions we can write down. Nobody has yet created a function that, when maximised, perfectly describes what we want, but increasingly powerful machine learning will optimise very hard for whatever function we encode. This will lead to a strong increase in the gap between what we can optimise for and what we want. (This is a classic goodharting scenario.) Concretely, we will gradually use ML to perform more and more key functions in society, but will largely not understand how these systems work or what exactly they're doing. The information we can gather will seem strongly positive: GDP will be rising quickly, crime will be down, life-satisfaction ratings will be up, congress's approval will be up, and so on. However, the underlying reality will increasingly diverge from what we think these metrics are measuring, and we may no longer have the ability to independently figure this out.
In fact, new things won't be built, crime will continue, people's lives will be miserable, and congress will not be effective at improving governance; we'll believe things are going well because the ML systems will be improving our metrics, and we'll have a hard time understanding what's going on outside of what they report. Gradually, our civilization will lose its ability to understand what is happening in the world as our systems and infrastructure show us success on all of our available metrics (GDP, wealth, crime, health, self-reported happiness...
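The Goodhart dynamic described in this excerpt can be illustrated with a toy example (an editor's sketch, not from the post; the functions below are arbitrary stand-ins for a measurable proxy and the thing we actually care about):

```python
def true_value(x: float) -> float:
    """What we actually want. It eventually declines as the proxy is pushed too hard."""
    return x - 0.05 * x ** 2

def proxy_metric(x: float) -> float:
    """The objective the optimizer is actually given."""
    return x

for optimization_power in (1, 5, 10, 20, 40):
    x = optimization_power  # a stronger optimizer pushes the proxy higher
    print(f"power {optimization_power:>2}: proxy = {proxy_metric(x):5.1f}, true value = {true_value(x):6.1f}")

# Weak optimization improves both; strong optimization keeps raising the proxy (our metrics)
# while the true value falls, so everything we measure looks great as reality diverges.
```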

VSGA's Golf in the Commonwealth Podcast
Belmont Golf Course Grand Re-Opening

VSGA's Golf in the Commonwealth Podcast

Play Episode Listen Later May 27, 2021 16:11


This week we're sharing what is sort of behind-the-scenes footage from the grand re-opening of Belmont Golf Course in Henrico under the management of the First Tee of Greater Richmond. We'll use these interviews for a video which you’ll be able to find on our YouTube channel, but in the meantime we wanted to share them here in their entirety. The event took place on May 24 and included speeches from the First Tee’s Board Chair, Ben Pace, and comments from county officials recognizing the journey the course took from hosting the 1949 PGA Championship won by Virginia’s own Sam Snead to a point of disrepair and near closure. Guests heard from the First Tee’s CEO Brent Schneider and then Davis Love III, who spoke on Love Golf Design’s work with the course before Davis coached a group of First Tee participants down the first hole of the short course, where one young lady almost made a hole-in-one, another chipped in for birdie and others made great putts to cap off a special event. After all that was said and done, we had the opportunity to interview Brent Schneider and get his take on the project and celebration, as well as Scot Sherman, the lead architect for Love Golf Design who was frequently on-site working on the renovation while remaining mindful of the A.W. Tillinghast legacy, and finally Davis Love III, who talked about the creation of the First Tee from his time on the PGA Tour Board of Directors through today, where his company had the opportunity to create an innovative and accessible golf course which is under the management of the First Tee.