Podcasts about applied rationality

The Center for Applied Rationality is a nonprofit organization based in Berkeley, California.

  • 35 podcasts
  • 58 episodes
  • 41m average episode duration
  • Infrequent episodes
  • Latest episode: Sep 18, 2024
[Popularity chart: "applied rationality" episodes per year, 2017–2024]


Latest podcast episodes about applied rationality

SaaS Expert Voices presented by Maxio
Diagnose Before You Deploy: Smarter Strategies for CEOs with Michelle Valentine

Sep 18, 2024 · 39:40

This week on the Expert Voices podcast, Randy Wootton, CEO of Maxio, welcomes back Michelle Valentine, Co-founder and CEO of Anrok and a prominent figure in the SaaS industry known for her innovative strategies and effective leadership. Randy and Michelle explore the "Seven Secrets of Success," focusing on delivering results, building winning strategies, and shaping company values. Michelle shares her unique experiences and methodologies, including the concept of "Murphyjitsu," a strategic approach for anticipating and mitigating potential failures in business projects. She also shares how creating culture holds people accountable and fosters growth, elaborating on the power of intuition and making space for reflection.

Quotes
"People want to work on winning teams, and so results matter. Rallying the team and seeing what you're doing together is really important. Creating a culture that holds people accountable and stretches people for those goals is absolutely the number one thing that a CEO needs to keep their eye on." -Michelle Valentine [02:40]
"The best CEOs tend to be right a lot, and the only way to do that is to invest in building your intuition. In our last conversation, you and I talked about how to quantify how confident you are about something. This could be like, 'I have low epistemic confidence, I'm medium, I have high epistemic confidence.' Then you could even start prescribing a percentage on that. It really is so subjective." -Michelle Valentine [26:57]

Expert Takeaways
  • Outcome-Driven Culture: Driving results and creating a culture that holds people accountable are paramount for a CEO's success.
  • Murphyjitsu Framework: An approach borrowed from the Center for Applied Rationality, focused on anticipating and mitigating what could go wrong in strategic plans.
  • Shaping Values and Standards: Building a company's values and standards through a mix of aspirational and actual values, ensuring they evolve as the company grows.
  • Building Intuition: The path to being "right a lot" as a CEO involves honing one's intuition through curiosity, pattern matching, and continuous learning.
  • Effective Team Dynamics: The importance of transparent communication within executive teams, balancing one-on-one interactions with group problem-solving to enhance decision-making.

Timestamps
(00:05) Seven Secrets of Success for CEOs
(03:35) Applying Murphyjitsu for Effective Risk Management in Business
(12:58) Shaping Company Values and Standards for Long-Term Success
(21:58) Balancing One-on-Ones and Group Discussions for Effective Leadership
(32:51) The Importance of Reflection and Meditation for Problem Solving
(35:28) Diagnose Before Deploying Resources for Efficient Strategy and Operations

Links
  • Maxio
  • Upcoming Events
  • Maxio Institute Report
  • Randy Wootton LinkedIn
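
For readers unfamiliar with the technique: Murphyjitsu is CFAR's premortem-style exercise: vividly imagine that your plan has already failed, notice how surprised you feel, and patch the most plausible failure mode, repeating until failure would genuinely surprise you. The loop below is only a rough sketch of that procedure; the function name, prompts, and 0.9 threshold are invented for illustration, since the real exercise is a mental habit, not software.

def murphyjitsu(plan_steps, surprise_threshold=0.9):
    """Iterate on a plan until its failure would genuinely surprise you."""
    plan = list(plan_steps)
    while True:
        print("Current plan:")
        for step in plan:
            print(f"  - {step}")
        # Step 1: vividly imagine that the plan has already failed.
        surprise = float(input("How surprised would you be if it failed? (0-1): "))
        if surprise >= surprise_threshold:
            return plan  # failure now feels implausible enough to proceed
        # Step 2: name the failure mode your inner simulator produced.
        failure = input("Most plausible way it failed: ")
        # Step 3: patch the plan with a concrete safeguard, then re-check.
        plan.append(f"Safeguard: {failure}")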

LessWrong Curated Podcast
[HUMAN VOICE] "Attitudes about Applied Rationality" by Camille Berger

Feb 14, 2024 · 7:35

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated
Source: https://www.lesswrong.com/posts/5jdqtpT6StjKDKacw/attitudes-about-applied-rationality
Narrated for LessWrong by Perrin Walker. Share feedback on this narration.
[Curated Post] ✓

The Nonlinear Library
LW - Theories of Applied Rationality by Camille Berger

Feb 4, 2024 · 6:37

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Theories of Applied Rationality, published by Camille Berger on February 4, 2024 on LessWrong.

Tl;dr: within the LW community, there are many clusters of strategies for achieving rationality: doing basic exercises, using jargon, reading, taking part in workshops, privileging object-level activities, and several other opinions, like emphasizing feedback loops, difficult conversations, or altered states of consciousness.

Epistemic status: This is a vague model to help me understand other rationalists and why some of them keep doing things I think are wrong, or suggest I do things I think are wrong. This is not based on real data. I will update according to possible discussions in the comments. Please be critical.

Spending time in the rationalist community made me realize that several endeavors at reaching rationality exist, some of which conflict with others. This made me quite frustrated, as I had thought that my interpretation was the only one. The following list is an attempt at distinguishing the several approaches I've noticed. Of course, any rationalist will probably have elements of all theories at the same time. See each theory as the claim that a particular set of elements prevails above others. Believing in one theory usually goes along with being fairly suspicious of others. Finally, remember that these categories are an attempt to distinguish what people are doing, not a guide about which side you should pick (if the sides exist at all). I suspect that most people end up applying one theory for practical reasons, more than because they have deeply thought about it at all.

Basics Theory
Partakers of the Basics theory put a high emphasis on activities such as calibration, forecasting, lifehacks, and other fairly standard practices of epistemic and instrumental rationality. They don't see any real value in reading LessWrong extensively or going to workshops. They first and foremost believe in real-life, readily available practice. For them, spending too much time in the rationalist community, as opposed to doing simple exercises, is the main failure mode to avoid.

Speaking Theory
Partakers of the Speaking theory, although often relying on basics, usually put a high emphasis on using concepts encountered on LessWrong in daily parlance, though they do not necessarily insist on reading content on LessWrong. They may also insist on the importance of talking through disagreements fairly regularly, while relying heavily on LessWrong terms and references in order to shape their thinking more rationally. They disagree with the statement that jargon should be avoided. For them, keeping your language, thinking, writing, and discussion style the same as before encountering rationality is the main failure mode to avoid.

Reading Theory
Partakers of the Reading theory put a high emphasis on reading LessWrong, more often than not the "Canon", but some go further and insist on reading other materials as well, such as the books recommended on the CFAR website, rationalist blogs, or listening to a particular set of podcasts. They can be sympathetic or opposed to relying on LessWrong Speak, but don't consider it important. They can also be fairly familiar with the basics. For them, relying on LW Speak or engaging with the community while not mastering the relevant corpus is the main failure mode to avoid.

Workshop Theory
Partakers of the Workshop theory consider most efforts of the Reading and Speaking theories to be somewhat misleading. Since rationality is to be learned, it has to be deliberately practiced, if not ultralearned, and workshops such as CFAR's are an important piece of this endeavor. Importantly, they do not really insist on reading the Sequences. Faced with the question "Do I need to read X...

Nobody Told Me!
Julia Galef: ...some people see things clearly and others don't

Nov 8, 2023 · 30:58

Your mindset is the topic on this episode. Do you wish you had emotional skills, habits, and ways of looking at the world that served you better? Our guest, Julia Galef, says you can learn new ways of looking at the world, and you should! Julia is the co-founder of the Center for Applied Rationality, the host of the podcast "Rationally Speaking", and the author of the book "The Scout Mindset: Why Some People See Things Clearly and Others Don't". Her website is https://juliagalef.com/

Nobody Told Me!
Julia Galef: ...some people see things clearly and others don't

Nov 19, 2022 · 35:09

Your mindset is the topic on this episode. Do you wish you had emotional skills, habits, and ways of looking at the world that served you better? Our guest, Julia Galef, says you can learn new ways of looking at the world, and you should! Julia is the co-founder of the Center for Applied Rationality, the host of the podcast "Rationally Speaking", and the author of the book "The Scout Mindset: Why Some People See Things Clearly and Others Don't". Her website is https://juliagalef.com/

Note: This episode was previously aired.

The Nonlinear Library
LW - You are better at math (and alignment) than you think by Trevor1

Oct 13, 2022 · 35:33

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You are better at math (and alignment) than you think, published by Trevor1 on October 13, 2022 on LessWrong.

I was absolutely dazzled by the Viliam-Valentine math-education debate, which was in the comments section of the Seeking PCK chapter in the Center for Applied Rationality's Rationality Handbook. The debate gives an autopsy of why education systems inflict math on children during their most formative years, resulting in the vast majority of the population falsely believing that they cannot enjoy math. In reality, you can probably get very good at math and have a great time doing it too; and, in fact, you even have a very serious chance of becoming one of the 300 AI safety researchers on earth. Odds are also good that you have a big advantage in terms of "superior-at-actually-using-math-in-real-life" genes, which have a surprisingly weak correlation with the "inferior at learning math in a classroom at age 7" genes (such as being praise-motivated, obedient, and comfortable doing the same thing hundreds of times without asking why). I strongly recommend reading this debate yourself if you currently don't see yourself as quantitatively skilled or quantitatively employed, and also showing it to other people who might have had strong potential for quantitative skill all along. The phenomena described below seem to be the main reason why such a small proportion of people are willing to do the quantitative work necessary for technical AI alignment, and therefore they are a major alignment bottleneck worth tackling.

Viliam: I think that in Slovakia and Czechia, this style of teaching [PCK, aka paying attention to what it's like to learn something while you are teaching] is referred to as "constructivist education". On the other hand, for an English-speaking audience, the word "constructivism" seems to refer to quite different things (1, 2, 3). And when I try to explain concepts like this in English, I sometimes get surprising responses when people seem to automatically run along the chain of associations: "trying to understand the student's model" = "constructivism" = "you should never explain math" = "math wars" = "total failure". I tried to figure out how this could have happened, and my current best guess is that in the past some people in the USA promoted some really stupid and harmful ideas under the banner of "constructivism", which made people associate the word "constructivism" with those stupid ideas. Meanwhile, some of the original good ideas are still taught, but carefully, under different labels. (Longer version here.) So perhaps the people who want to learn how to teach well could find something useful in the writings of Piaget and Vygotsky. However, a Google search for "constructivism" might just return a list of horror stories. By the way, I would expect pedagogical content knowledge of STEM topics to be super rare, because it requires an intersection of being good at psychology and math. And, at least in my experience, psychologists are often quite math- and tech-phobic. On the other hand, people good at math often fail to empathize with beginners, and just keep writing complex equations, preferably without explanation of what the symbols mean.

Valentine (quoting Viliam): On the other hand, for an English-speaking audience, the word "constructivism" seems to refer to quite different things (1, 2, 3). And when I try to explain concepts like this in English, I sometimes get surprising responses when people seem to automatically run along the chain of associations: "trying to understand the student's model" = "constructivism" = "you should never explain math" = "math wars" = "total failure". IIRC, this was the result of trying to implement the good version of constructivism in the USA. It wasn't just that some people had bad ideas and called those "constructivism" to...

The Nonlinear Library
LW - Carrying the Torch: A Response to Anna Salamon by the Guild of the Rose by moridinamael

Jul 6, 2022 · 9:55

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Carrying the Torch: A Response to Anna Salamon by the Guild of the Rose, published by moridinamael on July 6, 2022 on LessWrong.

In a recent comment and followup post, Anna Salamon described some of the rocks upon which the Center for Applied Rationality has run aground, and invited anyone interested in the project of improving human rationality to pick up the torch. We wanted to take this opportunity to remind you that the Guild of the Rose is here to carry that torch onward. On a personal note, I figured I ought to also explicitly mention that the Guild is, in no small part, the result of many parties from all over the world reading and responding to the bat signal that I shined into the sky in 2018. In that post, an alien visitor with a rationality textbook exhorted us to dream a little bigger regarding the potential of the rationality movement. If that post spoke to you, then you ought to know it spoke to a lot of people, and, well, the Guild is what we're doing about it together. And I think we're doing a pretty good job!

Anna outlined a list of problems that she felt CFAR ran into, and I figured this would be a good place to describe how the Guild dealt with each of those problems. Wait a minute – dealt with those problems? Anna just posted a few weeks ago! When we started the Guild in 2020, we looked to CFAR as both an example to emulate and an example to differentiate ourselves from. We diagnosed many of the same problems that Anna describes in her post, though not necessarily in the same framing, and we designed our organization to avoid those problems. We are grateful to CFAR for having pioneered this path.

Problem 1: Differentiating effective interventions from unfalsifiable woo.
The Guild focuses on actions and habits, not psychotherapy. I think ~0% of what we teach can be called unfalsifiable woo. Even when we tread more esoteric ground (e.g. the decision theory course) we focus on the practical and implementable. To sketch a perhaps not-totally-generous metaphor, imagine there are two martial arts schools you're trying to choose between. One of these schools is esoteric, focuses on dogmatic cultivation of Warrior Spirit, and demands intensive meditation; this school promises a kind of transcendent self-mastery and hints that ki-blasts are not entirely off the table. The other school focuses on punching bags, footwork drills, takedowns, and sparring; this school promises that if you need to throw hands, you'll probably come out of it alive. I think the vast majority of Human Potential Movement-adjacent organizations look more like the first school. Meanwhile, boring-looking organizations like the Boy Scouts of America, which focus almost entirely on pragmatic practices like how to tie knots and start fires using sticks, probably succeed more at actually cultivating the "human potential" of their members. Thus, the Guild focuses on the pragmatic. Our workshops cover effective, "boring" interventions like better nutrition, using your speaking voice more effectively, improving your personal financial organization, emergency preparedness, and implementing a knowledge management system, among many others. There is a new workshop almost every week. Of course, we also teach what could be considered explicit rationality training; we have workshops focusing on epistemics and on practical decision theory. But it's our belief that one way to become exceptional is to simply not be below-average at anything important. It is sort of embarrassing to focus extremely hard on "having correct beliefs" while not having a basic survival kit in your car.

Problem 2: Instructors having impure motives, and mistakes amplifying when it comes to rewiring your brain.
We have put some policies in place to mitigate this kind of thing. We mitigate the risks of rewiring members' b...

The Nonlinear Library
LW - CFAR Handbook: Introduction by CFAR!Duncan

Jun 28, 2022 · 2:01

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CFAR Handbook: Introduction, published by CFAR!Duncan on June 28, 2022 on LessWrong. The Center for Applied Rationality is a Bay Area non-profit that, among other things, ran lots of workshops to offer people tools and techniques for solving problems and improving their thinking. Those workshops were accompanied by a reference handbook, which has been available as a PDF since 2020. The handbook hasn't been substantially updated since it was written in 2017, but it remains a fairly straightforward primer for a lot of core rationality content. The LW team, working with the handbook's author Duncan Sabien, have decided to republish it as a lightly-edited sequence, so that each section can be linked on its own. In the workshop context, the handbook was a supplement to lectures, activities, and conversations taking place between participants and staff. Care was taken to emphasize the fact that each tool or technique or perspective was only as good as it was effectively applied to one's problems, plans, and goals. The workshop was intentionally structured to cause participants to actually try things (including iterating on or developing their own versions of what they were being shown), rather than simply passively absorb content. Keep this in mind as you read—mere knowledge of how to exercise does not confer the benefits of exercise! Discussion is strongly encouraged, and disagreement and debate are explicitly welcomed. Many LWers (including the staff of CFAR itself) have been tinkering with these concepts for years, and will have developed new perspectives on them, or interesting objections to them, or thoughts about how they work or break in practice. What follows is a historical artifact—the rough state-of-the-art at the time the handbook was written, circa 2017. That's an excellent jumping-off point, especially for newcomers, but there's been a lot of scattered progress since then, and we hope some of it will make its way into the comments. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Decision Corner
Soldiers and Scouts with Julia Galef

Jun 27, 2022 · 40:07

In this episode of the podcast, Brooke chats with Julia Galef, co-founder of the Center for Applied Rationality and host of the podcast 'Rationally Speaking'. They discuss the topic of Julia's book, 'The Scout Mindset', which looks at the underlying motivations that guide our beliefs and behaviors. Some of the things covered include:
- Scout versus soldier mindset: how they differ and why we rely on both, depending on the situation.
- The downsides of soldier mindset, and why our tendency to defend our beliefs no matter what can get us into trouble.
- The benefits of adopting an evidence-based mindset and being open to things that challenge our beliefs, aka 'drawing the map in pencil'.
- Practical ways we can embrace a scout mindset in our personal and professional lives.

Reach Truth Podcast
Kenshō and Game Theory with Michael Smith

Jun 15, 2022 · 110:26

Tasshin talks with Michael Smith (@Morphenius) about Immortalism, Forteanism, founding the Center for Applied Rationality, cult leaders, kenshō, and more...

Links: Michael Smith on Twitter · MAGE · Michael's YouTube Channel · Michael's Facebook · Michael's Instagram

If you enjoyed this episode, consider supporting Tasshin and the Reach Truth Podcast on Patreon.

The Nonlinear Library
LW - What is Going On With CFAR? by niplav

May 28, 2022 · 1:39

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is Going On With CFAR?, published by niplav on May 28, 2022 on LessWrong.

Whispers have been going around on the internet. People have been talking, using words like "defunct" or "inactive" (not yet "dead"). The last update to the website was December 2020 (the copyright notice on the website states "© Copyright 2011-2021 Center for Applied Rationality. All rights reserved."), and the last large-scale public communication was at the end of 2019 (that I know of).

If CFAR is now "defunct", it might be useful for the rest of the world to know, because the problem of making humans and groups more rational hasn't disappeared, and some people might want to pick up the challenge (and perhaps talk to people who were involved in order to rescue some of the conclusions and insights). Additionally, it would be interesting to hear why the endeavour was abandoned in the end, to avoid going on wild goose chases oneself (or, in the very boring case, to discover that they ran out of funding, though that appears unlikely to me).

If CFAR isn't "defunct", I can see a few possibilities:
- It's working on some super-secret projects, perhaps in conjunction with MIRI (which sounds reasonable enough, but there's still value left on the table in distributing rationality training and raising civilizational sanity).
- They are going about their regular business, but the social network they operate in is large enough that they don't need to advertise on their website (I think this is unlikely; it contradicts most of the evidence in the comments linked above).

So, what is going on?

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - Good Heart Week: Extending the Experiment by Ben Pace

Apr 2, 2022 · 4:10

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good Heart Week: Extending the Experiment, published by Ben Pace on April 2, 2022 on LessWrong.

Yesterday we launched Good Heart Tokens, and said they could be exchanged for 1 USD each. Today I'm here to tell you: this is actually happening, and it will last a week. You will get a payout if you give us a PayPal/ETH address or name a charity of your choosing. Note that voting rings and fundraising are now out of scope; we will be removing and banning users who do that kind of thing starting now. More on this at the end of the post. Also, we're tentatively changing posts to be worth 4x the Good Heart Tokens of comments.

Why is this experiment continuing? Let me state the obvious: if this new system were to last for many months or years, I expect these financial rewards would change the site culture for the worse. It would select on pretty different motives for being here, and importantly select on different people who are doing the voting, and then the game would be up. (Also, I would spend a lot of my life catching people explicitly trying to game the system.) However, while granting this, I suspect that in the short run, giving LessWrong members and lurkers a stronger incentive than usual to write well-received stuff has the potential to be great for the site.

For instance, I think the effect yesterday on site regulars was pretty good. I'll quote AprilSR, who said: "I am not very good at directing my monkey brain, so it helped a lot that my System 1 really anticipated getting money from spending time on LessWrong today. ...There's probably better systems than 'literally give out $1/karma' but it's surprisingly effective at motivating me in particular in ways that other things which have been tried very much aren't."

I think lots of people wrote good stuff, much more than on a normal day. Personally, my favorite thing that happened due to this yesterday was when people published a bunch of their drafts that had been sitting around, some of which I thought were excellent. I hope this will be a kick for many people to actually sit down and write that post they've had in their heads for a while. (I certainly don't think money will be a motivator for all people, but I suspect it is true for enough that it will be worth it for us, given the Lightcone Infrastructure team's value of money.) I'm really interested to find out what happens over a week, I have a hope it will be pretty good, and the Lightcone Infrastructure team has the resources that make the price worth it to us. So I invite you into this experiment with us :)

Info and Rules
Here's the basic info and rules:
  • Date: Good Heart Tokens will continue to be accrued until EOD Thursday, April 7th (Pacific Time). I do not expect to extend it beyond then.
  • Scope: We are no longer continuing with "fun" uses of the karma system. Voting rings, fundraising posts, etc., are no longer within scope. Things like John Wentworth's and Aphyer's voting ring, and G Gordon Worley III's Donation Lottery, were playful and fine uses of the system on April 1st, but from now on I'd like to ask these to stop.
  • Moderation: We'll bring mod powers against accounts that are abusing the system. We'll also do a pass over the votes at the end of the week to check for any suspicious behavior (while aiming to minimize any deanonymization).
  • Eligibility: LW mods and employees of the Center for Applied Rationality are not eligible for prizes.
  • Votes: Reminder that only votes from pre-existing accounts are turned into Good Heart Tokens. (But new accounts can still earn tokens!) And of course, self-votes are not counted.
  • Cap Change: We're lifting the 600-token cap to 1000. (If people start getting to 1000, we will consider raising it further, but no promises.)
  • Weight Change: We're tentatively changing it so that votes on posts are now worth 4x votes on comments. (This will...
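
Taken together, the rules above reduce to a simple payout formula: tokens equal four times post karma plus comment karma, capped at 1,000, with each token worth one US dollar. A minimal sketch of that arithmetic, assuming one token per point of comment karma (the post doesn't spell out LessWrong's exact aggregation, so the names and details below are illustrative):

TOKEN_TO_USD = 1.0    # "exchanged for 1 USD each"
TOKEN_CAP = 1000      # cap lifted from 600 to 1000
POST_MULTIPLIER = 4   # votes on posts now worth 4x votes on comments

def good_heart_payout(post_karma: int, comment_karma: int) -> float:
    """Convert a week's karma into a capped USD payout."""
    tokens = POST_MULTIPLIER * post_karma + comment_karma
    return min(tokens, TOKEN_CAP) * TOKEN_TO_USD

# Example: 180 post karma and 95 comment karma
# -> 4*180 + 95 = 815 tokens -> $815.00, under the 1000-token cap.
print(good_heart_payout(180, 95))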

The Nonlinear Library
LW - Good Heart Donation Lottery by G Gordon Worley III

Apr 1, 2022 · 2:02

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good Heart Donation Lottery, published by G Gordon Worley III on April 1, 2022 on LessWrong.

Greetings, all! Good Heart Tokens are clearly the mechanism we need to do more Good in the world. To that end, I'm starting a daily donation lottery. Here's how it works.

First, strong-upvote this post. We need to try to get it to at least $600 worth of upvotes to maximize the payout (though see Update 1; there's no limit). Next, give an answer to this "question" specifying the charity you'd like to see donated to. Then, upvote answers. This will generate a weighted distribution from which I will randomly pick the winner of the lottery. So, for example, if there's one answer for MIRI that gets 69 Good Heart Tokens and one answer for AMF that gets 420 Good Heart Tokens, then I'll do a random weighted drawing where MIRI has a weight of 69 and AMF has a weight of 420. The winner gets the pot of tokens generated by upvoting this post.

You get to do what you like with the tokens on your answer. I think in good conscience you should donate them to the winner (announced in an update to the post the next day); however, I have no way to enforce that. No duplicate answers for the same charity. If you accidentally post a duplicate, please delete it. If you don't, whenever I notice it I'll keep only the one with the earliest timestamp. Good luck!

Update 1: After posting this I quickly realized there's no $600 cap. From the payments page: "If you receive more than $600 in a year, you'll need to be entered into Center for Applied Rationality's payment system. CFAR will contact you via your LessWrong email address about next steps. (Make sure it's an email that you check regularly.)" So upvote as much as you can, the sky is the limit! To that end, I'll be posting some seed answers to vote on, since I'm not the bottleneck. All tokens earned on my answers will be added to the donation pool.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
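
The weighted drawing described in the post maps directly onto Python's random.choices. A minimal sketch using the post's own example numbers (the variable and function names are ours, not from the post):

import random

# Each charity's weight is the Good Heart Token total on its answer,
# using the example figures from the post.
answers = {"MIRI": 69, "AMF": 420}

def draw_winner(weights):
    """Pick one charity with probability proportional to its token weight."""
    charities = list(weights)
    return random.choices(charities, weights=[weights[c] for c in charities])[0]

# AMF wins with probability 420 / (69 + 420), roughly 86%.
print(draw_winner(answers))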

Think with Pinker
Being right

Feb 3, 2022 · 29:03

Why getting it right might mean admitting you're wrong. What if we were to replace intellectual combat with genuine discussion, and treat beliefs as hypotheses to be tested rather than treasures to be defended? In his guide to thinking better, Professor Steven Pinker is joined by:
  • Julia Galef of the Center for Applied Rationality and author of 'The Scout Mindset'
  • Daniel Willingham, professor of psychology at the University of Virginia and author of 'Cognition' and 'Raising Kids Who Read'
Producers: Imogen Walford and Joe Kent
Editor: Emma Rippon
Think with Pinker is produced in partnership with The Open University.

World of DaaS
Julia Galef: How to be Wrong Correctly

Jan 25, 2022 · 44:54 · Transcription available

Julia Galef, co-founder of the Center for Applied Rationality, host of the podcast Rationally Speaking, and author of The Scout Mindset, joins World of DaaS host Auren Hoffman. Auren and Julia explore building a scout mindset as defined in Julia's new book, why embracing being wrong is important, and tactical approaches to shifting your mindset. They also cover how entrepreneurs approach risk and how the scout mindset manifests in unique ways across different professions.

World of DaaS is brought to you by SafeGraph. For more episodes, visit safegraph.com/podcasts. You can find Auren Hoffman (CEO of SafeGraph) on Twitter at @auren.

The Nonlinear Library: LessWrong Top Posts
Preface by Eliezer Yudkowsky

Dec 12, 2021 · 5:34

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Preface, published by Eliezer Yudkowsky on LessWrong.

You hold in your hands a compilation of two years of daily blog posts. In retrospect, I look back on that project and see a large number of things I did completely wrong. I'm fine with that. Looking back and not seeing a huge number of things I did wrong would mean that neither my writing nor my understanding had improved since 2009. Oops is the sound we make when we improve our beliefs and strategies; so to look back at a time and not see anything you did wrong means that you haven't learned anything or changed your mind since then.

It was a mistake that I didn't write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples. In retrospect, this was the second-largest mistake in my approach. It ties in to the first-largest mistake in my writing, which was that I didn't realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn't realize that part was the priority; and regarding this I can only say "Oops" and "Duh." Yes, sometimes those big issues really are big and really are important; but that doesn't change the basic truth that to master skills you need to practice them, and it's harder to practice on things that are further away. (Today the Center for Applied Rationality is working on repairing this huge mistake of mine in a more systematic fashion.)

A third huge mistake I made was to focus too much on rational belief, too little on rational action.

The fourth-largest mistake I made was that I should have better organized the content I was presenting in the sequences. In particular, I should have created a wiki much earlier, and made it easier to read the posts in sequence. That mistake at least is correctable. In the present work Rob Bensinger has reordered the posts and reorganized them as much as he can without trying to rewrite all the actual material (though he's rewritten a bit of it).

My fifth huge mistake was that I—as I saw it—tried to speak plainly about the stupidity of what appeared to me to be stupid ideas. I did try to avoid the fallacy known as Bulverism, which is where you open your discussion by talking about how stupid people are for believing something; I would always discuss the issue first, and only afterwards say, "And so this is stupid." But in 2009 it was an open question in my mind whether it might be important to have some people around who expressed contempt for homeopathy. I thought, and still do think, that there is an unfortunate problem wherein treating ideas courteously is processed by many people on some level as "Nothing bad will happen to me if I say I believe this; I won't lose status if I say I believe in homeopathy," and that derisive laughter by comedians can help people wake up from the dream. Today I would write more courteously, I think. The discourtesy did serve a function, and I think there were people who were helped by reading it; but I now take more seriously the risk of building communities where the normal and expected reaction to low-status outsider views is open mockery and contempt.

Despite my mistake, I am happy to say that my readership has so far been amazingly good about not using my rhetoric as an excuse to bully or belittle others. (I want to single out Scott Alexander in particular here, who is a nicer person than I am and an increasingly amazing writer on these topics, and may deserve part of the credit for making the culture of Less Wrong a healthy one.)

To be able to look backwards and say that you've "failed" implies that you had goals. So what was it that I was trying to do? Th...

The Nonlinear Library: LessWrong Top Posts
Reality-Revealing and Reality-Masking Puzzles by AnnaSalamon

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 20:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reality-Revealing and Reality-Masking Puzzles, published by AnnaSalamon on LessWrong. Tl;dr: I'll try here to show how CFAR's “art of rationality” has evolved over time, and what has driven that evolution. In the course of this, I'll introduce the distinction between what I'll call “reality-revealing puzzles” and “reality-masking puzzles”—a distinction that I think is almost necessary for anyone attempting to develop a psychological art in ways that will help rather than harm. (And one I wish I'd had explicitly back when the Center for Applied Rationality was founded.) I'll also be trying to elaborate, here, on the notion we at CFAR have recently been tossing around about CFAR being an attempt to bridge between common sense and Singularity scenarios—an attempt to figure out how people can stay grounded in common sense and ordinary decency and humane values and so on, while also taking in (and planning actions within) the kind of universe we may actually be living in. Arts grow from puzzles. I like to look at mathematics, or music, or ungodly things like marketing, and ask: What puzzles were its creators tinkering with that led them to leave behind these structures? (Structures now being used by other people, for other reasons.) I picture arts like coral reefs. Coral polyps build shell-bits for their own reasons, but over time there accumulates a reef usable by others. Math built up like this—and math is now a powerful structure for building from. [Sales and Freud and modern marketing/self-help/sales etc. built up some patterns too—and our basic way of seeing each other and ourselves is now built partly in and from all these structures, for better and for worse.] So let's ask: What sort of reef is CFAR living within, and adding to? From what puzzles (what patterns of tinkering) has our “rationality” accumulated? Two kinds of puzzles: “reality-revealing” and “reality-masking” First, some background. Some puzzles invite a kind of tinkering that lets the world in and leaves you smarter. A kid whittling with a pocket knife is entangling her mind with bits of reality. So is a driver who notices something small about how pedestrians dart into streets, and adjusts accordingly. So also is the mathematician at her daily work. And so on. Other puzzles (or other contexts) invite a kind of tinkering that has the opposite effect. They invite a tinkering that gradually figures out how to mask parts of the world from your vision. For example, some months into my work as a math tutor I realized I'd been unconsciously learning how to cue my students into acting like my words made sense (even when they didn't). I'd learned to mask from my own senses the clues about what my students were and were not learning. We'll be referring to these puzzle-types a lot, so it'll help to have a term for them. I'll call these puzzles “good” or “reality-revealing” puzzles, and “bad” or “reality-masking” puzzles, respectively. Both puzzle-types appear abundantly in most folks' lives, often mixed together. 
The same kid with the pocket knife who is busy entangling her mind with data about bark and woodchips and fine motor patterns (from the “good” puzzle of “how can I whittle this stick”), may simultaneously be busy tinkering with the “bad” puzzle of “how can I not-notice when my creations fall short of my hopes.” (Even “good” puzzles can cause skill loss: a person who studies Dvorak may lose some of their QWERTY skill, and someone who adapts to the unselfconscious arguing of the math department may do worse for a while in contexts requiring tact. The distinction is that “good” puzzles do this only incidentally. Good puzzles do not invite a search for configurations that mask bits of reality. Whereas with me and my math tutees, say, there was a direct reward/conditioning response that happe...

The Nonlinear Library: LessWrong Top Posts
Checklist of Rationality Habits by AnnaSalamon

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 13:49


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Checklist of Rationality Habits, published by AnnaSalamon on LessWrong. As you may know, the Center for Applied Rationality has run several workshops, each teaching content similar to that in the core sequences, but made more practical, and more into fine-grained habits. Below is the checklist of rationality habits we have been using in the minicamps' opening session. It was co-written by Eliezer, myself, and a number of others at CFAR. As mentioned below, the goal is not to assess how "rational" you are, but, rather, to develop a personal shopping list of habits to consider developing. We generated it by asking ourselves, not what rationality content it's useful to understand, but what rationality-related actions (or thinking habits) it's useful to actually do. I hope you find it useful; I certainly have. Comments and suggestions are most welcome; it remains a work in progress. (It's also available as a pdf.) This checklist is meant for your personal use so you can have a wish-list of rationality habits, and so that you can see if you're acquiring good habits over the next year—it's not meant to be a way to get a 'how rational are you?' score, but, rather, a way to notice specific habits you might want to develop. For each item, you might ask yourself: did you last use this habit... Never Today/yesterday Last week Last month Last year Before the last year Reacting to evidence / surprises / arguments you haven't heard before; flagging beliefs for examination. When I see something odd - something that doesn't fit with what I'd ordinarily expect, given my other beliefs - I successfully notice, promote it to conscious attention and think "I notice that I am confused" or some equivalent thereof. (Example: You think that your flight is scheduled to depart on Thursday. On Tuesday, you get an email from Travelocity advising you to prepare for your flight “tomorrow”, which seems wrong. Do you successfully raise this anomaly to the level of conscious attention?) (Based on the experience of an actual LWer who failed to notice confusion at this point and missed their plane flight.) When somebody says something that isn't quite clear enough for me to visualize, I notice this and ask for examples. (Recent example from Eliezer: A mathematics student said they were studying "stacks". I asked for an example of a stack. They said that the integers could form a stack. I asked for an example of something that was not a stack.) (Recent example from Anna: Cat said that her boyfriend was very competitive. I asked her for an example of "very competitive." She said that when he's driving and the person next to him revs their engine, he must be the one to leave the intersection first—and when he's the passenger he gets mad at the driver when they don't react similarly.) I notice when my mind is arguing for a side (instead of evaluating which side to choose), and flag this as an error mode. (Recent example from Anna: Noticed myself explaining to myself why outsourcing my clothes shopping does make sense, rather than evaluating whether to do it.) I notice my mind flinching away from a thought; and when I notice, I flag that area as requiring more deliberate exploration. (Recent example from Anna: I have a failure mode where, when I feel socially uncomfortable, I try to make others feel mistaken so that I will feel less vulnerable. 
Pulling this thought into words required repeated conscious effort, as my mind kept wanting to just drop the subject.) I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn't really been asking for money at previous ones. My brain was offering resistance, so I applied the "bad news is good news" pattern to rephrase this as, ...

The Nonlinear Library: LessWrong Top Posts
Three ways CFAR has changed my view of rationality by Julia_Galef

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 8:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Three ways CFAR has changed my view of rationality, published by Julia_Galef on LessWrong. The Center for Applied Rationality's perspective on rationality is quite similar to Less Wrong's. In particular, we share many of Less Wrong's differences from what's sometimes called "traditional" rationality, such as Less Wrong's inclusion of Bayesian probability theory and the science on heuristics and biases. But after spending the last year and a half with CFAR as we've developed, tested, and attempted to teach hundreds of different versions of rationality techniques, I've noticed that my picture of what rationality looks like has shifted somewhat from what I perceive to be the most common picture of rationality on Less Wrong. Here are three ways I think CFAR has come to see the landscape of rationality differently than Less Wrong typically does – not disagreements per se, but differences in focus or approach. (Disclaimer: I'm not speaking for the rest of CFAR here; these are my own impressions.) 1. We think less in terms of epistemic versus instrumental rationality. Formally, the methods of normative epistemic versus instrumental rationality are distinct: Bayesian inference and expected utility maximization. But methods like "use Bayes' Theorem" or "maximize expected utility" are usually too abstract and high-level to be helpful for a human being trying to take manageable steps towards improving her rationality. And when you zoom in from that high-level description of rationality down to the more concrete level of "What five-second mental habits should I be training?" the distinction between epistemic and instrumental rationality becomes less helpful. Here's an analogy: epistemic rationality is like physics, where the goal is to figure out what's true about the world, and instrumental rationality is like engineering, where the goal is to accomplish something you want as efficiently and effectively as possible. You need physics to do engineering; or I suppose you could say that doing engineering is doing physics, but with a practical goal. However, there's plenty of physics that's done for its own sake, and doesn't have obvious practical applications, at least not yet. (String theory, for example.) Similarly, you need a fair amount of epistemic rationality in order to be instrumentally rational, though there are parts of epistemic rationality that many of us practice for their own sake, and not as a means to an end. (For example, I appreciate clarifying my thinking about free will even though I don't expect it to change any of my behavior.) In this analogy, many skills we focus on at CFAR are akin to essential math, like linear algebra or differential equations, which compose the fabric of both physics and engineering. It would be foolish to expect someone who wasn't comfortable with math to successfully calculate a planet's trajectory or design a bridge. And it would be similarly foolish to expect you to successfully update like a Bayesian or maximize your utility if you lacked certain underlying skills. Like, for instance: Noticing your emotional reactions, and being able to shift them if it would be useful. Doing thought experiments. Noticing and overcoming learned helplessness. Visualizing in concrete detail. Preventing yourself from flinching away from a thought. Rewarding yourself for mental habits you want to reinforce. 
These and other building blocks of rationality are essential both for reaching truer beliefs, and for getting what you value; they don't fall cleanly into either an "epistemic" or an "instrumental" category. Which is why, when I consider what pieces of rationality CFAR should be developing, I've been thinking less in terms of "How can we be more epistemically rational?" or "How can we be more instrumentally rational?" and instead using queries li...

The Nonlinear Library: EA Forum Top Posts
Introducing Training for Good (TFG) by Cillian Crosson, Jan-WillemvanPutten, SteveThompson

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 12, 2021 20:26


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Training for Good (TFG), published by Cillian Crosson, Jan-WillemvanPutten, SteveThompson on the Effective Altruism Forum. We are excited to announce the launch of a new effective altruism training organisation, Training for Good (TFG). trainingforgood.com TFG aims to upskill people to tackle the most pressing global problems. TFG will identify the most critical skill gaps in the EA movement and address them through training. TFG was incubated by Charity Entrepreneurship in 2021. This post introduces TFG, provides an overview of the problems we seek to tackle and presents our immediate plans for addressing them. The following is structured into: Overview of TFG TFG's short term plans Decisions and underlying assumptions How you can help Ask us anything We thank Brian Tan, Charles He, Devon Fritz, Isaac Dunn, James Ozden, and Sam Hilton for their invaluable feedback on this announcement. All errors and shortcomings are our own. Overview of TFG Why training? Track record Some EA organisations have experienced moderate success running training programmes and online courses. Animal Advocacy Careers ran a ~9 week online course, teaching core content about effective animal advocacy, effective altruism, and impact-focused career strategy. They recently published the results of two longitudinal studies they ran comparing and testing the cost-effectiveness of this online course and their one-to-one advising calls. Their results weakly suggested that while one-to-one calls are slightly more effective per participant, online courses are a slightly more cost-effective service. Charity Entrepreneurship's two-month incubation programme aims to equip participants with the skills needed to found an effective non-profit. Through this programme, they have helped launch 16 effective organisations to date. The Centre for Effective Altruism uses online courses as a high fidelity method of spreading EA ideas and growing the movement. They run an Introductory EA Programme which introduces the core ideas of effective altruism through 1-hour discussions over the course of eight weeks. Other programmes offered by Peter Singer, the Center for Applied Rationality, the Good Food Institute, and 80,000 Hours have also proved popular, suggesting that there is further demand for such courses. Movement demographics Movement demographics suggest that EAs are a promising audience for training. 80% are aged under 35 and a large proportion are still deciding what career to pursue or building up career capital. Over 50% of EAs also place career capital as a focus above direct impact. These demographic factors suggest a strong interest in gaining skills and participating in training programmes. Cause neutral and flexible Training is a cause neutral intervention. Cross-cutting programmes can be run which benefit several cause areas simultaneously or multiple targeted programmes can be run for different cause areas. Flexibility is particularly important when we consider that EA is a relatively young movement and that there may be cause areas which deserve our attention that we are currently neglecting. If information arises to suggest that we should switch our attention to another cause area (even temporarily), TFG could easily do so. 
Moreover, we believe that such organisational flexibility could help enable movement flexibility, as it creates the space for intellectual exploration to take place. Comparative advantage Our co-founding team has relevant expertise in designing and delivering training programmes. In particular, Steve has extensive experience in both the design and facilitation of large scale training and development programmes. He has spent over ten years in the corporate sector training and coaching across multinational firms. Cillian and Jan-Willem also have experience fac...

The Nonlinear Library: EA Forum Top Posts
EA Leaders Forum: Survey on EA priorities (data and analysis) by Aaron Gertler

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 27:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Leaders Forum: Survey on EA priorities (data and analysis), published by Aaron Gertler on the Effective Altruism Forum. Thanks to Alexander Gordon-Brown, Amy Labenz, Ben Todd, Jenna Peters, Joan Gass, Julia Wise, Rob Wiblin, Sky Mayhew, and Will MacAskill for assisting in various parts of this project, from finalizing survey questions to providing feedback on the final post. Clarification on pronouns: “We” refers to the group of people who worked on the survey and helped with the writeup. “I” refers to me; I use it to note some specific decisions I made about presenting the data and my observations from attending the event. This post is the second in a series of posts where we aim to share summaries of the feedback we have received about our own work and about the effective altruism community more generally. The first can be found here. Overview Each year, the EA Leaders Forum, organized by CEA, brings together executives, researchers, and other experienced staffers from a variety of EA-aligned organizations. At the event, they share ideas and discuss the present state (and possible futures) of effective altruism. This year (during a date range centered around ~1 July), invitees were asked to complete a “Priorities for Effective Altruism” survey, compiled by CEA and 80,000 Hours, which covered the following broad topics: The resources and talents most needed by the community How EA's resources should be allocated between different cause areas Bottlenecks on the community's progress and impact Problems the community is facing, and mistakes we could be making now This post is a summary of the survey's findings (N = 33; 56 people received the survey). Here's a list of organizations respondents worked for, with the number of respondents from each organization in parentheses. Respondents included both leadership and other staff (an organization appearing on this list doesn't mean that the org's leader responded). 80,000 Hours (3) Animal Charity Evaluators (1) Center for Applied Rationality (1) Centre for Effective Altruism (3) Centre for the Study of Existential Risk (1) DeepMind (1) Effective Altruism Foundation (2) Effective Giving (1) Future of Humanity Institute (4) Global Priorities Institute (2) Good Food Institute (1) Machine Intelligence Research Institute (1) Open Philanthropy Project (6) Three respondents work at organizations small enough that naming the organizations would be likely to de-anonymize the respondents. Three respondents don't work at an EA-aligned organization, but are large donors and/or advisors to one or more such organizations. What this data does and does not represent This is a snapshot of some views held by a small group of people (albeit people with broad networks and a lot of experience with EA) as of July 2019. We're sharing it as a conversation-starter, and because we felt that some people might be interested in seeing the data. These results shouldn't be taken as an authoritative or consensus view of effective altruism as a whole. They don't represent everyone in EA, or even every leader of an EA organization. If you're interested in seeing data that comes closer to this kind of representativeness, consider the 2018 EA Survey Series, which compiles responses from thousands of people. 
Talent Needs What types of talent do you currently think [your organization // EA as a whole] will need more of over the next 5 years? (Pick up to 6) This question was the same as a question asked to Leaders Forum participants in 2018 (see 80,000 Hours' summary of the 2018 Talent Gaps survey for more). Here's a graph showing how the most common responses from 2019 compare to the same categories in the 2018 talent needs survey from 80,000 Hours, for EA as a whole: And for the respondent's organization: The following table contains data on every category ...

The Ezra Klein Show
Predicting the Future Is Possible. ‘Superforecasters' Know How.

The Ezra Klein Show

Play Episode Listen Later Dec 3, 2021 52:51


Can we predict the future more accurately? It's a question we humans have grappled with since the dawn of civilization — one that has massive implications for how we run our organizations, how we make policy decisions, and how we live our everyday lives. It's also the question that Philip Tetlock, a psychologist at the University of Pennsylvania and a co-author of “Superforecasting: The Art and Science of Prediction,” has dedicated his career to answering. In 2011, he recruited and trained a team of ordinary citizens to compete in a forecasting tournament sponsored by the U.S. intelligence community. Participants were asked to place numerical probabilities from 0 to 100 percent on questions like “Will North Korea launch a new multistage missile in the next year?” and “Is Greece going to leave the eurozone in the next six months?” Tetlock's group of amateur forecasters would go head-to-head against teams of academics as well as career intelligence analysts, including those from the C.I.A., who had access to classified information that Tetlock's team didn't have. The results were shocking, even to Tetlock. His team won the competition by such a large margin that the government agency funding the competition decided to kick everyone else out, and just study Tetlock's forecasters — the best of whom were dubbed “superforecasters” — to see what intelligence experts might learn from them. So this conversation is about why some people, like Tetlock's “superforecasters,” are so much better at predicting the future than everyone else — and about the intellectual virtues, habits of mind, and ways of thinking that the rest of us can learn to become better forecasters ourselves. It also explores Tetlock's famous finding that the average expert is roughly as accurate as “a dart-throwing chimpanzee” at predicting future events, the inverse correlation between a person's fame and their ability to make accurate predictions, how superforecasters approach real-life questions like whether robots will replace white-collar workers, why government bureaucracies are often resistant to adopting the tools of superforecasting, and more. Mentioned: Expert Political Judgment by Philip E. Tetlock; “What do forecasting rationales reveal about thinking patterns of top geopolitical forecasters?” by Christopher W. Karvetski et al. Book recommendations: Thinking, Fast and Slow by Daniel Kahneman; Enlightenment Now by Steven Pinker; Perception and Misperception in International Politics by Robert Jervis. This episode is guest-hosted by Julia Galef, a co-founder of the Center for Applied Rationality, host of the “Rationally Speaking” podcast and author of “The Scout Mindset: Why Some People See Things Clearly and Others Don't.” You can follow her on Twitter @JuliaGalef. (Learn more about the other guest hosts during Ezra's parental leave here.) Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of "The Ezra Klein Show" at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs. “The Ezra Klein Show” is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Special thanks to Kristin Lin and Alison Bruzek.

Capital Ideas Investing Podcast
Sharpen your critical thinking with The Scout Mindset

Capital Ideas Investing Podcast

Play Episode Listen Later Sep 10, 2021 39:34


How we can get better at thinking critically and objectively is the far-reaching topic of this discussion. Author Julia Galef discusses the findings of her new book, The Scout Mindset: Why Some People See Things Clearly and Others Don't, including why many of us cling to our beliefs without the benefit of good reasoning. Host Matt Miller helps bring the importance of adopting the “scout mindset” front and center to the world of investing. Julia Galef hosts the podcast Rationally Speaking and is the co-founder of the Center for Applied Rationality. For industry-leading insights, support tools and more, subscribe to Capital Ideas at getcapitalideas.com. The Capital Ideas website is not intended for use outside the U.S. In Canada visit capitalgroup.com/ca for Capital Group insights.

The Jordan Harbinger Show
536: Julia Galef | Why Some People See Things Clearly and Others Don't

The Jordan Harbinger Show

Play Episode Listen Later Jul 20, 2021 75:22


Julia Galef (@juliagalef) is the host of the Rationally Speaking podcast, co-founder of The Center for Applied Rationality, and author of The Scout Mindset: Why Some People See Things Clearly and Others Don't. What We Discuss with Julia Galef: How to spot bad arguments and faulty thinking -- even when the source is you. The difference between having a soldier mindset that defends whatever you want to be true, and a scout mindset that's motivated to seek out the truth regardless of how unpleasant it might be (and which you should try to cultivate). How to tell if you're making reasonable mistakes or foolhardy leaps of faith that carry consequences far outweighing the value of the lesson. The best ways to manage and respond to uncertainty. How your brain matches arguments you misunderstand with ones you've already decided you don't agree with -- and what to do about it. And much more... Full show notes and resources can be found here: jordanharbinger.com/536 Sign up for Six-Minute Networking -- our free networking and relationship development mini course -- at jordanharbinger.com/course! Like this show? Please leave us a review here -- even one sentence helps! Consider including your Twitter handle so we can thank you personally!

POD OF JAKE
#67 - JULIA GALEF

POD OF JAKE

Play Episode Listen Later Jun 17, 2021 61:16


Julia is an intellectual leader in the rationalist community and the author of the book, The Scout Mindset. She previously co-founded the Center for Applied Rationality, a nonprofit organization devoted to helping people improve their reasoning and decision-making. Julia has hosted the Rationally Speaking Podcast since 2010. Visit her website at juliagalef.com and follow Julia on Twitter @juliagalef. [0:59] - How Julia came to study rationality and decision making [5:04] - Breaking down rationality and decision making [11:09] - Epistemic vs. instrumental rationality [15:39] - The decade-long history of the Rationally Speaking Podcast [21:42] - Soldier Mindset vs. Scout Mindset [29:46] - Seeking truth over comfort [39:53] - Vitalik Buterin's intellectual honesty [47:30] - Recognizing biases and changing your mind [54:17] - Keeping a small identity --- homeofjake.com

The Learning Leader Show With Ryan Hawk
423: Julia Galef - Why Some People See Things Clearly & Others Don't

The Learning Leader Show With Ryan Hawk

Play Episode Listen Later Jun 13, 2021 58:25


Text LEARNERS to 44222 for more... Full show notes at www.LearningLeader.com Twitter/IG: @RyanHawk12  https://twitter.com/RyanHawk12 Julia Galef is co-founder of the Center for Applied Rationality. She is the author of The Scout Mindset: Why Some People See Things Clearly and Others Don't. Notes: What is the scout mindset? “The motivation to see things as they are, not as you wish them to be.” The Scout Mindset allows you to recognize when you were wrong, to seek out your blind spots, to test your assumptions and change course. It's what prompts you to honestly ask yourself questions like “Was I at fault in that argument?” or “Is this risk really worth it?” As the physicist Richard Feynman said: “The first rule is that you must not fool yourself – and you are the easiest person to fool.” The three prongs: Realize that truth isn't in conflict with your other goals Learn tools that make it easier to see clearly Appreciate the emotional rewards of scout mindset She closes her TED talk with this quote from Antoine de Saint-Exupery: "If you want to build a ship, don't drum up the men to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea." "The biggest bottleneck is not knowledge. It's motivation. You need to cultivate the motivation to see things clearly." “Julia Galef is an intellectual leader of the rationalist community, and in The Scout Mindset you will find an engaging, clearly written distillation of her very important accumulated wisdom on these topics.” -- Tyler Cowen We should assume that we are wrong. We need to build the skill to change our mind. "Our goal should be to be less wrong over time." How do you work on this? The key principle is the way you think about being wrong. "Don't accept the premise that being wrong means you screwed up." Jeff Bezos left his job on Wall Street to start Amazon and acknowledged the uncertainty. He estimated that his idea had about a 30% chance to work. The Scout versus Soldier mindset: A lot of times, humans are in a soldier mindset - "Belief was strong, unshakeable, opposed argument. A soldier is having to defend." Scout mindset - survey and see what's true. Form an accurate map. Practical application: Be cognizant of how you seek out and respond to criticism. Don't ask leading questions. Recognize the tendency to describe the conflict accurately. Also... Not all arguments are worth having. Show signals of good faith. Distinguish between two kinds of confidence: Social - Poised, charismatic, relaxed body language, be worth listening to Epistemic - How much certainty you have in your views Persuade while still expressing uncertainty: "I think there's a 70% chance this won't work." Lyndon Johnson - Need to understand why someone wouldn't agree with you... We are all the sum of our experiences... Approach people, places, and things with curiosity Life/Career advice: You're creating a brand - Be conscious of the type of people you're attracting. Work to attract those that make you a better version of yourself. Make the choice to attract people who like intellectual honesty like Vitalik Buterin (founder of Ethereum)

Modern Wisdom
#332 - Julia Galef - Learn To Improve Your Decision Making

Modern Wisdom

Play Episode Listen Later Jun 10, 2021 56:56


Julia Galef is the co-founder of the Center for Applied Rationality, a podcaster and an author. Boris Johnson's former chief adviser Dominic Cummings said that tens of thousands of Covid deaths could have been prevented if the Government had read Julia's book. Why is it that he swears by Julia's rationalist manifesto? Expect to learn what most people get wrong about confidence, the difference between a soldier and a scout mindset, why attitude is more important than knowledge for effective judgement, how to avoid being self-deceptive, what the rationality movement has got most wrong and much more... Sponsors: Reclaim your fitness and book a Free Consultation Call with ActiveLifeRX at http://bit.ly/rxwisdom Get a 20% discount on the highest quality CBD Products from Pure Sport at https://puresportcbd.com/modernwisdom (use code: MW20) Extra Stuff: Buy The Scout Mindset - https://amzn.to/2RM1RNT Follow Julia on Twitter - https://twitter.com/juliagalef  Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Join the discussion with me and other like minded listeners in the episode comments on the MW YouTube Channel or message me... Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/ModernWisdomPodcast Email: modernwisdompodcast@gmail.com

Audiobook Reviews in Five Minutes
Review of The Scout Mindset: Why Some People See Things Clearly and Others Don't by Julia Galef

Audiobook Reviews in Five Minutes

Play Episode Listen Later May 11, 2021 6:42


Author Julia Galef is the co-founder of the Center for Applied Rationality and host of Rationally Speaking, the official podcast of New York City Skeptics. She defines “scout mindset” as the motivation to see things as they are, not as you wish they were – and to be intellectually honest and curious about what's actually true. Goodreads: https://www.goodreads.com/book/show/42041926-the-scout-mindset?from_search=true&from_srp=true&qid=nTcE5FUFRh&rank=1 Audio production by Graham Stephenson Episode music: Caprese by Blue Dot Sessions Rate, review, and subscribe to this podcast on Apple, Anchor, Breaker, Google, Overcast, Pocket Casts, RadioPublic, and Spotify

Conversations With Coleman
How to Think with Julia Galef [S2 Ep.13]

Conversations With Coleman

Play Episode Listen Later May 7, 2021 86:20


My guest today is Julia Galef. Julia Galef is an author and podcaster. She's the Co-founder of the Centre for Applied Rationality and the host of the podcast "Rationally Speaking". In this episode, we discuss her new book, "The Scout Mindset: Why Some People See Things Clearly and Others Don't". We talked about the difference between intelligence and open-mindedness, the tension between pursuing the truth dispassionately and belonging to a tribe, the notion of instrumental rationality, the trade-off between building a larger audience and remaining true to one's principles, and whether affiliating with a political party makes it harder to form true beliefs. #Ad We deserve to know what we're putting in our bodies and why, especially when it comes to something we take every day. Ritual's clean, vegan-friendly multivitamin is formulated with high-quality nutrients in bioavailable forms your body can actually use. What you won't find: sugars, GMOs, major allergens, synthetic fillers, and artificial colorants. Ritual is offering listeners of this podcast 10% off during the first three months. Visit ritual.com/Coleman to start your ritual today.

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
143 | Julia Galef on Openness, Bias, and Rationality

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

Play Episode Listen Later Apr 19, 2021 92:16


Mom, apple pie, and rationality — all things that are unquestionably good, right? But rationality, as much as we might value it, is easier to aspire to than to achieve. And there are more than a few hot takes on the market suggesting that we shouldn’t even want to be rational — that it’s inefficient or maladaptive. Julia Galef is here to both stand up for the value of being rational, and to explain how we can better achieve it. She distinguishes between the “soldier mindset,” where we believe what we’re told about the world and march toward a goal, and the “scout mindset,” where we’re open-minded about what’s out there and always asking questions. She makes a compelling case that all things considered, it’s better to be a scout. Support Mindscape on Patreon. Julia Galef received a BA in statistics from Columbia University. She is currently a writer and host of the Rationally Speaking podcast. She was a co-founder and president of the Center for Applied Rationality. Her new book is The Scout Mindset: Why Some People See Things Clearly and Others Don’t. Links: Web site | Rationally Speaking podcast | New York magazine profile | Wikipedia | Twitter

Nobody Told Me!
Julia Galef: ...why some people see things clearly and others don't

Nobody Told Me!

Play Episode Listen Later Apr 19, 2021 32:52


Have you ever paid much attention to your mindset? Do you wish you had emotional skills, habits and ways of looking at the world that served you better? Our guest on this episode, Julia Galef, says you can learn new ways of looking at the world and you should! Julia is the co-founder of the Center for Applied Rationality, the host of the podcast, Rationally Speaking, and the author of the new book, The Scout Mindset: Why Some People See Things Clearly and Others Don’t. ****** Thanks to our sponsor of this episode! --> AirMedCare: If you're ever in need of emergency medical transport, AirMedCare Network provides members with world class air transport services to the nearest appropriate hospital with no out of pocket expenses. Go to airmedcarenetwork.com/nobody and use offer code 'NOBODY' to sign up and choose up to a $50 eGift card with a new membership! Learn more about your ad choices. Visit megaphone.fm/adchoices

Sped up Rationally Speaking
Rationally Speaking #68 - Applied Rationality

Sped up Rationally Speaking

Play Episode Listen Later Dec 14, 2020 47:39


You've heard plenty about biases: the thinking errors the human brain tends to make. But is there anything we can do to make ourselves *less* biased? In this episode, Massimo and Julia discuss what psychological research has learned about "de-biasing," the challenges involved, and the de-biasing strategies Julia is implementing at her organization, the Center for Applied Rationality. Sped up the speakers by ['1.0', '1.09']

Clearer Thinking with Spencer Greenberg
Lines of Retreat and Incomplete Maps with Anna Salamon

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Oct 14, 2020 86:32


What does it mean to leave lines of retreat in social contexts? How can we make sense of the current state of the world? What happens when we run out of map? How does the book Elephant in the Brain apply to the above questions? Anna Salamon does work with the Center for Applied Rationality and the Machine Intelligence Research Institute. She studied math and great books in undergrad, and philosophy of science for a small bit of grad school before leaving to work on AI-related existential risk. Fav. books include: R:AZ; HPMOR; “Zen and the Art of Motorcycle Maintenance,” and “The Closing of the American Mind” (as an intro to the practice of reading books from other places and times, not to evaluate the books, but to gain alternate hypotheses about ourselves by asking how the authors might perceive us). She blogs a bit at lesswrong.com.

Thiel Talks
Peter Thiel's Keynote - Effective Altruism Summit 2013

Thiel Talks

Play Episode Listen Later Aug 22, 2020 55:02


In 2013, the effective altruism movement came together for a 7-day event in the San Francisco Bay Area. Organizations in attendance: Leverage Research, the Center for Applied Rationality, the High Impact Network, GiveWell, The Life You Can Save, 80,000 Hours, Giving What We Can, Effective Animal Altruism, and the Machine Intelligence Research Institute --- Thiel Talks is an audio archive of Peter Thiel's ideas. New audio every Saturday. Inquiries to peterthielaudio@gmail.com

Future Histories
S01E23 - Max F. J. Schnetker on transhumanist mythology

Future Histories

Play Episode Listen Later Mar 8, 2020 64:45


What is transhumanism, and which ideologies come bundled with it? From the wish for immortality through mind uploading to modern eugenics, this episode critically examines the hopes and fears at the root of transhumanism. Interesting & relevant links: The book "Transhumanistische Mythologien" at Unrast: https://www.unrast-verlag.de/neuerscheinungen/transhumanistische-mythologie-detail Wikipedia on transhumanism: https://de.wikipedia.org/wiki/Transhumanismus Tsveyfl (journal Max is involved with): https://tsveyfl.blogspot.com Wikipedia on utilitarianism: https://de.wikipedia.org/wiki/Utilitarismus Wikipedia on Jeremy Bentham: https://de.wikipedia.org/wiki/Jeremy_Bentham Nick Bostrom's homepage: https://www.nickbostrom.com/ Nick Bostrom's paper "Whole Brain Emulation: A Roadmap": https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf Future of Humanity Institute homepage: https://www.fhi.ox.ac.uk/ Machine Intelligence Research Institute homepage: https://intelligence.org/ Effective Altruism homepage: https://www.effectivealtruism.org/ Wikipedia on Ben Goertzel: https://en.wikipedia.org/wiki/Ben_Goertzel Ben Goertzel's homepage: https://goertzel.org/ Ray Kurzweil's homepage: https://www.kurzweilai.net/ Wikipedia on Ray Kurzweil: https://de.wikipedia.org/wiki/Raymond_Kurzweil Singularity University homepage: https://su.org/ "The Age of Spiritual Machines" by Ray Kurzweil: https://www.goodreads.com/book/show/83533.The_Age_of_Spiritual_Machines Center for Applied Rationality: https://www.rationality.org/ Less Wrong homepage: https://www.lesswrong.com/ Wikipedia on Bayesian statistics: https://de.wikipedia.org/wiki/Bayessche_Statistik Article on the transhumanism-adjacent adviser to Boris Johnson (eugenics): https://politicshome.com/news/uk/political-parties/conservative-party/news/109931/new-downing-street-adviser-called-universal Wikipedia on Dominic Cummings: https://de.wikipedia.org/wiki/Dominic_Cummings Wikipedia on Claus Peter Ortlieb: https://de.wikipedia.org/wiki/Claus_Peter_Ortlieb Wikipedia on Palantir: https://en.wikipedia.org/wiki/Palantir_Technologies "The technical singularity" by Vernor Vinge: https://frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html The book "To be a Machine" by Mark O'Connell: https://www.goodreads.com/book/show/30555486-to-be-a-machine Ep. S01E13 with Julia Grillmayr on "Transhumanism, Posthumanism & Compost": https://www.futurehistories.today/episoden-blog/s01e13-julia-grillmayr If you enjoy Future Histories, please consider supporting the show on Patreon: https://www.patreon.com/join/FutureHistories? Write to me at office@futurehistories.today and join the discussion on Twitter (#FutureHistories): https://twitter.com/FutureHpodcast or on Reddit: https://www.reddit.com/r/FutureHistories/  www.futurehistories.today

The Ezra Klein Show
ICYMI: Julia Galef

The Ezra Klein Show

Play Episode Listen Later Mar 14, 2019 94:27


For this episode of The Ezra Klein Show, we're digging into the archives to share another of our favorites with you! * At least in politics, this is an era of awful arguments. Arguments made in bad faith. Arguments in which no one, on either side, is willing to change their mind. Arguments where the points being made do not describe or influence the positions being held. Arguments that leave everyone dumber, angrier, sadder. Which is why I wanted to talk to Julia Galef this week. Julia is the host of the Rationally Speaking podcast, a co-founder of the Center for Applied Rationality, and the creator of the Update Project, which maps out arguments to make it easier for people to disagree clearly and productively. Her work focuses on how we think and argue, as well as the cognitive biases and traps that keep us from hearing what we're really saying, hearing what others are really saying, and preferring answers that make us feel good to answers that are true. I first met her at a Vox Conversation conference, where she ran a session helping people learn to change their minds, and it's struck me since then that more of us could probably use that training. In this episode, Julia and I talk about what she's learned about thinking more clearly and arguing better, as well as my concerns that the traditional paths toward a better discourse open up new traps of their own. (As you'll hear, I find it very easy to get lost in all the ways debate and cognition can go awry.) We talk about signaling, about motivated reasoning, about probabilistic debating, about which identities help us find truth, and about how to make online arguments less terrible. Enjoy! Recommended books: Language, Truth, and Logic by A.J. Ayer Seeing Like a State by James Scott The Robot's Rebellion by Keith Stanovich We are conducting an audience survey to better serve you. It takes no more than five minutes, and it really helps out the show. Please take our survey here: https://www.surveymonkey.com/r/3X6WMNF Learn more about your ad choices. Visit megaphone.fm/adchoices

EARadio
EAG 2018 SF: Center for Applied Rationality Workshop

EARadio

Play Episode Listen Later Mar 5, 2019 47:13


“Do you know what you’re doing, and why you’re doing it?” According to Duncan Sabien of the Center for Applied Rationality, this is a key question to ask yourself throughout life. In this workshop from Effective Altruism Global 2018: San Francisco, he describes a few different techniques, including managing your personal autopilot and mimicking useful …

Efektiivne Altruism Eesti
Taivo Pungas on self-development

Efektiivne Altruism Eesti

Play Episode Listen Later Feb 24, 2019 94:11


In this podcast episode, Risto Uuk talked with Taivo Pungas, an engineer and data scientist at Veriff. They discussed self-development in general, and more specifically rationality, productivity strategies, mental health, fear of public speaking, studying at a top university abroad, choosing a field of study, and more. Sources that informed the conversation or came up in it: - Pungas on good career advice: https://pungas.ee/milline-on-hea-karjaarinou/ - Pungas on a universally useful choice of field: https://pungas.ee/erialavalik/ - Pungas on tools for boosting productivity: https://pungas.ee/eneseareng-iii-tooriistad-produktiivsuse-tostmiseks/ - Pungas on mental health: https://pungas.ee/eneseareng-ii-kuidas-olla-vaimses-tippvormis/ - Pungas's motivation letters for applying to scholarships, universities, and elsewhere: https://pungas.ee/koik-mu-motivatsioonikirjad/ - 80,000 Hours article on why you shouldn't follow your passion: https://80000hours.org/articles/dont-follow-your-passion/ - Psychologists on choosing a field of study: https://novaator.err.ee/846345/psuhholoog-huvitav-eriala-ei-pea-pakkuma-kohest-monutunnet - On the Center for Applied Rationality's rationality workshop: https://novaator.err.ee/595845/eestlased-kaivad-san-franciscos-ratsionaalsust-oppimas - On the Double Crux discussion technique: https://efektiivnealtruism.org/2018/09/18/kuidas-arutleda-arukalt/ - Effective Altruism Forum post on how not all of your goals need to be about improving the world: https://forum.effectivealtruism.org/posts/zu28unKfTHoxRWpGn/you-have-more-than-one-goal-and-that-s-fine

EARadio
EAG 2017 Boston: Convinced, not convincing (Duncan Sabien)

EARadio

Play Episode Listen Later Nov 2, 2017 28:30


Duncan presents an introduction to the Center for Applied Rationality’s tools for increasing motivation, avoiding mistakes, and collaborating effectively. Source: Effective Altruism Global (video).

80,000 Hours Podcast with Rob Wiblin
#7 - Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn’t all bad

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Sep 13, 2017 74:16


The scientific revolution in the 16th century was one of the biggest societal shifts in human history, driven by the discovery of new and better methods of figuring out who was right and who was wrong. Julia Galef - a well-known writer and researcher focused on improving human judgment, especially about high stakes questions - believes that if we could again develop new techniques to predict the future, resolve disagreements and make sound decisions together, it could dramatically improve the world across the board. We brought her in to talk about her ideas. This interview complements a new detailed review of whether and how to follow Julia's career path. Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more. Julia has been host of the Rationally Speaking podcast since 2010, co-founder of the Center for Applied Rationality in 2012, and is currently working for the Open Philanthropy Project on an investigation of expert disagreements. In our conversation we ended up speaking about a wide range of topics, including: * Her research on how people can have productive intellectual disagreements. * Why she once planned to become an urban designer. * Why she doubts people are more rational than 200 years ago. * What makes her a fan of Twitter (while I think it's dystopian). * Whether people should write more books. * Whether it's a good idea to run a podcast, and how she grew her audience. * Why saying you don't believe X often won't convince people you don't. * Why she started a PhD in economics but then stopped. * Whether she would recommend an unconventional career like her own. * Whether the incentives in the intelligence community actually support sound thinking. * Whether big institutions will actually pick up new tools for improving decision-making if they are developed. * How to start out pursuing a career in which you enhance human judgement and foresight. Get free, one-on-one career advice to help you improve judgement and decision-making. We've helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you: APPLY FOR COACHING Overview of the conversation 1m30s So what projects are you working on at the moment? 3m50s How are you working on the problem of expert disagreement? 6m0s Is this the same method as the double crux process that was developed at the Center for Applied Rationality? 10m Why did the Open Philanthropy Project decide this was a very valuable project to fund? 13m Is the double crux process actually that effective? 14m50s Is Facebook dangerous? 17m What makes for a good life? Can you be mistaken about having a good life? 19m Should more people write books? Read more...

Future of Life Institute Podcast
The Art Of Predicting With Anthony Aguirre And Andrew Critch

Future of Life Institute Podcast

Play Episode Listen Later Jul 31, 2017 57:59


How well can we predict the future? In this podcast, Ariel speaks with Anthony Aguirre and Andrew Critch about the art of predicting the future, what constitutes a good prediction, and how we can better predict the advancement of artificial intelligence. They also touch on the difference between predicting a solar eclipse and predicting the weather, what it takes to make money on the stock market, and the bystander effect regarding existential risks. Visit metaculus.com to try your hand at the art of predicting. Anthony is a professor of physics at the University of California at Santa Cruz. He's one of the founders of the Future of Life Institute, of the Foundational Questions Institute, and most recently of Metaculus.com, which is an online effort to crowdsource predictions about the future of science and technology. Andrew is on a two-year leave of absence from MIRI to work with UC Berkeley's Center for Human Compatible AI. He cofounded the Center for Applied Rationality, and previously worked as an algorithmic stock trader at Jane Street Capital.

The Ezra Klein Show
Julia Galef on how to argue better and change your mind more

The Ezra Klein Show

Play Episode Listen Later Jul 25, 2017 94:15


At least in politics, this is an era of awful arguments. Arguments made in bad faith. Arguments in which no one, on either side, is willing to change their mind. Arguments where the points being made do not describe, or influence, the positions being held. Arguments that leave everyone dumber, angrier, sadder. Which is why I wanted to talk to Julia Galef this week. Julia is the host of the Rationally Speaking podcast, a co-founder of the Center for Applied Rationality, and the creator of the Update Project, which maps out arguments to make it easier for people to disagree clearly and productively. Her work focuses on how we think and argue, as well as the cognitive biases and traps that keep us from hearing what we're really saying and what others are really saying, and that lead us to prefer answers that make us feel good over answers that are true.

I first met her at a Vox Conversation conference, where she ran a session helping people learn to change their minds, and it's struck me since then that more of us could probably use that training. In this episode, Julia and I talk about what she's learned about thinking more clearly and arguing better, as well as my concerns that the traditional paths toward a better discourse open up new traps of their own. (As you'll hear, I find it very easy to get lost in all the ways debate and cognition can go awry.) We talk about signaling, about motivated reasoning, about probabilistic debating, about which identities help us find truth, and about how to make online arguments less terrible. Enjoy!

Books:
* Language, Truth, and Logic by A.J. Ayer
* Seeing Like a State by James Scott
* The Robot's Rebellion by Keith Stanovich
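The "probabilistic debating" mentioned above treats beliefs as credences to be updated by evidence rather than positions to be defended. As a rough illustration of the arithmetic behind a single update (all numbers below are invented, not from the episode), a Bayes'-rule sketch in Python:

```python
# Rough illustration (numbers invented, not from the episode): one Bayesian
# update, the arithmetic underlying "probabilistic debating". Treat a belief
# as a probability and shift it by how strongly the evidence favors the claim.

def bayes_update(prior, p_evidence_given_true, p_evidence_given_false):
    """Posterior P(claim | evidence) via Bayes' rule for a binary claim."""
    joint_true = prior * p_evidence_given_true
    joint_false = (1 - prior) * p_evidence_given_false
    return joint_true / (joint_true + joint_false)

# Start at 30% credence; see evidence twice as likely if the claim is true.
print(round(bayes_update(0.30, 0.6, 0.3), 2))  # 0.46: moved, not convinced
```

The toy numbers matter less than the direction of movement: evidence shifts a credence in proportion to how much better it fits one hypothesis than the other, rather than flipping anyone to certainty.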

Skepticality:The Official Podcast of Skeptic Magazine
Skepticality #216 - Just Apply Rationality - Interview: Julia Galef

Skepticality:The Official Podcast of Skeptic Magazine

Play Episode Listen Later Oct 8, 2013 50:19


This week Derek spends some time talking with Julia Galef, the president of the Center for Applied Rationality (CFAR). Along with her impressive work heading up an entire think tank dedicated to rational thinking, she is also the co-host, with Massimo Pigliucci, of the popular podcast 'Rationally Speaking'. Find out more about how something like CFAR came to be, and how Julia and her group are aiming to spread critical thinking to the masses.

Getting Better Acquainted

In GBA 131 we get better acquainted with Carl. He stops off in the middle of travelling around the world to talk neuroscience, culture, the merely real, and share some stories of his travels.

Carl plugs: "Have joy if you can in the merely real." "Do stuff while you can because there's a fine line between can and can't and you don't want to discover it on the wrong day." His blog: http://themerelyreal.blogspot.com/

We mention:
* BBCQT Watch-a-long: https://www.facebook.com/events/361392280657539/ and https://twitter.com/BBCQTWatchalong
* Unfortunatalie: https://twitter.com/unfortunatalie
* couchsurfing.org: https://www.couchsurfing.org/
* Watching the English - Kate Fox: http://www.amazon.co.uk/Watching-English-Hidden-Rules-Behaviour/dp/0340818867
* The Two Cultures - CP Snow: http://en.wikipedia.org/wiki/The_Two_Cultures
* Program or be Programmed - Douglas Rushkoff: http://www.amazon.com/Program-Be-Programmed-Commands-Digital/dp/159376426X
* Centre for Applied Rationality: http://rationality.org/

You can hear Getting Better Acquainted on Stitcher SmartRadio. Stitcher allows you to listen to your favourite shows directly from your iPhone, Android Phone, Kindle Fire and beyond. On-demand and on the go! Don't have Stitcher? Download it for free today at www.stitcher.com or in the app stores. Help more people get better acquainted. If you like what you hear why not write an iTunes review?

The Humanist Hour
The Humanist Hour #83: Julia Galef

The Humanist Hour

Play Episode Listen Later May 22, 2013


In this month's podcast, Todd Stiefel's co-host is Amanda K. Metskas. Together they interview Julia Galef, the president of the Center for Applied Rationality.

Rationally Speaking
Rationally Speaking #68 - Applied Rationality

Rationally Speaking

Play Episode Listen Later Aug 26, 2012 49:47


You've heard plenty about biases: the thinking errors the human brain tends to make. But is there anything we can do to make ourselves *less* biased? In this episode, Massimo and Julia discuss what psychological research has learned about "de-biasing," the challenges involved, and the de-biasing strategies Julia is implementing at her organization, the Center for Applied Rationality.