Podcast appearances and mentions of Spencer Greenberg

  • 44 PODCASTS
  • 92 EPISODES
  • 55m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Apr 24, 2025 LATEST

POPULARITY

(popularity chart, 2017–2024)


Best podcasts about Spencer Greenberg

Latest podcast episodes about Spencer Greenberg

80,000 Hours Podcast with Rob Wiblin
Serendipity, weird bets, & cold emails that actually work: Career advice from 16 former guests

80,000 Hours Podcast with Rob Wiblin

Apr 24, 2025 · 138:41


How do you navigate a career path when the future of work is uncertain? How important is mentorship versus immediate impact? Is it better to focus on your strengths or on the world's most pressing problems? Should you specialise deeply or develop a unique combination of skills?

From embracing failure to finding unlikely allies, we bring you 16 diverse perspectives from past guests who've found unconventional paths to impact and helped others do the same.

Links to learn more and full transcript.

Chapters:
Cold open (00:00:00)
Luisa's intro (00:01:04)
Holden Karnofsky on just kicking ass at whatever (00:02:53)
Jeff Sebo on what improv comedy can teach us about doing good in the world (00:12:23)
Dean Spears on being open to randomness and serendipity (00:19:26)
Michael Webb on how to think about career planning given the rapid developments in AI (00:21:17)
Michelle Hutchinson on finding what motivates you and reaching out to people for help (00:41:10)
Benjamin Todd on figuring out if a career path is a good fit for you (00:46:03)
Chris Olah on the value of unusual combinations of skills (00:50:23)
Holden Karnofsky on deciding which weird ideas are worth betting on (00:58:03)
Karen Levy on travelling to learn about yourself (01:03:10)
Leah Garcés on finding common ground with unlikely allies (01:06:53)
Spencer Greenberg on recognising toxic people who could derail your career and life (01:13:34)
Holden Karnofsky on the many jobs that can help with AI (01:23:13)
Danny Hernandez on using world events to trigger you to work on something else (01:30:46)
Sarah Eustis-Guthrie on exploring and pivoting in careers (01:33:07)
Benjamin Todd on making tough career decisions (01:38:36)
Hannah Ritchie on being selective when following others' advice (01:44:22)
Alex Lawsen on getting good mentorship (01:47:25)
Chris Olah on cold emailing that actually works (01:54:49)
Pardis Sabeti on prioritising physical health to do your best work (01:58:34)
Chris Olah on developing good taste and technique as a researcher (02:04:39)
Benjamin Todd on why it's so important to apply to loads of jobs (02:09:52)
Varsha Venugopal on embracing uncomfortable situations and celebrating failures (02:14:25)
Luisa's outro (02:17:43)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore

80,000 Hours Podcast with Rob Wiblin
Guilt, imposter syndrome & doing good: 16 past guests share their mental health journeys

80,000 Hours Podcast with Rob Wiblin

Apr 11, 2025 · 107:10


"We are aiming for a place where we can decouple the scorecard from our worthiness. It's of course the case that in trying to optimise the good, we will always be falling short. The question is how much, and in what ways are we not there yet? And if we then extrapolate that to how much and in what ways am I not enough, that's where we run into trouble." — Hannah Boettcher

What happens when your desire to do good starts to undermine your own wellbeing?

Over the years, we've heard from therapists, charity directors, researchers, psychologists, and career advisors — all wrestling with how to do good without falling apart. Today's episode brings together insights from 16 past guests on the emotional and psychological costs of pursuing a high-impact career to improve the world — and how best to navigate the all-too-common guilt, burnout, perfectionism, and imposter syndrome along the way.

Check out the full transcript and links to learn more: https://80k.info/mh

If you're dealing with your own mental health concerns, here are some resources that might help:
If you're feeling at risk, try this for the UK: How to get help in a crisis, and this for the US: National Suicide Prevention Lifeline.
The UK's National Health Service publishes useful, evidence-based advice on treatments for most conditions.
Mental Health Navigator is a service that simplifies finding and accessing mental health information and resources all over the world — built specifically for the effective altruism community.
We recommend this summary of treatments for depression, this summary of treatments for anxiety, and Mind Ease, an app created by Spencer Greenberg.
We'd also recommend It's Not Always Depression by Hilary Hendel.
Some on our team have found Overcoming Perfectionism and Overcoming Low Self-Esteem very helpful.
And there are even more resources listed on these episode pages: Having a successful career with depression, anxiety, and imposter syndrome; Hannah Boettcher on the mental health challenges that come with trying to have a big impact; Tim LeBon on how altruistic perfectionism is self-defeating.

Chapters:
Cold open (00:00:00)
Luisa's intro (00:01:32)
80,000 Hours' former CEO Howie on what his anxiety and self-doubt feels like (00:03:47)
Evolutionary psychiatrist Randy Nesse on what emotions are for (00:07:35)
Therapist Hannah Boettcher on how striving for impact can affect our self-worth (00:13:45)
Luisa Rodriguez on grieving the gap between who you are and who you wish you were (00:16:57)
Charity director Cameron Meyer Shorb on managing work-related guilt and shame (00:24:01)
Therapist Tim LeBon on aiming for excellence rather than perfection (00:29:18)
Author Cal Newport on making time to be alone with our thoughts (00:36:03)
80,000 Hours career advisors Michelle Hutchinson and Habiba Islam on prioritising mental health over career impact (00:40:28)
Charity founder Sarah Eustis-Guthrie on the ups and downs of founding an organisation (00:45:52)
Our World in Data researcher Hannah Ritchie on feeling like an imposter as a generalist (00:51:28)
Moral philosopher Will MacAskill on being proactive about mental health and preventing burnout (01:00:46)
Grantmaker Ajeya Cotra on the psychological toll of big open-ended research questions (01:11:00)
Researcher and grantmaker Christian Ruhl on how having a stutter affects him personally and professionally (01:19:30)
Mercy For Animals' CEO Leah Garcés on insisting on self-care when doing difficult work (01:32:39)
80,000 Hours' former CEO Howie on balancing a job and mental illness (01:37:12)
Therapist Hannah Boettcher on how self-compassion isn't self-indulgence (01:40:39)
Journalist Kelsey Piper on communicating about mental health in ways that resonate (01:43:32)
Luisa's outro (01:46:10)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore

Firewall
How to Make Americans Happier

Firewall

Apr 1, 2025 · 60:56


The just-released World Happiness Report places the United States 24th globally. That's the bad news. The good news is that a big part of our problem is based on a fixable misperception that we have about each other. Bradley outlines a new regimen for building better vibes. Plus, he explains why Trump is shredding our system of checks and balances so easily (because it's actually reliant on norms, which Trump does not observe), how Trump's rejection of good politics should eventually sap his momentum, and what Bradley learned from reading Graydon Carter's memoir, When the Going Was Good.

LIVE EVENT: Join Bradley for a live Firewall recording at P&T Knitwear on Wednesday, April 2 at 6:30PM. After his discussion with Spencer Greenberg, host of the Clearer Thinking podcast, Bradley will answer audience questions on-air. Space is limited. RSVP today: https://www.eventbrite.com/e/clearer-thinking-x-firewall-a-live-podcast-recording-tickets-1261541337099?aff=oddtdtcreator

This episode was taped at P&T Knitwear at 180 Orchard Street — New York City's only free podcast recording studio. Send us an email with your thoughts on today's episode: info@firewall.media. Subscribe to Bradley's weekly newsletter, follow Bradley on LinkedIn + Substack + YouTube, and be sure to order his new book, Vote With Your Phone.

Firewall
The Vindicator of Queens

Firewall

Mar 27, 2025 · 47:34


In this first installment of The Race to Gracie Mansion — a series of interviews with New York City mayoral hopefuls, co-produced by Firewall and City & State — Jessica Ramos, State Senator from Queens, lays out her vision for a more equitable, better-run city. She takes on police deployment, fair fares, Cuomo's comeback, and why she'll never promise childhood friends a job in her administration.

This episode was taped at P&T Knitwear at 180 Orchard Street — New York City's only free podcast recording studio. Send us an email with your thoughts on today's episode: info@firewall.media. Subscribe to Bradley's weekly newsletter, follow Bradley on LinkedIn + Substack + YouTube, and be sure to order his latest book, Vote With Your Phone.

LIVE SHOW: Join Bradley for a live Firewall recording at P&T Knitwear on Wednesday, April 2 at 6:30PM. After his discussion with Spencer Greenberg, host of the Clearer Thinking podcast, Bradley will answer audience questions on-air. Space is limited. RSVP today: https://www.eventbrite.com/e/clearer-thinking-x-firewall-a-live-podcast-recording-tickets-1261541337099

Firewall
Does Elon Musk Know What He Doesn't Know?

Firewall

Mar 18, 2025 · 54:55


In a word, no — not when it comes to the government, anyway. Bradley reviews the careful balance you need to strike when working in areas outside your expertise. Plus, he mulls over whether getting Western Europe to bulk up its armed forces will make the world safer, how Chuck Schumer's self-interest and the national interest coincided in his vote for the continuing resolution on the budget, and (because we needed to lift the mood) what makes a great cover version of a pop song.

LIVE SHOW: Join Bradley for a live Firewall recording at P&T Knitwear on Wednesday, April 2 at 6:30PM. After his discussion with Spencer Greenberg, host of the Clearer Thinking podcast, Bradley will answer audience questions on-air. Space is limited. RSVP today: https://www.eventbrite.com/e/clearer-thinking-x-firewall-a-live-podcast-recording-tickets-1261541337099

This episode was taped at P&T Knitwear at 180 Orchard Street — New York City's only free podcast recording studio. Send us an email with your thoughts on today's episode: info@firewall.media. Subscribe to Bradley's weekly newsletter, follow Bradley on LinkedIn + Substack + YouTube, and be sure to order his new book, Vote With Your Phone.

Human Design Tribe
Astrology doesn't work, says a 2024 study. Let's take a closer look.

Human Design Tribe

Feb 10, 2025 · 19:11


This podcast episode is about a 2024 study in which Spencer Greenberg set out to test whether astrology works. In this episode you'll learn how the study was run and why I don't think much of it. I'd also like to take the opportunity to explain our concept again and why we pay such close attention to the energy of our texts, because from 2025 onwards nobody really needs pigeonhole thinking, dependency, and a victim mentality anymore.

Codes of Life®: the software for Human Design and astrology beginners and professionals
Academy (workshops and trainings), individual evaluations, readings, and deep knowledge
500+ professionals, 550,000+ charts created, and over 22,000 individual evaluations
↓↓↓ Start your journey here
Register, create a chart, and begin the online course Create Yourself → for free
https://de.codesoflife.com/register-col

80,000 Hours Podcast with Rob Wiblin
2024 Highlightapalooza! (The best of the 80,000 Hours Podcast this year)

80,000 Hours Podcast with Rob Wiblin

Dec 27, 2024 · 170:02


"A shameless recycling of existing content to drive additional audience engagement on the cheap… or the single best, most valuable, and most insight-dense episode we put out in the entire year, depending on how you want to look at it." — Rob Wiblin

It's that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including:
• How to use the microphone on someone's mobile phone to figure out what password they're typing into their laptop
• Why mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever done
• Why evolutionary psychology doesn't support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to others
• How superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it's mostly a disagreement about timing
• Why the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research today
• How much of the gender pay gap is due to direct pay discrimination vs other factors
• How cleaner wrasse fish blow the mirror test out of the water
• Why effective altruism may be too big a tent to work well
• How we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with
…as well as 27 other top observations and arguments from the past year of the show.

Check out the full transcript and episode links on the 80,000 Hours website.

Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours. So if you're struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there.

It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown. Enjoy, and look forward to speaking with you in 2025!

Chapters:
Rob's intro (00:00:00)
Randy Nesse on the origins of morality and the problem of simplistic selfish-gene thinking (00:02:11)
Hugo Mercier on the evolutionary argument against humans being gullible (00:07:17)
Meghan Barrett on the likelihood of insect sentience (00:11:26)
Sébastien Moro on the mirror test triumph of cleaner wrasses (00:14:47)
Sella Nevo on side-channel attacks (00:19:32)
Zvi Mowshowitz on AI sleeper agents (00:22:59)
Zach Weinersmith on why space settlement (probably) won't make us rich (00:29:11)
Rachel Glennerster on pull mechanisms to incentivise repurposing of generic drugs (00:35:23)
Emily Oster on the impact of kids on women's careers (00:40:29)
Carl Shulman on robot nannies (00:45:19)
Nathan Labenz on kids and artificial friends (00:50:12)
Nathan Calvin on why it's not too early for AI policies (00:54:13)
Rose Chan Loui on how control of OpenAI is independently incredibly valuable and requires compensation (00:58:08)
Nick Joseph on why he's a big fan of the responsible scaling policy approach (01:03:11)
Sihao Huang on how the US and UK might coordinate with China (01:06:09)
Nathan Labenz on better transparency about predicted capabilities (01:10:18)
Ezra Karger on what explains forecasters' disagreements about AI risks (01:15:22)
Carl Shulman on why he doesn't support enforced pauses on AI research (01:18:58)
Matt Clancy on the omnipresent frictions that might prevent explosive economic growth (01:25:24)
Vitalik Buterin on defensive acceleration (01:29:43)
Annie Jacobsen on the war games that suggest escalation is inevitable (01:34:59)
Nate Silver on whether effective altruism is too big to succeed (01:38:42)
Kevin Esvelt on why killing every screwworm would be the best thing humanity ever did (01:42:27)
Lewis Bollard on how factory farming is philosophically indefensible (01:46:28)
Bob Fischer on how to think about moral weights if you're not a hedonist (01:49:27)
Elizabeth Cox on the empirical evidence of the impact of storytelling (01:57:43)
Anil Seth on how our brain interprets reality (02:01:03)
Eric Schwitzgebel on whether consciousness can be nested (02:04:53)
Jonathan Birch on our overconfidence around disorders of consciousness (02:10:23)
Peter Godfrey-Smith on uploads of ourselves (02:14:34)
Laura Deming on surprising things that make mice live longer (02:21:17)
Venki Ramakrishnan on freezing cells, organs, and bodies (02:24:46)
Ken Goldberg on why low fault tolerance makes some skills extra hard to automate in robots (02:29:12)
Sarah Eustis-Guthrie on the ups and downs of founding an organisation (02:34:04)
Dean Spears on the cost effectiveness of kangaroo mother care (02:38:26)
Cameron Meyer Shorb on vaccines for wild animals (02:42:53)
Spencer Greenberg on personal principles (02:46:08)

Producing and editing: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore

80,000 Hours Podcast with Rob Wiblin
Bonus: Parenting insights from Rob and 8 past guests

80,000 Hours Podcast with Rob Wiblin

Nov 8, 2024 · 95:39


With kids very much on the team's mind, we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them. Links to learn more and full transcript.

After hearing 8 former guests' insights, Luisa and Rob chat about:
Which of these resonate the most with Rob, now that he's been a dad for six months (plus an update at nine months)
What have been the biggest surprises for Rob in becoming a parent
How Rob's dealt with work and parenting tradeoffs, and his advice for other would-be parents
Rob's list of recommended purchases for new or upcoming parents

This bonus episode includes excerpts from:
Ezra Klein on parenting yourself as well as your children (from episode #157)
Holden Karnofsky on freezing embryos and being surprised by how fun it is to have a kid (#110 and #158)
Parenting expert Emily Oster on how having kids affects relationships and careers, and what actually makes a difference in young kids' lives (#178)
Russ Roberts on empirical research when deciding whether to have kids (#87)
Spencer Greenberg on his surveys of parents (#183)
Elie Hassenfeld on how having children reframes his relationship to solving pressing global problems (#153)
Bryan Caplan on homeschooling (#172)
Nita Farahany on thinking about life and the world differently with kids (#174)

Chapters:
Cold open (00:00:00)
Rob & Luisa's intro (00:00:19)
Ezra Klein on parenting yourself as well as your children (00:03:34)
Holden Karnofsky on preparing for a kid and freezing embryos (00:07:41)
Emily Oster on the impact of kids on relationships (00:09:22)
Russ Roberts on empirical research when deciding whether to have kids (00:14:44)
Spencer Greenberg on parent surveys (00:23:58)
Elie Hassenfeld on how having children reframes his relationship to solving pressing problems (00:27:40)
Emily Oster on careers and kids (00:31:44)
Holden Karnofsky on the experience of having kids (00:38:44)
Bryan Caplan on homeschooling (00:40:30)
Emily Oster on what actually makes a difference in young kids' lives (00:46:02)
Nita Farahany on thinking about life and the world differently (00:51:16)
Rob's first impressions of parenthood (00:52:59)
How Rob has changed his views about parenthood (00:58:04)
Can the pros and cons of parenthood be studied? (01:01:49)
Do people have skewed impressions of what parenthood is like? (01:09:24)
Work and parenting tradeoffs (01:15:26)
Tough decisions about screen time (01:25:11)
Rob's advice to future parents (01:30:04)
Coda: Rob's updated experience at nine months (01:32:09)
Emily Oster on her amazing nanny (01:35:01)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

What Are You Made Of?
Unlocking Clear Thinking and Better Decision-Making with Spencer Greenberg

What Are You Made Of?

Nov 8, 2024 · 31:27


Mike "C-Roc" sits down with Spencer Greenberg, an entrepreneur, mathematician, and psychology researcher passionate about improving human well-being. As the founder of ClearerThinking.org and Spark Wave, Spencer has dedicated his career to developing tools and conducting research to help people make better decisions, understand themselves, and build lives of purpose. With over 200,000 subscribers, his insights on critical thinking and habit formation have reached a global audience, and his podcast, Clearer Thinking, is among the top 1% worldwide.

"C-Roc" and Spencer dive deep into the building blocks of success, from identifying personal assets and liabilities to cultivating positive habits and surrounding yourself with supportive people. Spencer shares his unique approach to self-improvement, which combines mathematical precision and psychological insight, and explains how cognitive behavioral techniques can reshape one's personality traits, like reducing neuroticism. Listeners will also learn about the importance of self-awareness and emotional regulation, and how mindfulness can help control reactions for healthier relationships and a clearer mind. Whether you're an entrepreneur or simply someone looking to think more clearly, this conversation provides valuable tools to better understand yourself and take control of your journey.

Website: https://www.clearerthinking.org/
Social media: https://www.instagram.com/spencrgreenberg/?hl=en and https://x.com/SpencrGreenberg

The Cognitive Revolution
The Path to Utopia, with Nick Bostrom – from Clearer Thinking with Spencer Greenberg

The Cognitive Revolution

Sep 11, 2024 · 74:38


In this special cross-post episode of The Cognitive Revolution, Nathan shares a fascinating conversation between Spencer Greenberg and philosopher Nick Bostrom from the Clearer Thinking podcast. They explore Bostrom's latest book, "Deep Utopia," and discuss the challenges of envisioning a truly desirable future. Discover how advanced AI could reshape our concept of purpose and meaning, and hear thought-provoking ideas on finding fulfillment in a world where technology solves our pressing problems. Join us for an insightful journey into the potential evolution of human flourishing and the quest for positive visions of the future.

Originally appeared on the Clearer Thinking podcast: https://podcast.clearerthinking.org/episode/224/nick-bostrom-the-path-to-utopia
Check out the Clearer Thinking with Spencer Greenberg podcast here: https://podcast.clearerthinking.org/
Deep Utopia book: https://www.amazon.com/Deep-Utopia-Meaning-Solved-World/dp/1646871642/
Apply to join over 400 founders and execs in the Turpentine Network: https://www.turpentinenetwork.co/

SPONSORS:
Oracle: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive
Brave: The Brave Search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2,000 queries per month at https://bit.ly/BraveTCR
Omneky: Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off: https://www.omneky.com/
Weights & Biases Weave: Weights & Biases Weave is a lightweight AI developer toolkit designed to simplify your LLM app development. With Weave, you can trace and debug input, metadata, and output with just two lines of code. Make real progress on your LLM development and visit the following link to get started with Weave today: https://wandb.me/cr

RECOMMENDED PODCAST: This Won't Last. Eavesdrop on Keith Rabois, Kevin Ryan, Logan Bartlett, and Zach Weinberg's monthly backchannel. They unpack their hottest takes on the future of tech, business, venture, investing, and politics.
Apple Podcasts: https://podcasts.apple.com/us/podcast/id1765665937
Spotify: https://open.spotify.com/show/2HwSNeVLL1MXy0RjFPyOSz
YouTube: https://www.youtube.com/@ThisWontLastpodcast

CHAPTERS:
(00:00:00) About the Show
(00:00:22) About the Episode
(00:02:58) Introduction to the podcast
(00:03:26) Dystopias vs utopias in fiction
(00:07:29) Material abundance and utopia
(00:14:57) AI and the future of work
(00:20:10) AI companions and human relationships
(00:22:57) Sponsors: Weights & Biases Weave | Oracle
(00:25:01) Sponsor message: Positly research platform
(00:26:04) Surveillance and global coordination
(00:44:38) Sponsors: Omneky | Brave
(00:44:52) Sponsor message: Transparent Replications project
(00:46:07) AI governance challenges
(00:49:36) Deep Utopia book's purpose
(00:53:09) Global coordination strategies
(00:59:13) The vulnerable world hypothesis
(01:05:18) Bostrom's meta-ethical views
(01:08:32) Listener question on meditation
(01:10:17) Outro

Subversive w/Alex Kaschuta
Spencer Greenberg - How do other people think?

Subversive w/Alex Kaschuta

Jul 1, 2024 · 43:21


This is the first half of our conversation. The full episode and the complete archive of Subversive episodes, including exclusive episodes and my writing, are available on Substack. You can also subscribe to the podcast sans writing on Patreon for a bit less. This is how the show is financed and grows, so I appreciate every contribution! Please subscribe at:
https://www.alexkaschuta.com/
https://www.patreon.com/aksubversive

Our conversation explores the concept of worldviews as self-contained snow globes that represent specific cardinal virtues. We discuss the four common elements of every worldview: what is good, where good and bad come from, who deserves good, and how to do good. The conversation also delves into the challenges of understanding and evaluating different worldviews, the role of in-group signaling, and the importance of understanding other perspectives. We also discuss Valuism as a life philosophy based on intrinsic values and effective action to increase them; the decline of traditional religion and the search for alternative forms of community and meaning; group differences and the extremes of the distribution; and the language ambiguity and imprecision used to hide behind claims and avoid accountability - and much more.

Spencer Greenberg is the founder of ClearerThinking.org and Spark Wave and host of the Clearer Thinking podcast.

A few notes on things mentioned in our chat:
Clearer Thinking with Spencer Greenberg (podcast) - a recent episode with Sasha Chapin: https://podcast.clearerthinking.org/episode/215/sasha-chapin-raising-our-happiness-baseline/
The Intrinsic Values Test: https://programs.clearerthinking.org/intrinsic_values_test.html
Valuism: doing what you value as a life philosophy: https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/
A theory of worldviews: https://www.clearerthinking.org/post/understand-how-other-people-think-a-theory-of-worldviews
Clearer Thinking's 80 free tools on topics like critical thinking, decision-making, etc.: https://www.clearerthinking.org/tools
Oversimplifiers vs. Difference Deniers: https://www.spencergreenberg.com/2023/12/oversimplifiers-vs-difference-deniers-a-dynamic-regarding-group-differences-that-leads-to-rage-and-confusion/
Tails in distributions: https://x.com/SpencrGreenberg/status/1795806828015837226
Precision and measurability as B.S. detectors: https://x.com/SpencrGreenberg/status/1804923269092442580

Chapters:
00:00 Exploring Worldviews as Self-Contained Snow Globes
01:20 The Four Elements of Every Worldview
29:09 The Decline of Traditional Religion and the Search for Meaning
30:40 Adapting Religions to Modern Ideas
31:37 The Appeal of Traditional and Hardcore Religion
32:25 Interpretations and Sects within Religions
34:38 Constant Splitting and Factionalism in Online Communities
36:05 Balancing Group Differences and Individual Assessments
40:02 Understanding Average Group Differences
41:55 The Power of Language Ambiguity and Imprecision
54:19 Recognizing and Overcoming Biases

Send in a voice message: https://podcasters.spotify.com/pod/show/aksubversive/message

Utterly Moderate Network
Spotting Real Expertise & Examining Your Own Knowledge (w/Jacob Mackey)

Utterly Moderate Network

Jun 28, 2024 · 56:24


On this episode of the Utterly Moderate Podcast, host Lawrence Eppard and Connors Institute co-director Jacob Mackey discuss techniques and shortcuts that you can use to spot real expertise in a world where people with expert credentials are sometimes frauds and where people without expert credentials are often very knowledgeable. They also discuss crucial techniques for examining your personal biases and the limits of your own knowledge.

This conversation is based on two really good readings, and we hope you will not only listen to this episode but go to these websites and read these short but very illuminating pieces:
"Spotting Real Expertise" by Spencer Greenberg in the Connors Newsletter (click HERE to read).
"Strategies for Consuming News" by the Connors Institute (click HERE to read).

Enjoy the episode! And PLEASE subscribe to our newsletter in just one click!

Episode audio:
"Air Background Corporate" by REDCVT (Free Music Archive)
"Please Listen Carefully" by Jahzzar (Free Music Archive)
"Last Dance" by Jahzzar (Free Music Archive)
"Happy Trails (To You)" by the Riders in the Sky (used with artist's permission)

Effective Altruism Forum Podcast
“Against a Happiness Ceiling: Replicating Killingsworth & Kahneman (2022)” by charlieh943

Effective Altruism Forum Podcast

May 30, 2024 · 11:24


Epistemic status: somewhat confident; I may have made coding mistakes. R code is here if you feel like checking.

Introduction:
In their 2022 article, Matthew Killingsworth and Daniel Kahneman looked to reconcile the results from two of their papers. Kahneman (2010) had reported that above a certain income level ($75,000 USD), extra income had no association with increases in individual happiness. Killingsworth (2021) suggested that it did. Kahneman and Killingsworth (henceforth KK) claimed they had resolved this conflict by (correctly) hypothesizing that:
1) There is an unhappy minority, whose unhappiness diminishes with rising income up to a threshold, then shows no further progress (i.e., Kahneman's leveling off);
2) In the happier majority, happiness continues to rise with income even in the high range of incomes (i.e., Killingsworth's continued log-linear finding)
(More info on this discussion can be found in Spencer Greenberg's thoroughly enjoyable blog post. Spencer [...]

Outline:
(00:18) Introduction
(03:04) Summary of Findings
(04:07) Results
(05:07) Median Regressions
(05:21) Figure 1
(06:16) Regressions at Various Percentiles
(06:55) Figure 2
(08:38) Implications
(10:50) Table 1: Happiness at Different Percentiles (above, KK; below, me)

The original text contained 2 footnotes which were omitted from this narration.

First published: May 28th, 2024
Source: https://forum.effectivealtruism.org/posts/A5voYMFhPkWTrGkuJ/against-a-happiness-ceiling-replicating-killingsworth-and

Narrated by TYPE III AUDIO.
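The KK reconciliation described above is a claim about quantiles, not means: at low happiness percentiles, happiness stops rising with income past a threshold, while at high percentiles it keeps rising log-linearly. The post's actual analysis is in R and uses real survey data; purely as an illustration of that hypothesized shape, here is a toy NumPy simulation in which every number (the 15% unhappy share, the $100k cap, slopes, and noise levels) is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Incomes uniform on a log scale from $15k to $500k (invented, not KK's data).
log_income = rng.uniform(np.log(15_000), np.log(500_000), n)

# Hypothesized structure: a 15% "unhappy minority" whose happiness rises with
# income only up to ~$100k and then flattens, and a happier majority whose
# happiness rises log-linearly at all incomes.
is_unhappy = rng.random(n) < 0.15
cap = np.log(100_000)
trend = np.where(is_unhappy, np.minimum(log_income, cap), log_income)
baseline = np.where(is_unhappy, 0.0, 2.0)  # the minority is less happy overall
happiness = baseline + 0.5 * trend + rng.normal(0.0, 0.3, n)

def happiness_percentile(lo, hi, q):
    """q-th percentile of happiness among people earning [lo, hi) dollars."""
    mask = (np.exp(log_income) >= lo) & (np.exp(log_income) < hi)
    return np.percentile(happiness[mask], q)

# Low percentiles: rise below the threshold, then level off (Kahneman's pattern).
p10_low = happiness_percentile(30_000, 50_000, 10)
p10_mid = happiness_percentile(100_000, 150_000, 10)
p10_high = happiness_percentile(300_000, 500_000, 10)

# High percentiles: keep climbing with log income (Killingsworth's pattern).
p85_mid = happiness_percentile(100_000, 150_000, 85)
p85_high = happiness_percentile(300_000, 500_000, 85)
```

In this simulation the 10th-percentile happiness rises between the $30–50k and $100–150k brackets and then is flat out to $300–500k, while the 85th percentile keeps climbing, which is the shape KK hypothesize and the post probes with regressions at various percentiles.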

The Reality Check
TRC #688: Interview with Spencer Greenberg

The Reality Check

Play Episode Listen Later May 5, 2024 32:50


Spencer Greenberg is the founder of Clearer Thinking, a website that provides tools for critical thinking, as well as Transparent Replications, which does rapid replications of papers in psychology and behavioural studies. In this interview he discusses the replication crisis in scientific studies, what's causing it, and what can be done to reduce these problems.

The Nonlinear Library
LW - Key takeaways from our EA and alignment research surveys by Cameron Berg

The Nonlinear Library

Play Episode Listen Later May 3, 2024 47:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Key takeaways from our EA and alignment research surveys, published by Cameron Berg on May 3, 2024 on LessWrong. Many thanks to Spencer Greenberg, Lucius Caviola, Josh Lewis, John Bargh, Ben Pace, Diogo de Lucena, and Philip Gubbins for their valuable ideas and feedback at each stage of this project - as well as the ~375 EAs + alignment researchers who provided the data that made this project possible. Background Last month, AE Studio launched two surveys: one for alignment researchers, and another for the broader EA community. We got some surprisingly interesting results, and we're excited to share them here. We set out to better explore and compare various population-level dynamics within and across both groups. We examined everything from demographics and personality traits to community views on specific EA/alignment-related topics. We took on this project because it seemed to be largely unexplored and rife with potentially-very-high-value insights. In this post, we'll present what we think are the most important findings from this project. Meanwhile, we're also sharing and publicly releasing a tool we built for analyzing both datasets. The tool has some handy features, including customizable filtering of the datasets, distribution comparisons within and across the datasets, automatic classification/regression experiments, LLM-powered custom queries, and more. We're excited for the wider community to use the tool to explore these questions further in whatever manner they desire. There are many open questions we haven't tackled here related to the current psychological and intellectual make-up of both communities that we hope others will leverage the dataset to explore further. (Note: if you want to see all results, navigate to the tool, select the analysis type of interest, and click 'Select All.' 
If you have additional questions not covered by the existing analyses, the GPT-4 integration at the bottom of the page should ideally help answer them. The code running the tool and the raw anonymized data are both also publicly available.) We incentivized participation by offering to donate $40 per eligible[1] respondent - strong participation in both surveys enabled us to donate over $10,000 to both AI safety orgs as well as a number of different high impact organizations (see here[2] for the exact breakdown across the two surveys). Thanks again to all of those who participated in both surveys! Three miscellaneous points on the goals and structure of this post before diving in: 1. Our goal here is to share the most impactful takeaways rather than simply regurgitating every conceivable result. This is largely why we are also releasing the data analysis tool, where anyone interested can explore the dataset and the results at whatever level of detail they please. 2. This post collectively represents what we at AE found to be the most relevant and interesting findings from these experiments. We sorted the TL;DR below by perceived importance of findings. We are personally excited about pursuing neglected approaches to alignment, but we have attempted to be as deliberate as possible throughout this write-up in striking the balance between presenting the results as straightforwardly as possible and sharing our views about implications of certain results where we thought it was appropriate. 3. This project was descriptive and exploratory in nature. Our goal was to cast a wide psychometric net in order to get a broad sense of the psychological and intellectual make-up of both communities. 
We used standard frequentist statistical analyses to probe for significance where appropriate, but we definitely still think it is important for ourselves and others to perform follow-up experiments to those presented here with a more tightly controlled scope to replicate and further sharpen t...

The Nonlinear Library
EA - Personal reflections on FTX by William MacAskill

The Nonlinear Library

Play Episode Listen Later Apr 18, 2024 1:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Personal reflections on FTX, published by William MacAskill on April 18, 2024 on The Effective Altruism Forum. The two podcasts where I discuss FTX are now out: Making Sense with Sam Harris Clearer Thinking with Spencer Greenberg The Sam Harris podcast is more aimed at a general audience; the Spencer Greenberg podcast is more aimed at people already familiar with EA. (I've also done another podcast with Chris Anderson from TED that will come out next month, but FTX is a fairly small part of that conversation.) In this post, I'll gather together some things I talk about across these podcasts - this includes updates and lessons, and responses to some questions that have been raised on the Forum recently. I'd recommend listening to the podcasts first, but these comments can be read on their own, too. I cover a variety of different topics, so I'll cover each topic in separate comments underneath this post. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Effective Altruism Forum Podcast
“Personal reflections on FTX” by William_MacAskill

Effective Altruism Forum Podcast

Play Episode Listen Later Apr 18, 2024 1:12


The two podcasts where I discuss FTX are now out: Making Sense with Sam Harris Clearer Thinking with Spencer Greenberg The Sam Harris podcast is more aimed at a general audience; the Spencer Greenberg podcast is more aimed at people already familiar with EA. (I've also done another podcast with Chris Anderson from TED that will come out next month, but FTX is a fairly small part of that conversation.) In this post, I'll gather together some things I talk about across these podcasts — this includes updates and lessons, and responses to some questions that have been raised on the Forum recently. I'd recommend listening to the podcasts first, but these comments can be read on their own, too. I cover a variety of different topics, so I'll cover each topic in separate comments underneath this post. --- First published: April 18th, 2024 Source: https://forum.effectivealtruism.org/posts/A2vBJGEbKDpuKveHk/personal-reflections-on-ftx --- Narrated by TYPE III AUDIO.

The Nonlinear Library
EA - Spencer Greenberg and William MacAskill: What should the EA movement learn from the SBF/FTX scandal? by AnonymousTurtle

The Nonlinear Library

Play Episode Listen Later Apr 16, 2024 1:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spencer Greenberg and William MacAskill: What should the EA movement learn from the SBF/FTX scandal?, published by AnonymousTurtle on April 16, 2024 on The Effective Altruism Forum. What are the facts around Sam Bankman-Fried and FTX about which all parties agree? What was the nature of Will's relationship with SBF? What things, in retrospect, should've been red flags about Sam or FTX? Was Sam's personality problematic? Did he ever really believe in EA principles? Does he lack empathy? Or was he on the autism spectrum? Was he naive in his application of utilitarianism? Did EA intentionally install SBF as a spokesperson, or did he put himself in that position of his own accord? What lessons should EA leaders learn from this? What steps should be taken to prevent it from happening again? What should EA leadership look like moving forward? What are some of the dangers around AI that are not related to alignment? Should AI become the central (or even the sole) focus of the EA movement? The Clearer Thinking podcast is aimed more at people in or related to EA, whereas Sam Harris's wasn't. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

80k After Hours
Highlights: #183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more

80k After Hours

Play Episode Listen Later Mar 29, 2024 21:06


This is a selection of highlights from episode #183 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more. And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org. Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

The Nonlinear Library
EA - Recent and upcoming media related to EA by 2ndRichter

The Nonlinear Library

Play Episode Listen Later Mar 28, 2024 1:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recent and upcoming media related to EA, published by 2ndRichter on March 28, 2024 on The Effective Altruism Forum. I'm Emma from the Communications team at the Centre for Effective Altruism (CEA). I want to flag a few media items related to EA that have come out recently or will be coming out soon, given they'll touch on topics - like FTX - that I expect will be of interest to Forum readers. The CEO of CEA, @Zachary Robinson, wrote an op-ed that came out today addressing Sam Bankman-Fried and the continuing value of EA. (Read here) @William_MacAskill will appear on two podcasts and will discuss FTX: Clearer Thinking with Spencer Greenberg and the Making Sense Podcast with Sam Harris. The podcast episode with Sam Harris will likely be released next week and is aimed at a general audience. The podcast episode with Spencer Greenberg will likely be released in two weeks and is aimed at people more familiar with the EA movement. I'll add links for these episodes once they become available and plan to update this post as needed. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Slate Star Codex Podcast
The Mystery Of Internet Survey IQs

Slate Star Codex Podcast

Play Episode Listen Later Mar 20, 2024 11:49


I have data from two big Internet surveys, Less Wrong 2014 and Clearer Thinking 2023. Both asked questions about IQ: The average LessWronger reported their IQ as 138. The average ClearerThinking user reported their IQ as 130. These are implausibly high. Only 1/200 people has an IQ of 138 or higher. 1/50 people have IQ 130, but the ClearerThinking survey used crowdworkers (eg Mechanical Turk) who should be totally average. Okay, fine, so people lie about their IQ (or foolishly trust fake Internet IQ tests). Big deal, right? But these don't look like lies. Both surveys asked for SAT scores, which are known to correspond to IQ. The LessWrong average was 1446, corresponding to IQ 140. The ClearerThinking average was 1350, corresponding to IQ 134. People seem less likely to lie about their SATs, and least likely of all to optimize their lies for getting IQ/SAT correspondences right. And the Less Wrong survey asked people what test they based their estimates off of. Some people said fake Internet IQ tests. But other people named respected tests like the WAIS, WISC, and Stanford-Binet, or testing sessions by Mensa (yes, I know you all hate Mensa, but their IQ tests are considered pretty accurate). The subset of about 150 people who named unimpeachable tests had slightly higher IQ (average 140) than everyone else. Thanks to Spencer Greenberg of ClearerThinking, I think I'm finally starting to make progress in explaining what's going on. https://www.astralcodexten.com/p/the-mystery-of-internet-survey-iqs 
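The IQ/SAT correspondence being checked here is percentile matching: find the percentile of an SAT score, then report the IQ score (mean 100, SD 15) sitting at the same percentile. Below is a hedged sketch using only Python's standard library; the SAT mean and SD are illustrative assumptions, and real conversion tables also correct for SAT takers being an above-average slice of the population, which is why published correspondences (like the post's 1446 to IQ 140) run higher than this naive version:

```python
from statistics import NormalDist

def sat_to_iq(sat, sat_mean=1050, sat_sd=210):
    # Percentile of this SAT score among test takers (assumed normal; the
    # mean/SD defaults are illustrative, not official College Board norms)...
    pct = NormalDist(sat_mean, sat_sd).cdf(sat)
    # ...mapped to the IQ score at the same percentile of a 100/15 distribution.
    return NormalDist(100, 15).inv_cdf(pct)

print(round(sat_to_iq(1350)), round(sat_to_iq(1446)))
```

With these toy norms a 1446 comes out well below the post's figure of 140, which is the selection effect at work: referencing the score against the full population rather than test takers shifts the implied IQ upward.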

80,000 Hours Podcast with Rob Wiblin
#183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Mar 14, 2024 156:38


"When a friend comes to me with a decision, and they want my thoughts on it, very rarely am I trying to give them a really specific answer, like, 'I solved your problem.' What I'm trying to do often is give them other ways of thinking about what they're doing, or giving different framings. A classic example of this would be someone who's been working on a project for a long time and they feel really trapped by it. And someone says, 'Let's suppose you currently weren't working on the project, but you could join it. And if you joined, it would be exactly the state it is now. Would you join?' And they'd be like, 'Hell no!' It's a reframe. It doesn't mean you definitely shouldn't join, but it's a reframe that gives you a new way of looking at it." —Spencer Greenberg

In today's episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago.

Links to learn more, summary, and full transcript.

They cover:
* How much money makes you happy — and the tricky methodological issues that come up trying to answer that question.
* The importance of hype in making valuable things happen.
* How to recognise warning signs that someone is untrustworthy or likely to hurt you.
* Whether Registered Reports are successfully solving reproducibility issues in science.
* The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.
* The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups.
* The potential harms of lightgassing, which is the opposite of gaslighting.
* How Spencer's team used non-statistical methods to test whether astrology works.
* Whether there's any social value in retaliation.
* And much more.

Chapters:
Does money make you happy? (00:05:54)
Hype vs value (00:31:27)
Warning signs that someone is bad news (00:41:25)
Integrity and reproducibility in social science research (00:57:54)
Personal principles (01:16:22)
Decision-making errors (01:25:56)
Lightgassing (01:49:23)
Astrology (02:02:26)
Game theory, tit for tat, and retaliation (02:20:51)
Parenting (02:30:00)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Clearer Thinking with Spencer Greenberg
Spencer's takeaways after 200 episodes (with Spencer Greenberg)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Mar 6, 2024 51:38


Read the full transcript here. It's our 200th episode!

Mornings with Simi
Personality Traits, how accurate are they?

Mornings with Simi

Play Episode Listen Later Feb 29, 2024 8:37


The Myers-Briggs Type Indicator categorizes individuals into 16 personality types based on four dimensions… but are those types accurate? Guest: Dr. Spencer Greenberg, Mathematician and Entrepreneur in Social Science Learn more about your ad choices. Visit megaphone.fm/adchoices

Mornings with Simi
Full Show: Did humans have tails?, How accurate are personality tests & Opting out of AirBnB regulations

Mornings with Simi

Play Episode Listen Later Feb 29, 2024 60:37


Seg 1: Did humans once have tails? How did we lose them? Tails are a widespread feature in the animal kingdom, with nearly every vertebrate class possessing them, so did humans once have them too? Guest: Dr. Bo Xia, Geneticist and researcher at the Broad Institute of MIT and Harvard. Seg 2: Can you tell a chef how to prepare your meal? Dietary restrictions are increasing in Canada, but does that mean that restaurants need to cater to everyone's specific intolerances? Guest: Scott Shantz, CKNW Contributor Seg 3: View From Victoria: The budget says that $3 billion is available in unallocated contingency funding over each of the next three years. We get a local look at the top political stories with the help of Vancouver Sun columnist Vaughn Palmer. Seg 4: What is Winter Warming Syndrome? Winter warming is a significant indicator of climate change, leading to a phenomenon known as "warming winter syndrome." Guest: Richard B. Rood, Professor Emeritus of Climate and Space Sciences and Engineering at the University of Michigan. Seg 5: Personality tests, how accurate are they? The Myers-Briggs Type Indicator categorizes individuals into 16 personality types based on four dimensions, but are those types accurate? Guest: Dr. Spencer Greenberg, Mathematician and Entrepreneur in Social Science Seg 6: Some municipalities want out of Airbnb legislation Some municipalities in BC are resisting provincial legislation restricting short-term rentals, with Prince George city council unanimously voting to request opting out of the Short-Term Rental Accommodations Act. Guest: Kyle Sampson, Prince George City Councillor Seg 7: Richmond etching catalytic converters A handful of Richmond auto service shops are offering free etching of partial VINs onto catalytic converters to help combat the rising theft of the converters. Guest: Corporal Dennis Hwang, Richmond RCMP Learn more about your ad choices. Visit megaphone.fm/adchoices

Clearer Thinking with Spencer Greenberg
Schemas, goals, values, and the pursuit of happiness (with Jeff Perron)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Jan 17, 2024 87:45


Read the full transcript here. What does it mean to have conflicts between our schemas and our values? What is schema therapy? How do schema therapy's claims differ from the "common sense" view that we develop tools for interacting with the world in childhood? How do our "inner critic" and "vulnerable child" connect to our schemas? How do these things differ from the IFS (Internal Family Systems) model of psychotherapy? How do these things map onto Buddhism, Stoicism, and other religious or philosophical traditions? What are the values that lead to a life of happiness? Why are teachings about embracing impermanence and reducing craving found in ancient religious and philosophical traditions but not in modern psychology? And, conversely, why are practices for building "flow" and healthy self-esteem present in modern psychology but not in ancient religious and philosophical traditions?

Jeff Perron is a Clinical Psychologist and author of The Psychology of Happiness, a Substack with over 15,000 subscribers. He writes detailed guides that explain evidence-based concepts associated with mental well-being and happiness. In his clinical work, he has spent years helping professionals align their lives more closely with their goals and values, supporting them in moving away from unnecessary suffering and towards meaning and fulfillment. Dr. Perron also holds an MBA from Wilfrid Laurier University and in the past has worked in the corporate strategy world. He holds a dual research-clinical PhD in Clinical Psychology from the University of Ottawa and is a Clinical Associate of the Ottawa Institute of CBT.

Further reading: "Values, Practices, and Behaviors Associated with Happiness (a life of relative equanimity, meaning, fulfillment, health, and positive engagement)" by Jeff Perron

ANNOUNCEMENT: EA NYC is hosting Spencer for a live recording of our podcast on January 30, 2024! The event is titled: "The moral status of insects and AI systems, and other thorny questions in global priorities research, with Jeff Sebo and Spencer Greenberg". If you'd like to attend in person, click here!

Staff Spencer Greenberg — Host / Director Josh Castle — Producer Ryan Kessler — Audio Engineer Uri Bram — Factotum WeAmplify — Transcriptionists Miles Kestran — Marketing Music Lee Rosevere Josh Woodward Broke for Free zapsplat.com wowamusic Quiet Music for Tiny Robots Affiliates Clearer Thinking GuidedTrack Mind Ease Positly UpLift [Read more]

Clearer Thinking with Spencer Greenberg
Cognitive Behavioral Therapy and beyond (with David Burns)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Jan 10, 2024 141:16


What was therapy like in the years leading up to the advent of CBT? Has CBT now been over-sold? How does CBT differ from "the power of positive thinking"? How can therapists who use CBT avoid invalidating clients' feelings? When, if ever, should people listen to their negative thoughts? To what extent can a person's good qualities contribute to their depression? Can empathy be learned? Is it possible to cure depression in a single psychotherapy session? What is TEAM-CBT? Is exposure therapy cruel? What are some strategies for silencing the voices in our heads that lead to depression, anxiety, and other negative mental states?

David Burns is Adjunct Clinical Professor Emeritus of Psychiatry and Behavioral Sciences at the Stanford University School of Medicine, where he is involved in research and teaching. He has previously served as Acting Chief of Psychiatry at the Presbyterian / University of Pennsylvania Medical Center (1988) and Visiting Scholar at the Harvard Medical School (1998), and is certified by the National Board of Psychiatry and Neurology. He has received numerous awards, including the A. E. Bennett Award for his research on brain chemistry, the Distinguished Contribution to Psychology through the Media Award, and the Outstanding Contributions Award from the National Association of Cognitive-Behavioral Therapists. He has been named Teacher of the Year three times by the class of graduating residents at Stanford University School of Medicine, and feels especially proud of this award. In addition to his academic research, Dr. Burns has written a number of popular books on mood and relationship problems. His best-selling book, Feeling Good: The New Mood Therapy, has sold over 4 million copies in the United States, and many more worldwide. When he is not crunching statistics for his research, he can be found teaching his famous Tuesday evening psychotherapy training group for Stanford students and community clinicians, or giving workshops for mental health professionals throughout the United States and Canada. Learn more about him at feelinggood.com.

Further reading: Feeling Great: The Revolutionary New Treatment for Depression and Anxiety by David Burns

ANNOUNCEMENT: EA NYC is hosting Spencer for a live recording of our podcast on January 30, 2024! The event is titled: "The moral status of insects and AI systems, and other thorny questions in global priorities research, with Jeff Sebo and Spencer Greenberg". If you'd like to attend in person, click here!

Staff Spencer Greenberg — Host / Director Josh Castle — Producer Ryan Kessler — Audio Engineer Uri Bram — Factotum WeAmplify — Transcriptionists Miles Kestran — Marketing Music Lee Rosevere Josh Woodward Broke for Free zapsplat.com wowamusic Quiet Music for Tiny Robots Affiliates Clearer Thinking GuidedTrack Mind Ease Positly UpLift [Read more]

Clearer Thinking with Spencer Greenberg
There are shrinks, and then there are SUPER-shrinks (with Daryl Chow)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Jan 3, 2024 79:20


Read the full transcript here. What is a "super-shrink"? Which factors in the therapist-client relationship are most predictive of positive client outcomes over time: the therapist's personality, the client's personality, the therapist's methodology, or other factor(s)? How can therapists use and teach evidence-based practices and behaviors while also respecting and working within an individual client's belief system? What should clients look for when shopping for therapists? Why do clients often choose to be less open and honest with their therapists than would be beneficial for them? How can non-therapists be good, therapeutic friends to others?

Originally from Singapore, Daryl Chow, MA, Ph.D. is a practicing psychologist based in Perth, Western Australia. He presents to and trains other psychotherapists around the world. He has authored or co-authored several books, including: The First Kiss: Undoing the Intake Model and Igniting First Sessions in Psychotherapy (2018), Better Results: Using Deliberate Practice to Improve Therapeutic Outcomes (APA, 2021), The Field Guide to Better Results (APA, 2023), and Creating Impact (2022). He is also the co-author of many articles, and is co-editor and contributing author of The Write to Recovery: Personal Stories & Lessons About Recovery from Mental Health Concerns. Daryl's newsletter, blogs, and podcast (Frontiers of Psychotherapist Development) are all aimed at inspiring and sustaining practitioners' individualised professional development. Read his writings on Substack; learn more about him on his website, darylchow.com; or email him at info@darylchow.com.

ANNOUNCEMENT: EA NYC is hosting Spencer for a live recording of our podcast on January 30, 2024! The event is titled: "The moral status of insects and AI systems, and other thorny questions in global priorities research, with Jeff Sebo and Spencer Greenberg".
If you'd like to attend in person, click here: https://www.eventbrite.com/e/insects-ai-systems-and-the-moral-circle-w-jeff-sebo-spencer-greenberg-tickets-767822737477 Staff Spencer Greenberg — Host / Director Josh Castle — Producer Ryan Kessler — Audio Engineer Uri Bram — Factotum WeAmplify — Transcriptionists Miles Kestran — Marketing Music Lee Rosevere Josh Woodward Broke for Free zapsplat.com wowamusic Quiet Music for Tiny Robots Affiliates Clearer Thinking GuidedTrack Mind Ease Positly UpLift [Read more]

80k After Hours
Highlights: #147 – Spencer Greenberg on stopping valueless papers from getting into top journals

80k After Hours

Play Episode Listen Later Dec 7, 2023 19:05


This is a selection of highlights from episode #147 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: Spencer Greenberg on stopping valueless papers from getting into top journals. And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org. Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

Pigeon Hour
#9: Sarah Woodhouse on discovering AI x-risk, Twitter, and more

Pigeon Hour

Play Episode Listen Later Nov 15, 2023 74:47


Note: I can't seem to edit or remove the “transcript” tab. I recommend you ignore that and just look at the much higher quality, slightly cleaned up one below. Most importantly, follow Sarah on Twitter!

Summary (written by ChatGPT, as you can probably tell)

In this episode of Pigeon Hour, host Aaron delves deep into the world of AI safety with his guest, Sarah Woodhouse. Sarah shares her unexpected journey from fearing job automation to becoming a recognized voice on AI safety Twitter. Her story starts with a simple Google search that led her down a rabbit hole of existential dread and unexpected fame on social media. As she narrates her path from lurker to influencer, Sarah reflects on the quirky dynamics of the AI safety community, her own existential crisis, and the serendipitous tweet that resonated with thousands.

Aaron and Sarah's conversation takes unexpected turns, discussing everything from the peculiarities of EA rationalists to the surprisingly serious topic of shrimp welfare. They also explore the nuances of AI doom probabilities, the social dynamics of tech Twitter, and Sarah's unexpected viral fame as a tween.
This episode is a rollercoaster of insights and anecdotes, perfect for anyone interested in the intersection of technology, society, and the unpredictable journey of internet fame.

Topics discussed

Discussion on AI Safety and Personal Journeys:
* Aaron and Sarah discuss her path to AI safety, triggered by concerns about job automation and the realization that AI could potentially replace her work.
* Sarah's deep dive into AI safety started with a simple Google search, leading her to Geoffrey Hinton's alarming statements, and eventually to a broader exploration without finding reassuring consensus.
* Sarah's Twitter engagement began with lurking, later evolving into active participation and gaining an audience, especially after a relatable tweet thread about an existential crisis.
* Aaron remarks on the rarity of people like Sarah, who follow the AI safety rabbit hole to its depths, considering its obvious implications for various industries.

AI Safety and Public Perception:
* Sarah discusses her surprise at discovering the AI safety conversation happening mostly in niche circles, often with a tongue-in-cheek attitude that could seem dismissive of the serious implications of AI risks.
* The discussion touches on the paradox of AI safety: it's a critically important topic, yet it often remains confined within certain intellectual circles, leading to a lack of broader public engagement and awareness.

Cultural Differences and Personal Interests:
* The conversation shifts to cultural differences between the UK and the US, particularly in terms of sincerity and communication styles.
* Personal interests, such as theater and musicals (like "Glee"), are also discussed, revealing Sarah's background and hobbies.

Effective Altruism (EA) and Rationalist Communities:
* Sarah points out certain quirks of the EA and rationalist communities, such as their penchant for detailed analysis, hedging statements, and the use of probabilities in discussions.
* The debate around the use of "P(Doom)" 
(probability of doom) in AI safety discussions is critiqued, highlighting how it can be both a serious analytical tool and a potentially alienating jargon for outsiders.

Shrimp Welfare and Ethical Considerations:
* A detailed discussion on shrimp welfare as an ethical consideration in effective altruism unfolds, examining the moral implications and effectiveness of focusing on animal welfare at a large scale.
* Aaron defends his position on prioritizing shrimp welfare in charitable giving, based on the principles of importance, tractability, and neglectedness.

Personal Decision-Making in Charitable Giving:
* Strategies for personal charitable giving are explored, including setting a donation cutoff point to balance moral obligations with personal needs and aspirations.

Transcript

AARON: Whatever you want. Okay. Yeah, I feel like you said this on Twitter. The obvious thing is, how did you learn about AI safety? But maybe you've already covered that. That's boring. First of all, do you want to talk about that? Because we don't have to.SARAH: I don't mind talking about that.AARON: But it's sort of your call, so whatever. I don't know. Maybe briefly, and then we can branch out?SARAH: I have a preference for people asking me things and me answering them rather than me setting the agenda. So don't ever feel bad about just asking me stuff because I prefer that.AARON: Okay, cool. But also, it feels like the kind of thing where, of course, we have AI. Everyone already knows that this is just like the voice version of these four tweets or whatever. But regardless. Yes. So, Sarah, as Pigeon Hour guest, what was your path through life to AI safety Twitter?SARAH: Well, I realized that a chatbot could very easily do my job and that my employers either hadn't noticed this or they had noticed, but they were just being polite about it and they didn't want to fire me because they're too nice. 
And I was like, I should find out what AI development is going to be like over the next few years so that I know if I should go and get good at some other stuff.SARAH: I just had a little innocent Google. And then within a few clicks, I'd completely doom pilled myself. I was like, we're all going to die. I think I found Geoffrey Hinton because he was on the news at the time, because he just quit his job at Google. And he was there saying things that sounded very uncertain, very alarming. And I was like, well, he's probably the pessimist, but I'm sure that there are loads of optimists to counteract that because that's how it usually goes. You find a doomer and then you find a bunch of more moderate people, and then there's some consensus in the middle that everything's basically fine.SARAH: I was like, if I just keep looking, I'll find the consensus because it's there. I'm sure it's there. So I just kept looking and looking for it. I looked for it for weeks. I just didn't find it. And then I was like, nobody knows what's going on. This seems really concerning. So then I started lurking on Twitter, and then I got familiar with all the different accounts, whatever. And then at some point, I was like, I'm going to start contributing to this conversation, but I didn't think that anybody would talk back to me. And then at some point, they started talking back to me and I was like, this is kind of weird.SARAH: And then at some point, I was having an existential crisis and I had a couple of glasses of wine or something, and I just decided to type this big, long thread. And then I went to bed. I woke up the next morning slightly grouchy and hungover. I checked my phone and there were all these people messaging me and all these people replying to my thread being like, this is so relatable. This really resonated with me. And I was like, what is going on?AARON: You were there on Twitter before that thread right? 
I'm pretty sure I was following you.SARAH: I think, yeah, I was there before, but no one ever really gave me any attention prior to that. I think I had a couple of tweets that blew up before that, but not to the same extent. And then after that, I think I was like, okay, so now I have an audience. When I say an audience, like, obviously a small one, but more of an audience than I've ever had before in my life. And I was like, how far can I take this?SARAH: I was a bit like, people obviously started following me because I'm freaking out about AI, but if I post an outfit, what's going to happen? How far can I push this posting, these fit checks? I started posting random stuff about things that were completely unrelated. I was like, oh, people are kind of here for this, too. Okay, this is weird. So now I'm just milking it for all it's worth, and I really don't know why anybody's listening to me. I'm basically very confused about the whole thing.AARON: I mean, I think it's kind of weird from your perspective, or it's weird in general because there aren't that many people who just do that extremely logical thing at the beginning. I don't know, maybe it's not obvious to people in every industry or whatever that AI is potentially a big deal, but there's lots of truckers or whatever. Maybe they're not the best demographic or the most conducive demographic, like, getting on Twitter or whatever, but there's other jobs that it would make sense to look into that. It's kind of weird to me that only you followed the rabbit hole all the way down.SARAH: I know! This is what I…Because it's not that hard to complete the circle. It probably took me like a day, it took me like an afternoon to get from, I'm worried about job automation to I should stop saving for retirement. It didn't take me that long. Do you know what I mean? No one ever looks. I literally don't get it. I was talking to some people. 
I was talking to one of my coworkers about this the other day, and I think it came up in conversation. She was like, yeah, I'm a bit worried about AI because I heard on the radio that taxi drivers might be out of a job. That's bad. And I was like, yeah, that is bad. But do you know what else? She was like, what are the AI companies up to that we don't know about? And I was like, I mean, you can go on their website. You can just go on their website and read about how they think that their technology is an extinction risk. It's not like they're hiding. It's literally just on there and no one ever looks. It's just crazy.AARON: Yeah. Honestly, I don't even know if I was in your situation, if I would have done that. It's like, in some sense, I am surprised. It's very few people maybe like one, but at another level, it's more rationality than most humans have or something. Yeah. You regret going down that rabbit hole?SARAH: Yeah, kind of. Although I'm enjoying the Twitter thing and it's kind of fun, and it turns out there's endless comedic material that you can get out of impending doom. The whole thing is quite funny. It's not funny, but you can make it funny if you try hard enough. But, yeah, what was I going to say? I think maybe I was more primed for doom pilling than your average person because I already knew what EA was and I already knew, you know what I mean. That stuff was on my radar.AARON: That's interesting.SARAH: I think had it not been on my radar, I don't think I would have followed the pipeline all the way.AARON: Yeah. I don't know what browser you use, but it would be. And you should definitely not only do this if you actually think it would be cool or whatever, but this could be in your browser history from that day and that would be hilarious. You could remove anything you didn't want to show, but if it's like Google Chrome, they package everything into sessions. 
It's one browsing session and it'll have like 10,000 links.SARAH: Yeah, I think for non-sketchy reasons, I delete my Google history more regularly than that. I don't think I'd be able to find that. But I can remember the day and I can remember my anxiety levels just going up and up somewhere between 1:00 p.m. and 7:00 p.m. And by the evening I'm like, oh, my God.AARON: Oh, damn, that's wild.SARAH: It was really stressful.AARON: Yeah, I guess props for, I don't know if props… is the right word, I guess, impressed? I'm actually somewhat surprised to hear that you said you regret it. I mean, that sucks though, I guess. I'm sorry.SARAH: If you could unknow this, would you?AARON: No, because I think it's worth maybe selfishly, but not overall because. Okay, yeah, I think that would plausibly be the selfish thing to do. Actually. No, actually, hold on. No, I actually don't think that's true. I actually think there's enough an individual can do selfishly such that it makes sense. Even the emotional turmoil.SARAH: It would depend how much you thought that you were going to personally move the needle by knowing about it. I personally don't think that I'm going to be able to do very much. I was going to tip the scales. I wouldn't selfishly unknow it and sacrifice the world. But me being not particularly informed or intelligent and not having any power, I feel like if I forgot that AI was going to end the world, it would not make much difference.AARON: You know what I mean? I agree that it's like, yes, it is unlikely for either of us to tip the scales, but.SARAH: Maybe you can't.AARON: No, actually, in terms of, yeah, I'm probably somewhat more technically knowledgeable just based on what I know about you. Maybe I'm wrong.SARAH: No, you're definitely right.AARON: It's sort of just like a probabilities thing. I do think that ‘doom' - that word - is too simplified, often too simple to capture what people really care about. 
But if you just want to say doom versus no doom or whatever, AI doom versus no AI doom. Maybe there's like a one in 100,000 chance that one of us tips the scales. And that's important. Maybe even, like, one in 10,000. Probably not. Probably not.SARAH: One in 10,000. Wow.AARON: But that's what people do. People vote, even though this is old 80k material I'm regurgitating because they basically want to make the case for why even if you're not. Or in some article they had from a while ago, they made a case for why doing things that are unlikely to counterfactually matter can still be amazingly good. And the classic example, just voting if you're in a tight race, say, in a swing state in the United States, and it could go either way. Yeah. It might be pretty unlikely that you are the single swing vote, but it could be one in 100,000. And that's not crazy.SARAH: It doesn't take very much effort to vote, though.AARON: Yeah, sure. But I think the core justification, also, the stakes are proportionally higher here, so maybe that accounts for some. But, yes, you're absolutely right. Definitely different amounts of effort.SARAH: Putting in any effort to saving the world from AI. I wouldn't say that. I wouldn't say that I'm sacrificing.AARON: I don't even know if I like. No. Maybe it doesn't feel like a sacrifice. Maybe it isn't. But I do think there's, like, a lot. There's at least something to be. I don't know if this really checks out, but I would, like, bet that it does, which is that more reasonably, at least calibrated. I wanted to say reasonably well informed. But really what it is is, like, some level of being informed and, like, some level of knowing what you don't know or whatever, and more just like, normal. Sorry. I hope normal is not like a bat. I'm saying not like tech Bros, I guess so more like non tech bros. People who are not coded as tech bros. 
Talking about this on a public platform just seems actually, in fact, pretty good.SARAH: As long as we like, literally just people that aren't men as well. No offense.AARON: Oh, no, totally. Yeah.SARAH: Where are all the women? There's a few.AARON: There's a few that are super. I don't know, like, leaders in some sense, like Ajeya Cotra and Katja Grace. But I think the last EA survey was a third. Or I could be butchering this or whatever. And maybe even within that category, there's some variation. I don't think it's 2%.SARAH: Okay. All right. Yeah.AARON: Like 15 or 20% which is still pretty low.SARAH: No, but that's actually better than I would have thought, I think.AARON: Also, Twitter is, of all the social media platforms, especially male. I don't really know.SARAH: Um.AARON: I don't like Instagram, I think.SARAH: I wonder, it would be interesting to see whether or not that's much, if it's become more male dominated since Elon Musk took over.AARON: It's not a huge difference, but who knows?SARAH: I don't know. I have no idea. I have no idea. It would just be interesting to know.AARON: Okay. Wait. Also, there's no scheduled time. I'm very happy to keep talking or whatever, but as soon as you want to take a break or hop off, just like. Yeah.SARAH: Oh, yeah. I'm in no rush.AARON: Okay, well, I don't know. We've talked about the two obvious candidates. Do you have a take or something? Want to get out to the world? It's not about AI or obesity or just a story you want to share.SARAH: These are my two pet subjects. I don't know anything else.AARON: I don't believe you. I know you know about house plants.SARAH: I do. A secret, which you can't tell anyone, is that I actually only know about house plants that are hard to kill, and I'm actually not very good at taking care of them.AARON: Well, I'm glad it's house plants in that case, rather than pets. Whatever.SARAH: Yeah. I mean, I have killed some sea monkeys, too, but that was a long time ago.AARON: Yes. 
So did I, actually.SARAH: Did you? I feel like everyone has. Everyone's got a little sea monkey graveyard in their past.AARON: New cause area.SARAH: Are there more shrimp or more sea monkeys? That's the question.AARON: I don't even know what even. I mean, are they just plankton?SARAH: No, they're not plankton.AARON: I know what sea monkeys are.SARAH: There's definitely a lot of them because they're small and insignificant.AARON: Yeah, but I also think we don't. It depends if you're talking about in the world, which I guess probably like sea monkeys or farmed for food, which is basically like. I doubt these are farmed either for food or for anything.SARAH: Yeah, no, you're probably right.AARON: Or they probably are farmed a tiny bit for this niche little.SARAH: Or they're farmed to sell in aquariums for kids.AARON: Apparently. They are a kind of shrimp, but they were bred specifically to, I don't know, be tiny or something. I'm just skimming that, Wikipedia. Here.SARAH: Sea monkeys are tiny shrimp. That is crazy.AARON: Until we get answers, tell me your life story in whatever way you want. It doesn't have to be like. I mean, hopefully not. Don't straight up lie, but wherever you want to take that.SARAH: I'm not going to lie. I'm just trying to think of ways to make it spicier because it's so average. I don't know what to say about it.AARON: Well, it's probably not that average, right? I mean, it might be average among people you happen to know.SARAH: Do you have any more specific questions?AARON: Okay, no. Yeah, hold on. I have a meta point, which is like, I think the people who are they have a thing on the top of their mind, and if I give any sort of open ended question whatsoever, they'll take it there and immediately just start slinging hot takes. But then other people, I think, this category is very EA. People who aren't, especially my sister, they're like, “No, I have nothing to talk about. 
I don't believe that.” But they're not, I guess, as comfortable.SARAH: No, I mean, I have. Something needs to trigger them in me. Do you know what I mean? Yeah, I need an in.AARON: Well, okay, here's one. Is there anything you're like, “Maybe I'll cut this. This is kind of, like narcissistic. I don't know. But is there anything you want or curious to ask?” This does sound kind of weird. I don't know. But we can cut it if need be.SARAH: What does the looking glass in your Twitter name mean? Because I've seen a bunch of people have this, and I actually don't know what it means, but I was like, no.AARON: People ask this. I respond to a tweet that's like, “What does that like?” At least, I don't know, once every month or two. Or know basically, like Spencer Greenberg. I don't know if you're familiar with him. He's like a sort of.SARAH: I know the know.AARON: He literally just tweeted, like a couple years ago. Put this in your bio to show that you really care about finding the truth or whatever and are interested in good faith conversations. Are you familiar with the scout mindset?SARAH: Yeah.AARON: Julia Galef. Yeah. That's basically, like the short version.SARAH: Okay.AARON: I'm like, yeah, all right. And there's at least three of us who have both a magnifying glass. Yeah. And a pause thing, which is like, my tightest knit online community I guess.SARAH: I think I've followed all the pause people now. I just searched the emoji on Twitter, and I just followed everyone. Now I can't find. And I also noticed when I was doing this, that some people, if they've suspended their account or they're taking time off, then they put a pause in their thing. So I was, like, looking, and I was like, oh, these are, like, AI people. But then they were just, like, in their bio, they were, like, not tweeting until X date. This is a suspended account. And I was like, I see we have a messaging problem here. Nice. I don't know how common that actually.AARON: Was. I'm glad. 
That was, like, a very straightforward question. Educated the masses. Max Alexander said Glee. Is that, like, the show? You can also keep asking me questions, but again, this is like.SARAH: Wait, what did he say? Is that it? Did he just say glee? No.AARON: Not even a question mark. Just the word glee.SARAH: Oh, right. He just wants me to go off about Glee.AARON: Okay. Go off about. Wait, what kind of Glee are we? Vaguely. This is like a show or a movie or something.SARAH: Oh, my God. Have you not seen it?AARON: No. I mean, I vaguely remember, I think, watching some TV, but maybe, like, twelve years ago or something. I don't know.SARAH: I think it stopped airing in, like, maybe 2015?AARON: 16. So go off about it. I don't know what I. Yeah, I.SARAH: Don't know what to say about this.AARON: Well, why does Max think you might have a take about Glee?SARAH: I mean, I don't have a take about. Just see the thing. See? No, not even, like, I am just transparently extremely lame. And I really like cheesy. I'm like. I'm like a musical theater kid. Not even ironically. I just like show tunes. And Glee is just a show about a glee club at a high school where they sing show tunes and there's, like, petty drama, and people burst into song in the hallways, and I just think it's just the most glorious thing on Earth. That's it. There are no hot takes.AARON: Okay, well, that's cool. I don't have a lot to say, unfortunately, but.SARAH: No, that's totally fine. I feel like this is not a spicy topic for us to discuss. It's just a good time.AARON: Yeah.SARAH: Wait.AARON: Okay. Yeah. So I do listen to Hamilton on Spotify.SARAH: Okay.AARON: Yeah, that's about it.SARAH: I like Hamilton. I've seen it three times. Oh.AARON: Live or ever. Wow. Cool. Yeah, no, that's okay. Well, what do people get right or wrong about theater kids?SARAH: Oh, I don't know. 
I think all the stereotypes are true.AARON: I mean, that's generally true, but usually, it's either over moralized, there's like a descriptive thing that's true, but it's over moralized, or it's just exaggerated.SARAH: I mean, to put this in more context, I used to be in choir. I went every Sunday for twelve years. And then every summer we do a little summer school and we go away and put on a production. So we do a musical or something. So I have been. What have I been? I was in Guys and Dolls. I think I was just in the chorus for that. I was the reverend in Anything Goes. But he does unfortunately get kidnapped in like the first five minutes. So he's not a big presence. Oh, I've been Tweedledum in Alice in Wonderland. I could go on, but right now as I'm saying this, I'm looking at my notice board and I have two playbills from when I went to Broadway in April where I saw Funny Girl and Hadestown.SARAH: I went to New York.AARON: Oh, cool. Oh yeah. We can talk about when you're moving to the United States. However.SARAH: I'm not going to do that. Okay.AARON: I know. I'm joking. I mean, I don't know.SARAH: I don't think I'm going to do that. I don't know. It just seems like you guys have got a lot going on over there. It seems like things aren't quite right with you guys. Things aren't quite right with us either.AARON: No, I totally get this. I think it would be cool. But also I completely relate to not wanting to. I've lived within 10 miles of one. Not even 10 miles, 8 miles in one location. Obviously gone outside of that. But my entire life.SARAH: You've just always lived in DC.AARON: Yeah, either in DC or. Sorry. But right now in Maryland, it's like right next to DC on the Metro or at Georgetown University, which is in DC. Trying to think, would I move to the UK? Like I could imagine situations that would make me move to the UK. But it would still be annoying. 
Kind of.SARAH: Yeah, I mean, I guess it's like they're two very similar places, but there are all these little cultural things which I feel like kind of trip you up.AARON: I don't to. Do you want to say what?SARAH: Like I think people, I just like, I don't know. I don't have that much experience because I've only been to America twice. But people seem a lot more sincere in a way that you don't really get that. Like people are just never really being upfront. And in America, I just got the impression that people just have less of a veneer up, which is probably a good thing. But it's really hard to navigate if you're not used to it or something. I don't know how to describe that.AARON: Yeah, I've definitely heard this at least. And yeah, I think it's for better and for worse.SARAH: Yeah, I think it's generally a good thing.AARON: Yeah.SARAH: But it's like there's this layer of cynicism or irony or something that is removed and then when it's not there, it's just everything feels weak. I can't describe it.AARON: This is definitely, I think, also like an EA rationalist thing. I feel like I'm pretty far on the spectrum. Towards the end of surgical niceties are fine, but I don't know, don't obscure what you really think unless it's a really good reason to or something. But it can definitely come across as being rude.SARAH: Yeah. No, but I think it's actually a good rule of thumb to obscure what you. It's good to try not to obscure what you think most of the time, probably. I don't know, but I would love to go over temporarily for like six months or something and just hang out for a bit. I think that'd be fun. I don't know if I would go back to New York again. Maybe. I like the bagels there.AARON: I should have a place. Oh yeah. Remember, I think we talked at some point. We can cut this out if you like. Don't if either of us doesn't want it in. But we discussed, oh yeah, I should be having a place. You can. I emailed the landlord like an hour before this. 
Hopefully, probably more than 50%. That is still an offer. Yeah, probably not for all six months, but I don't know.SARAH: I would not come and sleep on your sofa for six months. That would be definitely impolite and very weird.AARON: Yeah. I mean, my roommates would probably grumble.SARAH: Yeah. They would be like.AARON: Although I don't know. Who knows? I wouldn't be shocked if people were actually like, whatever somebody asked for as a question. This is what he said. I might also be interested in hearing how different backgrounds. Wait, sorry. This is not good grammar. Let me try to parse this. Not having a super hardcore EA AI rationalist background shape how you think or how you view AI as rationality?SARAH: Oh, that's a good question. I think it's more happening the other way around, the more I hang around in these circles. You guys are impacting how I think.AARON: It's definitely true for me as well.SARAH: Seeping into my brain and my language as well. I've started talking differently. I don't know. That's a good question, though. Yeah. One thing that I will say is that there are certain things that I find irritating about the EA way of style of doing things. I think one specific, I don't know, the kind of like hand-wringing about everything. And I know that this is kind of the point, right? But it's kind of like, you know, when someone's like, I want to take a stance on something, but then whenever they want to take a stance on something, they feel the need to write like a 10,000 word blog post where they're thinking about the second and third and fifth order effects of this thing. And maybe this thing that seems good is actually bad for this really convoluted reason. That's just so annoying.AARON: Yeah.SARAH: Also understand that maybe that is a good thing to do sometimes, but it just seems like, I don't know how anyone ever gets anywhere. 
It seems like everyone must be paralyzed by indecision all the time because they just can't commit to ever actually just saying anything.AARON: I think this kind of thing is really good if you're trying to give away a billion dollars. Oh yes, I do want the billion dollar grantor to be thinking through second and third order effects of how they give away their billion dollars. But also, no, I am super. The words on the tip of my tongue, not overwhelmed but intimidated when I go on the EA forum because the posts, none of them are like normal, like five paragraph essays. Some of them are like, I think one of them I looked up for fun because I was going to make a meme about it and still will. Probably was like 30,000 words or something. And even the short form posts, which really gets me kind of not even annoyed. I don't know, maybe kind of annoyed is that the short form posts, which is sort of the EA forum version of Twitter, are way too high quality, way too intimidating. And so maybe I should just suck it up and post stuff anyway more often. It just feels weird. I totally agree.SARAH: I was also talking to someone recently about how I lurked on the EA forum and LessWrong for months and months and I couldn't figure out the upvoting system and I was like, am I being stupid or why are there four buttons? And I was like, well, eventually I had to ask someone because I couldn't figure it out. And then he explained it to me and I was like, that is just so unnecessary. Like, just do it.AARON: No, I do know what you mean.SARAH: I just think it's annoying. It pisses me off. I just feel like sometimes you don't need to add more things. Sometimes less is good. Yeah, that's my hot take. Nice things.AARON: Yeah, that's interesting.SARAH: But actually, a thing that I like that EA's do is the constant hedging and caveating. I do find it kind of adorable. 
I love that because it's like you're having to constantly acknowledge that you probably didn't quite articulate what you really meant and that you're not quite making contact with reality when you're talking. So you have to clarify that you probably were imprecise when you said this thing. It's unnecessary, but it's kind of amazing.AARON: No, it's definitely. I am super guilty of this because I'll give an example in a second. I think I've been basically trained to try pretty hard, even in normal conversation with anybody, to just never say anything that's literally wrong. Or at least if I do caveat it.AARON: I was driving home; my parents and I had visited our grandparents, and we were driving back past a cruise ship that was in a harbor. And my mom, who was driving at the time, said, “Oh, Aaron, can you see if there's anyone on there?” And I immediately responded like, “Well, there's probably at least one person.” Obviously, that's not what she meant. But that was my technical best guess. It's like, yes, there probably are people on there, even though I couldn't see anybody on the decks or in the rooms. Yeah, there's probably a maintenance guy. Felt kind of bad.SARAH: You can't technically exclude that there are, in fact, no people.AARON: Then I corrected myself. But I guess I've been trained into giving that as my first reaction.SARAH: Yeah, I love that. I think it's a waste of words, but I find it delightful.AARON: It does go too far. People should be more confident. I wish that, at least sometimes, people would say, “Epistemic status: Want to bet?” or “I am definitely right about this.” Too rarely do we hear, “I'm actually pretty confident here.”SARAH: Another thing is, people are too liberal with using probabilities. 
The meaning of saying there is an X percent chance of something happening is getting watered down by people constantly saying things like, “I would put 30% on this claim.” Obviously, there's no rigorous method that's gone into determining why it's 30 and not 35. That's a problem and people shouldn't do that. But I kind of love it.AARON: I can defend that. People are saying upfront, “This is my best guess. But there's no rigorous methodology.” People should take their word for that. In some parts of society, it's seen as implying that a numeric probability came from a rigorous model. But if you say, “This is my best guess, but it's not formed from anything,” people should take their word for that and not refuse to accept them at face value.SARAH: But why do you have to put a number on it?AARON: It depends on what you're talking about. Sometimes probabilities are relevant and if you don't use numbers, it's easy to misinterpret. People would say, “It seems quite likely,” but what does that mean? One person might think “quite reasonably likely” means 70%, the other person thinks it means 30%. Even though it's weird to use a single number, it's less confusing.SARAH: To be fair, I get that. I've disagreed with people about what the word “unlikely” means. Someone's pulled out a scale that the government uses, or intelligence services use to determine what “unlikely” means. But everyone interprets those words differently. I see what you're saying. But then again, I think people in AI safety talking about P(Doom) was making people take us less seriously, especially because people's probabilities are so vibey.AARON: Some people are, but I take Paul Christiano's word seriously.SARAH: He's a 50/50 kind of guy.AARON: Yeah, I take that pretty seriously. Obviously, it's not as simple as him having a perfect understanding of the world, even after another 10,000 hours of investigation. But it's definitely not just vibes, either.SARAH: No, I came off wrong there. 
I don't mean that everyone's understanding is just vibes.
AARON: Yeah.
SARAH: If you were looking at it from the outside, it would be really difficult to distinguish between the ones that are vibes and the ones that are rigorous, unless you carefully parsed all of it and evaluated everyone's background, or looked at the model yourself. If you're one step removed, it looks like people just spitting out random, arbitrary numbers everywhere.
AARON: Yeah. There's also the question of whether P(doom) is too weird or silly, or if it could be easily dismissed as such.
SARAH: Exactly. The moment anyone unfamiliar with this discussion sees it, they're almost definitely going to dismiss it. They won't see it as something they need to engage with.
AARON: That's a very fair point. Aside from the social aspect, it's also a large oversimplification. There's a spectrum of outcomes that we lump into doom and not-doom. While this binary approach can be useful at times, it's probably overdone.
SARAH: Yeah, because when some people say doom, they mean everyone dies, while others mean everyone dies plus everything is terrible. And no one specifies what they mean. It is silly. But I also find it kind of funny and I kind of love it.
AARON: I'm glad there's something like that. So it's not perfect. The more straightforward thing would be to say the probability that existential risk from AI comes to pass. That's the long version, whatever.
SARAH: If I was in charge, I would probably make people stop using P(doom). I think it's better to say it the long way around. But obviously I'm not in charge. And I think it's funny and kind of cute, so I'll keep using it.
AARON: Maybe I'm willing to go along and try to start a new norm. Not spend my whole life on it, but say, I think this is bad for X, Y, and Z reasons.
I'll use this other phrase instead and clarify when people ask.
SARAH: You're going to need Twitter Premium because you're going to need a lot more characters.
AARON: I think there's a shorthand, which is like P(x-risk) or P(AI x-risk).
SARAH: Maybe it's just the word doom that's a bit stupid.
AARON: Yeah, that's a term out of the Bay Area rationalists.
SARAH: But then I also think it kind of makes the whole thing seem less serious. People should be indignant to hear that this meme is being used to trade probabilities about the likelihood that they're going to die and their families are going to die. This has been an in-joke in this weird niche circle for years and they didn't know about it. I'm not saying that in a way to morally condemn people, but if you explain this to people… People just go to dinner parties in Silicon Valley and talk about this weird meme thing, and what they really mean is the odds that everyone's going to prematurely die. People should be outraged by that, I think.
AARON: I disagree that it's a joke. It is a funny phrase, but people really do stand by their beliefs.
SARAH: No, I totally agree with that part. I'm not saying that people are not being serious when they give their numbers, but I feel like there's something. I don't know how to put this in words. There's something outrageous about the fact that for outsiders, this conversation has been happening for years and people have been using this tongue-in-cheek phrase to describe it, and 99.9% of people don't know that's happening. I'm not articulating this very well.
AARON: I see what you're saying. I don't actually think it's like. I don't know a lot of jargon.
SARAH: But when I first found out about this, I was outraged.
AARON: I honestly just don't share that intuition. But that's really good.
SARAH: No, I don't know how to describe this.
AARON: I think I was just a little bit indignant, perhaps.
SARAH: Yeah, I was indignant about it.
I was like, you guys have been at social events making small talk by discussing the probability of human extinction all this time, and I didn't even know. I was like, oh, that's really messed up, guys.
AARON: I feel like I'm standing up for the rationalists here, because it was always out in the open. No one was stopping you from going on LessWrong or whatever. It wasn't behind closed doors.
SARAH: Yeah, but no one ever told me about it.
AARON: Yeah, that's like a failure of outreach, I suppose.
SARAH: Yeah. I think maybe I'm talking more about. Maybe the people that I'm mad at are the people who are actually working on capabilities and using this kind of jargon. Maybe I'm mad at those people. They're fine.
AARON: Do we have more questions? I think we might have more questions. We have one more. Okay, sorry, but keep going.
SARAH: No, I'm going to stop making that point now because I don't really know what I'm trying to say and I don't want to be controversial.
AARON: Controversy is good for views. Not necessarily for you. No, thank you for that. Yes, that was a good point. I think it was. Maybe it was wrong. I think it seems right.
SARAH: It was probably wrong.

Shrimp Welfare: A Serious Discussion

AARON: I don't know what she thinks about shrimp welfare. Oh, yeah. I think it's a general question, but let's start with that. What do you think about shrimp? Well, today.
SARAH: Okay. Is this an actual cause area, or is this a joke about how if you extrapolate utilitarianism to its natural conclusion, you would really care about shrimp?
AARON: No, there's a charity called the Shrimp Welfare Initiative or Project. I think it's the Shrimp Welfare Project. I can actually have a rant here about how it's a meme that people find amusing. It is a serious thing, but I think people like the meme more than they're willing to adjust their donations in light of it. This is kind of wrong and at least distasteful. No, but there's an actual thing, if you Google Shrimp Welfare Project.
Yeah, it's definitely a thing, but it's only a couple of years old. And it's also kind of a meme because it does work in both ways. It sort of shows how we're weird, but in the sense that we are willing to care about things that are very different from us. Not like we're threatening other people. That's not a good description.
SARAH: Is the extreme version of this position that we should put more resources into improving the lives of shrimp than into improving the lives of people, just because there are so many more shrimp? Are there people that actually believe that?
AARON: Well, I believe some version of that, but it really depends on who the ‘we' is there.
SARAH: Should humanity be putting more resources?
AARON: No one believes that as far as I know.
SARAH: Okay. Right. So what is the most extreme manifestation of the shrimp welfare position?
AARON: Well, I feel like my position is kind of extreme, and I'm happy to discuss it. It's easier than speculating about what the more extreme ones are. I don't think any of them are that extreme, I guess, from my perspective, because I think I'm right.
SARAH: Okay, so what do you believe?
AARON: I think that for most people who have already decided to donate, say, $20, if they are considering where to donate it, it would be morally better if they gave it to the Shrimp Welfare Project than if they gave it to any of the commonly cited EA organizations.
SARAH: Malaria nets or whatever.
AARON: Yes. I think $20 of malaria nets versus $20 of shrimp. I can easily imagine a world where it would go the other way. But given the actual situation, the $20 of shrimp is much better.
SARAH: Okay. Is it just purely because there are just more shrimp? How do we know how much shrimp suffering there is in the world?
AARON: No, this is an excellent question. The numbers are a key factor, but no, it's not as simple.
I definitely don't think one shrimp is worth one human.
SARAH: I'm assuming that it's based on the fact that there are so many more shrimp than there are people. I don't know how many shrimp there are.
AARON: Yeah, that's important, but at some level, it's just the margin. What I think is that when you're donating money, you should give to wherever it does the most good, whatever that means, whatever you think that means. But let's just leave it at that. The most good is what's morally best at the margin, which means you're not donating based on where you think the world should, or how you think the world should, expend its trillion-dollar wealth. All you're doing is adding $20 at this current level, given the actual world. And so part of it is what you just said, and also some new research from Rethink Priorities. Measuring suffering in reasonable ranges is extremely hard to do. But I believe it's difficult to do a better job on that than Rethink Priorities has, given what I've seen. I can provide some links. There are a few things to consider here: the numbers times the enormity of the suffering. I think there are a couple of key elements, including tractability. Are you familiar with the three-pronged concept people sometimes discuss, which encompasses importance, tractability, and neglectedness?
SARAH: Okay.
AARON: Importance is essentially what we just mentioned. Huge numbers and plausible amounts of suffering. When you try to do the comparison, it seems like they're a significant concern. Tractability is another factor. I think the best estimates suggest that a one-dollar donation could save around 10,000 shrimp from a very painful death.
SARAH: In that sense…
AARON: You could imagine that even if there were a hundred times more shrimp than there actually are, we have direct control over how they live and die because we're farming them. The industry is not dominated by wealthy players in the United States.
Many individual farmers in developing nations, if educated and provided with a more humane way of killing the shrimp, would use it. There's a lot of potential for improvement here. This is partly due to the last prong, neglectedness, which is really my focus.
SARAH: You're saying no one cares about the shrimp.
AARON: I'm frustrated that it's not taken seriously enough. One of the reasons why the marginal cost-effectiveness is so high is because large amounts of money are donated to the well-known organizations. But individual donors often overlook this. They ignore their marginal impact. If you want to see even a 1% shift towards shrimp welfare, the thing to do is to donate to shrimp welfare. Not donate $19 to human welfare and one dollar to shrimp welfare, which is perhaps what they think the overall portfolio should be.
SARAH: Interesting. I don't have a good reason why you're wrong. It seems like you're probably right.
AARON: Let me put the website in the chat. This isn't a fair comparison, since it's something I know more about.
SARAH: Okay.
AARON: On the topic of obesity, neither of us was more informed than the other. But I could have just made stuff up or said something logically fallacious.
SARAH: You could have told me that there were like 50 times the number of shrimp in the world than there really are. And I would have been like, sure, seems right.
AARON: Yeah. And I don't know, if I were in your position, I would say, “Oh, yeah, that sounds right.” But maybe there are other people who have looked into this way more than me that disagree, and I can get into why I think it's less true than you'd expect in some sense.
SARAH: I just wonder if there's like… This is like a deeply non-EA thing to say, so I don't know, maybe I shouldn't say it, but are there not any moral reasons? Is there not any good moral philosophy behind just caring more about your own species than other species? Sorry, but that's probably not right, is it?
There's probably no way to actually morally justify that, but it feels intuitively wrong. If you've got $20 and you're donating 19 of them to shrimp and one to children with malaria, it feels like there should be something wrong with that, but I can't tell you what it is.
AARON: Yeah, no, there is something wrong, which is that you should donate all 20 to the shrimp, because you're acting on the margin, for one thing. I do think that caring more about humans doesn't check out morally, but basically me and everybody I know in real life or whatever do just care way more about humans. It's hard to formalize or specify what you mean by “caring about” something. But, yeah, I think you can definitely just be a normal human who cares a lot about other humans, and still that's not negated by changing your $20 donation or whatever. Especially because there's nothing else that I do for shrimp. I think you should be like a kind person or something. I'm like an honest person, I think. Yeah, people should be nice to other humans. I mean, you should be nice in the sense of not beating them. But if you see a pigeon on the street, you don't need to say hi or whatever, give it a pet, because. I don't know. But yeah, you should be basically nice.
SARAH: You don't stop to say hi to every pigeon that you see on the way to anywhere.
AARON: I do, but I know most normal people don't.
SARAH: This is why I'm so late to everything, because I have to do it. I have to stop for every single one. No exceptions.
AARON: Yeah. Or how I think about it is sort of like a little bit of compartmentalization, which I think is just sort of a way to function normally and also sort of do what you think really checks out at the end of the day. Just like, okay, 99% of the time I'm going to just be like a normal person who doesn't care about shrimp. Maybe I'll refrain from eating them.
But actually, even that is like, I could totally see a person just still eating them and then doing this. But then during the 1% of the time where you're deciding how to give money away, the beneficiaries are going to be totally out of sight either way. This is like a neutral point, I guess, but it's still worth saying: then you can be like a hardcore effective altruist or whatever and give your money to the shrimp people.
SARAH: Do you have this set up as like a recurring donation?
AARON: Oh, no. Everybody should call me out as a hypocrite, because I haven't donated much money. But I'm trying to figure it out, actually, given that I haven't had a stable income ever. And maybe, hopefully, I will soon. But even then, it's still a part-time thing. I haven't been able to do the sort of standard 10% or more thing, and I'm trying to figure out what the best thing to do is, or how to balance, I guess, not luxury, but consumption on things that I… Well, to some extent, yeah. Maybe I'm just selfish by sometimes getting an Uber. That's totally true. I think I'm just a hypocrite in that respect. But mostly I think the trade-off is between saving, investing, and giving, based on the money that I have saved up from past things. So this is all sort of a defense of why I don't have a recurring donation going on.
SARAH: I'm not asking you to defend yourself, because I do not do that either.
AARON: I think if I were making enough money that I could give away $10,000 a year and plan on doing that indefinitely, I would still be unlikely to set up a recurring donation. What I would really want to do is, once or twice a year, really try to prioritize deciding how to give it away rather than making it the default. This has a real cost for charities. If you set up a recurring donation, they have more certainty, in some sense, about their future cash flow. But that's only good to do if you're really confident that you're going to want to keep giving there in the future.
I could learn new information that says something else is better. So I don't think I would do that.
SARAH: Now I'm just thinking about how many shrimp did you say it was per dollar?
AARON: Don't quote me. I didn't give an exact figure.
SARAH: It was like some big number. Right. Because I just feel like that's such a brainworm. Imagine if you let that actually get in your head, and then every time you spend some unnecessary amount of money on something you don't really need, you think about how many shrimp you just killed by getting an Uber or buying lunch out. That is so stressful. I think I'm going to try not to think about that.
AARON: I don't mean to belittle this. This is like a core, new-to-EA type of thinking. It's super natural and also troubling when you first come upon it. Do you want me to talk about how I, or other people, deal with that or take action?
SARAH: Yeah, tell me how to get the shrimp off my conscience.
AARON: Well, for one thing, you don't want to totally do that. But I think the main thing is that the salience of things like this just decreases over time. I would be very surprised if, even if you're still very engaged in the EA-adjacent communities or EA itself in five years, it would be as emotionally potent. Brains make things less salient over time. But I think the thing to do is basically to compartmentalize, in a sort of weird sense. Decide how much you're willing to donate. And it might be hard to do that, but that is sort of a process. Then you have that chunk of money, and you try to give it away the best you can under whatever you think the best ethics are. But then on the daily, you have this other set pot of money. You just are a normal person. You spend it as you wish. You don't think about it, or at least you try not to. And maybe if you notice that you have leftover money, then you can donate the rest of it. But I really do think picking how much to give should sort of be its own project.
And then you have a pile of money you can be a hardcore EA about.
SARAH: So you pick a cutoff point, and then you don't agonize over anything over and above that.
AARON: Yeah. And then, I mean, the hard part is that if somebody says their cutoff point is like 1% of their income and they're making like $200,000, I don't know. Maybe their cutoff point should be higher. So there is a debate. It depends on that person's specific situation. Maybe if they have a kid or some super expensive disease, it's a different story. If you're just a random guy making $200,000, I think you should give more.
SARAH: Maybe you should be giving away enough to feel the pinch. Well, not even that. I don't think I'm going to do that. This is something that I do actually want to do at some point, but I need to think about it more and maybe get a better job.
AARON: Another thing is, if you're wanting to earn to give as a path to impact, you could think and strive pretty hard. Maybe talk to people and choose your education or professional development opportunities carefully to see if you can get a better-paying job. That's just much more important than changing how much you give from 10% to 11% or something. You should have this macro-level optimization: how can I have more money to give? It depends what life stage you're at, but if you had just graduated college, or say you're a junior in college or something, it could make sense to spend a good amount of time figuring out what that path might look like.
AARON: I'm a huge hypocrite because I definitely haven't done all this nearly as much as I should, but I still endorse it.
SARAH: Yeah, I think it's fine to say what you endorse doing in an ideal world, even if you're not doing that. That's fine.
AARON: For anybody listening, I tweeted a while ago asking if anyone has resources on how to think about giving away wealth. I'm not very wealthy but have some amount of savings. It's more than I really need.
At the same time, maybe I should be investing it, but EA orgs feel like they can't invest it, because there's potentially a lot of blowback if they make poor investments, even though it would be higher expected value. There's also the question of, okay, having some amount of savings allows me to take somewhat higher-risk but higher-value opportunities, because I have a cushion. But I'm very confused about what I should do here. People should DM me on Twitter or anywhere if they have ideas.
SARAH: I think you should calculate how much you need to cover your very basic needs. Maybe you should work out, say, if you were working 40 hours a week in a minimum wage job, how much would you make then? And then you should keep that for yourself. And then the rest should definitely all go to the shrimp. Every single penny. All of it.
AARON: This is pretty plausible. Just to make it more complicated, there's also the thing that my estimates, or my best guesses, of the best charities to give to have changed over time. And so there are two competing forces. One is that I might get wiser and more knowledgeable as time goes on. The other one is that, in general, giving now is better than giving later, all else equal, for a couple of reasons, the main one just being that the charities don't know that you're going to give later.
AARON: So they can plan for the future much better if they get money now. And also there are just higher-leverage opportunities, or higher value-per-dollar opportunities, now than there will be later, for a couple of reasons I don't really need to get into. This is what makes it really complicated. So I've donated in the past to places that I don't think, even at the time, were the best options. So then there's a question of, okay, how long do I save this money?
Do I sit on it for months until I'm pretty confident? Like a year?
AARON: I do think that, probably over the course of zero to five years or something, becoming more confident or changing your mind is the stronger effect, compared with how much better it is for the charities to get the money now instead of later. But also that's weird, because you're never committing at all. Sometimes you might decide to give it away, and maybe you won't. Maybe at that time you're like, “Oh, that's what I want. A car, a house, whatever.” It's less salient or something. Maybe something bad happened with EA and you no longer identify that way. Yeah, there's a lot of really thorny considerations. Sorry, I'm talking way too much.
SARAH: Hang on, are you factoring AI timelines into this?
AARON: That makes it even more sketchy. But that could also go both ways. On one hand, you have the fact that if you don't give away your money now and you die with it, it's never going to do any good. The other thing is that especially high-leverage opportunities might come in the future. I can imagine, I could make something up: OpenPhil needs as much money as it can get to do X, Y, and Z, it's really important right now, but I won't know that until a few years down the line. So just like everything else, it doesn't neatly wash out.
SARAH: What do you think the AGI is going to do to the shrimp? I reckon it's probably pretty neat, like one shrimp per paperclip. Maybe you could get more. I wonder what the shrimp-to-paperclip conversion rate is.
AARON: Has anyone looked into that morally? I think like one to zero. I don't think in terms of money. You could definitely price that. I have no idea.
SARAH: I don't know. Maybe I'm not taking this as seriously as I should be because I'm.
AARON: No, I mean, humor is good. When people are giving away money or deciding what to do, they should be serious.
But joking and humor is good. Sorry, go ahead.
SARAH: No, you go ahead.
AARON: I had a half-baked idea. At EA Global, they should have a comedy show where people roast everybody, but it's a fundraiser. You get 100 people to attend, and they have a bidding contest to get into the comedy show. That was my original idea. Or they could just have a normal comedy show. I think that'd be cool.
SARAH: Actually, I think that's a good idea, because you guys are funny. There is a lot of wit on this side of Twitter. I'm impressed.
AARON: I agree.
SARAH: So I think that's a very good idea.
AARON: Okay. Dear Events team: hire Aaron Bergman, professional comedian.
SARAH: You can just give them your Twitter as a source for how funny you are, and that clearly qualifies you to set this up. I love it.
AARON: This is not important or related to anything, but I used to be a good juggler, for entertainment purposes. I have this video. Maybe I should make sure the world can see it. It's like a talent show. So maybe I can do that instead.
SARAH: Juggling. You definitely should make sure the world has access to this footage.
AARON: It had more views than I expected. It wasn't five views. It was 90 or something, which is still nothing.
SARAH: I can tell you a secret right now if you want. That relates to Max asking in the chat about Glee.
AARON: Yes.
SARAH: We'll also have to edit this bit out, but me having a public meltdown over AI was the second time that I've ever blown up on the Internet. The first time being… I can't believe I'm telling you this. I think I'm delirious right now. Were you ever in any fandoms as a teenager?
AARON: No.
SARAH: Okay. Were you ever on Tumblr?
AARON: No. I sort of know what the cultural vibes were. I sort of know what you're referring to. There are people who like Harry Potter stuff, and bands, like K-pop, stuff like that.
SARAH: So people would make these fan videos where they'd take clips from TV shows and then edit them together to music.
Sometimes people would edit the clips to make it look like something had happened in the plot of the show that hadn't actually happened. For example, say, what if X character had died? And then you edit the clips together to try and make it look like they've died. And you put a sad song, ‘How to Save a Life' by The Fray or something, over the top. And then you put it on YouTube.
AARON: Sorry, tell me what I should search, or just send the link here. I'm sending my link.
SARAH: Oh, no, this doesn't exist anymore. It does not exist anymore. Right? So, say you're, like, eleven or twelve years old and you do this, and you don't even have a mechanism to download videos because you don't know how to do technology. Instead, you take your little iPod Touch and you just play a YouTube video on your screen, and you literally just film the screen with your iPod Touch, and that's how you're getting the clips. It's kind of shaky because you're holding the camera anyway.
SARAH: Then you edit it together in the iMovie app on your iPod Touch, and then you put it on the Internet, and then you just forget about it. You forget about it. Two years later, you're like, oh, I wonder what happened to that YouTube account? And you log in, and this little video that you've made with edited clips that you've filmed off the screen of your laptop to ‘How to Save a Life' by The Fray, with clips from Glee in it, has nearly half a million views.
AARON: Nice. Love it.
SARAH: Embarrassing, because this is like two years later. And then all the comments were like, oh my God, this was so moving. This made me cry. And then obviously, some of them were hating and being like, do you not even know how to download video clips? Like, what? And then you're so embarrassed.
AARON: I could totally see it. Creative, and honestly a reasonable solution. Yeah.
SARAH: So that's my story of how I went viral when I was, like, twelve.
AARON: It must have been kind of overwhelming.
SARAH: Yeah, it was a bit.
And you can tell that, my time, it's like twenty to eleven at night, and now I'm starting to really go off on one and talk about weird things.
AARON: It's been like an hour. So, yeah, we can wrap up. And I always say this, but it's actually true: there's a low bar, like, low stakes or low threshold, for doing this and recording some of the time.
SARAH: Yeah, probably. We'll have to get rid of the part about how I went viral on YouTube when I was twelve. I'll sleep on that.
AARON: Don't worry. I'll send the transcription at some point soon.
SARAH: Yeah, cool.
AARON: Okay, lovely. Thank you for staying up late into the night for this.
SARAH: It's not that late into the night. I'm just, like, lame and go to bed early.
AARON: Okay, cool. Yeah, I know. Yeah, for sure. All right, bye.

Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

Modern Wisdom
#705 - Spencer Greenberg - The 5 Most Effective Techniques To Hack Your Habits

Modern Wisdom

Play Episode Listen Later Nov 11, 2023 86:30


Spencer Greenberg is a mathematician, a writer, and the founder of ClearerThinking.org. First we make our habits, and then our habits make us. But what is the best way to step into this recursive loop and take charge of the most powerful force in our lives? Thankfully, Spencer just completed a huge new study testing tons of different techniques. Expect to learn how useful personality tests are, Spencer's biggest insights from 450 people trying every habit strategy ever invented, how you can better integrate your subconscious into decision making, why becoming wise is genuinely important, how useful intuition really is, when you should trust your gut and when you should override it, and much more... Sponsors: Get 10% discount on all Gymshark's products at https://bit.ly/sharkwisdom (use code: MW10) Get over 37% discount on all products site-wide from MyProtein at https://bit.ly/proteinwisdom (use code: MODERNWISDOM)  Get 20% discount on House Of Macadamias' nuts at https://houseofmacadamias.com/modernwisdom (use code MW20) Extra Stuff: Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact/ Learn more about your ad choices. Visit megaphone.fm/adchoices

The Partially Examined Life Philosophy Podcast
Ep. 328: Guest Yascha Mounk Against Identity Politics (Part Two)

The Partially Examined Life Philosophy Podcast

Play Episode Listen Later Nov 6, 2023 59:41


Continuing on The Identity Trap (2023). Which works better to achieve social progress: classical liberalism, or strategies involving emphasis of identity group membership? Do we even have to pick a side, or can we pragmatically choose strategies from whichever philosophy most effectively addresses the situation in question? We discuss cultural appropriation, free speech, standpoint epistemology, and more. Get more at partiallyexaminedlife.com. Visit partiallyexaminedlife.com/support to get ad-free episodes and bonus content including a supporter-exclusive, guest-free part three to this discussion. Listen to a preview. Sponsor: Learn about St. John's College at sjc.edu/pel. Check out the Clearer Thinking Podcast with Spencer Greenberg.

The One You Feed
How to Integrate Behavior Change with Your Values

The One You Feed

Play Episode Listen Later Aug 8, 2023 62:49 Transcription Available


Spencer Greenberg and Eric discuss how to integrate behavior change with your values. They explore the importance of focusing on the process rather than the end goal and share practical strategies for forming habits that will help you live according to your values. In this episode, you'll be able to: Identify the underlying values that lead to your decisions, and build a strategy around them. Recognize the crucial role regular self-reflection plays in cultivating these improved practices. Understand the significance of prioritizing the process, not just the end goal, in forming habits. Navigate the next steps when facing conflicting values. Understand the various frameworks for behavior change and the ten conditions for change. To learn more, click here! See omnystudio.com/listener for privacy information.

EARadio
Workshop: Improve your decision-making | Spencer Greenberg | EAG Bay Area 23

EARadio

Play Episode Listen Later Apr 7, 2023 21:23


Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

80,000 Hours Podcast with Rob Wiblin
#147 – Spencer Greenberg on stopping valueless papers from getting into top journals

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Mar 24, 2023 158:08


Can you trust the things you read in published scientific research? Not really. About 40% of experiments in top social science journals don't get the same result if the experiments are repeated. Two key reasons are 'p-hacking' and 'publication bias'. P-hacking is when researchers run a lot of slightly different statistical tests until they find a way to make findings appear statistically significant when they're actually not — a problem first discussed over 50 years ago. And because journals are more likely to publish positive than negative results, you might be reading about the one time an experiment worked, while the 10 times it was run and got a 'null result' never saw the light of day. The resulting phenomenon of publication bias is one we've understood for 60 years. Today's repeat guest, social scientist and entrepreneur Spencer Greenberg, has followed these issues closely for years. Links to learn more, summary and full transcript. He recently checked whether p-values, an indicator of how likely a result was to occur by pure chance, could tell us how likely an outcome would be to recur if an experiment were repeated. From his sample of 325 replications of psychology studies, the answer seemed to be yes. According to Spencer, "when the original study's p-value was less than 0.01 about 72% replicated — not bad. On the other hand, when the p-value is greater than 0.01, only about 48% replicated. A pretty big difference." To do his bit to help get these numbers up, Spencer has launched an effort to repeat almost every social science experiment published in the journals Nature and Science, and see if they find the same results. But while progress is being made on some fronts, Spencer thinks there are other serious problems with published research that aren't yet fully appreciated. One of these Spencer calls 'importance hacking': passing off obvious or unimportant results as surprising and meaningful. 
Spencer suspects that importance hacking of this kind causes a similar amount of damage to the issues mentioned above, like p-hacking and publication bias, but is much less discussed. His replication project tries to identify importance hacking by comparing how a paper's findings are described in the abstract to what the experiment actually showed. But the cat-and-mouse game between academics and journal reviewers is fierce, and it's far from easy to stop people exaggerating the importance of their work. In this wide-ranging conversation, Rob and Spencer discuss the above as well as: • When you should and shouldn't use intuition to make decisions. • How to properly model why some people succeed more than others. • The difference between “Soldier Altruists” and “Scout Altruists.” • A paper that tested dozens of methods for forming the habit of going to the gym, why Spencer thinks it was presented in a very misleading way, and what it really found. • Whether a 15-minute intervention could make people more likely to sustain a new habit two months later. • The most common way for groups with good intentions to turn bad and cause harm. • And Spencer's approach to a fulfilling life and doing good, which he calls “Valuism.” Here are two flashcard decks that might make it easier to fully integrate the most important ideas they talk about: • The first covers 18 core concepts from the episode • The second includes 16 definitions of unusual terms. Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell and Milo McGuire Transcriptions: Katy Moore
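Why does p-hacking work so reliably? The episode doesn't walk through the arithmetic, but the core mechanism is simple probability: each extra statistical test a researcher tries on pure noise is another chance to hit "significance" by luck. A minimal sketch (not from the episode, just the textbook multiple-comparisons calculation):

```python
# Why p-hacking inflates false positives: running many independent tests
# on pure noise makes it increasingly likely that at least one of them
# crosses the significance threshold by chance alone.

def prob_false_positive(num_tests: int, alpha: float = 0.05) -> float:
    """Chance of at least one 'significant' result across independent null tests."""
    return 1 - (1 - alpha) ** num_tests

for k in (1, 5, 20):
    print(f"{k:>2} tests: {prob_false_positive(k):.0%} chance of a 'significant' result")
# With the conventional alpha of 0.05, trying 20 test variants gives
# roughly a 64% chance of finding something "significant" in noise.
```

This is also why Spencer's replication numbers split on the p-value: results that only barely cleared the threshold are disproportionately likely to be these lucky draws.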

The Bayesian Conspiracy
Bayes Blast 5 – Importance Hacking

The Bayesian Conspiracy

Play Episode Listen Later Mar 19, 2023 12:22


Steven blasts Eneasz with Spencer Greenberg's description of a huge problem in science publication. Links: the short post Steven took his notes from; a Clearer Thinking podcast episode where they discuss this; Spencer as a guest on Two Psychologists, Four …

The Nonlinear Library
EA - Have there been any detailed, cross-cultural surveys into global moral priorities? by Amber Dawn

The Nonlinear Library

Play Episode Listen Later Feb 7, 2023 1:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have there been any detailed, cross-cultural surveys into global moral priorities?, published by Amber Dawn on February 6, 2023 on The Effective Altruism Forum. Has there been any research done (either within or outside of EA) about most people's moral priorities, and/or about the priorities of recipients of philanthropy? I'm thinking of things like, surveys of large groups of people across many cultures which asked them ‘is it more important to be healthy, or wealthy, or have more choices, or prevent risks that might hurt your grandchildren?' What motivates this question is something like: there's been a lot of talk about democratizing EA. But even if more EA community members had input into funding decisions, that's still not really democratic. I want to know: what does the average person worldwide think that wealthy philanthropists should do with their money? GiveWell commissioned some research into the preferences of people in some low income communities, similar to the beneficiaries of many of their top charities. However, they only asked about whether they valued saving the lives of younger vs older people, and how much they valued saving years of life vs increasing income. It would be interesting to read more holistic surveys that asked about other things that people might value, including things that charities might not straightforwardly be able to provide (like more political participation, or less oppression). (You could use as a basis, for example, the capability approach, or Spencer Greenberg's work on intrinsic values.) This might be useful for longtermists as well as those who focus on global health and poverty in the nearer term. 
You could ask people how much they value risk mitigation vs increases in wellbeing, for example; or you could use people's answers to try to shape a future that fits more people's values. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

EARadio
How Can You Become a Clearer Thinker? | Spencer Greenberg | EAGxVirtual 2022

EARadio

Play Episode Listen Later Feb 6, 2023 47:02


A fireside chat with Spencer Greenberg. Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

Two Psychologists Four Beers
Episode 98: Inspired Science (with Spencer Greenberg)

Two Psychologists Four Beers

Play Episode Listen Later Nov 23, 2022 70:51


Yoel and Alexa are joined by Spencer Greenberg, founder of the behavioral science startup incubator Spark Wave and host of the Clearer Thinking podcast. He describes how he became fascinated with psychology and behavior change, and how he's been working to provide empirically-backed strategies for everyday tasks, like making decisions or forming habits. He also offers an alternative perspective on open science, arguing that a phenomenon he calls "importance hacking" has been overshadowed by p-hacking in calls for science reform. Greenberg further challenges Alexa and Yoel to consider whether the "open scientist" will fall short of what can only be achieved by the truly "inspired scientist." Finally, Spencer has a major project in the works, and he gives us the honor of the big reveal. Special Guest: Spencer Greenberg.

The Nonlinear Library
EA - Podcast: The Left and Effective Altruism with Habiba Islam by Garrison

The Nonlinear Library

Play Episode Listen Later Oct 27, 2022 2:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Podcast: The Left and Effective Altruism with Habiba Islam, published by Garrison on October 27, 2022 on The Effective Altruism Forum. I recently rebooted my interview podcast, The Most Interesting People I Know (found wherever you find podcasts). I focus on EA and left-wing guests, and have been pretty involved in both communities for the last 5 years. Some example guests: Rutger Bregman, Leah Garcés, Lewis Bollard, Spencer Greenberg, Nathan Robinson, Malaika Jabali, Emily Bazelon, David Shor, and Eric Levitz. I just released a long conversation with Habiba Islam, an 80K career advisor and lefty, about the relationship between EA and the left. This is not an attempt to paper over differences between the two communities, or pretend that EA is more left-wing than it is. Instead, I tried to give an accurate description of both communities, where they are in hidden agreement, where they actually disagree, and what each can learn from the other. Habiba is so sharp and thoughtful throughout the conversation. We're very lucky to have her! I hope this could be a good reference text as well as an onboarding ramp for leftists who might be open to EA. I think there's a real gap in the EA media-verse on the intersection of left-wing politics and EA, and we're almost certainly missing out on some great people and perspectives who would be into EA if they were presented with the right arguments and framing. I have no delusions that all leftists would be into EA if they only understood it better, but I think there are tons of bad-faith criticisms and genuine misunderstandings that we could better address. I think we can have a healthier and more productive relationship with the left. If you'd like to support the show, here are some things you can do: Personally recommend the show/particular episodes to friends. 
Apparently, this is how podcasts best grow their audiences. Share the podcast/episode on social media (I'm on Twitter @garrisonlovely) Rate and review the show on Apple Podcasts. Give me feedback (anonymous form here). You can also email me at tgarrisonlovely@gmail.com Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Clearer Thinking with Spencer Greenberg
Forecasting the things that matter (with Peter Wildeford)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Oct 21, 2022 92:12


How can we change the way we think about expertise (or the trustworthiness of any information source) using forecasting? How do prediction markets work? How can we use prediction markets in our everyday lives? Are prediction markets more trustworthy than large or respectable news outlets? How long does it take to sharpen one's prediction skills? In (e.g.) presidential elections, we know that the winner will be one person from a very small list of people; but how can we reasonably make predictions in cases where the outcomes aren't obviously multiple-choice (e.g., predicting when artificial general intelligence will be created)? How can we move from the world we have now to a world in which people think more quantitatively and make much better predictions? What scoring rules should we use to keep track of our predictions and update accordingly?
Peter Wildeford is the co-CEO of Rethink Priorities, where he aims to scalably employ a large number of well-qualified researchers to work on the world's most important problems. Prior to running Rethink Priorities, he was a data scientist in industry for five years at DataRobot, Avant, Clearcover, and other companies. He is also recognized as a Top 50 Forecaster on Metaculus (international forecasting competition) and has a Triple Master Rank on Kaggle (international data science competition) with top 1% performance in five different competitions. Follow him on Twitter at @peterwildeford.
Further reading:
ClearerThinking.org's "Calibrate Your Judgment" practice program
Metaculus (forecasting platform)
Manifold Markets
Polymarket
"Calibration Scoring Rules for Practical Prediction Training", a paper by Spencer Greenberg
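The episode asks which scoring rules we should use to track our predictions. One standard example is the Brier score, a "proper" scoring rule under which honest probability estimates minimize your expected penalty. This sketch is a generic illustration of that idea, not code from the linked paper or the Calibrate Your Judgment program:

```python
# The Brier score is the mean squared difference between your stated
# probability and the 0/1 outcome. Lower is better, and because it is a
# proper scoring rule, reporting your true belief minimizes it in
# expectation -- hedging or exaggerating can only hurt you on average.

def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    pairs = list(zip(forecasts, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# A confident, mostly-correct forecaster scores near 0:
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # 0.02
# Always answering 50% scores exactly 0.25, no matter what happens:
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25
```

Tracking a running Brier score over your own predictions is one concrete way to practice the calibration skills discussed in the episode.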

The Nonlinear Library
EA - EAGxVirtual: A virtual venue, timings, and other updates by Alex Berezhnoi

The Nonlinear Library

Play Episode Listen Later Oct 14, 2022 3:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxVirtual: A virtual venue, timings, and other updates, published by Alex Berezhnoi on October 13, 2022 on The Effective Altruism Forum. EAGxVirtual is fast approaching. This post covers updates from the team, including demographics data, dates and times, content, venue, and unique features. Transcending Boundaries We have already received more than 600 applications from people representing over 60 countries, making our conference one of the most geographically diverse EA events ever. For many of them, it would be their first conference. If you are a highly-engaged EA, you can make a difference by being responsive to requests from first-time attendees. The map below shows the geographical distribution of the participants: Still, we would love to see more applications. If you know someone who you think should attend the conference, please encourage them to apply by sending them this link! The deadline for applications is 8:00 am UTC on Wednesday, 19 October. Dates and times The conference will be taking place from 5 pm UTC on Friday, October 21st, until 11:59 pm UTC on Sunday, October 23rd. Friday will feature group meetups and an opening session. On Saturday and Sunday, the sessions will start at 8 am UTC. We try to make the keynote sessions accessible to people from different time zones but the recordings will be available if you cannot make it. There will be a break in the program on Sunday between 3 am and 8 am UTC. Content: what to expect We are working hard on the program. 
Here are the types of content you might expect, beyond the usual talks and workshops: Career stories sessions Office Hours hosted by EA orgs Q&As and fireside chats Group meetups and icebreakers Lightning talks from the attendees Participant-driven meetups on Gather.Town We have confirmed speakers from Charity Entrepreneurship, GFI Asia, Manifold Markets, Spark Wave, CEA, GovAI, HLI, and other organizations. Some exciting confirmed speakers: Spencer Greenberg, Seth Baum, Varun Deshpande, Ben Garfinkel, David Manheim, and others! The tentative schedule will be available on the Swapcard app at the end of the week, but it is subject to slight changes in the leadup to the conference. Virtual venue Our main content and networking platform for the conference is Swapcard. We will share access to the app with all the attendees a week before the conference and provide guidance on how to use it and get the most out of the conference. We also collaborate with EA Gather.Town to make an always-available virtual space for the attendees to spark more connections and unstructured discussions throughout the conference. There will be spots for private meetings and rooms you can book for group meetups: just like a real conference venue! There will be sessions led by EA Virtual Reality as well! Gather.Town and EA VR are optional but are exciting opportunities for those who want to experiment with formats beyond the usual live streams and calls. Call for volunteers We think volunteering for such events can be a very fulfilling experience, and organizers depend on motivated people like you to make the best of this event. We are currently looking for volunteers to help in a wide range of positions, including chat management, moderation, emceeing, and more. If you are attending the conference, please consider becoming a volunteer. We are very excited about the event and hope to see you there! 
EAGxVirtual Team: Alex, Jordan, Dion, Amine, Marka, and Ollie Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Does beating yourself up about being unproductive accomplish anything? What should I ask self-compassion researcher Kristin Neff when I interview her? by Robert Wiblin

The Nonlinear Library

Play Episode Listen Later Sep 14, 2022 1:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does beating yourself up about being unproductive accomplish anything? What should I ask self-compassion researcher Kristin Neff when I interview her?, published by Robert Wiblin on September 14, 2022 on The Effective Altruism Forum. Next week for The 80,000 Hours Podcast I'm interviewing Kristin Neff — an academic psychologist who has pioneered research into, and is strongly associated with, the idea of 'self-compassion'. We will discuss how much guilt/shame/negative self-talk help get things done, and if not much, how to reduce them. While many people believe that feeling bad about themselves when they're unproductive helps them achieve more, and that without this behaviour they'd become lazy and self-indulgent, Kristin argues that the evidence shows these behaviours to be bad for long-term productivity. Just as a good line manager doesn't denigrate a staff member who is struggling, and a good parent won't belittle a child who is procrastinating, we ought not treat ourselves so harshly. We should speak to ourselves like a good manager or friend or parent would: with kindness and understanding as well as a firm commitment to what's in our own long-term best interest. If Kristin is right, 'beating yourself up' is not only harmful to your own well-being, in typical cases it accomplishes very little. What should I ask her?
Places to learn more about Kristin's work:
A good interview she did with Spencer Greenberg on the Clearer Thinking Podcast that addresses mental health issues common in the EA community
Wikipedia
Her website
Her book Self Compassion: The Proven Power of Being Kind to Yourself
Dr. Kristin Neff | The Science of Self-Compassion | Talks at Google
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - The inordinately slow spread of good AGI conversations in ML by RobBensinger

The Nonlinear Library

Play Episode Listen Later Jun 29, 2022 13:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The inordinately slow spread of good AGI conversations in ML, published by RobBensinger on June 29, 2022 on The Effective Altruism Forum. Spencer Greenberg wrote on Twitter: Recently @KerryLVaughan has been critiquing groups trying to build AGI, saying that by being aware of risks but still trying to make it, they're recklessly putting the world in danger. I'm interested to hear your thought/reactions to what Kerry says and the fact he's saying it. Michael Page replied: I'm pro the conversation. That said, I think the premise -- that folks are aware of the risks -- is wrong. Honestly, I think the case for the risks hasn't been that clearly laid out. The conversation among EA-types typically takes that as a starting point for their analysis. The burden for the we're-all-going-to-die-if-we-build-x argument is -- and I think correctly so -- quite high. Oliver Habryka then replied: I find myself skeptical of this. Like, my sense is that it's just really hard to convince someone that their job is net-negative. "It is difficult to get a man to understand something when his salary depends on his not understanding it" And this barrier is very hard to overcome with just better argumentation. My reply: I disagree with "the case for the risks hasn't been that clearly laid out". I think there's a giant, almost overwhelming pile of intro resources at this point, any one of which is more than sufficient, written in all manner of style, for all manner of audience. (I do think it's possible to create a much better intro resource than any that exist today, but 'we can do much better' is compatible with 'it's shocking that the existing material hasn't already finished the job'.) I also disagree with "The burden for the we're-all-going-to-die-if-we-build-x argument is -- and I think correctly so -- quite high." 
If you're building a machine, you should have an at least somewhat lower burden of proof for more serious risks. It's your responsibility to check your own work to some degree, and not impose lots of micromorts on everyone else through negligence. But I don't think the latter point matters much, since the 'AGI is dangerous' argument easily meets higher burdens of proof as well. I do think a lot of people haven't heard the argument in any detail, and the main focus should be on trying to signal-boost the arguments and facilitate conversations, rather than assuming that everyone has heard the basics. A lot of the field is very smart people who are stuck in circa-1995 levels of discourse about AGI. I think 'my salary depends on not understanding it' is only a small part of the story. ML people could in principle talk way more about AGI, and understand the problem way better, without coming anywhere close to quitting their job. The level of discourse is by and large too low for 'I might have to leave my job' to be the very next obstacle on the path. Also, many ML people have other awesome job options, have goals in the field other than pure salary maximization, etc. More of the story: Info about AGI propagates too slowly through the field, because when one ML person updates, they usually don't loudly share their update with all their peers. This is because: 1. AGI sounds weird, and they don't want to sound like a weird outsider. 2. Their peers and the community as a whole might perceive this information as an attack on the field, an attempt to lower its status, etc. 3. Tech forecasting, differential technological development, long-term steering, exploratory engineering, 'not doing certain research because of its long-term social impact', prosocial research closure, etc. are very novel and foreign to most scientists. 
EAs exert effort to try to dig up precedents like Asilomar partly because Asilomar is so unusual compared to the norms and practices of the vast majority of science. Sci...

The Nonlinear Library
LW - The inordinately slow spread of good AGI conversations in ML by Rob Bensinger

The Nonlinear Library

Play Episode Listen Later Jun 21, 2022 13:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The inordinately slow spread of good AGI conversations in ML, published by Rob Bensinger on June 21, 2022 on LessWrong. Spencer Greenberg wrote on Twitter: Recently @KerryLVaughan has been critiquing groups trying to build AGI, saying that by being aware of risks but still trying to make it, they're recklessly putting the world in danger. I'm interested to hear your thought/reactions to what Kerry says and the fact he's saying it. Michael Page replied: I'm pro the conversation. That said, I think the premise -- that folks are aware of the risks -- is wrong. Honestly, I think the case for the risks hasn't been that clearly laid out. The conversation among EA-types typically takes that as a starting point for their analysis. The burden for the we're-all-going-to-die-if-we-build-x argument is -- and I think correctly so -- quite high. Oliver Habryka then replied: I find myself skeptical of this. Like, my sense is that it's just really hard to convince someone that their job is net-negative. "It is difficult to get a man to understand something when his salary depends on his not understanding it" And this barrier is very hard to overcome with just better argumentation. My reply: I disagree with "the case for the risks hasn't been that clearly laid out". I think there's a giant, almost overwhelming pile of intro resources at this point, any one of which is more than sufficient, written in all manner of style, for all manner of audience. (I do think it's possible to create a much better intro resource than any that exist today, but 'we can do much better' is compatible with 'it's shocking that the existing material hasn't already finished the job'.) I also disagree with "The burden for the we're-all-going-to-die-if-we-build-x argument is -- and I think correctly so -- quite high." 
If you're building a machine, you should have an at least somewhat lower burden of proof for more serious risks. It's your responsibility to check your own work to some degree, and not impose lots of micromorts on everyone else through negligence. But I don't think the latter point matters much, since the 'AGI is dangerous' argument easily meets higher burdens of proof as well. I do think a lot of people haven't heard the argument in any detail, and the main focus should be on trying to signal-boost the arguments and facilitate conversations, rather than assuming that everyone has heard the basics. A lot of the field is very smart people who are stuck in circa-1995 levels of discourse about AGI. I think 'my salary depends on not understanding it' is only a small part of the story. ML people could in principle talk way more about AGI, and understand the problem way better, without coming anywhere close to quitting their job. The level of discourse is by and large too low for 'I might have to leave my job' to be the very next obstacle on the path. Also, many ML people have other awesome job options, have goals in the field other than pure salary maximization, etc. More of the story: Info about AGI propagates too slowly through the field, because when one ML person updates, they usually don't loudly share their update with all their peers. This is because: 1. AGI sounds weird, and they don't want to sound like a weird outsider. 2. Their peers and the community as a whole might perceive this information as an attack on the field, an attempt to lower its status, etc. 3. Tech forecasting, differential technological development, long-term steering, exploratory engineering, 'not doing certain research because of its long-term social impact', prosocial research closure, etc. are very novel and foreign to most scientists. 
EAs exert effort to try to dig up precedents like Asilomar partly because Asilomar is so unusual compared to the norms and practices of the vast majority of science. Scientists generally ...

The Nonlinear Library: LessWrong
LW - The inordinately slow spread of good AGI conversations in ML by Rob Bensinger

The Nonlinear Library: LessWrong

Play Episode Listen Later Jun 21, 2022 13:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The inordinately slow spread of good AGI conversations in ML, published by Rob Bensinger on June 21, 2022 on LessWrong.

Spencer Greenberg wrote on Twitter: Recently @KerryLVaughan has been critiquing groups trying to build AGI, saying that by being aware of risks but still trying to make it, they're recklessly putting the world in danger. I'm interested to hear your thought/reactions to what Kerry says and the fact he's saying it.

Michael Page replied: I'm pro the conversation. That said, I think the premise -- that folks are aware of the risks -- is wrong. Honestly, I think the case for the risks hasn't been that clearly laid out. The conversation among EA-types typically takes that as a starting point for their analysis. The burden for the we're-all-going-to-die-if-we-build-x argument is -- and I think correctly so -- quite high.

Oliver Habryka then replied: I find myself skeptical of this. Like, my sense is that it's just really hard to convince someone that their job is net-negative. "It is difficult to get a man to understand something when his salary depends on his not understanding it" And this barrier is very hard to overcome with just better argumentation.

My reply: I disagree with "the case for the risks hasn't been that clearly laid out". I think there's a giant, almost overwhelming pile of intro resources at this point, any one of which is more than sufficient, written in all manner of style, for all manner of audience. (I do think it's possible to create a much better intro resource than any that exist today, but 'we can do much better' is compatible with 'it's shocking that the existing material hasn't already finished the job'.) I also disagree with "The burden for the we're-all-going-to-die-if-we-build-x argument is -- and I think correctly so -- quite high."

If you're building a machine, you should have an at least somewhat lower burden of proof for more serious risks. It's your responsibility to check your own work to some degree, and not impose lots of micromorts on everyone else through negligence. But I don't think the latter point matters much, since the 'AGI is dangerous' argument easily meets higher burdens of proof as well. I do think a lot of people haven't heard the argument in any detail, and the main focus should be on trying to signal-boost the arguments and facilitate conversations, rather than assuming that everyone has heard the basics. A lot of the field is very smart people who are stuck in circa-1995 levels of discourse about AGI. I think 'my salary depends on not understanding it' is only a small part of the story. ML people could in principle talk way more about AGI, and understand the problem way better, without coming anywhere close to quitting their job. The level of discourse is by and large too low for 'I might have to leave my job' to be the very next obstacle on the path. Also, many ML people have other awesome job options, have goals in the field other than pure salary maximization, etc.

More of the story: Info about AGI propagates too slowly through the field, because when one ML person updates, they usually don't loudly share their update with all their peers. This is because:
1. AGI sounds weird, and they don't want to sound like a weird outsider.
2. Their peers and the community as a whole might perceive this information as an attack on the field, an attempt to lower its status, etc.
3. Tech forecasting, differential technological development, long-term steering, exploratory engineering, 'not doing certain research because of its long-term social impact', prosocial research closure, etc. are very novel and foreign to most scientists.

EAs exert effort to try to dig up precedents like Asilomar partly because Asilomar is so unusual compared to the norms and practices of the vast majority of science. Scientists generally ...

The Nonlinear Library
LW - 14 Techniques to Accelerate Your Learning by spencerg

The Nonlinear Library

Play Episode Listen Later May 20, 2022 17:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 14 Techniques to Accelerate Your Learning, published by spencerg on May 19, 2022 on LessWrong. Originally written by Belén Cobeta and Spencer Greenberg from ClearerThinking.org. Revised by Teis Rasmussen & Florence Hinder from ThoughtSaver.com. This is a cross-post from the Effective Altruism Forum.

TLDR: There are a number of techniques that may accelerate the speed at which you learn (14 of which we explain in this article). Additionally, embedded within this article are flashcards to provide you with the key takeaways so you can remember these ideas and more effectively put them into action.

Vasconcelos Library — photo by Diego Delso

What if you could learn more in less time? Whether you're studying to pass your classes, aiming to improve at work, honing your personal life skills, or focusing on having the greatest impact you can, learning to learn more efficiently can be a great time investment because it accelerates the rest of your learning.

Many learning methods are inefficient

Many widespread learning practices waste a lot of time and effort, at least if we assume that the goal of those efforts is to actually learn. For example, you have probably had the experience of reading an exciting and useful piece of nonfiction, only to forget basically all of it and not take any action based on what you learned from it. And you've probably spent a lot of time taking classes that taught information that mostly wasn't useful to you, and which you no longer remember either way. It's unfortunate that such experiences are as common as they are. The good news is that there are more efficient and powerful learning practices out there - many of us just don't adopt them. Here we lay out 14 of our favorite techniques for improving your learning processes. They are grouped into four categories: Learn faster; Boost understanding; Remember more; Put your learning into practice.

Part 1: LEARN FASTER

This section will cover techniques that can help you absorb more information or knowledge per hour that you spend learning.

1. Listening instead of reading

Listening is the new reading: take advantage of technology by listening to books and articles. It may feel less studious than reading, but at least some research shows that you can learn and retain just the same. The intonation of the narrator can also help you understand the text. Use the Audible version of the book or text-to-speech software and set the reading speed to a level that feels challenging but still allows you to understand the content. In time, your listening speed may become faster than your reading speed, as you go from listening at 1x to 2x and beyond. Some people even claim that faster speeds can improve comprehension to a degree because they require greater focus and so can prevent the mind from wandering. Another advantage of audio over regular reading is that you can do other activities that don't use your conscious mind at the same time, like taking out the trash, walking outdoors, washing the dishes, or taking a bath. Check out The Nonlinear Library for audio versions of content from blogs such as the EA Forum, Alignment Forum, and LessWrong.

2. Immersive reading

Why choose between listening and reading when you can do both? Try Emerson Spartz's #1 speed reading hack: read a book using the printed and the audible version simultaneously (that is, read with your eyes WHILE you're also listening). By engaging two senses at once, your focus and reading speed may increase. It takes some practice, so be sure to try this method for at least a few hours before deciding if it's right for you (people who just try it for one hour are unlikely to see a benefit). To get the full benefit, you'll also want to push the speed of the audio to the upper edge of what feels comfortable to you.

3. Recursive sampling

Use this technique t...
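The flashcards mentioned in the description rely on spaced repetition: cards you answer correctly come up less often, and cards you miss come back sooner. As a rough illustration of that idea (this is not code from the article or from ThoughtSaver; the class and method names are hypothetical), a minimal Leitner-box scheduler might look like:

```python
# Minimal Leitner-box spaced-repetition sketch (hypothetical illustration).
# Cards start in box 1; a correct answer promotes a card one box, a wrong
# answer demotes it back to box 1. Higher boxes are reviewed less often.

from dataclasses import dataclass, field


@dataclass
class LeitnerDeck:
    num_boxes: int = 3
    boxes: dict = field(default_factory=dict)  # card -> box number

    def add(self, card: str) -> None:
        self.boxes[card] = 1  # new cards start in box 1

    def review(self, card: str, correct: bool) -> None:
        if correct:
            # Promote, capped at the highest box.
            self.boxes[card] = min(self.boxes[card] + 1, self.num_boxes)
        else:
            self.boxes[card] = 1  # demote on failure

    def due(self, session: int) -> list:
        # Box n is reviewed every n-th session (box 1 every session).
        return [c for c, b in self.boxes.items() if session % b == 0]


deck = LeitnerDeck()
deck.add("What is immersive reading?")
deck.review("What is immersive reading?", correct=True)  # now in box 2
```

Real flashcard systems (Anki, ThoughtSaver, etc.) use more sophisticated schedules, but the promote-on-success, demote-on-failure loop above is the core mechanism.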

Clearer Thinking with Spencer Greenberg
Our 100th episode! (with Uri Bram and Spencer Greenberg)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Apr 13, 2022 58:00


Is it possible to be both agreeable and skeptical in conversations? How can you give feedback and challenge people constructively without triggering their automatic self-defense mechanisms? More generally, how can you challenge people intellectually without riling them up emotionally? What skills are needed to be able to have detailed, productive conversations across a wide range of topics? How can you push through plateaus in the process of self-improvement? What are podcasts as a medium good for? Find more about Spencer through his website, spencergreenberg.com.

Sentientism
102: Clearer Thinking with Spencer Greenberg - talking about Sentientism - Cross-post bonus episode

Sentientism

Play Episode Listen Later Mar 22, 2022 78:37


I had the pleasure of talking about Sentientism on the Clearer Thinking podcast hosted by Spencer Greenberg. This is a cross-post of our episode, so make sure you go subscribe to Clearer Thinking too. Clearer Thinking is a podcast about ideas that truly matter. If you enjoy learning about powerful, practical concepts and frameworks, or wish you had more deep, intellectual conversations in your life, then you'll love this podcast!

In Sentientist Conversations we talk about the two most important questions: “what's real?” & “what (and who) matters?” Sentientism answers those questions with a commitment to "evidence, reason & compassion for all sentient beings." Find out more at Sentientism.info. Join our "I'm a Sentientist" wall via this simple form. Everyone, Sentientist or not, is welcome in our groups. The biggest so far is here on Facebook. Come join us there! There's a full transcript here: https://clearerthinkingpodcast.com/episode/090

Show notes (thanks Josh!): How can we encourage people to increase their critical thinking and reliance on evidence in the current information climate? What types of evidence "count" as valid, useful, or demonstrative? And what are the relative strengths and weaknesses of those types? Could someone reasonably come to believe just about anything, provided that they live through very specific sets of experiences? What does it mean to have a "naturalistic" epistemology? How does a philosophical disorder differ from a moral failure? Historically speaking, where does morality come from? Is moral circle expansion always good or praiseworthy? What sorts of entities deserve moral consideration?

Jamie Woodhouse works on the Sentientism worldview ("evidence, reason, and compassion for all sentient beings") — refining the philosophy, raising awareness of the idea, and building communities and movements around it. After a quarter century in the corporate world he is now an independent consultant, coach, and volunteer. You can follow Jamie on Twitter at @JamieWoodhouse or email him at hello@sentientism.info.

Here are a few more links related to Sentientism:
Sentientism YouTube channel
Sentientism podcast
Sentientism website
Sentientism Facebook group
All other places to find Sentientism (including Twitter, Reddit, Discord, and many others)

Clearer Thinking with Spencer Greenberg
Fight, flight, freeze, fawn (with Sasha Raskin)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Feb 16, 2022 76:34


When does positivity become toxic? When is it appropriate (or not) to give advice? Can depression really be healed without systemic changes? What are some ways that society at large gaslights people? Why do women sometimes not come forward after sexual assault? What is "freeze or fawn"? Do men suffer as much under patriarchy as women?

Sasha Raskin is the Founder of A Beautiful Mess (ABM), a mental health organization that runs corporate talks and events to combat loneliness, depression, and mental health stigma, while fostering connection, intimacy, and equality. She founded ABM to be the resource she wished she had when she was struggling the most. You can learn more about her story here. For her mental health work, she has delivered talks for platforms ranging from Venture University to The World Economic Forum's Global Shapers Community; she has been named a Young Social Impact Hero by Thrive Global in partnership with Authority Magazine; and she'll be delivering a TEDx talk shortly titled, "The Other Pandemic: We Must End Mental Health Stigma Now".

Sasha's contact info:
Website: abeautifulmess.org
Email: sraskin@abment.org
Phone: 732-630-5520
Medium writings: https://blog.usejournal.com/@sashaalexraskins
Calendly (to book a free chat): https://calendly.com/sraskin/15min
Instagram: @abeautifulmess_org
Mailing list: https://thoughtful-speaker-3766.ck.page/d42eaa3d08

Sasha also asked us to include this in the show notes:
A BEAUTIFUL MESS believes that mental health is a human right and accordingly, no one is turned away from public events for lack of funds. We also offer free resources whenever possible. A lot goes into this work so please consider supporting our mission financially. It is greatly appreciated!
Venmo: @Sasha-Raskin
Paypal: araskin11@gmail.com
Cash App: $abeautifulmessorg
Zelle: araskin11@gmail.com

Additionally, we'd LOVE to collaborate with you and your company. You can reach us / schedule a free consultation call here:
Calendly: https://calendly.com/sraskin/15min
E-mail: sraskin@abment.com
Phone: 732.630.5520
Website: abeautifulmess.org
Follow Sasha's Writing on Medium: https://blog.usejournal.com/@sashaalexraskin
Instagram: https://www.instagram.com/abeautifulmess_org
Linkedin: https://www.linkedin.com/in/sasha-alexandra-raskin-20334813/

Further reading:
"Addressing some common misconceptions about rape and sexual assault" by Spencer Greenberg

Dilemma Podcast
S03E05: What Kind of Truth? - Spencer Greenberg

Dilemma Podcast

Play Episode Listen Later Jan 17, 2022 93:04


Does 2+2 really equal 4? What realm of truth am I in when I speak about my pain? What kind of truth claim is it to speak about the existence of "Poland"? How about the existence of ghosts and gods? Spencer Greenberg breaks down his taxonomy of truth claims to help us better understand what we and others might be saying when we declare something to be true. He also lays out his personal philosophy of Valuism, a deceptively simple yet illuminating framework that can guide your behavior and focus your mind on what really matters to you. Spencer's work and his intrinsic values test can all be found here: https://www.clearerthinking.org/ Spencer's essay on the "Seven Realms of Truth" can be found here: https://www.spencergreenberg.com/2019/03/the-7-realms-of-truth-framework/

The Nonlinear Library
EA - Momentum 2022 updates (we're hiring) by arikagan

The Nonlinear Library

Play Episode Listen Later Jan 12, 2022 19:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Momentum 2022 updates (we're hiring), published by arikagan on January 11, 2022 on The Effective Altruism Forum.

Momentum (formerly Sparrow) is a venture-backed EA startup that aims to increase effective giving. We build donation pages that emphasize creative, recurring donations tied to moments in your life (e.g. offset your carbon footprint when you buy gas, or give to AI alignment every time you scroll Facebook) and we use behavioral science to nudge new donors to support EA charities. Our goal is to direct $1B per month to EA causes by 2030, growing EA funding by over an order of magnitude and becoming EA's largest funding source. We believe that relying exclusively on a few mega-funders is not a robust long-term strategy for EA. Yet effective giving is quite small - there are 7.4K active EAs, and 85K people have ever donated to GiveWell. In contrast, each year, 240M Americans donate and 11M people visit Charity Navigator alone. We intend to reach a large number of people who have not yet heard of EA but are open to making highly-effective donations. 82% of VC-backed startups fail, and Momentum may be no exception. However, the potential upside of creating a sustainable funding source for EA is so great that the expected value appears high. And there's evidence that we're on the right track. Momentum has moved over $10M with our software from 40,000 donors. In our mobile app, 87% of donations went to our recommended charities (including several longtermist ones). We have $4M in funding from both EA funders (e.g. Jaan Tallinn, Spencer Greenberg, Luke Ding) and venture investors (e.g. Mark Cuban, Eric Ries, On Deck), making us one of a handful of VC-backed EA startups. Our campaigns have received widespread attention from celebrities (Peter Singer, John Legend, MLK III) and the press (e.g. NYT, BBC, and Quartz). Our team recently grew from 3 to 9, and we have 6 more openings in product, growth, engineering, etc. For the right person, this could be very high impact and a great place to work. We'd love to hear from you - email ari@givemomentum.com or apply here.

A huge thank you to Jade Leung, Ozzie Gooen, Aaron Gertler, Rebecca Kagan, George Rosenfeld, and Bill Zito for feedback. All opinions and mistakes are our own.

Table of contents:
The product
Impact
Working at Momentum
FAQs
We're hiring

The product

To increase the number of EA donors, we need to reach people outside of EA and encourage effective giving. We reach new donors by providing donation pages to a wide range of charities (regardless of effectiveness) that they market to their audience. After donors check out we offer a portal that encourages additional effective donations.

Step 1: Reach donors with our donation pages

To reach donors outside of EA, we give charities free, white-labeled donation pages like this one or this one that they put on their website (like Shopify but for nonprofits). Our page helps the charity acquire more recurring donors (who give 7x more) by tying personal actions and global events to automatic donations. You might give 5% to clean water when you buy a coffee, donate to BLM with each police shooting, or donate to stop Trump every time he tweets. We saw success with Defeat by Tweet (97% of 40K donors were recurring), so it looks promising that we can beat the industry average of 10% recurring. Since increasing the number of recurring donors increases donation volume so much, charities leverage their marketing resources to direct traffic to their page.

Step 2: Nudge effective giving with our donor portal

To increase effective giving, we give donors a portal that encourages supporting effective charities.
After the donor checks out on a donation page, we guide them (on the confirmation page or via email) to log in to the portal to track their giving, edit their donations, and see...

The Nonlinear Library: LessWrong Top Posts
The Treacherous Path to Rationality by Jacob Falkovich

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 17:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Treacherous Path to Rationality, published by Jacob Falkovich on LessWrong. Cross-posted, as always, from Putanumonit.

Rats v. Plague

The Rationality community was never particularly focused on medicine or epidemiology. And yet, we basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts. We started discussing the virus and raising the alarm in private back in January. By late February, as American health officials were almost unanimously downplaying the threat, we wrote posts on taking the disease seriously, buying masks, and preparing for quarantine. Throughout March, the CDC was telling people not to wear masks and not to get tested unless displaying symptoms. At the same time, Rationalists were already covering every relevant angle, from asymptomatic transmission to the effect of viral load, to the credibility of the CDC itself. As despair and confusion reigned everywhere into the summer, Rationalists built online dashboards modeling nationwide responses and personal activity risk to let both governments and individuals make informed decisions. This remarkable success did not go unnoticed. Before he threatened to doxx Scott Alexander and triggered a shitstorm, New York Times reporter Cade Metz interviewed me and other Rationalists mostly about how we were ahead of the curve on COVID and what others can learn from us. I told him that Rationality has a simple message: “people can use explicit reason to figure things out, but they rarely do.”

[Image caption: If rationalists led the way in covering COVID-19, Vox brought up the rear]

Rationalists have been working to promote the application of explicit reason, to “raise the sanity waterline” as it were, but with limited success. I wrote recently about success stories of rationalist improvement but I don't think it inspired a rush to LessWrong. This post is in a way a response to my previous one. It's about the obstacles preventing people from training and succeeding in the use of explicit reason, impediments I faced myself and saw others stumble over or turn back from. This post is a lot less sanguine about the sanity waterline's prospects.

The Path

I recently chatted with Spencer Greenberg about teaching rationality. Spencer regularly publishes articles like 7 questions for deciding whether to trust your gut or 3 types of binary thinking you fall for. Reading him, you'd think that the main obstacle to pure reason ruling the land is lack of intellectual listicles on ways to overcome bias. But we've been developing written and in-person curricula for improving your ability to reason for more than a decade. Spencer's work is contributing to those curricula, an important task. And yet, I don't think that people's main failure point is in procuring educational material. I think that people don't want to use explicit reason. And if they want to, they fail. And if they start succeeding, they're punished. And if they push on, they get scared. And if they gather their courage, they hurt themselves. And if they make it to the other side, their lives enriched and empowered by reason, they will forget the hard path they walked and will wonder incredulously why everyone else doesn't try using reason for themselves. This post is about that hard path.

[Image caption: The map is not the territory]

Alternatives to Reason

What do I mean by explicit reason? I don't refer merely to “System 2”, the brain's slow, sequential, analytical, fully conscious, and effortful mode of cognition. I refer to the informed application of this type of thinking. Gathering data with real effort to find out, crunching the numbers with a grasp of the math, modeling the world with testable predictions, reflection on your thinking with an awareness of biases. Reason requires good inputs and a lot of...

The Nonlinear Library: EA Forum Top Posts
A list of EA-related podcasts by M_Allcock

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 5:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A list of EA-related podcasts, published by M_Allcock on the Effective Altruism Forum.

Podcasts are a great way to learn about EA. Here's a list of the EA-related podcasts I've come across over the last few years of my podcast obsession. I've split them up into two categories:
Strongly EA-related podcasts: Podcasts run by EA organisations or otherwise explicitly EA-related.
Podcasts featuring EA-related episodes: Podcasts which are usually not EA-related but have some episodes which are about an EA idea or interviewing an EA-aligned guest.
Please add to the comments any podcasts that I have missed. I am always excited to find out about more interesting podcasts!

Strongly EA-related podcasts

Doing Good Better Podcast - Five short episodes about EA concepts. Produced by the Centre for Effective Altruism. No new content since 2017.
The Life You Can Save Podcast - Episodes from Peter Singer's organisation that focus on alleviating global poverty. The latest episodes are interviews with EA organisation staff.
The Turing Test - The newly restarted EA podcast from the Harvard University EA group. Interviews with EA thinkers including Brian Tomasik on ethics, animal welfare, and a focus on suffering, and Scott Weathers on Charity Science Health.
80,000 Hours Podcast - Robert Wiblin leads long-form interviews (up to 4 hours) with individuals in high impact careers. This podcast really gets into the weeds of the most important cause areas.
Global Optimum - An informal podcast by professional psychology researcher Daniel Gambacorta, discussing psychology results that can help you become a more effective altruist. There is usually no extra padding in this podcast; it's straight to the point.
Future Perfect Podcast - The podcast part of Vox Media's Future Perfect project. Dylan Matthews leads scripted discussions about interesting and hopefully effective ways to improve the world.
Morality is hard - Michael Dello Iacovo interviews guests about topics related to effective animal advocacy.
Future of Life Podcast - Interviews with researchers and thought leaders who the Future of Life Institute believe are helping to “safeguard life and build optimistic visions of the future”. They include a series on AI alignment and a recent series on climate change.
Wildness - A new podcast of Wild Animal Initiative. Narrative episodes based around a theme relevant to wild animal welfare research, typically including multiple interviews with animal welfare researchers.
EARadio - Hundreds of audio recordings from EA Global talks. Some episodes are hard to follow due to the missing visual information that is used in presentations.
Sentience Institute Podcast - New podcast on effective animal advocacy.

Podcasts featuring EA-related episodes

Our Hen House - Jacy Reese on the end of animal farming; Joey Savoie on using charity entrepreneurship to help animals.
The Joe Rogan Experience - Nick Bostrom on the simulation argument; Will MacAskill on EA.
The Most Interesting People I Know - Chloe Cockburn on US justice system reform; Lewis Bollard on ending factory farming; Spencer Greenberg on lots of things related to EA; Andres Gomez Emilsson of Qualia Research Institute on solving consciousness.
Autocracy and Transhumanist Podcast - Phil Torres, Seth Baum, and Anders Sandberg on the long-term future; Jeff Sebo on the moral value of other minds.
The Future Thinkers - Phil Torres on the long-term future and existential risks; Daniel Schmachtenberger on generator functions for existential risks, global phase shift, and mitigating existential risks.
Making Sense with Sam Harris - Lots of episodes about consciousness, meaning, and ethics. In particular: Will MacAskill on EA; Nick Bostrom on existential risks; Eliezer Yudkowsky on AI.
Philosophise This - A brilliant episode on Peter Singer and effective altrui...

Clearer Thinking with Spencer Greenberg
When is suffering good? (with Paul Bloom)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Nov 11, 2021 85:55


When (if ever) can suffering be good? Is there an optimal ratio of pleasure to pain? What is motivational pluralism? Can large, positive incentives be coercive? (For example, is it coercive to offer to pay someone enormous amounts of money to do something relatively benign or even painful or immoral?) How can moving from making judgments about a person's actions to making judgments about their character solve certain moral puzzles? Why do we sometimes make seemingly irrational judgments about the relative badness of certain actions? How does the level of controversy around an action factor into how much we publicly disapprove of it? What are the differences between compassion and empathy? Is antisocial personality disorder (AKA psychopathy or sociopathy) defined only by a lack of empathy? How have humans evolved (or not) to detect and mitigate the effects of others who feel no remorse? Is altruism especially vulnerable to remorseless people? What are the differences between narcissists and sociopaths?

Paul Bloom is Professor of Psychology at the University of Toronto, and Brooks and Suzanne Ragen Professor Emeritus of Psychology at Yale University. Paul Bloom studies how children and adults make sense of the world, with special focus on pleasure, morality, religion, fiction, and art. He has won numerous awards for his research and teaching. He is past president of the Society for Philosophy and Psychology, and co-editor of Behavioral and Brain Sciences. He has written for scientific journals such as Nature and Science, and for popular outlets such as The New York Times, The Guardian, The New Yorker, and The Atlantic Monthly. He is the author of six books, including his most recent, The Sweet Spot: The Pleasures of Suffering and the Search for Meaning. Find more about him at paulbloom.net, or follow him on Twitter at @paulbloomatyale.

Further reading:
"Friction in Relationships from Misunderstanding the Mind" by Spencer Greenberg

Effective Altruism: An Introduction – 80,000 Hours
Four: Spencer Greenberg on the scientific approach to solving difficult everyday questions

Effective Altruism: An Introduction – 80,000 Hours

Play Episode Listen Later Apr 12, 2021 137:12


Will SpaceX land people on Mars in the next decade? Will North Korea give up their nuclear weapons? Will your friend turn up to dinner?

Spencer Greenberg, founder of ClearerThinking.org, has a process for working out such real-life problems.

In this conversation from 2018, Spencer walks us through how to reason through difficult questions more accurately, and when we should expect to be overconfident or underconfident.

Full transcript, related links, and summary of this interview

This episode first broadcast on the regular 80,000 Hours Podcast feed on August 7, 2018. Some related episodes include:

• #7 – Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn't all bad
• #11 – Dr Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm
• #15 – Prof Tetlock on how chimps beat Berkeley undergrads and when it's wise to defer to the wise
• #30 – Dr Eva Vivalt on how little social science findings generalize from one study to another
• #40 – Katja Grace on forecasting future technology & how much we should trust expert predictions
• #48 – Brian Christian on better living through the wisdom of computer science
• #78 – Danny Hernandez on forecasting and measuring some of the most important drivers of AI progress

Series produced by Keiran Harris.

Effective Altruism: An Introduction – 80,000 Hours
Seven: Prof Tetlock on why accurate forecasting matters for everything, and how you can do it better

Effective Altruism: An Introduction – 80,000 Hours

Play Episode Listen Later Apr 12, 2021 137:25


Have you ever been infuriated by a doctor's unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won't tell you the chances you'll win your case?

Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can't assess the likelihood of different outcomes, we're in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul's Drag Race.

Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day.

In this conversation from 2019, we discuss how his work can be applied to your personal life to answer high-stakes questions, like how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically.

Full transcript, related links, and summary of this interview

This episode first broadcast on the regular 80,000 Hours Podcast feed on June 28, 2019. Some related episodes include:

• #7 – Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn't all bad
• #11 – Dr Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm
• #15 – Prof Tetlock on how chimps beat Berkeley undergrads and when it's wise to defer to the wise
• #30 – Dr Eva Vivalt on how little social science findings generalize from one study to another
• #40 – Katja Grace on forecasting future technology & how much we should trust expert predictions
• #48 – Brian Christian on better living through the wisdom of computer science
• #78 – Danny Hernandez on forecasting and measuring some of the most important drivers of AI progress

Series produced by Keiran Harris.

Startups for Good
Spencer Greenberg, Founder of Spark Wave

Startups for Good

Play Episode Listen Later Jan 25, 2021 39:14


Spencer is founder and CEO of Spark Wave, a startup foundry (a.k.a. company builder / startup studio) that creates new software companies from scratch, designed to help solve big problems in the world. He has a PhD in applied math from the Courant Institute of Mathematical Sciences at NYU, with his specialty being machine learning (sometimes referred to as "artificial intelligence"). He also has a bachelor of science degree from Columbia University, where he studied applied math and computer science. He has published papers on a variety of topics in applied math, machine learning, mental health, and social science.

Spencer joins me today to discuss his founder's story and the many projects that his company, Spark Wave, is working on. We learn more about Spencer's take on ethical value and how this drives him in business and life. Spencer shares how his academic background in mathematics influences the work that he does. He also tells us about some of the challenges he has encountered, and offers advice to aspiring founders.

"Entrepreneurship is basically the world punching you in the face between 10 and 100 times, and the vast majority of people would give up after a few punches, right?" - Spencer Greenberg

Today on Startups for Good we cover:

• Keeping track of multiple products within a company
• Product and business idea generation and evaluation
• What an effective altruist is
• The type of team involved with a general studio model
• How to recruit a CEO or co-founder to pair with an idea
• Spencer's new podcast, Clearer Thinking with Spencer Greenberg
• Questions to ask yourself prior to becoming an entrepreneur
• Net Promoter Score, Dissatisfaction Score, and other metrics

Connect with Spencer on Twitter and LinkedIn. For more information, visit sparkwave.tech and clearerthinking.org.

Subscribe, Rate & Share Your Favorite Episodes! Thanks for tuning into today's episode of Startups For Good with your host, Miles Lasater. If you enjoyed this episode, please subscribe and leave a rating and review on your favorite podcast listening app. Don't forget to visit our website, connect with Miles on Twitter or LinkedIn, and share your favorite episodes across social media. For more information about The Giving Circle

80,000 Hours Podcast with Rob Wiblin
Rob Wiblin on self-improvement and research ethics

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jan 13, 2021 150:36


This is a crosspost of an episode of the Clearer Thinking Podcast: 022: Self-Improvement and Research Ethics with Rob Wiblin. Rob chats with Spencer Greenberg, who has been an audience favourite in episodes 11 and 39 of the 80,000 Hours Podcast, and has now created this show of his own. Among other things they cover:

• Is trying to become a better person a good strategy for self-improvement?
• Why Rob thinks many people could achieve much more by finding themselves a line manager
• Why interviews on this show are so damn long
• Is it complicated to figure out what human beings value, or actually simpler than it seems?
• Why Rob thinks research ethics and institutional review boards are causing immense harm
• Where prediction markets might be failing today and how to tell

If you like this, go ahead and subscribe to Spencer's show by searching for Clearer Thinking in your podcasting app. In particular, you might want to check out Spencer's conversation with another 80,000 Hours researcher: 008: Life Experiments and Philosophical Thinking with Arden Koehler. The 80,000 Hours Podcast is produced by Keiran Harris.

You Are Not So Smart
195 - Clearer Thinking

You Are Not So Smart

Play Episode Listen Later Dec 14, 2020 79:40


In this episode we sit down with Spencer Greenberg to discuss how to be better critical thinkers using his FIRE method and other insights from his website, ClearerThinking.org. See omnystudio.com/listener for privacy information.

Forcing Function Hour
Spencer Greenberg: The Sum of Our Decisions

Forcing Function Hour

Play Episode Listen Later Oct 16, 2020 69:36


Spencer Greenberg is a serial founder, mathematician, and social scientist, with a focus on improving human well-being. He is the founder of UpLift and Mind Ease, apps for helping people with depression and anxiety, as well as ClearerThinking.org, which provides 40 tools and training programs to improve decision-making. Spencer is also the founder of Spark Wave, a startup foundry that creates novel software products from scratch. Spencer joined Chris to discuss principles and techniques for improving our decision-making and reducing thinking biases. For the video, transcript, and show notes, visit https://forcingfunction.com/podcast/spencer-greenberg

Clearer Thinking with Spencer Greenberg
Spencer Greenberg @ THINKERS Workshop

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Oct 14, 2020 58:03


Why is it important to learn about cognitive biases? What are the various modes of nuanced thinking? What kind of mindset do people have to have in order to change their minds? When should we make "gut", intuitive decisions? When should we make careful, measured, reflective decisions? This episode was originally recorded on the THINKERS Workshop show. Watch the original recording here, or visit THINKERS Workshop or THINKERS Notebook to learn more.

Clearer Thinking with Spencer Greenberg
THINKERS Workshop (with Spencer Greenberg)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Oct 13, 2020 58:03


Why is it important to learn about cognitive biases? What are the various modes of nuanced thinking? What kind of mindset do people have to have in order to change their minds? When should we make "gut", intuitive decisions? When should we make careful, measured, reflective decisions? This episode was originally recorded on the THINKERS Workshop show. Watch the original recording here, or visit THINKERS Workshop or THINKERS Notebook to learn more.

The Psychology Podcast
Spencer Greenberg || Effective Altruism, Mental Health, & Habit Change

The Psychology Podcast

Play Episode Listen Later Sep 10, 2020 47:19


Today it is great to have Spencer Greenberg on the podcast. Spencer is an entrepreneur and mathematician and founder of Spark Wave — a software foundry which creates novel software products from scratch, designed to help solve problems in the world using social science. For example, scalable care for depression and anxiety, and technology for accelerating and improving social science research. He also founded clearerthinking.org, which offers free tools and training programs used by over 250,000 people, designed to help you improve decision making and increase positive behaviors. Spencer has a PhD in Applied Math from NYU with a specialty in Machine Learning. Spencer's work has been featured by numerous major media outlets such as the Wall Street Journal, the New York Times, the Independent, Lifehacker, Fast Company, and the Financial Times. Check out sparkwave.tech/conditions-for-change where you can apply the results of scientific studies to your habit development, from making a decision to cultivate a habit, to taking action, and finally, continuing that habit.
Time Stamps:
[00:01:40] How the Effective Altruism movement works
[00:02:55] The role of emotions in Effective Altruism
[00:04:03] How Spencer applies Effective Altruism in his work and companies
[00:06:27] How cultivating automatic if-then rules can improve your life
[00:10:42] How to handle depression using behavioral activation
[00:12:05] Introversion and the hierarchical nature of personality
[00:14:58] Personality traits that are not captured by the Big Five model
[00:18:04] How it is easier to present a scientific finding compared to explaining that finding
[00:20:20] The "psychological immune system" and the five categories of behaviors for dealing with difficult situations
[00:20:55] Facing reality and clarifying distortions of thinking
[00:21:27] Feeling-based and emotion-based strategies for dealing with difficulty
[00:22:10] Action-based strategies for dealing with difficulty
[00:23:27] Refocusing techniques for dealing with difficulty
[00:23:42] Reframing and finding the silver lining
[00:29:47] Whether or not the Big Five personality traits are inherently valenced (i.e. positive or negative)
[00:31:03] Personality as a distribution of traits
[00:33:22] Finding optimal levels of different personality traits
[00:33:59] Tips for forming new habits
[00:38:22] How to determine why behavioral change is not happening
[00:42:07] Tips and heuristics for sparking structured and unstructured creativity

Thinking Clearly
#44-Interactive Resources for Sharpening your Critical Thinking Skills-with Guest Spencer Greenberg

Thinking Clearly

Play Episode Listen Later Mar 6, 2020 57:30


This discussion, with Dr. Spencer Greenberg, focuses on a variety of free tools and mini-courses available online at clearerthinking.org. These fun, interactive tools and mini-courses, developed by Dr. Greenberg and associates, have been meticulously designed to improve your critical thinking skills, help you understand yourself more deeply, form new positive habits, and make better decisions. Dr. Greenberg has a PhD in mathematics with a specialty in machine learning. Find out more about his work at spencergreenberg.com.

Lightbulb Moment
Season 1, Episode 8: Spencer Greenberg

Lightbulb Moment

Play Episode Listen Later Jan 4, 2020 74:45


How to introduce Spencer Greenberg? He's a man who wears many hats: entrepreneur, holder of a doctorate in applied math from New York University, researcher, startup founder, and he's extremely productive in his spare time, too! He founded Spark Wave, a startup foundry which creates novel software products designed to solve problems in the world. A few of the issues they've tackled are scalable care for depression, and technology for improving social science. He also founded ClearerThinking.org, which offers free tools and training programs, used by over 150,000 people, designed to help improve decision-making and reduce biases in people's thinking.

EARadio
EAG 2019 SF: Effective behavior change (Spencer Greenberg)

EARadio

Play Episode Listen Later Oct 6, 2019 29:27


If we want to improve the world (and ourselves), we need to start by changing the way we live — our habits and behaviors. In this talk, Spencer Greenberg, founder and CEO of ClearerThinking.org, discusses ways that behavior change matters and techniques we can use to get better at it. To learn more about effective … Continue reading EAG 2019 SF: Effective behavior change (Spencer Greenberg)

The Most Interesting People I Know
9 - Spencer Greenberg on Life-Changing Questions, Effective Altruism, and Burning Man

The Most Interesting People I Know

Play Episode Listen Later Jul 23, 2019 108:44


Spencer Greenberg is a mathematician, social scientist, and entrepreneur. He received his PhD in applied math from NYU and is the founder of Spark Wave, a social venture foundry. As we discuss, Spark Wave has created a number of apps tackling problems like depression, anxiety, and finding participants for academic studies. Spencer also created the site www.clearerthinking.org, which offers free online tools and training programs to help users avoid bias and make better decisions. This site has a lot of fun and thought-provoking exercises. My favorites that we didn't dig into: common misconceptions, political bias test, and leaving your mark on the world. Spencer has spoken at Effective Altruism Global and been published in the New York Times. We cover: life-changing questions you can ask yourself, intrinsic values, some hard problems for utilitarianism, Spark Wave's apps for anxiety and depression, how to ensure social ventures don't become evil, Effective Altruism, the profound challenge of doing good in the world, the connection between our happiness and the news, gaming Facebook for your happiness, the best legal approach to prostitution, Spencer's thoughts on fiction and nonfiction, why memorizing is underrated, and the best description of Burning Man I've heard. When I conceived this show, Spencer was one of the first people that came to mind. As you'll soon see, he has informed and well-developed thoughts on a huge range of topics. He's changed my mind quite a few times, and I appreciate his approach to thinking through the hardest problems we face as a species.
Spencer's referenced work:
• Life-Changing Questions
• Intrinsic Values Test
• Spencer's presentation at Effective Altruism Global on "Value traps, and how to avoid them"
• Mind Ease for anxiety
• UpLift for depression
• Facebook post on humor
• Facebook post on 10 policies Spencer supports

Other links:
• The 36 Questions That Lead to Love
• The Repugnant Conclusion
• Current Affairs article on Wikipedia
• Is it fair to say that most social programmes don't work?
• Peter Singer's essay Famine, Affluence and Morality
• What it's like to go to Burning Man for the first time

Singal-Minded Conversations
Episode 8: Is Power-Posing Getting A Bad Rap? (Spencer Greenberg)

Singal-Minded Conversations

Play Episode Listen Later Jun 20, 2019 50:22


In today's episode, I spoke with Spencer Greenberg (https://www.spencergreenberg.com/) about the personality differences between men and women, the different intrinsic values liberals and conservatives find most meaningful, power-posing, and social-science reform efforts. Spencer is a mathematician and entrepreneur, as well as the founder of Spark Wave, a "startup foundry" which creates software designed to help solve problems in the world using social science, and to accelerate and improve social-science research. He also founded ClearerThinking.org, which offers free tools and training programs geared at improving decision-making, increasing positive behaviors, and reducing cognitive biases. Spencer has a PhD in applied mathematics from NYU, with a specialty in machine learning, and his work has been featured in media outlets like the Wall Street Journal, the Independent, Lifehacker, Gizmodo, Fast Company, and the Financial Times. (Music: Intro: Why? - "The Vowels, Pt. 2" (https://www.youtube.com/watch?v=ggqe_uHvrlw); break: Dropkick Murphys - "The Dirty Glass" (https://open.spotify.com/track/2jggiA0przPmYj0Z96W7Q0?si=OUsvugSmT5WZ88Q9S0OC3Q); outro: Field Mouse - "Happy" (https://www.youtube.com/watch?v=oNe_9u3SmxY))

DEEP TALKS [ENG]
DEEP TALKS 01: Spencer Greenberg - Entrepreneur, Mathematician, and Founder of ClearerThinking

DEEP TALKS [ENG]

Play Episode Listen Later Jun 3, 2019 61:09


In this episode of Deep Talks, I interviewed Spencer Greenberg. Spencer has a PhD in applied math, and he is the founder of Spark Wave, a startup foundry which creates software products designed to solve problems in the world on a large scale, for example UpLift, an automated app for helping people with depression. He also founded ClearerThinking.org, which offers free tools and training programs that have been used by over 150,000 people, designed to help improve decision-making and reduce thinking biases. Spencer's work has been featured by major media outlets such as the Wall Street Journal, the Independent, Lifehacker, Gizmodo, Fast Company, and the Financial Times. ---- Video interview: https://www.youtube.com/watch?v=bIU-Ko8NZSQ ---- PODCAST DEEP TALKS: Petr Ludwig, the author of the book The End of Procrastination, invites guests who do something meaningful, and together they discuss topics like personal values, purpose at work and in life, and how to improve today's society.

The Turing Test
The Turing Test #8: Spencer Greenberg

The Turing Test

Play Episode Listen Later May 13, 2019


How to introduce Spencer Greenberg? He's a man who wears many hats: entrepreneur, holder of a doctorate in applied math from New York University, researcher, startup founder, and he's extremely productive in his spare time, too! He founded Spark Wave, a startup foundry which creates novel software products designed to solve problems in the world. A few of … Continue reading "The Turing Test #8: Spencer Greenberg"

DEEP TALKS [CZE]
DEEP TALKS 24: Spencer Greenberg - Mathematician and Entrepreneur [ENG]

DEEP TALKS [CZE]

Play Episode Listen Later Apr 13, 2019 61:09


In this episode of Deep Talks, I interviewed Spencer Greenberg. Spencer has a PhD in applied math, and he is the founder of Spark Wave, a startup foundry which creates software products designed to solve problems in the world on a large scale, for example UpLift, an automated app for helping people with depression. He also founded ClearerThinking.org, which offers free tools and training programs that have been used by over 150,000 people, designed to help improve decision-making and reduce thinking biases. Spencer's work has been featured by major media outlets such as the Wall Street Journal, the Independent, Lifehacker, Gizmodo, Fast Company, and the Financial Times. Video interview: https://www.youtube.com/watch?v=pP8n1Bkv6f0 ---- PODCAST DEEP TALKS: Petr Ludwig, author of the book The End of Procrastination, invites guests to the DEEP TALKS podcast to discuss topics such as values, the meaning of work and life, and what can be done for a better Czech society.

Action Design Radio
Spencer Greenberg – Combining Technology and Behavioral Interventions

Action Design Radio

Play Episode Listen Later Mar 18, 2019 53:55


Spencer Greenberg is an applied mathematician, entrepreneur, and self-described “collector of powerful tools.” He is the Founder and CEO of multiple companies, including Spark Wave – a venture builder (a.k.a. a foundry, or startup studio) that creates software products with the goal of achieving large social impact. Spencer joins Erik and Zarak to discuss his unique perspective on psychology and behavior. He takes a background in technology and combines it with applied social science to build platforms that implement complex behavioral interventions. How does one choose the right methodology when conducting a study? What’s the difference between testing a hypothesis and trying to accurately predict the future? How does fatigue change throughout the day? Are most people who suffer from depression aware of it? How can social media be utilized to inspire creative thinking in research? Why publish a paper when you can release an app that people can use? All of those questions and more are addressed in our latest installment of Action Design Radio!

EARadio
EAG 2018 SF: Value traps, and how to avoid them (Spencer Greenberg)

EARadio

Play Episode Listen Later Feb 12, 2019 28:47


What do we really value? Are our values reducible to just one or two things, or are they more complex? What does it mean to value something? In this talk from EA Global 2018: San Francisco, Spencer Greenberg argues that we each have a set of multiple intrinsic values, and that it’s worth reflecting on … Continue reading EAG 2018 SF: Value traps, and how to avoid them (Spencer Greenberg)

The Jordan Harbinger Show
136: Spencer Greenberg | Cultivating Clearer Thinking for Cloudy Times

The Jordan Harbinger Show

Play Episode Listen Later Dec 20, 2018 61:15


Spencer Greenberg (@SpencrGreenberg) is a mathematician, entrepreneur, and founder of Clearer Thinking, a website that trains people to overcome their own biases and make better decisions rationally.

What We Discuss with Spencer Greenberg:
• Common logical fallacies and concepts like black and white thinking, cherry picking, straw man arguments, and the typical mind fallacy.
• How these logical fallacies can be so powerfully persuasive even in the face of contrary evidence, and why they inhibit our thinking and keep us from getting closer to the truth.
• How to spot cognitive bias in ourselves and others -- and in the sources we choose to inform us about the outside world.
• How we can improve our critical thinking, cut through these faulty arguments and biases, and separate fact from fiction with logic over emotion.
• Practical tools that allow us to cultivate clearer thinking for these often cloudy times.
• And much more...

Privacy.com allows you to generate a new credit card number for every purchase you make online for the sake of worry-free security -- and it doesn't cost you anything! Want to find out why we use this incredible service ourselves? Sign up for free and get a $5 credit at privacy.com/jordan! Hunt a Killer is a puzzle-style game that poses this tantalizing question: what if a serial killer delivered a package to your doorstep each month? Go to huntakiller.com/jordan for 10 percent off your first box and find out! Candid Co gives you clear braces without in-office visits for 65% less than a traditional orthodontist. Get started with an at-home modeling kit today for 25% off at this link with code JORDAN at checkout! Does your business have an Internet presence? Save up to a whopping 62% on new webhosting packages with HostGator at hostgator.com/jordan! Sign up for Six-Minute Networking -- our free networking and relationship development mini course -- at jordanharbinger.com/course!

Like this show? Please leave us a review here -- even one sentence helps! Consider including your Twitter handle so we can thank you personally! Full show notes and resources can be found here.

Rationally Speaking
Rationally Speaking #222 - Spencer Greenberg and Seth Cottrell on "Ask a Mathematician, Ask a Physicist"

Rationally Speaking

Play Episode Listen Later Dec 2, 2018 57:37


This episode features the hosts of "Ask a Mathematician, Ask a Physicist," a blog that grew out of a Burning Man booth in which a good-natured mathematician (Spencer Greenberg) and physicist (Seth Cottrell) answer people's questions about life, the universe, and everything. Spencer and Seth discuss the weirdest and most controversial questions they've answered, why math is fundamentally arbitrary, Seth's preferred alternative to the Many Worlds Interpretation of quantum physics, how a weird group of parapsychologists changed the field of physics, and whether you could do a Double Slit Experiment with a Cat Cannon.

80,000 Hours Podcast with Rob Wiblin
#39 - Spencer Greenberg on the scientific approach to solving difficult everyday questions

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Aug 7, 2018 137:29


Will Trump be re-elected? Will North Korea give up their nuclear weapons? Will your friend turn up to dinner? Spencer Greenberg, founder of ClearerThinking.org, has a process for working out such real-life problems.

Let's work through one here: how likely is it that you'll enjoy listening to this episode? The first step is to figure out your 'prior probability'; what's your estimate of how likely you are to enjoy the interview before getting any further evidence? Other than applying common sense, one way to figure this out is called reference class forecasting: looking at similar cases and seeing how often something is true, on average.

Spencer is our first ever return guest. So one reference class might be: how many Spencer Greenberg episodes of the 80,000 Hours Podcast have you enjoyed so far? Being this specific limits bias in your answer, but with a sample size of at most 1, you'd probably want to add more data points to reduce variability. Zooming out, how many episodes of the 80,000 Hours Podcast have you enjoyed? Let's say you've listened to 10, and enjoyed 8 of them. If so, 8 out of 10 might be your prior probability. But maybe the two you didn't enjoy had something in common. If you've liked similar episodes in the past, you'd update in favour of expecting to enjoy it, and if you've disliked similar episodes in the past, you'd update negatively. You can zoom out further; what fraction of long-form interview podcasts have you ever enjoyed?

Then you'd look to update whenever new information became available. Do the topics seem interesting? Did Spencer make a great point in the first 5 minutes? Was this description unbearably self-referential? Speaking of the Question of Evidence: in a world where Spencer was not worth listening to, how likely is it that we'd invite him back for a second episode?

Links to learn more, summary and full transcript.

We'll run through several diverse examples, and how to actually work out the changing probabilities as you update. But that's only a fraction of the conversation. We also discuss:

* How could we generate 20-30 new happy thoughts a day? What would that do to our welfare?
* What do people actually value? How do EAs differ from non-EAs?
* Why should we care about the distinction between intrinsic and instrumental values?
* Would hedonic utilitarians really want to hook themselves up to happiness machines?
* What types of activities are people generally under-confident about? Why?
* When should you give a lot of weight to your prior belief?
* When should we trust common sense?
* Does power posing have any effect?
* Are resumes worthless?
* Did Trump explicitly collude with Russia? What are the odds of him getting re-elected?
* What's the probability that China and the US go to war in the 21st century?
* How should we treat claims of expertise on diets?
* Why were Spencer's friends suspicious of Theranos for years?
* How should we think about the placebo effect?
* Does a shift towards rationality typically cause alienation from family and friends? How do you deal with that?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours podcast is produced by Keiran Harris.
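The reference-class-then-update process described in this episode summary can be sketched in a few lines of Python. This is only an illustrative sketch, not anything from the episode itself: the 8-of-10 reference class comes from the example above, while the likelihood ratio of 3 for "invited back as a return guest" is a made-up number for demonstration.

```python
def reference_class_prior(successes: int, trials: int) -> float:
    """Estimate a prior probability from a reference class,
    e.g. 8 enjoyable episodes out of 10 listened to."""
    return successes / trials

def update_odds(prior: float, likelihood_ratio: float) -> float:
    """Bayesian update in odds form:
    posterior odds = prior odds * P(evidence | hypothesis) / P(evidence | not hypothesis).
    Returns the posterior probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Prior from the reference class: enjoyed 8 of the last 10 episodes.
p = reference_class_prior(8, 10)   # 0.8

# New evidence: Spencer is a return guest. Suppose (hypothetically) that
# guests worth listening to are 3x as likely to be invited back as guests
# who aren't -- a likelihood ratio of 3 in favour of enjoying the episode.
p = update_odds(p, 3.0)

print(f"Posterior probability of enjoying the episode: {p:.3f}")
```

Each new piece of evidence (an interesting topic list, a great point in the first 5 minutes) would be folded in the same way, by multiplying the current odds by that evidence's likelihood ratio.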

Funny as Tech: a tech ethicist & comedian tackle the thorniest topics in tech w/ the help of experts!
Ep23: Spencer Greenberg on using tech for mental health & wellness

Funny as Tech: a tech ethicist & comedian tackle the thorniest topics in tech w/ the help of experts!

Play Episode Listen Later Apr 6, 2018 28:28


Can tech be used to improve our mental health & wellbeing?! Co-hosts David Ryan Polgar (tech ethicist) and Joe Leonardo (comedian) chat with entrepreneur Spencer Greenberg. Spencer Greenberg is a mathematician, entrepreneur, and founder of Spark Wave, a startup foundry which creates novel software products designed to solve problems in the world. Spencer also founded ClearerThinking.org, which offers free tools and training programs designed to help improve decision-making and reduce thinking biases. Spencer's work has been featured in the Wall Street Journal, the Independent, Lifehacker, Gizmodo, Fast Company, and the Financial Times.

http://www.spencergreenberg.com/
https://twitter.com/SpencrGreenberg
https://www.clearerthinking.org/ (website for improving decision making)
https://www.uplift.us (app for people with depression)

This episode was recorded at Grand Central Tech. For more info visit their website at: www.grandcentraltech.com
Twitter: https://twitter.com/GCTech

Funny as Tech is a monthly live panel show and weekly podcast that tackles the thorniest issues in tech! Live shows are performed at the Peoples Improv Theater in Manhattan and podcast interviews at Grand Central Tech. Funny as Tech also performs on the road with conferences and special events. Have a question? Info@FunnyAsTech.com

FUNNY AS TECH
FunnyAsTech.com
Twitter: https://twitter.com/FunnyAsTech
Instagram: https://www.instagram.com/FunnyAsTech/
Facebook: https://www.facebook.com/FunnyAsTech/
Soundcloud: https://soundcloud.com/user-328735920
https://twitter.com/TechEthicist
https://twitter.com/ImJoeLeonardo

NEW EPISODES EVERY MONDAY

EARadio
EAG 2017 SF: Social science as lens on effective charity (Spencer Greenberg)

EARadio

Play Episode Listen Later Nov 2, 2017 25:13


What makes effective altruism so “strange”? We’ll take a data-driven look at this question using results from four new studies we conducted, which examine perceptions of catastrophic risk, charity cost-effectiveness, animal suffering, and self-reported personality. Source: Effective Altruism Global (video).

80,000 Hours Podcast with Rob Wiblin
#11 - Dr Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Oct 17, 2017 89:17


Do most meat eaters think it’s wrong to hurt animals? Do Americans think climate change is likely to cause human extinction? What is the best, state-of-the-art therapy for depression? How can we make academics more intellectually honest, so we can actually trust their findings? How can we speed up social science research ten-fold? Do most startups improve the world, or make it worse? If you’re interested in these questions, this interview is for you. Click for a full transcript, links discussed in the show, etc.

A scientist, entrepreneur, writer, and mathematician, Spencer Greenberg is constantly working to create tools to speed up and improve research and critical thinking. These include:

* Rapid public opinion surveys to find out what most people actually think about animal consciousness, farm animal welfare, the impact of developing world charities, and the likelihood of extinction by various different means;
* Tools to enable social science research to be run en masse very cheaply;
* ClearerThinking.org, a highly popular site for improving people’s judgement and decision-making;
* Ways to transform data analysis methods to ensure that papers only show true findings;
* Innovative research methods;
* Ways to decide which research projects are actually worth pursuing.

In this interview, Spencer discusses all of these and more. If you don’t feel like listening, that just shows that you have poor judgement and need to benefit from his wisdom even more!

Get free, one-on-one career advice. We’ve helped hundreds of people compare their options, get introductions, and find high-impact jobs. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.

D.N.A.
What it's like to be an entrepreneur?

D.N.A.

Play Episode Listen Later Aug 5, 2017 14:29


This week's special guest is Spencer Greenberg! Want to know what a "work-in-progress" entrepreneur's life is like? Check out Spencer's story and what his new product Uplift is about! Also on YouTube http://bit.ly/dnapingchang_entrepreneur

80,000 Hours Podcast with Rob Wiblin
#0 – Introducing the 80,000 Hours Podcast

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later May 1, 2017 3:53


80,000 Hours is a non-profit that provides research and other support to help people switch into careers that effectively tackle the world's most pressing problems. This podcast is just one of many things we offer, the others of which you can find at 80000hours.org. Since 2017 this show has been putting out interviews about the world's most pressing problems and how to solve them — which some people enjoy because they love to learn about important things, and others are using to figure out what they want to do with their careers or with their charitable giving. If you haven't yet spent a lot of time with 80,000 Hours or our general style of thinking, called effective altruism, it's probably really helpful to first go through the episodes that set the scene, explain our overall perspective on things, and generally offer all the background information you need to get the most out of the episodes we're making now. That's why we've made a new feed with ten carefully selected episodes from the show's archives, called 'Effective Altruism: An Introduction'. You can find it by searching for 'Effective Altruism' in your podcasting app or at 80000hours.org/intro. 
Or, if you’d rather listen on this feed, here are the ten episodes we recommend you listen to first:

• #21 – Holden Karnofsky on the world's most intellectual foundation and how philanthropy can have maximum impact by taking big risks
• #6 – Toby Ord on why the long-term future of humanity matters more than anything else and what we should do about it
• #17 – Will MacAskill on why our descendants might view us as moral monsters
• #39 – Spencer Greenberg on the scientific approach to updating your beliefs when you get new evidence
• #44 – Paul Christiano on developing real solutions to the 'AI alignment problem'
• #60 – What Professor Tetlock learned from 40 years studying how to predict the future
• #46 – Hilary Greaves on moral cluelessness, population ethics and tackling global issues in academia
• #71 – Benjamin Todd on the key ideas of 80,000 Hours
• #50 – Dave Denkenberger on how we might feed all 8 billion people through a nuclear winter
• 80,000 Hours Team chat #3 – Koehler and Todd on the core idea of effective altruism and how to argue for it

Rationally Speaking
Rationally Speaking #182 - Spencer Greenberg on "How online research can be faster, better, and more useful"

Rationally Speaking

Play Episode Listen Later Apr 16, 2017 52:08


This episode features mathematician and social entrepreneur Spencer Greenberg, talking about how he's taking advantage of the Internet to improve the research process. Spencer and Julia explore topics such as: how the meaning of your research can change dramatically when you ask people *why* they gave the answers they did on your survey, how the sheer speed of online research can help us solve the p-hacking problem, and how to incentivize scientists to share their data and methods.

EARadio
EA Global: EA Entrepreneurship (Spencer Greenberg)

EARadio

Play Episode Listen Later Oct 15, 2015 14:51


Source: Effective Altruism Global (original video).

Singularity.FM
Spencer Greenberg: To Become Better Thinkers – Study Our Cognitive Biases and Logical Fallacies

Singularity.FM

Play Episode Listen Later Sep 23, 2011 29:03


Yesterday I interviewed Spencer Greenberg for Singularity 1 on 1. Spencer is the Chief Executive Officer of Rebellion Research, the quantitative hedge fund that he co-founded in 2005 at the age of 22. During our conversation, Spencer and I discuss issues such as: the unique approach that Rebellion Research takes to investing; artificial intelligence and machine learning; the […]