Podcasts about epistemic

Branch of philosophy concerned with the nature and scope of knowledge

  • 284 PODCASTS
  • 1,006 EPISODES
  • 32m AVG DURATION
  • 1 WEEKLY EPISODE
  • Apr 3, 2025 LATEST
POPULARITY of "epistemic" (chart covering 2017–2024)


Best podcasts about epistemic

Latest podcast episodes about epistemic

LessWrong Curated Podcast
“The Pando Problem: Rethinking AI Individuality” by Jan_Kulveit

LessWrong Curated Podcast

Play Episode Listen Later Apr 3, 2025 27:39


Epistemic status: This post aims at an ambitious target: improving intuitive understanding directly. The model for why this is worth trying is that I believe we are more bottlenecked by people having good intuitions guiding their research than, for example, by the ability of people to code and run evals. Quite a few ideas in AI safety implicitly use assumptions about individuality that ultimately derive from human experience. When we talk about AIs scheming, alignment faking or goal preservation, we imply there is something scheming or alignment faking or wanting to preserve its goals or escape the datacentre. If the system in question were human, it would be quite clear what that individual system is. When you read about Reinhold Messner reaching the summit of Everest, you would be curious about the climb, but you would not ask if it was his body there, or his [...]

Outline:
(01:38) Individuality in Biology
(03:53) Individuality in AI Systems
(10:19) Risks and Limitations of Anthropomorphic Individuality Assumptions
(11:25) Coordinating Selves
(16:19) What's at Stake: Stories
(17:25) Exporting Myself
(21:43) The Alignment Whisperers
(23:27) Echoes in the Dataset
(25:18) Implications for Alignment Research and Policy

First published: March 28th, 2025
Source: https://www.lesswrong.com/posts/wQKskToGofs4osdJ3/the-pando-problem-rethinking-ai-individuality
Narrated by TYPE III AUDIO.
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Sports Daily
Foolish Sports Epistemic Ambivalence

Sports Daily

Play Episode Listen Later Apr 1, 2025 41:15


Hour 1 - March goes out like an emu, but Jacob & Tommy come in with pranks, tricks & general sports tomfoolery. In this segment they discuss the Royals' pitching & centerfield situations as well as the latest on new stadiums.

Effective Altruism Forum Podcast
“Introducing The Spending What We Must Pledge” by Thomas Kwa

Effective Altruism Forum Podcast

Play Episode Listen Later Apr 1, 2025 5:30


Epistemic status: highly certain, or something. The Spending What We Must [...]

LessWrong Curated Podcast
“Conceptual Rounding Errors” by Jan_Kulveit

LessWrong Curated Podcast

Play Episode Listen Later Mar 29, 2025 6:21


Epistemic status: Reasonably confident in the basic mechanism. Have you noticed that you keep encountering the same ideas over and over? You read another post, and someone helpfully points out it's just old Paul's idea again. Or Eliezer's idea. Not much progress here, move along. Or perhaps you've been on the other side: excitedly telling a friend about some fascinating new insight, only to hear back, "Ah, that's just another version of X." And something feels not quite right about that response, but you can't quite put your finger on it. I want to propose that while ideas are sometimes genuinely that repetitive, there's often a sneakier mechanism at play. I call it Conceptual Rounding Errors – when our mind's necessary compression goes a bit too far.

Too much compression
A Conceptual Rounding Error occurs when we encounter a new mental model or idea that's partially—but not fully—overlapping [...]

Outline:
(01:00) Too much compression
(01:24) No, This Isn't The Old Demons Story Again
(02:52) The Compression Trade-off
(03:37) More of this
(04:15) What Can We Do?
(05:28) When It Matters

First published: March 26th, 2025
Source: https://www.lesswrong.com/posts/FGHKwEGKCfDzcxZuj/conceptual-rounding-errors
Narrated by TYPE III AUDIO.

deBuren
Bas Keemink | Epistemic power | Nieuw Geluid Live 2024

deBuren

Play Episode Listen Later Mar 17, 2025 7:50


On 18 October, the participants of Nieuw Geluid 2024 introduced themselves to the audience. Bas Keemink is a philosopher, journalist and comedian from Rotterdam who likes nothing better than broadcasting his own opinion. An opinion that, in contrast to his urge to prove himself, grows more nuanced every year. He is also co-host of the podcast Ik zie, Ik zie, Filosofie!, in which he searches for an answer to the question: how on earth do you find a job as a philosopher? Bas asks: do celebrities undermine our democracy?

Two Pint PLC
097 Team Teaching & Epistemic Empathy

Two Pint PLC

Play Episode Listen Later Mar 12, 2025 44:26


Team teaching is increasing in popularity among schools to help educators work together in their daily practice. We read about different models for team teaching and think about how they map to our past experiences in a wide variety of teaming approaches from our own careers. Later, we reflect on how to develop epistemic empathy. Our ability to take the perspective of students who don't yet know our content helps us be better guides in their learning journeys, but it relies on our hard-won experience in the classroom.

Machine Learning Street Talk
Transformers Need Glasses! - Federico Barbero

Machine Learning Street Talk

Play Episode Listen Later Mar 8, 2025 60:54


Federico Barbero (DeepMind/Oxford) is the lead author of "Transformers Need Glasses!". Have you ever wondered why LLMs struggle with seemingly simple tasks like counting or copying long strings of text? We break down the theoretical reasons behind these failures, revealing architectural bottlenecks and the challenges of maintaining information fidelity across extended contexts. Federico explains how these issues are rooted in the transformer's design, drawing parallels to over-squashing in graph neural networks and detailing how the softmax function limits sharp decision-making. But it's not all bad news! Discover practical "glasses" that can help transformers see more clearly, from simple input modifications to architectural tweaks.

SPONSOR MESSAGES:
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/

https://federicobarbero.com/

TRANSCRIPT + RESEARCH: https://www.dropbox.com/s/h7ys83ztwktqjje/Federico.pdf?dl=0

TOC:
1. Transformer Limitations: Token Detection & Representation
[00:00:00] 1.1 Transformers fail at single token detection
[00:02:45] 1.2 Representation collapse in transformers
[00:03:21] 1.3 Experiment: LLMs fail at copying last tokens
[00:18:00] 1.4 Attention sharpness limitations in transformers
2. Transformer Limitations: Information Flow & Quantization
[00:18:50] 2.1 Unidirectional information mixing
[00:18:50] 2.2 Unidirectional information flow towards sequence beginning in transformers
[00:21:50] 2.3 Diagonal attention heads as expensive no-ops in LLaMA/Gemma
[00:27:14] 2.4 Sequence entropy affects transformer model distinguishability
[00:30:36] 2.5 Quantization limitations lead to information loss & representational collapse
[00:38:34] 2.6 LLMs use subitizing as opposed to counting algorithms
3. Transformers and the Nature of Reasoning
[00:40:30] 3.1 Turing completeness conditions in transformers
[00:43:23] 3.2 Transformers struggle with sequential tasks
[00:45:50] 3.3 Windowed attention as solution to information compression
[00:51:04] 3.4 Chess engines: mechanical computation vs creative reasoning
[01:00:35] 3.5 Epistemic foraging introduced

REFS:
[00:01:05] Transformers Need Glasses!, Barbero et al. https://proceedings.neurips.cc/paper_files/paper/2024/file/b1d35561c4a4a0e0b6012b2af531e149-Paper-Conference.pdf
[00:05:30] Softmax is Not Enough, Veličković et al. https://arxiv.org/abs/2410.01104
[00:11:30] Adv Alg Lecture 15, Chawla https://pages.cs.wisc.edu/~shuchi/courses/787-F09/scribe-notes/lec15.pdf
[00:15:05] Graph Attention Networks, Veličković https://arxiv.org/abs/1710.10903
[00:19:15] Extract Training Data, Carlini et al. https://arxiv.org/pdf/2311.17035
[00:31:30] 1-bit LLMs, Ma et al. https://arxiv.org/abs/2402.17764
[00:38:35] LLMs Solve Math, Nikankin et al. https://arxiv.org/html/2410.21272v1
[00:38:45] Subitizing, Railo https://link.springer.com/10.1007/978-1-4419-1428-6_578
[00:43:25] NN & Chomsky Hierarchy, Delétang et al. https://arxiv.org/abs/2207.02098
[00:51:05] Measure of Intelligence, Chollet https://arxiv.org/abs/1911.01547
[00:52:10] AlphaZero, Silver et al. https://pubmed.ncbi.nlm.nih.gov/30523106/
[00:55:10] Golden Gate Claude, Anthropic https://www.anthropic.com/news/golden-gate-claude
[00:56:40] Chess Positions, Chase & Simon https://www.sciencedirect.com/science/article/abs/pii/0010028573900042
[01:00:35] Epistemic Foraging, Friston https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2016.00056/full
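One technical claim in these show notes, that the softmax function limits sharp decision-making, has a quick numerical illustration. The sketch below is ours, not code from the episode or the paper: assuming attention logits stay bounded (here a fixed gap of 5.0 between one distinguished token and all the others), the largest softmax weight shrinks toward zero as the sequence grows, so attention cannot stay arbitrarily sharp at long context lengths.

```python
# Illustrative sketch (not from the episode or paper): with bounded logits,
# softmax attention cannot stay sharp as sequence length grows.
import numpy as np

def max_softmax_weight(seq_len: int, logit_gap: float = 5.0) -> float:
    """Largest attention weight when one token's logit exceeds all others
    by `logit_gap` (logits are bounded, as they are in a trained model)."""
    logits = np.zeros(seq_len)
    logits[0] = logit_gap                  # the single "important" token
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights[0]

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"sequence length {n:>7}: max attention weight = {max_softmax_weight(n):.4f}")
# The printed weight decays toward zero: softmax with a bounded logit gap
# cannot approximate a hard argmax ("sharp decision") at arbitrary lengths.
```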

The Jim Rutt Show
EP 287 Jonathan Rauch on the Epistemic Crisis

The Jim Rutt Show

Play Episode Listen Later Feb 27, 2025 97:21


Jim talks with Jonathan Rauch about the ideas in his book The Constitution of Knowledge: A Defense of Truth. They discuss the epistemic crisis, Plato's Theaetetus, Trump & propaganda techniques, the Constitution of Knowledge as a framework for epistemics, the "marketplace of ideas" metaphor, the reality-based community, the personal-institutional spiral, the social funnel of knowledge, social media's impact on epistemics, advertising vs subscription models, meme space pollution, the anti-vax movement, the importance of free speech to the gay rights movement, recommendations for defending truth, supporting institutions, speaking out against misinformation, maintaining viewpoint diversity, and much more. Episode Links The Constitution of Knowledge: A Defense of Truth, by Jonathan Rauch Plato's Theaetetus Heterodox Academy JRS EP273 - Gregg Henriques on the Unified Theory of Knowledge Invisible Rulers: The People Who Turn Lies Into Reality, by Renée DiResta Cross Purposes: Christianity's Broken Bargain with Democracy, by Jonathan Rauch Jonathan Rauch, a senior fellow at the Brookings Institution in Washington, is the author of eight books and many articles on public policy, culture, and government. He is a contributing writer for The Atlantic and recipient of the 2005 National Magazine Award, the magazine industry's equivalent of the Pulitzer Prize. His latest book, published in 2021 by the Brookings Press, is The Constitution of Knowledge: A Defense of Truth, a spirited and deep-diving account of how to push back against disinformation, canceling, and other new threats to our fact-based epistemic order.

The Modern Therapist's Survival Guide with Curt Widhalm and Katie Vernoy
Exploring Systemic Trauma and Relational Privilege with BIPOC and LGBTQI Couples: An interview with Akilah Riley-Richardson, MSW, CCTP

The Modern Therapist's Survival Guide with Curt Widhalm and Katie Vernoy

Play Episode Listen Later Feb 17, 2025 37:59


Exploring Systemic Trauma and Relational Privilege with BIPOC and LGBTQI Couples: An interview with Akilah Riley-Richardson

Curt and Katie chat with Akilah Riley-Richardson, MSW about the challenges therapists face when working with BIPOC and LGBTQI couples. Akilah discusses the impact of systemic trauma, how it affects relationships, and the importance of creating therapeutic models tailored to marginalized communities. She introduces The PRIDE Model for therapy and The BIOME Stance for therapists, offering actionable strategies for inclusive, trauma-informed care. Transcripts for this episode will be available at mtsgpodcast.com!

In this podcast episode, we talk with Akilah Riley-Richardson about what therapists get wrong when working with BIPOC and LGBTQI couples
Too often, couples counselors take traditional models and make slight tweaks for couples from marginalized backgrounds, with little success. Akilah Riley-Richardson has developed a stance and a new model to help support therapists in doing more effective work with these couples.

Understanding Systemic Trauma in Therapy
Defining systemic trauma: Chronic, unpredictable, and disenfranchised trauma caused by systemic forces (education, legal, and healthcare systems).
Examples of systemic trauma: Microaggressions, mispronounced names, assumptions about authority roles, and misgendering.
Effects on individuals and relationships: Reduced sense of safety, rejection sensitivity, emotional disconnect, and difficulty setting boundaries.

Challenges in Therapy with BIPOC and LGBTQI Couples
Common therapist mistakes: Ignoring systemic realities and the impact of privilege. Adapting existing models without acknowledging their white, cishet origins. Failing to create trauma-informed, inclusive frameworks.
Importance of relational privilege: Self-acceptance, social acceptance, and feeling protected in relationships.

The PRIDE Model & The BIOME Stance from Akilah Riley-Richardson
PRIDE Model for Therapy:
Relational curiosity: Actively exploring the client's lived experiences.
Setting intentions: Creating space for safety and vulnerability.
Trauma work: Addressing systemic trauma's long-term impact on relationships.
BIOME Stance for Therapists:
Bravery: Facing discomfort in recognizing privilege.
Intimacy: Fostering deep emotional connections.
Openness: Being receptive to client experiences.
Micro-liberatory movements: Small but impactful actions toward social justice.
Epistemic embracing: Validating client knowledge and lived experiences.

How Therapists Can Engage Clients in Systemic Trauma Work
Transparency in therapy: Clearly communicating the goal of liberation.
Allowing resistance: Accepting client pushback as an assertion of power.
Embracing uncertainty: Being comfortable with not having all the answers.
Participating in decolonization conversations: Learning through community engagement and allyship.

Stay in Touch with Curt, Katie, and the whole Therapy Reimagined #TherapyMovement:
Our Linktree: https://linktr.ee/therapyreimagined
Modern Therapist's Survival Guide Creative Credits:
Voice Over by DW McCann https://www.facebook.com/McCannDW/
Music by Crystal Grooms Mangano https://groomsymusic.com/

People vs Inequality Podcast
LIVE RECORDING: Finding agency and bridging worlds in challenging times.

People vs Inequality Podcast

Play Episode Listen Later Feb 10, 2025 30:40 Transcription Available


In a time of crisis and conflict, how can we find agency and work together across spaces in ways that contribute to a more just, caring and equal world? Join us for our very first LIVE recording, straight from the Politics of Inequality Conference at the London School of Economics in London, where we speak with the amazing Lyla Adwan-Kamara and Dr. Philippa Mullins. Two people bringing unique personal and professional perspectives to these questions, with a level of depth and care that we wish everyone to hear. Lyla is a Ghana-based Palestinian-Irish mental health and disability rights activist and leader who doesn't shy away from speaking out with great clarity. Philippa is a researcher and educator in disability and resistance studies with a clear vision for equity in knowledge production.

We talk about working from a place of hurt and joy, how to navigate these challenging times whilst recognizing the injustices we see are not new, what it means to stand in solidarity and address inequalities in our everyday life, work and the institutions we are a part of. We hear about the importance of rest and kindness, understanding and honoring our values whilst recognizing fluidity and mess, and - of course - being in community.

References coming up in the conversation:
Tuck and Yang - Paris I Proof
bell hooks - Teaching to Transgress: Education as the Practice of Freedom
Dionne Brand – "One enters a room and history follows; one enters a room and history precedes. [...] How do I know this? Only by self-observation, only by looking. Only by feeling. Only by being a part, sitting in the room with history." From: Brand, D. (2001). A Map to the Door of No Return: Notes to Belonging. Doubleday Canada
Esther Arma - Emotional Justice: A Roadmap for Racial Healing | Penguin Random House South Africa
Philippa's essay: Epistemic injustice and unwellness in the classroom: Creating knowledge like we matter
Mimi Khúc - on 'a pedagogy of unwellness—the recognition that we are all differentially unwell' - dear elia, Duke University Press
Lyla's Memory Stitches - more information and pictures in this blog post
Atlantic Fellows for Social and Economic Equity (AFSEE) | https://afsee.atlanticfellows.lse.ac.uk/
Politics of Inequality conference and programme - https://www.lse.ac.uk/International-Inequalities/Research/Politics-of-Inequality
More about Lyla: https://afsee.atlanticfellows.lse.ac.uk/en-gb/fellows/2023/lyla-adwan-kamara
More about Philippa: https://www.lse.ac.uk/International-Inequalities/People/Philippa-Mullins/Philippa-Mullins

This podcast is a joint production of Barbara van Paassen (creator, host), Elizabeth Maina (producer) and Alex Akenno (editor). For more information see https://peoplevsinequality.blogspot.com/ or contact us at peoplevsinequality@gmail.com. This episode was supported by the Atlantic Fellows for Social and Economic Equity.

Platypod, The CASTAC Podcast
Thinking with Epistemic Things: Quality and its Consequences in Agri-Commodities Markets

Platypod, The CASTAC Podcast

Play Episode Listen Later Jan 28, 2025


This bonus content is a reading from Platypus, the CASTAC Blog. The full post by Amrita Kurian can be read at https://blog.castac.org/2025/01/thinking-with-epistemic-things-quality-and-its-consequences-in-agri-commodities-markets/. About the post: What happens when an "epistemic thing"—an unstable, experimental object of scientific research—is taken out of the controlled confines of the lab or the pages collated from a scientific symposium and introduced into the real world?

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
301 | Tina Eliassi-Rad on AI, Networks, and Epistemic Instability

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

Play Episode Listen Later Jan 13, 2025 69:21


Big data is ruling, or at least deeply infiltrating, all of modern existence. Unprecedented capacity for collecting and analyzing large amounts of data has given us a new generation of artificial intelligence models, but also everything from medical procedures to recommendation systems that guide our purchases and romantic lives. I talk with computer scientist Tina Eliassi-Rad about how we can sift through all this data, make sure it is deployed in ways that align with our values, and how to deal with the political and social dangers associated with systems that are not always guided by the truth.

Support Mindscape on Patreon.
Blog post with transcript: https://www.preposterousuniverse.com/podcast/2025/01/13/301-tina-eliassi-rad-on-al-networks-and-epistemic-instability/

Tina Eliassi-Rad received her Ph.D. in computer science from the University of Wisconsin-Madison. She is currently Joseph E. Aoun Chair of Computer Sciences and Core Faculty of the Network Science Institute at Northeastern University, External Faculty at the Santa Fe Institute, and External Faculty at the Vermont Complex Systems Center. She is a fellow of the Network Science Society, recipient of the Lagrange Prize, and was named one of the 100 Brilliant Women in AI Ethics.

Web site
Northeastern web page
Google Scholar publications
Wikipedia

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Jay's Analysis
Derek MythVision VS Jay Dyer Heated Exchange! Epistemic Foundations! Via MadebyJimbob & Crucible

Jay's Analysis

Play Episode Listen Later Dec 29, 2024 36:15


This is still getting tons of views so I am posting the audio here. Jimbob and the Crucible invited me to hop on a discussion which I thought was a debate. I came in hot and heated and things escalated into a wild chat!

Full show is here: https://www.youtube.com/@MadebyJimbob
Derek is here: https://www.youtube.com/@MythVisionPodcast
Crucible is here: https://www.youtube.com/@The_Crucible
Send Superchats at any time here: https://streamlabs.com/jaydyer/tip
The New Philosophy Course is here: https://marketplace.autonomyagora.com/philosophy101
Use JAY50 promo code here https://choq.com for huge discounts - 50% off!
Set up recurring Choq subscription with the discount code JAY53LIFE for 53% off now
Lore coffee is here: https://www.patristicfaith.com/coffee/
Orders for the Red Book are here: https://jaysanalysis.com/product/the-red-book-essays-on-theology-philosophy-new-jay-dyer-book/
Subscribe to my site here: https://jaysanalysis.com/membership-account/membership-levels/
Follow me on R0kfin here: https://rokfin.com/jaydyer
Become a supporter of this podcast: https://www.spreaker.com/podcast/jay-sanalysis--1423846/support.

Dr. John Vervaeke
How Jhana Meditation Facilitates Insight and Cognitive Flexibility

Dr. John Vervaeke

Play Episode Listen Later Dec 27, 2024 68:14


Question - "How does Jhana meditation simplify experience to facilitate insight and cognitive flexibility?" John Vervaeke is joined by Mark Miller, Rick Repetti, to explore the intersection of predictive processing, relevance realization, and embodied cognition in contemplative practices. They introduce PhD candidate Jonas Mago, who discusses his research on Jhana meditation and its impact on cognitive flexibility and insight. The conversation touches on contrasting Jhana practices with charismatic Christian traditions such as speaking in tongues. They also explore how Jhana states temporarily reduce the complexity of perception, allowing practitioners to observe the construction and deconstruction of their experiential models. The neuroscientific evidence provided, such as changes in brain responses during these states, adds depth to this exploration, illustrating how such simplification can lead to profound insights. Mark Miller, a philosopher and cognitive scientist, holds a senior research fellowship at Monash University's Center for Consciousness and Contemplative Studies in Australia, with affiliations at the University of Toronto and Hokkaido University in Japan. His work, which dives into the interplay between human thought, technology's impact on well-being, and human-computer interaction, is at the forefront of integrating cognitive neuroscience with philosophical inquiry. Rick Repetti is Professor of Philosophy at Kingsborough Community College, CUNY, USA. He is the author of The Counterfactual Theory of Free Will (2010), as well as several articles on Buddhism, meditation, free will, and philosophy of religion. Jonas Mago is a cognitive neuroscientist and wellbeing aficionado, deeply invested in understanding the cognitive and neurobiological mechanisms underlying human flourishing. His research explores contemplative practices designed to cultivate wholesome states of mind—spanning meditation, prayer, collective cultural rituals, and psychedelic therapies. I approach these topics from an interdisciplinary perspective, integrating cognitive science, neurobiology, computational modeling, and phenomenology to investigate mechanisms of self-regulation and transformation. He is currently pursuing my doctoral studies in Neuroscience at McGill University under the supervision of Dr. Michael Lifshitz, with co-supervision from Prof. Dr. Karl Friston. His academic journey includes a master's degree in Mind, Language, and Embodied Cognition from the University of Edinburgh (UK) and undergraduate studies in Liberal Arts and Sciences at University College Maastricht (Netherlands). Connect with a community dedicated to self-discovery and purpose, and gain deeper insights by joining our Patreon.   Notes:  (0:00) Introduction to the Lectern. 
This is the beginning of The Predictive Processing Series (0:30) Mark Miller, Rick Repetti, and Jonas Mago joins John Vervaeke  (1:30) Predictive Processing and Meditation (4:00) Inside Jhanas Meditation (10:00) Phenomenology and Cognitive Functions of Jhanas (11:30) “Is Jhanas essential for the path of awakening?” (13:00) Predictive Coding and Cognitive Models (18:00) Jhana meditation and the transient nature of predictive models (25:00) Analysis of the risks and benefits associated with Jhana practice (30:00) EEG Studies on Jhana Practitioners (37:00) Jhana versus Pure Consciousness (45:00) High Arousal Contemplative States: Jhana and Christian prayer (54:00) The Importance of Context in Contemplative Practices (1:05:00) Final Words   ---  The Vervaeke Foundation is committed to advancing the scientific pursuit of wisdom and creating a significant impact on the world. Become a part of our mission.   Join Awaken to Meaning to explore practices that enhance your virtues and foster deeper connections with reality and relationships.   John Vervaeke: Website | Twitter | YouTube | Patreon     Ideas, People, and Works Mentioned in this Episode: Predictive Processing Epistemic Vulnerability Metacognition Absorption States fMRI Studies Sangha EEG Studies Relevance Realization Embodied Cognition Jhana Meditation Thomas Metzinger Michael Lifshitz Alton Ram Dass Buddha Tanya Luhrmann Shaila Catherine Metzinger, T. (2021). The Elephant and the Blind: Insights into pure consciousness experiences. Lerman, T. (2012). When God Talks Back: A study on evangelical experiences of speaking in tongues.   Quotes:   “What we're trying in, in meditation is starting to model our predictive hierarchy of the brain opaque. So to notice that experience ultimately is not something that's, that's kind of a real grasp on reality, that all we have is this imprint of reality on our experiential or generative modeling, through this, this predictive hierarchy.”   "The interplay between micro and macro perspectives mirrors the flexibility we aim for in meditation and science."   "Epistemic vulnerability can be a doorway to growth if properly framed—or a risk without it." Mark Miller: Website | X | Podcast | YouTube Rick Repetti: Website | X | Facebook Jonas Mago: Website | X |  —   Thank you for Listening!

Jay's Analysis
Paganism Vs Orthodoxy Debate: Flashback! - Jay Dyer Vs Boglord Dave Martel

Jay's Analysis

Play Episode Listen Later Dec 11, 2024 81:46


An impromptu debate over the question of the Neo-pagan revival. Our debate revolves around the question of whether paganism, and in his case Asatru, are objectively true and can be proven in terms of logical, rational and philosophical discourse. How are claims justified? What is a universal claim? How is it justified? Is logic just a word trick?

Dave's channel is here: https://www.youtube.com/channel/UCnaQkJ6jGvxqMfFAmqlyLoQ
Send Superchats at any time here: https://streamlabs.com/jaydyer/tip
Get started with Bitcoin here: https://www.swanbitcoin.com/jaydyer/
The New Philosophy Course is here: https://marketplace.autonomyagora.com/philosophy101
Set up recurring Choq subscription with the discount code JAY44LIFE for 44% off now https://choq.com
Lore coffee is here: https://www.patristicfaith.com/coffee/
Orders for the Red Book are here: https://jaysanalysis.com/product/the-red-book-essays-on-theology-philosophy-new-jay-dyer-book/
Subscribe to my site here: https://jaysanalysis.com/membership-account/membership-levels/
Follow me on R0kfin here: https://rokfin.com/jaydyer
Become a supporter of this podcast: https://www.spreaker.com/podcast/jay-sanalysis--1423846/support.

Medical Education Podcasts
Unravelling epistemic injustice in medical education: The case of the underperforming learner - An audio paper with Victoria Luong

Medical Education Podcasts

Play Episode Listen Later Nov 15, 2024 45:00


Victoria Luong and colleagues explain how epistemic injustice can help us reframe complex problems in medical education as a means of treating people as fully human. Read the accompanying article here: https://doi.org/10.1111/medu.15410

The Erick Erickson Show
S13 EP201: Hour 2 - The Epistemic Closure of the Media's Mind

The Erick Erickson Show

Play Episode Listen Later Nov 13, 2024 41:04


Learn more about your ad choices. Visit megaphone.fm/adchoices

The Ricochet Audio Network Superfeed
Erick Erickson Show: S13 EP201: Hour 2 – The Epistemic Closure of the Media's Mind (#201)

The Ricochet Audio Network Superfeed

Play Episode Listen Later Nov 13, 2024


Behavioral Grooves Podcast
How Can We Revive Our Democracy? | AJ Jacobs

Behavioral Grooves Podcast

Play Episode Listen Later Oct 28, 2024 71:40


Ahead of the 2024 US Election, Kurt and Tim sit down with bestselling author and serial experimenter AJ Jacobs to discuss his latest project, The Year of Living Constitutionally. AJ spent a year living life according to the principles of the U.S. Constitution, adopting 18th-century customs along the way. From wearing tricorn hats and writing with quill pens to exploring the deeper philosophical underpinnings of democracy, AJ brings history to life while reflecting on the balance between rights and responsibilities, a concept that feels more urgent than ever today. AJ also shares his mission to revive one of America's sweetest (and largely forgotten) traditions—Election Cakes! In the 1700s, Election Day was a festival of civic pride, complete with parades, music, and community-baked cakes shared at the polls. In true AJ fashion, he's on a quest to bring this tradition back, reminding us that democracy can be both a serious and joyful act.

Throughout the episode, AJ, Kurt, and Tim dive into the importance of "epistemic humility" - aka, the acknowledgment that we don't have all the answers and must remain open to learning. From Benjamin Franklin's introspection to modern-day challenges of misinformation, AJ challenges listeners to approach life and democracy with curiosity, gratitude, and a willingness to improve both ourselves and our society. So grab a slice of election cake (or pie!) and join us for this thought-provoking, timely conversation on what it means to live constitutionally. Need help finding a voting location near you? Check here!

©2024 Behavioral Grooves

Topics
[0:00] Election day traditions
[4:25] Speed round with AJ Jacobs
[9:44] Living Colonially: What I learned
[18:56] Epistemic humility and political perspectives
[23:52] Constitutional originalism
[36:29] How do we frame the constitution?
[40:40] Election cakes and celebrating democracy
[48:52] Embracing experimentation in everyday life
[52:56] Grooving session: open-mindedness, civic duty, and cake recipes

©2024 Behavioral Grooves

Links
Join our Facebook Group!
AJ's Substack
The Year of Living Constitutionally
More about AJ
The History of Election Cakes
The US Constitution

Musical Links
Royal American Medley - Songs of the Revolutionary War
Yankee Doodle

The Nonlinear Library
EA - The Best Argument is not a Simple English Yud Essay by Jonathan Bostock

The Nonlinear Library

Play Episode Listen Later Sep 20, 2024 6:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Best Argument is not a Simple English Yud Essay, published by Jonathan Bostock on September 20, 2024 on The Effective Altruism Forum.

I was encouraged to post this here, but I don't yet have enough EA forum karma to crosspost directly!

Epistemic status: these are my own opinions on AI risk communication, based primarily on my own instincts on the subject and discussions with people less involved with rationality than myself. Communication is highly subjective and I have not rigorously A/B tested messaging. I am even less confident in the quality of my responses than in the correctness of my critique. If they turn out to be true, these thoughts can probably be applied to all sorts of communication beyond AI risk.

Lots of work has gone into trying to explain AI risk to laypersons. Overall, I think it's been great, but there's a particular trap that I've seen people fall into a few times. I'd summarize it as simplifying and shortening the text of an argument without enough thought for the information content. It comes in three forms. One is forgetting to adapt concepts for someone with a far inferential distance; another is forgetting to filter for the important information; the third is rewording an argument so much you fail to sound like a human being at all. I'm going to critique three examples which I think typify these:

Failure to Adapt Concepts
I got this from the summaries of AI risk arguments written by Katja Grace and Nathan Young here. I'm making the assumption that these summaries are supposed to be accessible to laypersons, since most of them seem written that way. This one stands out as not having been optimized on the concept level. This argument was below-average effectiveness when tested. I expect most people's reaction to point 2 would be "I understand all those words individually, but not together". It's a huge dump of conceptual information all at once which successfully points to the concept in the mind of someone who already understands it, but is unlikely to introduce that concept to someone's mind. Here's an attempt to do better:
1. So far, humans have mostly developed technology by understanding the systems which the technology depends on.
2. AI systems developed today are instead created by machine learning. This means that the computer learns to produce certain desired outputs, but humans do not tell the system how it should produce the outputs. We often have no idea how or why an AI behaves in the way that it does.
3. Since we don't understand how or why an AI works a certain way, it could easily behave in unpredictable and unwanted ways.
4. If the AI is powerful, then the consequences of unwanted behaviour could be catastrophic.
And here's Claude's just for fun:
1. Up until now, humans have created new technologies by understanding how they work.
2. The AI systems made in 2024 are different. Instead of being carefully built piece by piece, they're created by repeatedly tweaking random systems until they do what we want. This means the people who make these AIs don't fully understand how they work on the inside.
3. When we use systems that we don't fully understand, we're more likely to run into unexpected problems or side effects.
4. If these not-fully-understood AI systems become very powerful, any unexpected problems could potentially be really big and harmful.
I think it gets points 1 and 3 better than me, but 2 and 4 worse. Either way, I think we can improve upon the summary.

Failure to Filter Information
When you condense an argument down, you make it shorter. This is obvious. What is not always as obvious is that this means you have to throw out information to make the core point clearer. Sometimes the information that gets kept is distracting. Here's an example from a poster a friend of mine made for Pause AI: When I showed this to ...

Pilgrim Faith Podcast
Pilgrim Faith (Episode 83: Epistemic Abuse)

Pilgrim Faith Podcast

Play Episode Listen Later Sep 16, 2024 32:24


In this episode, Joseph and Dale talk about the epistemic challenges of young people living in 2024.

The Nonlinear Library
EA - Bringing the International Space Station down safely: A Billion dollar waste? by NickLaing

The Nonlinear Library

Play Episode Listen Later Sep 15, 2024 6:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bringing the International Space Station down safely: A Billion dollar waste?, published by NickLaing on September 15, 2024 on The Effective Altruism Forum.

Epistemic status: Uncertain, shooting from the hip a little with no expertise in this area and only a couple of hours research done. I might well have missed something obvious, in which case I'll revise or even take the post down.

Money Waste is Everywhere
Here in Northern Uganda where poverty abounds, many expenditures feel wasteful. Last night I had a great time at the fanciest restaurant in town with friends but felt a pang of guilt about my $7 meal. Enough of a pang to avoid telling my wife after I came home. A bigger scale waste in these parts is the partial closure of the main bridge across the river Nile, because the bridge has apparently degraded and become hazardous. Vehicles larger than a minivan now can't cross, which has raised the price of public transport by 50% and trucks now have a 3 hour detour. Besides these direct costs, this closure increases the cost of fuel and commodities in Northern Uganda. By my loose, conservative BOTEC the closure costs $10,000 every day (1.2 million dollars in 4 months so far) which Ugandans now can't spend on education and healthcare, while likely causing more crashes due to increasingly tired drivers who now use worse roads. The detour itself may have already cost more lives than would be lost if the bridge does collapse and kills a few people.[1] But there are far bigger wastes of money on this good earth.

A Billion Dollars to bring down a space station?
Space X have secured an 843 million dollar contract[2] to build the boringly named "U.S. De-Orbit vehicle" (why not "Sky Shepherd")[3], which in 2031 will safely guide the decommissioned International Space Station (ISS) into the Pacific Ocean. This all sounded pretty cool until I thought… is this worth it? No human has ever been definitively killed by an object falling from space, although there have been a couple of close calls with larger asteroids injuring many, while Open Asteroid Impact could be a game changer here in future. This one time though, a wee piece of space junk did hit Lottie Williams in the shoulder and she took it home as a memento. I'm jealous. According to a great Nature article "Unnecessary risks created by uncontrolled rocket reentries", over the last 30 years over 1,000 space bodies have fallen to earth in uncontrolled re-entries and never killed anyone. The closest call might be a Chinese rocket in 2020 which damaged a house in the Ivory Coast. The article predicts a 10% chance of a fatal space junk accident in the next 10 years - far from zero and worth considering, but unlikely to be the next EA cause area. This low risk makes sense given that only 3% of the globe are urban areas and under 1% actually contain human homes[4] - most stuff falls down where there ain't people. Also the bulk of falling spacecraft burns up before hitting the ground. In contrast a million people die from car crashes every year,[5] and each of us has about a 1 in 100 chance of dying that way. Although the ISS is the biggest ever at 450 tons, we do have priors. Two 100 ton uncontrolled re-entries (Skylab and tragically the Columbia) crashed to earth without issue. So what actually is the risk if the ISS was left to crash uncontrolled?

The U.S. Government requires controlled re-entry for anything that poses over a 1 in 10,000 risk to human life so this risk must be higher. NASA doesn't give us their risk estimate but only states "The ISS requires a controlled re-entry because it is very large, and uncontrolled re-entry would result in very large pieces of debris with a large debris footprint, posing a significant risk to the public worldwide" [6]. I hesitate to even guesstimate the risk to human life at the ISS falli...
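As a quick check on the bridge figure quoted above, here is a minimal back-of-envelope sketch. It is ours, not the author's code; the $10,000-per-day estimate is the post's own BOTEC, and treating "4 months so far" as roughly 120 days is our assumption.

```python
# Back-of-envelope check of the bridge-closure cost quoted in the post.
daily_cost_usd = 10_000        # the author's loose, conservative BOTEC
days_so_far = 4 * 30           # "4 months so far", approximated as ~120 days

total_cost_usd = daily_cost_usd * days_so_far
print(f"Running total: ${total_cost_usd:,}")  # $1,200,000, matching the quoted 1.2 million
```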

Bounce! Conversations with Larry Weeks
Ep. 81: In Another's Shoes: A.J. Jacobs on Curiosity, Living the Constitution, and Navigating Differences

Bounce! Conversations with Larry Weeks

Play Episode Listen Later Sep 13, 2024 46:10


A.J. Jacobs is a renowned writer and participatory journalist, best known for his immersive, experiment-driven projects that push the boundaries of immersive learning. From living according to the Ten Commandments to exploring radical honesty, A.J. dives headfirst into his experiments, bringing humor and insight into everything he does. His latest endeavor? An exploration of the U.S. Constitution, attempting to live by its original meaning. A.J. is no stranger to this podcast—this is his second appearance, and if you missed our previous conversation, I highly recommend checking out Life As Experiment: A.J. Jacobs – Lessons From Living On The Edge. It's one of my favorites, offering a deeper dive into A.J.'s life and wild approach to self-experimentation.

In his latest book, The Year of Living Constitutionally: One Man's Humble Quest to Follow the Constitution's Original Meaning, A.J. documents his year-long quest to embody the Constitution in its original context. From carrying a musket in New York City to using a quill pen, he immerses himself in the mindset of the Founding Fathers, bringing history to life in ways you'd never expect. In this episode, A.J. and I talk about his experiences, the surprises he encountered, and why curiosity is more vital than ever. Whether you're fired up about politics or just curious about how the past continues to shape our present, you'll love this conversation.

Our conversation includes:
The role of curiosity in A.J.'s life and work.
A.J.'s immersive journalism.
Past experiments like practicing radical honesty.
Acting "as if".
Curiosity as key to personal and professional growth.
A.J.'s latest book and his experiences living 18th-century standards.
The balance between rights and responsibilities as understood by the Founding Fathers.
The original intent of free speech, its historical limits on sedition, and its modern implications in the age of social media.
How the office of the U.S. president has evolved beyond what the Founding Fathers envisioned.
Strategies for engaging in productive conversations with opposing views.
Epistemic humility—recognizing that no one is always right.
The future of society and existential risks, with insights from A.J.'s participation in the Longview Conference.

I hope this episode inspires you to be more open and curious, and question your assumptions. Life is one big experiment—full of choices, tests, and lessons that help us grow and adapt. Keep exploring!

Enjoy! For show notes and more, visit www.larryweeks.com

The Nonlinear Library
LW - The Best Lay Argument is not a Simple English Yud Essay by J Bostock

The Nonlinear Library

Play Episode Listen Later Sep 10, 2024 6:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Best Lay Argument is not a Simple English Yud Essay, published by J Bostock on September 10, 2024 on LessWrong.

Epistemic status: these are my own opinions on AI risk communication, based primarily on my own instincts on the subject and discussions with people less involved with rationality than myself. Communication is highly subjective and I have not rigorously A/B tested messaging. I am even less confident in the quality of my responses than in the correctness of my critique. If they turn out to be true, these thoughts can probably be applied to all sorts of communication beyond AI risk.

Lots of work has gone into trying to explain AI risk to laypersons. Overall, I think it's been great, but there's a particular trap that I've seen people fall into a few times. I'd summarize it as simplifying and shortening the text of an argument without enough thought for the information content. It comes in three forms. One is forgetting to adapt concepts for someone with a far inferential distance; another is forgetting to filter for the important information; the third is rewording an argument so much you fail to sound like a human being at all. I'm going to critique three examples which I think typify these:

Failure to Adapt Concepts
I got this from the summaries of AI risk arguments written by Katja Grace and Nathan Young here. I'm making the assumption that these summaries are supposed to be accessible to laypersons, since most of them seem written that way. This one stands out as not having been optimized on the concept level. This argument was below-average effectiveness when tested. I expect most people's reaction to point 2 would be "I understand all those words individually, but not together". It's a huge dump of conceptual information all at once which successfully points to the concept in the mind of someone who already understands it, but is unlikely to introduce that concept to someone's mind. Here's an attempt to do better:
1. So far, humans have mostly developed technology by understanding the systems which the technology depends on.
2. AI systems developed today are instead created by machine learning. This means that the computer learns to produce certain desired outputs, but humans do not tell the system how it should produce the outputs. We often have no idea how or why an AI behaves in the way that it does.
3. Since we don't understand how or why an AI works a certain way, it could easily behave in unpredictable and unwanted ways.
4. If the AI is powerful, then the consequences of unwanted behaviour could be catastrophic.
And here's Claude's just for fun:
1. Up until now, humans have created new technologies by understanding how they work.
2. The AI systems made in 2024 are different. Instead of being carefully built piece by piece, they're created by repeatedly tweaking random systems until they do what we want. This means the people who make these AIs don't fully understand how they work on the inside.
3. When we use systems that we don't fully understand, we're more likely to run into unexpected problems or side effects.
4. If these not-fully-understood AI systems become very powerful, any unexpected problems could potentially be really big and harmful.
I think it gets points 1 and 3 better than me, but 2 and 4 worse. Either way, I think we can improve upon the summary.

Failure to Filter Information
When you condense an argument down, you make it shorter. This is obvious. What is not always as obvious is that this means you have to throw out information to make the core point clearer. Sometimes the information that gets kept is distracting. Here's an example from a poster a friend of mine made for Pause AI: When I showed this to my partner, they said "This is very confusing, it makes it look like an AGI is an AI which makes a chess AI". Making more AI...

The Nonlinear Library
EA - Reconsidering the Celebration of Project Cancellations: Have We Updated Too Far? by Midtermist12

The Nonlinear Library

Play Episode Listen Later Sep 10, 2024 3:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reconsidering the Celebration of Project Cancellations: Have We Updated Too Far?, published by Midtermist12 on September 10, 2024 on The Effective Altruism Forum. Reconsidering the Celebration of Project Cancellations: Have We Updated Too Far? Epistemic status: Low certainty. These are tentative thoughts, and I'm open to alternative perspectives. Posting from an alt account. The Effective Altruism community has made significant strides in recognizing the importance of quitting projects that don't deliver short-term results, which helps counteract the sunk cost fallacy and promotes the efficient use of resources. This mindset, in many cases, is a positive development. However, I wonder if we've over-updated in this direction. Recent posts about project cancellations, like the one regarding the Center for Effective Aid Policy (CEAP), have received considerable attention - CEAP's closure post garnered 538 karma, for instance. While I don't have a strong opinion on whether it was prudent to shutter CEAP, I am concerned that its closure, and the community's reaction to it, vibes in a direction where initial setbacks are seen as definitive reasons to quit, even when there might still be significant long-term potential. From an outside perspective, it seemed that CEAP was building valuable relationships and developing expertise in a complex field - global aid policy - where results may take years to materialize. Yet, the organization was closed, seemingly because it wasn't achieving short-term success. This raises a broader concern: are we in danger of quitting too early when projects encounter early challenges, rather than giving them the time they need to realize high expected value (EV) outcomes? There's a tension here between sticking with projects that have a low probability of short-term success but could yield immense value in the long run, and the temptation to cut losses when things aren't immediately working out. High-EV projects often have low-impact modal outcomes, especially in the early stages. It's entirely possible that a project with a 20% chance of success could still be worth pursuing if the potential upside is transformative. However, these projects can look like failures early on, and if we're too quick to celebrate quitting, we may miss out on rare but important successes. This is particularly relevant in fields like AI safety, global aid policy, or other high-risk, high-reward areas, where expertise and relationships are slow to develop but crucial for long-term impact. At the same time, it's essential not to continue investing in clearly failing projects just because they might turn around. The ability to pivot is important, and I don't want to downplay that. But I wonder if, as a community, we are at risk of overupdating based on short-term signals. Novel and complex projects often need more time to bear fruit, and shutting them down prematurely could mean forfeiting potentially transformative outcomes. I don't have an easy answer here, but it might be valuable to explore frameworks that help us better balance the tension between short-term setbacks and long-term EV. How can we better distinguish between projects that genuinely need to be ended and those that just need more time? 
Are there ways we can improve our evaluations to avoid missing out on projects with high potential because of an overemphasis on early performance metrics? I'd love to hear thoughts from others working on long-term, high-risk projects - how do you manage this tension between the need to pivot and the potential upside of sticking with a challenging project? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
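The expected-value point above, that a project with a low chance of success can still be worth pursuing if the upside is large, can be made concrete with a couple of illustrative numbers. These figures are ours, not the post's, and are purely hypothetical.

```python
# Hypothetical numbers (ours, not the post's) illustrating the expected-value
# point above: a low-probability, high-upside project can have a higher EV
# than a safer one, even though its modal (most likely) outcome is failure.
p_success = 0.20
value_if_success = 50.0                      # arbitrary units of impact
risky_project_ev = p_success * value_if_success   # 0.20 * 50 = 10.0

safe_project_ev = 0.90 * 8.0                       # 0.90 * 8  = 7.2

print(f"Risky project EV: {risky_project_ev}")
print(f"Safe project EV:  {safe_project_ev}")
# The risky project looks like a failure 80% of the time, yet has higher EV,
# which is why shutdown decisions based only on early results can mislead.
```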

The Nonlinear Library
AF - Conflating value alignment and intent alignment is causing confusion by Seth Herd

The Nonlinear Library

Play Episode Listen Later Sep 5, 2024 13:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conflating value alignment and intent alignment is causing confusion, published by Seth Herd on September 5, 2024 on The AI Alignment Forum. Submitted to the Alignment Forum. Contains more technical jargon than usual.

Epistemic status: I think something like this confusion is happening often. I'm not saying these are the only differences in what people mean by "AGI alignment".

Summary: Value alignment is better but probably harder to achieve than personal intent alignment to the short-term wants of some person(s). Different groups and people tend to primarily address one of these alignment targets when they discuss alignment. Confusion abounds. One important confusion stems from an assumption that the type of AI defines the alignment target: strong goal-directed AGI must be value aligned or misaligned, while personal intent alignment is only viable for relatively weak AI. I think this assumption is important but false. While value alignment is categorically better, intent alignment seems easier, safer, and more appealing in the short term, so AGI project leaders are likely to try it.[1]

Overview
Clarifying what people mean by alignment should dispel some illusory disagreement, and clarify alignment theory and predictions of AGI outcomes.

Caption: Venn diagram of three types of alignment targets. Value alignment and Personal intent alignment are both subsets of Evan Hubinger's definition of intent alignment: AGI aligned with human intent in the broadest sense.

Prosaic alignment work usually seems to be addressing a target somewhere in the neighborhood of personal intent alignment (following instructions or doing what this person wants now), while agent foundations and other conceptual alignment work usually seems to be addressing value alignment. Those two clusters have different strengths and weaknesses as alignment targets, so lumping them together produces confusion. People mean different things when they say alignment. Some are mostly thinking about value alignment (VA): creating sovereign AGI that has values close enough to humans' for our liking. Others are talking about making AGI that is corrigible (in the Christiano or Harms sense)[2] or follows instructions from its designated principal human(s). I'm going to use the term personal intent alignment (PIA) until someone has a better term for that type of alignment target. Different arguments and intuitions apply to these two alignment goals, so talking about them without differentiation is creating illusory disagreements.

Value alignment is better almost by definition, but personal intent alignment seems to avoid some of the biggest difficulties of value alignment. Max Harms' recent sequence on corrigibility as a singular target (CAST) gives both a nice summary and detailed arguments. We do not need to point to or define values, just short term preferences or instructions. The principal advantage is that an AGI that follows instructions can be used as a collaborator in improving its alignment over time; you don't need to get it exactly right on the first try. This is more helpful in slower and more continuous takeoffs. This means that PI alignment has a larger basin of attraction than value alignment does.[3] Most people who think alignment is fairly achievable seem to be thinking of PIA, while critics often respond thinking of value alignment. It would help to be explicit.

PIA is probably easier and more likely than full VA for our first stabs at AGI, but there are reasons to wonder if it's adequate for real success. In particular, there are intuitions and arguments that PIA doesn't address the real problem of AGI alignment. I think PIA does address the real problem, but in a non-obvious and counterintuitive way.

Another unstated divide
There's another important clustering around these two conceptions of al...

The Nonlinear Library
LW - Conflating value alignment and intent alignment is causing confusion by Seth Herd

The Nonlinear Library

Play Episode Listen Later Sep 5, 2024 13:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conflating value alignment and intent alignment is causing confusion, published by Seth Herd on September 5, 2024 on LessWrong. Submitted to the Alignment Forum. Contains more technical jargon than usual.

Epistemic status: I think something like this confusion is happening often. I'm not saying these are the only differences in what people mean by "AGI alignment".

Summary: Value alignment is better but probably harder to achieve than personal intent alignment to the short-term wants of some person(s). Different groups and people tend to primarily address one of these alignment targets when they discuss alignment. Confusion abounds. One important confusion stems from an assumption that the type of AI defines the alignment target: strong goal-directed AGI must be value aligned or misaligned, while personal intent alignment is only viable for relatively weak AI. I think this assumption is important but false. While value alignment is categorically better, intent alignment seems easier, safer, and more appealing in the short term, so AGI project leaders are likely to try it.[1]

Overview
Clarifying what people mean by alignment should dispel some illusory disagreement, and clarify alignment theory and predictions of AGI outcomes.

Caption: Venn diagram of three types of alignment targets. Value alignment and Personal intent alignment are both subsets of Evan Hubinger's definition of intent alignment: AGI aligned with human intent in the broadest sense.

Prosaic alignment work usually seems to be addressing a target somewhere in the neighborhood of personal intent alignment (following instructions or doing what this person wants now), while agent foundations and other conceptual alignment work usually seems to be addressing value alignment. Those two clusters have different strengths and weaknesses as alignment targets, so lumping them together produces confusion. People mean different things when they say alignment. Some are mostly thinking about value alignment (VA): creating sovereign AGI that has values close enough to humans' for our liking. Others are talking about making AGI that is corrigible (in the Christiano or Harms sense)[2] or follows instructions from its designated principal human(s). I'm going to use the term personal intent alignment (PIA) until someone has a better term for that type of alignment target. Different arguments and intuitions apply to these two alignment goals, so talking about them without differentiation is creating illusory disagreements.

Value alignment is better almost by definition, but personal intent alignment seems to avoid some of the biggest difficulties of value alignment. Max Harms' recent sequence on corrigibility as a singular target (CAST) gives both a nice summary and detailed arguments. We do not need to point to or define values, just short term preferences or instructions. The principal advantage is that an AGI that follows instructions can be used as a collaborator in improving its alignment over time; you don't need to get it exactly right on the first try. This is more helpful in slower and more continuous takeoffs. This means that PI alignment has a larger basin of attraction than value alignment does.[3] Most people who think alignment is fairly achievable seem to be thinking of PIA, while critics often respond thinking of value alignment. It would help to be explicit.

PIA is probably easier and more likely than full VA for our first stabs at AGI, but there are reasons to wonder if it's adequate for real success. In particular, there are intuitions and arguments that PIA doesn't address the real problem of AGI alignment. I think PIA does address the real problem, but in a non-obvious and counterintuitive way.

Another unstated divide
There's another important clustering around these two conceptions of alignment. Peop...

The Nonlinear Library
EA - The Protester, Priest and Politician: Effective Altruists before their time by NickLaing

The Nonlinear Library

Play Episode Listen Later Sep 5, 2024 11:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Protester, Priest and Politician: Effective Altruists before their time, published by NickLaing on September 5, 2024 on The Effective Altruism Forum. Epistemic status: Motivated and biased as a Christian, gazing in awe through rose-tinted glasses at inspiring humans of days gone by. Written primarily for a Christian audience, with hopefully something of use for all. Benjamin Lay - The Protester Benjamin Lay, only 4½ feet tall, stood outside the Quaker meeting house in the heart of Pennsylvania winter, his right leg exposed and thrust deep into the snow. One shocked churchgoer after another urged him to protect his life and limb - but he only replied "Ah, you pretend compassion for me but you do not feel for the poor slaves in your fields, who go all winter half clad."[1] Portrait of Benjamin Lay (1790) by William Williams. In 1700 Lay's moral stances were more than radical.[2] He thought women equal to men, was anti-death penalty, pro animal rights and an early campaigner for the abolition of slavery. In the Caribbean he made friends with indentured people while he boycotted all slave produced products such as tea, sugar and coffee. I thought Bruce Friedrich of the Good Food Institute[3] was ahead of his time for going vegan in 1987 - well, how about Lay in the 1700s? Many of these moral stances might seem unimpressive now, but back then I would bet under 1% of people held any one of them. These were deeply neglected and important causes, and Lay fought against the odds to make them tractable. His creative protests were perhaps as impressive as his morals. He smashed fine china teacups in the street saying people cared more about the cups than the slaves that produced tea. He yelled out "there's another Negro master" when slave owners spoke in Quaker meetings. He even temporarily kidnapped a slave owner's child, so his parents would experience a taste of the pain parents back in Africa felt while their children were permanently kidnapped. These protests stemmed from a deep spiritual devotion to do and proclaim the right thing - people's feelings and cultural norms be darned. Extreme actions like these have potential to backfire, but Lay chose wisely to perform most protests within his own Quaker church. Perhaps he knew that within the Quakers lay fertile ground to change hearts and minds - despite it taking 50 years to make serious inroads. When the Quakers officially denounced slavery In 1758 - perhaps the first large organization to do so - a then feeble Lay, aged 77, exclaimed: "Thanksgiving and praise be rendered unto the Lord God… I can now die in peace." John Wesley - The Priest "Employ whatever God has entrusted you with, in doing good, all possible good, in every possible kind and degree to the household of faith, to all men!" - John Wesley A key early insight of the "Effective Altruism" movement was the power of "earning to give" - that we can do great good not just through direct deeds, but by earning as much money as possible and then giving it away to effective causes. Yet one man had the same insight with similar depth of understanding 230 years earlier, outlined in just one sermon derived almost entirely from biblical principles.[4] John Wesley preached extreme generosity as a clear mandate from Jesus. His message was simple but radical. 
Earn all you can, live simply to save money, then give the rest to good causes. Sounds great, but who actually does that? He also had deep insight into the pitfalls of earning to give. We should keep ourselves healthy and not overwork. We should sleep well and preserve "the spirit of a healthful mind". We should eschew evil on the path to the big bucks. And don't get rich while you're earning the big bucks, as you risk falling away from your faith and mission. He also understood that earning to give wasn't a path for e...

The Nonlinear Library
AF - Epistemic states as a potential benign prior by Tamsin Leake

The Nonlinear Library

Play Episode Listen Later Aug 31, 2024 13:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Epistemic states as a potential benign prior, published by Tamsin Leake on August 31, 2024 on The AI Alignment Forum. Malignancy in the prior seems like a strong crux of the goal-design part of alignment to me. Whether your prior is going to be used to model: processes in the multiverse containing a specific "beacon" bitstring, processes in the multiverse containing the AI, processes which would output all of my blog, so I can make it output more for me, processes which match an AI chatbot's hypotheses about what it's talking with, then you have to sample hypotheses from somewhere; and typically, we want to use either solomonoff induction or time-penalized versions of it such as levin search (penalized by log of runtime) or what QACI uses (penalized by runtime, but with quantum computation available in some cases), or the implicit prior of neural networks (large sequences of multiplying by a matrix, adding a vector, and ReLU, often with a penalty related to how many non-zero weights are used). And the solomonoff prior is famously malign. (Alternatively, you could have knightian uncertainty about parts of your prior that aren't nailed down enough, and then do maximin over your knightian uncertainty (like in infra-bayesianism), but then you're not guaranteed that your AI gets anywhere at all; its knightian uncertainty might remain so immense that the AI keeps picking the null action all the time because some of its knightian hypotheses still say that anything else is a bad idea. Note: I might be greatly misunderstanding knightian uncertainty!) (It does seem plausible that doing geometric expectation over hypotheses in the prior helps "smooth things over" in some way, but I don't think this particularly removes the weight of malign hypotheses in the prior? It just allocates their steering power in a different way, which might make things less bad, but it sounds difficult to quantify.) It does feel to me like we do want a prior for the AI to do expected value calculations over, either for prediction or for utility maximization (or quantilization or whatever). One helpful aspect of prior-distribution-design is that, in many cases, I don't think the prior needs to contain the true hypothesis. For example, if the problem that we're using a prior for is to model processes which match an AI chatbot's hypotheses about what it's talking with then we don't need the AI's prior to contain a process which behaves just like the human user it's interacting with; rather, we just need the AI's prior to contain a hypothesis which: is accurate enough to match observations. is accurate enough to capture the fact that the user (if we pick a good user) implements the kind of decision theory that lets us rely on them pointing back to the actual real physical user when they get empowered - i.e. in CEV(user-hypothesis), user-hypothesis builds and then runs CEV(physical-user), because that's what the user would do in such a situation. Let's call this second criterion "cooperating back to the real user". So we need a prior which: Has at least some mass on hypotheses which correspond to observations cooperate back to the real user and can eventually be found by the AI, given enough evidence (enough chatting with the user) Call this the "aligned hypothesis". 
Before it narrows down hypothesis space to mostly just aligned hypotheses, doesn't give enough weight to demonic hypotheses which output whichever predictions cause the AI to brainhack its physical user, or escape using rowhammer-type hardware vulnerabilities, or other failures like that. Formalizing the chatbot model: First, I'll formalize this chatbot model. Let's say we have a magical inner-aligned "soft" math-oracle: Which, given a "scoring" mathematical function from a non-empty set a to real numbers (not necessarily one that is tractably ...
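For orientation, the penalization schemes named above are usually written as weights over programs p, where |p| is the program's length and t(p) its runtime. These are the standard textbook forms, not necessarily the exact definitions used in QACI or in the post:

$$
w_{\text{Solomonoff}}(p) \propto 2^{-|p|},
\qquad
w_{\text{log-runtime}}(p) \propto 2^{-\left(|p| + \log_2 t(p)\right)} = \frac{2^{-|p|}}{t(p)},
\qquad
w_{\text{runtime}}(p) \propto 2^{-\left(|p| + t(p)\right)}.
$$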

A Therapist Can't Say That
Ep 3.9 - Epistemic Justice in Diagnosis: Exploring Borderline Personality Disorder with Dr. Awais Aftab

A Therapist Can't Say That

Play Episode Listen Later Aug 28, 2024 42:39


Suppose you polled therapists and asked them what the most controversial diagnosis is in the current version of the DSM. Many of us would likely say Borderline Personality Disorder, and it would certainly be in almost everybody's top three. I've been wanting to do an episode on BPD for a bit because there is something about this controversial diagnosis that allows us to explore the challenging and consequential nature of psychiatric diagnosis itself. To guide us in this exploration, I've had the privilege of inviting Dr. Awais Aftab, a leading authority in the field. His extensive work on philosophical, ethical, and scientific issues related to diagnosis makes him the perfect person to delve into this complex topic with.

Awais Aftab, MD, is a psychiatrist in Cleveland, Ohio, and Clinical Assistant Professor of Psychiatry at Case Western Reserve University. He led the interview series "Conversations in Critical Psychiatry" for Psychiatric Times, which explores critical and philosophical perspectives in psychiatry, with a book adaptation forthcoming from Oxford University Press. He is a senior editor for Philosophy, Psychiatry, & Psychology and has been actively involved in initiatives to educate psychiatrists and trainees on conceptual and critical issues. He blogs at Psychiatry at the Margins.

In the conversation, we dig into whether Borderline Personality Disorder is "real" and what that means, how it relates to the philosophical concept of epistemic injustice, how context influences the utility of a diagnosis, and more.

Listen to the full episode to hear:
- How treatment of people diagnosed with Borderline Personality Disorder frequently illustrates aspects of epistemic injustice/justice
- The ways that clinical setting and context influence the use, or misuse, of BPD as a diagnostic label and how that impacts patients
- How quantitative psychology is influencing how we conceptualize personality disorders
- Why a BPD diagnosis can be intensely valuable for some clients, and how it helps guide clinicians
- Why we can't chalk up all psychopathology to trauma
- How calls for testimonial justice from psychiatric patients should serve as a corrective force to excessive skepticism of patient narratives

Learn more about Dr. Awais Aftab: Psychiatry at the Margins | X: @awaisaftab
Learn more about Riva Stoudt: Into the Woods Counseling | The Kiln School | Instagram: @atherapistcantsaythat
Resources: Borderline Personality and Self-Understanding of Psychopathology | Epistemic injustice | The epistemic injustice of borderline personality disorder, Jay Watts, BJPsych International | A Metaphysics of Psychopathology, Peter Zachar | Peter Fonagy

Scaling Superforecasting: AI Forecasting Tournaments & the Road to Epistemic Security, with Deger Turan, CEO of Metaculus

Play Episode Listen Later Aug 28, 2024 116:21


Nathan explores the cutting-edge world of AI-powered forecasting with Deger Turan, CEO of Metaculus. In this episode of The Cognitive Revolution, we discuss how AI is revolutionizing prediction markets, the potential for AI to outperform human forecasters, and Metaculus's ambitious new AI forecasting benchmark tournament. Join us for an insightful conversation about the future of decision-making and collective intelligence. Participate in Metaculus' first of its kind bot forecasting tournament: https://www.metaculus.com/aib/ Apply to join over 400 founders and execs in the Turpentine Network: https://hmplogxqz0y.typeform.com/to/JCkphVqj RECOMMENDED PODCAST: 1 to 100 | Hypergrowth Companies Worth Joining Every week we sit down with the founder of a hyper-growth company you should consider joining. Our goal is to give you the inside story behind breakout, early stage companies potentially worth betting your career on. This season, discover how the founders of Modal Labs, Clay, Mercor, and more built their products, cultures, and companies. Apple: https://podcasts.apple.com/podcast/id1762756034 Spotify:https://open.spotify.com/show/70NOWtWDY995C8qDqojxGw RECOMMENDED PODCAST: Second Opinion A new podcast for health-tech insiders from Christina Farr of the Second Opinion newsletter. Join Christina Farr, Luba Greenwood, and Ash Zenooz every week as they challenge industry experts with tough questions about the best bets in health-tech. Apple Podcasts: https://podcasts.apple.com/us/podcast/id1759267211 Spotify: https://open.spotify.com/show/0A8NwQE976s32zdBbZw6bv SPONSORS: Building an enterprise-ready SaaS app? WorkOS has got you covered with easy-to-integrate APIs for SAML, SCIM, and more. Join top startups like Vercel, Perplexity, Jasper & Webflow in powering your app with WorkOS. Enjoy a free tier for up to 1M users! Start now at https://bit.ly/WorkOS-Turpentine-Network 80,000 Hours offers free one-on-one career advising for Cognitive Revolution listeners aiming to tackle global challenges, especially in AI. They connect high-potential individuals with experts, opportunities, and personalized career plans to maximize positive impact. Apply for a free call at https://80000hours.org/cognitiverevolution to accelerate your career and contribute to solving pressing AI-related issues. The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention “Turpentine” to skip the waitlist. 
CHAPTERS: (00:00:00) About the Show (00:00:22) Sponsor: WorkOS (00:01:22) About the Episode (00:04:47) Introduction and Background (00:08:42) Deger's Journey to Forecasting (00:13:06) Current State of Forecasting (00:20:23) Sponsors: 80,000 Hours | Brave (00:22:55) Overview of Metaculous (00:23:30) AI Forecasting Research Papers (00:35:44) Sponsors: Omneky | Squad (00:37:31) AI Forecasting Benchmark Tournament (00:44:01) Improving Forecasting Accuracy (00:51:56) Scoring System and Tournament Rules (01:04:17) AI Forecasting Benchmark Series Details (01:11:03) Tournament Structure and Participation (01:17:07) Consistency in AI Forecasting (01:34:36) Risks and Challenges in Forecasting (01:42:43) Consensus Illusion and Policy Making (01:53:55) Outro

J. Brown Yoga Talks
Geoffrey Dugue, MD - "Epistemic Humility, Actual Christianity"

J. Brown Yoga Talks

Play Episode Listen Later Aug 26, 2024 108:35


Geoffrey Dugue, MD, talks with J about actual Christianity and being humble in seeking for truth. They discuss the mutual friendship that brought them together, disagreement and debate, being trained as a medical doctor at esteemed institutions and the lack of meta narrative, indoctrination into the great laws, miracles, having a personal relationship with God, choosing to believe, the hypocrisy of the church, giving Satan too much credit, being in the character of Jesus, the Holy Trinity, sin, and the love that gets you to heaven.   J. Brown Yoga Teacher Training… EARLY BIRD PRICING 40% OFF - REGISTER NOW!.   To subscribe and support the show… GET PREMIUM.   Check out J's other podcast… J. BROWN YOGA THOUGHTS.    

New Books Network
Stephen Pinfield, "Achieving Global Open Access: The Need for Scientific, Epistemic and Participatory Openness" (Routledge, 2024)

New Books Network

Play Episode Listen Later Aug 17, 2024 95:15


Often assumed to be a self-evident good, Open Access has been subject to growing criticism for perpetuating global inequities and epistemic injustices. It has been seen as imposing exploitative business and publishing models and as exacerbating exclusionary research evaluation culture and practices. Achieving Global Open Access: The Need for Scientific, Epistemic, and Participatory Openness (Taylor & Francis, 2024) engages with these issues, recognizing that the global Open Access debate is now not just about publishing and business models or academic reward structures, but also about what constitutes valid and valuable knowledge, how we know and who gets to say. The book argues that, for Open Access to deliver its potential, it first needs to be associated with "epistemic openness", a wider and more inclusive understanding of what constitutes valid and valuable knowledge. It also needs to be accompanied by "participatory openness", enabling contributions to knowledge from more diverse communities. Interacting with relevant theory and current practices, the book discusses the challenges in implementing these different forms of openness, the relationship between them and their limits. Stephen Pinfield is Professor of Information Services Management at the University of Sheffield, UK, and a Senior Research Fellow at the Research on Research Institute (RoRI). Xiaoli Chen is project lead at DataCite, a non-profit organization that provides open scholarly infrastructure and supports the global research community to ensure the open availability and connectedness of research outputs. She has a background in Library and Information Science and worked with different disciplinary communities to create and integrate services and workflows for open and FAIR scholarship. She can be reached at xiaoli.chen@datacite.org Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

New Books in Communications
Stephen Pinfield, "Achieving Global Open Access: The Need for Scientific, Epistemic and Participatory Openness" (Routledge, 2024)

New Books in Communications

Play Episode Listen Later Aug 17, 2024 95:15


Often assumed to be a self-evident good, Open Access has been subject to growing criticism for perpetuating global inequities and epistemic injustices. It has been seen as imposing exploitative business and publishing models and as exacerbating exclusionary research evaluation culture and practices. Achieving Global Open Access: The Need for Scientific, Epistemic, and Participatory Openness (Taylor & Francis, 2024) engages with these issues, recognizing that the global Open Access debate is now not just about publishing and business models or academic reward structures, but also about what constitutes valid and valuable knowledge, how we know and who gets to say. The book argues that, for Open Access to deliver its potential, it first needs to be associated with "epistemic openness", a wider and more inclusive understanding of what constitutes valid and valuable knowledge. It also needs to be accompanied by "participatory openness", enabling contributions to knowledge from more diverse communities. Interacting with relevant theory and current practices, the book discusses the challenges in implementing these different forms of openness, the relationship between them and their limits. Stephen Pinfield is Professor of Information Services Management at the University of Sheffield, UK, and a Senior Research Fellow at the Research on Research Institute (RoRI). Xiaoli Chen is project lead at DataCite, a non-profit organization that provides open scholarly infrastructure and supports the global research community to ensure the open availability and connectedness of research outputs. She has a background in Library and Information Science and worked with different disciplinary communities to create and integrate services and workflows for open and FAIR scholarship. She can be reached at xiaoli.chen@datacite.org Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/communications

DEPTH Work: A Holistic Mental Health Podcast
93. Neoliberalism and the Global Export of Psychiatry: Toward Epistemic Humility with Psychologist Justin Karter

DEPTH Work: A Holistic Mental Health Podcast

Play Episode Listen Later Aug 16, 2024 53:35


Commercialized psychiatric and psychological knowledge encourages us to think of ourselves primarily as consumers and promotes a set of values that suggest some of us have minds or brains that should be 'fixed' with particular products or services. These neoliberal values have led to a great deal of institutional corruption and have also been exported beyond the western world across the globe. Many researchers, clinicians and activists have rallied together to fight against medicalized global mental health initiatives which promote a narrow westernized notion of wellness and define how treatment should look, often at the expense of local healing practices and without the participation of people with lived experience. Justin Karter, counseling psychologist and research news editor at Mad In America, has spent a long time advocating for epistemic justice in the psy disciplines and helping to expose practices and policies that undermine people's human rights and agency.

In this episode we discuss:
- how the political and psychological meet within and outside of therapy
- commercialization of psychopharmaceuticals and institutional corruption
- how neoliberalism and capitalist values are embedded in psychiatry
- the global mental health movement and psychiatric export as a neocolonial practice
- the ways in which global advocates with lived experience are uniting and fighting back
- the need for epistemic justice, humility, and polyphony
- legal updates from the UN Committee for the Rights of Persons with Disabilities
- psychological humanities, mad studies, and other exciting emerging disciplines of study

Bio: Justin M. Karter, PhD, is a Counseling Psychologist in private practice in Boston and an instructor for the Center for Psychological Humanities & Ethics at Boston College. He is also the long-time research news editor of the Mad in America webzine. He completed his doctorate at the University of Massachusetts Boston in 2021. Justin does research in critical psychology, critical psychiatry, and philosophy of psychology. He is currently working on a book on the activism of psychosocial disability advocates in the context of the movement for global mental health.

Links:
Exploring the Fault Lines in Mental Health Discourse - Mad In America - https://www.madinamerica.com/2022/10/interview-psychologist-justin-karter/
Can Psychosocial Disability Transform Global Mental Health? - https://www.madinamerica.com/2023/08/can-psychosocial-disability-decolonize-mental-health-a-conversation-with-luis-arroyo-and-justin-karter/
Boston College Psychological Humanities - https://www.bc.edu/content/bc-web/schools/lynch-school/sites/Psychological-Humanities-Ethics/About.html#tab-mission_and_history
Justin's Research Gate Profile: https://www.researchgate.net/profile/Justin-Karter

Resources Mentioned:
Psychiatry Under The Influence by Robert Whitaker and Lisa Cosgrove - https://link.springer.com/book/10.1057/9781137516022
Vikram Patel Lancet article: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(23)02241-9/abstract
UN CRPD: https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-Disabilities.html

Sessions & Information about the host: JazmineRussell.com

Disclaimer: The DEPTH Work Podcast is for educational and entertainment purposes only. Any information on this podcast is in no way to be construed or substituted as psychological counseling, psychotherapy, mental health counseling, or any other type of therapy or medical advice.

Seize The Moment Podcast
Constantine Sandis - Debunking Trans Myths: Detransition, Sports, and Epistemic Injustice | Seize The Moment Podcast #218

Seize The Moment Podcast

Play Episode Listen Later Aug 11, 2024 81:22


On episode 218, we welcome Constantine Sandis to discuss the manifestations of transphobia, why feeling you're in the wrong body isn't delusional, the errors of segregation, ways of addressing the placements of trans people in sports, how biology and culture may interact to form trans identities, the rates and common causes of detransitioning, the sensationalism behind transphobic reactions, epistemic injustice and why we question trans identities unfairly, Imane Khelif and the olympic boxing controversy, and the consequences of assuming widespread deception from trans individuals. Constantine Sandis is Director of Lex Academic, Visiting Professor of Philosophy at the University of Hertfordshire, and a Fellow of the Royal Society of Arts. His books include The Things We Do and Why We Do Them, Philosophy of Action: An Anthology, and Human Nature, and From Action to Ethics: A Pluralistic Approach to Reasons and Responsibility. His newest book, coauthored with Danièle Moyal-Sharrock, is called Real Gender: A Cis Defence of Trans Realities.  | Constantine Sandis | ► Website | https://www.constantinesandis.com ► Twitter | https://twitter.com/csandis ► Instagram | https://www.instagram.com/csandis ► Bluesky |  https://bsky.app/profile/csandis.bsky.social ► Facebook | https://www.facebook.com/csandis ► Linkedin | https://www.linkedin.com/in/constantine-sandis-723454a4 ► Real Gender Book | https://bit.ly/46YA7bb Where you can find us: | Seize The Moment Podcast | ► Facebook | https://www.facebook.com/SeizeTheMoment ► Twitter | https://twitter.com/seize_podcast  ► Instagram | https://www.instagram.com/seizethemoment ► TikTok | https://www.tiktok.com/@seizethemomentpodcast  

The Nonlinear Library
LW - Twitter thread on AI safety evals by Richard Ngo

The Nonlinear Library

Play Episode Listen Later Jul 31, 2024 3:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Twitter thread on AI safety evals, published by Richard Ngo on July 31, 2024 on LessWrong. Epistemic status: raising concerns, rather than stating confident conclusions. I'm worried that a lot of work on AI safety evals matches the pattern of "Something must be done. This is something. Therefore this must be done." Or, to put it another way: I judge eval ideas on 4 criteria, and I often see proposals which fail all 4. The criteria: 1. Possible to measure with scientific rigor. Some things can be easily studied in a lab; others are entangled with a lot of real-world complexity. If you predict the latter (e.g. a model's economic or scientific impact) based on model-level evals, your results will often be BS. (This is why I dislike the term "transformative AI", by the way. Whether an AI has transformative effects on society will depend hugely on what the society is like, how the AI is deployed, etc. And that's a constantly moving target! So TAI a terrible thing to try to forecast.) Another angle on "scientific rigor": you're trying to make it obvious to onlookers that you couldn't have designed the eval to get your preferred results. This means making the eval as simple as possible: each arbitrary choice adds another avenue for p-hacking, and they add up fast. (Paraphrasing a different thread): I think of AI risk forecasts as basically guesses, and I dislike attempts to make them sound objective (e.g. many OpenPhil worldview investigations). There are always so many free parameters that you can get basically any result you want. And so, in practice, they often play the role of laundering vibes into credible-sounding headline numbers. I'm worried that AI safety evals will fall into the same trap. (I give Eliezer a lot of credit for making roughly this criticism of Ajeya's bio-anchors report. I think his critique has basically been proven right by how much people have updated away from 30-year timelines since then.) 2. Provides signal across scales. Evals are often designed around a binary threshold (e.g. the Turing Test). But this restricts the impact of the eval to a narrow time window around hitting it. Much better if we can measure (and extrapolate) orders-of-magnitude improvements. 3. Focuses on clearly worrying capabilities. Evals for hacking, deception, etc track widespread concerns. By contrast, evals for things like automated ML R&D are only worrying for people who already believe in AI xrisk. And even they don't think it's necessary for risk. 4. Motivates useful responses. Safety evals are for creating clear Schelling points at which action will be taken. But if you don't know what actions your evals should catalyze, it's often more valuable to focus on fleshing that out. Often nobody else will! In fact, I expect that things like model releases, demos, warning shots, etc, will by default be much better drivers of action than evals. Evals can still be valuable, but you should have some justification for why yours will actually matter, to avoid traps like the ones above. Ideally that justification would focus either on generating insight or being persuasive; optimizing for both at once seems like a good way to get neither. 
Lastly: even if you have a good eval idea, actually implementing it well can be very challenging. Building evals is scientific research; and so we should expect eval quality to be heavy-tailed, like most other science. I worry that the fact that evals are an unusually easy type of research to get started with sometimes obscures this fact. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - Confusing the metric for the meaning: Perhaps correlated attributes are "natural" by NickyP

The Nonlinear Library

Play Episode Listen Later Jul 24, 2024 7:02


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Confusing the metric for the meaning: Perhaps correlated attributes are "natural", published by NickyP on July 24, 2024 on LessWrong. Epistemic status: possibly trivial, but I hadn't heard it before. TL;DR: What I thought of as a "flaw" in PCA - its inability to isolate pure metrics - might actually be a feature that aligns with our cognitive processes. We often think in terms of composite concepts (e.g., "Age + correlated attributes") rather than pure metrics, and this composite thinking might be more natural and efficient. Introduction: I recently found myself describing Principal Component Analysis (PCA) and pondering its potential drawbacks. However, upon further reflection, I'm reconsidering whether what I initially viewed as a limitation might actually be a feature. This led me to think about how our minds - and, potentially, language models - might naturally encode information using correlated attributes. An important aspect of this idea is the potential conflation between the metric we use to measure something and the actual concept we're thinking about. For instance, when we think about a child's growth, we might not be consciously separating the concept of "age" from its various correlated attributes like height, cognitive development, or physical capabilities. Instead, we might be thinking in terms of a single, composite dimension that encompasses all these related aspects. After looking at active inference a while ago, it seems like in general, a lot of human heuristics and biases seem like they are there to encode real-world relationships that exist in the world in a more efficient way, which are then strained in out-of-distribution experimental settings to seem "irrational". I think the easiest way to explain is with a couple of examples: 1 - Age and Associated Attributes in Children: Suppose we plotted two attributes: Age (in years) vs Height (in cm) in children. These are highly correlated, so if we perform Principal Component Analysis, we will find there are two main components. These will not correspond to orthogonal Age and Height components, since they are quite correlated. Instead, we will find an "Age + Height" direction, and a "Height relative to what is standard for that age" direction. While one can think of this as a "failure" of PCA to find the "true things we are measuring", I think this is perhaps not the correct way to think about it. For example, if I told you to imagine a 10-year-old, you would probably imagine them to be of height ~140±5cm. And if I told you they were 2.0m tall or 0.5m tall, you would be very surprised. On the other hand, one often hears phrases like "about the height of a 10-year-old". That is, when we think about a child's development, we don't typically separate each attribute into distinct vectors like "age," "height," "voice pitch," and so on. Instead, we might encode a single "age + correlated attributes" vector, with some adjustments for individual variations. This approach is likely more efficient than encoding each attribute separately. It captures the strong correlations that exist in typical development, while allowing for deviations when necessary. 
When one talks about age, one can define it as: "number of years of existence" (independent of anything else) but when people talk about "age" in everyday life, the definition is more akin to: "years of existence, and all the attributes correlated to that". 2 - Price and Quality of Goods Our tendency to associate price with quality and desirability might not be a bias, but an efficient encoding of real-world patterns. A single "value" dimension that combines price, quality, and desirability could capture the most relevant information for everyday decision-making, with additional dimensions only needed for finer distinctions. That is, "cheap" can be conceptualised ...
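The age/height example is easy to reproduce numerically. The sketch below uses invented synthetic data (the coefficients, sample size, and noise level are arbitrary, chosen only to make the two variables strongly correlated) rather than anything from the post:

```python
import numpy as np

# Invented synthetic data: ages 2-16 with height roughly linear in age plus noise.
rng = np.random.default_rng(0)
age = rng.uniform(2, 16, size=500)
height = 75 + 6 * age + rng.normal(0, 5, size=500)

# Standardize both variables, then get the principal components from the SVD.
X = np.column_stack([age, height])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, vt = np.linalg.svd(Xs, full_matrices=False)

# With strong correlation, the components mix the variables rather than isolating them
# (signs may flip depending on the data):
print(vt[0])  # ~[0.71, 0.71]: an "age + height" direction
print(vt[1])  # ~[0.71, -0.71]: "height relative to what is typical for that age"
```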

Make it Plain
S2 EP4 · BLACK WORLD NEWS: Attempted Trump Assassination, US Violence · GUILAINE KINOUANI (Trigger Warning: Racialised Trauma): Working as a Black Female Clinician, Afrikan Griots, "Living While Black," White Minds," Fanon, Race Trauma Certificate +

Make it Plain

Play Episode Listen Later Jul 22, 2024 77:58


CONTENT WARNING: Racialised Trauma - In this week's Black World News, Kehinde Andrews makes plain Trump's so-called assassination attempt (leaving one person dead and two wounded) and as Malcolm said of Kennedy's assassination, "it's chickens coming home to roost." However, it's not surprising as America is founded and sustained on political violence: the first violence of the genocide of native people, the second violence of enslaving us, and the third violence of kicking the British out with the American Revolutionary War. Much of the violence of the world today can be traced back to the US and Western imperialism. This is America, this is the West. This. Is. What. The System. Is.    - In this week's official guest interview, Kehinde Andrews talks with Guilaine Kinouani a leading voice in Black psychiatry, psychology, and psychosocial studies in the UK. They talk about her work as a clinician, re-traumatizing mental health systems, being dissuaded from working as a clinician as well as dealing with her own trauma as a clinician with over 15 years of research and study including her books Living While Black and White Minds. They talk about the origin story of Race Reflections which started as a blog (now a social enterprise) about not being silenced, and how it developed. They highlight the Certificate in Working with Racial Trauma training.  - Guilaine Kinouani is a UK-based Paris-born woman of Afrikan (Congolese) descent; an award-winning critical and radical psychologist and group analyst, scholar, activist, and "[she] likes to think…a bit of a fashion connoisseur." Guilaine is also a current PhD (her second doctorate!) researcher, in psychosocial studies, and founder of the social enterprise, Race Reflections.  She's written three books: Living While Black: The Essential Guide to Overcoming Racial Trauma (2021 + Guardian Book of the Year), White Minds Everyday Performance, Violence and Resistance (2023), and a third book Creative Disruption: Psychosocial Scholarship as Praxis is expected in November 2024. 
- BLACK WORLD NEWS LINKS The Assassination Attempt on Donald Trump and Political Violence Waged by the U.S.https://blackagendareport.com/index.php/assassination-attempt-donald-trump-and-political-violence-waged-us The Jakarta Method https://en.wikipedia.org/wiki/The_Jakarta_Method Book Review: The Jakarta Method: Washington's Anticommunist Crusade and the Mass Murder Program That Shaped Our World by Vincent Bevins https://blogs.lse.ac.uk/lsereviewofbooks/2020/07/29/book-review-the-jakarta-method-washingtons-anticommunist-crusade-and-the-mass-murder-program-that-shaped-our-world-by-vincent-bevins/ Indonesian mass killings of 1965–66 https://en.wikipedia.org/wiki/Indonesian_mass_killings_of_1965%E2%80%9366 - GUEST LINKS Certificate in Working With Racial Trauma: A Conversation With Helping Professionals (Sign Up)https://racereflections.co.uk/events/open-day-certificate-in-working-with-racial-trauma-and-race-based-injuries-using-the-foundation-of-group-analysis/ Certificate in Working With Racial Trauma: A Conversation With Helping Professionals - Prospectus https://racereflections.co.uk/wp-content/uploads/2024/06/CWRT-JUNE-2024-w-dates.pdf Certificate in Working With Racial Trauma & Race Based Wounds Using the Foundation of Group Analysis – Curriculum  https://racereflections.co.uk/certificate-in-working-with-racial-trauma-race-based-wounds-using-the-foundation-of-group-analysis/ Guilaine Biographyhttps://racereflections.co.uk/about-the-author/ Living While Black The Essential Guide to Overcoming Racial Trauma https://www.penguin.co.uk/books/442992/living-while-black-by-kinouani-guilaine/9781529109436 White Minds Everyday Performance, Violence and Resistance By Guilaine Kinouanihttps://policy.bristoluniversitypress.co.uk/white-minds Epistemic homelessness | Guilaine Kinouani | TEDxUCLWomen https://www.youtube.com/watch?v=MoKBLPbkB5I&embeds_referring_euri=https%3A%2F%2Fracereflections.co.uk%2F - THE HARAMBEE ORGANISATION OF BLACK UNITY NEEDS YOU Harambee Organisation of Black Unity (Marcus Garvey Centre + Nicole Andrews Community Library, Birmingham, UK)https://www.blackunity.org.uk/ CAP25 - Convention of Afrikan People - Gambia - May 17-19, 2025 (Everyone's Welcome) On Malcolm X's 100th birthday, the Harambee Organisation of Black Unity is bringing together those in Afrika and the Diaspora who want to fulfill Malcolm's legacy and build a global organization for Black people. This is an open invitation to anyone.https://make-it-plain.org/convention-of-afrikan-people/ BUF - Black United Front Global directory of Black organizations. This will be hosted completely free of charge so if you run a Black organization please email the name, address, website, and contact info to mip@blackunity.org.uk to be listed. - Guest: @living_while_black_(IG) @Racereflections (X) @RRDirector (X) Host: @kehindeandrews (IG) @kehinde_andrews (X) Podcast team: @makeitplainorg @weylandmck @inhisownterms @farafinmuso Platform: www.make-it-plain.org (Blog) www.youtube.com/@MakeItPlain1964 (YT) - For any help with your audio visit: https://weylandmck.com/

Blogging Theology
Atheism and Radical Skepticism: Ibn Taymiyyah's Epistemic Critique with Dr. Nazir Khan

Blogging Theology

Play Episode Listen Later Jul 22, 2024 124:25


Article: Atheism and Radical Skepticism: Ibn Taymiyyah's Epistemic Critique by Dr. Nazir Khan: https://yaqeeninstitute.org/read/paper/atheism-and-radical-skepticism-ibn-taymiyyahs-epistemic-critique
Support this podcast at — https://redcircle.com/blogging-theology/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

The Nonlinear Library
AF - BatchTopK: A Simple Improvement for TopK-SAEs by Bart Bussmann

The Nonlinear Library

Play Episode Listen Later Jul 20, 2024 7:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: BatchTopK: A Simple Improvement for TopK-SAEs, published by Bart Bussmann on July 20, 2024 on The AI Alignment Forum. Work done in Neel Nanda's stream of MATS 6.0. Epistemic status: Tried this on a single sweep and seems to work well, but it might definitely be a fluke of something particular to our implementation or experimental set-up. As there are also some theoretical reasons to expect this technique to work (adaptive sparsity), it seems probable that for many TopK SAE set-ups it could be a good idea to also try BatchTopK. As we're not planning to investigate this much further and it might be useful to others, we're just sharing what we've found so far. TL;DR: Instead of taking the TopK feature activations per token during training, taking the Top(K*batch_size) for every batch seems to improve SAE performance. During inference, this activation can be replaced with a single global threshold for all features. Introduction Sparse autoencoders (SAEs) have emerged as a promising tool for interpreting the internal representations of large language models. By learning to reconstruct activations using only a small number of features, SAEs can extract monosemantic concepts from the representations inside transformer models. Recently, OpenAI published a paper exploring the use of TopK activation functions in SAEs. This approach directly enforces sparsity by only keeping the K largest activations per sample. While effective, TopK forces every token to use exactly k features, which is likely suboptimal. We came up with a simple modification that solves this and seems to improve its performance. BatchTopK Standard TopK SAEs apply the TopK operation independently to each sample in a batch. For a target sparsity of K, this means exactly K features are activated for every sample. BatchTopK instead applies the TopK operation across the entire flattened batch: 1. Flatten all feature activations across the batch 2. Take the top (K * batch_size) activations 3. Reshape back to the original batch shape This allows more flexibility in how many features activate per sample, while still maintaining an average of K active features across the batch. Experimental Set-Up For both the TopK and the BatchTopK SAEs we train a sweep with the following hyperparameters: Model: gpt2-small Site: layer 8 resid_pre Batch size: 4096 Optimizer: Adam (lr=3e-4, beta1 = 0.9, beta2=0.99) Number of tokens: 1e9 Expansion factor: [4, 8, 16, 32] Target L0 (k): [16, 32, 64] As in the OpenAI paper, the input gets normalized before feeding it into the SAE and calculating the reconstruction loss. We also use the same auxiliary loss function for dead features (features that didn't activate for 5 batches) that calculates the loss on the residual using the top 512 dead features per sample and gets multiplied by a factor 1/32. Results For a fixed number of active features (L0=32) the BatchTopK SAE has a lower normalized MSE than the TopK SAE and less downstream loss degradation across different dictionary sizes. Similarly, for fixed dictionary size (12288) BatchTopK outperforms TopK for different values of k. Our main hypothesis for the improved performance is thanks to adaptive sparsity: some samples contain more highly activating features than others. Let's have look at the distribution of number of active samples for the BatchTopK model. 
The BatchTopK model indeed makes use of its possibility to use different sparsities for different inputs. We suspect that the weird peak on the left side are the feature activations on BOS-tokens, given that its frequency is very close to 1 in 128, which is the sequence length. This serves as a great example of why BatchTopK might outperform TopK. At the BOS-token, a sequence has very little information yet, but the TopK SAE still activates 32 features. The BatchTopK model "saves" th...
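The selection step described above is simple enough to sketch in code. The snippet below is only an illustration of the idea, not the authors' implementation; the function names, toy shapes, and use of torch.rand are invented for the example, and a real SAE would apply this to the encoder's pre-activations inside its training loop.

```python
import torch

def topk_per_sample(acts: torch.Tensor, k: int) -> torch.Tensor:
    # Standard TopK: every sample keeps exactly its k largest activations.
    vals, idx = acts.topk(k, dim=-1)
    return torch.zeros_like(acts).scatter_(-1, idx, vals)

def batch_topk(acts: torch.Tensor, k: int) -> torch.Tensor:
    # BatchTopK: flatten the batch, keep the k * batch_size largest activations
    # overall, then reshape back. Samples keep k features only on average.
    batch_size = acts.shape[0]
    flat = acts.flatten()
    vals, idx = flat.topk(k * batch_size)
    return torch.zeros_like(flat).scatter_(0, idx, vals).reshape(acts.shape)

# Toy example (shapes are illustrative): 4 samples, 8 features, average sparsity k=2.
acts = torch.rand(4, 8)
print((topk_per_sample(acts, 2) > 0).sum(dim=-1))  # exactly 2 active per sample
print((batch_topk(acts, 2) > 0).sum(dim=-1))       # 8 in total, but varies per sample
```

The point of the change is adaptive sparsity: information-rich tokens can use more than k features while near-empty tokens (such as BOS) can use fewer, with k holding only on average across the batch.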

The Dissenter
#969 Seth Robertson: Moral Realism and Anti-Realism, Confucian Ethics, and Epistemic Injustice

The Dissenter

Play Episode Listen Later Jul 19, 2024 94:51


******Support the channel****** Patreon: https://www.patreon.com/thedissenter PayPal: paypal.me/thedissenter PayPal Subscription 3 Dollars: https://tinyurl.com/ybn6bg9l PayPal Subscription 5 Dollars: https://tinyurl.com/ycmr9gpz PayPal Subscription 10 Dollars: https://tinyurl.com/y9r3fc9m PayPal Subscription 20 Dollars: https://tinyurl.com/y95uvkao   ******Follow me on****** Website: https://www.thedissenter.net/ Facebook: https://www.facebook.com/thedissenteryt/ Twitter: https://twitter.com/TheDissenterYT   This show is sponsored by Enlites, Learning & Development done differently. Check the website here: http://enlites.com/   Dr. Seth Robertson is a Lecturer in Philosophy and Associate Director of Undergraduate Studies at Harvard University. His research interests include moral psychology, the history of ethics, early Chinese ethics, social epistemology, virtue ethics, and metaethics. Dr. Robertson's research focuses on ways in which non-normative information should constrain our normative theorizing. He has written about the intersection of social intelligence and virtue ethics as well as situationist psychology and moral development in the context of early Confucian ethics, and is currently working on epistemic injustice and rhetorical manipulation.   In this episode, we start by talking about normative theorizing (with a focus on ethics), non-normative information, and how non-normative information should constrain our normative theorizing. We talk about a novel version of metaethical Humean Constructivism: “perspectival naturalism”. We also talk about a pluralist approach to ethics. We discuss Confucian ethics and its moral anti-realist features. Finally, we talk about different forms of epistemic injustice, and what predicts continuation for women in academic philosophy. -- A HUGE THANK YOU TO MY PATRONS/SUPPORTERS: PER HELGE LARSEN, JERRY MULLER, BERNARDO SEIXAS, ADAM KESSEL, MATTHEW WHITINGBIRD, ARNAUD WOLFF, TIM HOLLOSY, HENRIK AHLENIUS, FILIP FORS CONNOLLY, DAN DEMETRIOU, ROBERT WINDHAGER, RUI INACIO, ZOOP, MARCO NEVES, COLIN HOLBROOK, PHIL KAVANAGH, SAMUEL ANDREEFF, FRANCIS FORDE, TIAGO NUNES, FERGAL CUSSEN, HAL HERZOG, NUNO MACHADO, JONATHAN LEIBRANT, JOÃO LINHARES, STANTON T, SAMUEL CORREA, ERIK HAINES, MARK SMITH, JOÃO EIRA, TOM HUMMEL, SARDUS FRANCE, DAVID SLOAN WILSON, YACILA DEZA-ARAUJO, ROMAIN ROCH, DIEGO LONDOÑO CORREA, YANICK PUNTER, CHARLOTTE BLEASE, NICOLE BARBARO, ADAM HUNT, PAWEL OSTASZEWSKI, NELLEKE BAK, GUY MADISON, GARY G HELLMANN, SAIMA AFZAL, ADRIAN JAEGGI, PAULO TOLENTINO, JOÃO BARBOSA, JULIAN PRICE, EDWARD HALL, HEDIN BRØNNER, DOUGLAS FRY, FRANCA BORTOLOTTI, GABRIEL PONS CORTÈS, URSULA LITZCKE, SCOTT, ZACHARY FISH, TIM DUFFY, SUNNY SMITH, JON WISMAN, WILLIAM BUCKNER, PAUL-GEORGE ARNAUD, LUKE GLOWACKI, GEORGIOS THEOPHANOUS, CHRIS WILLIAMSON, PETER WOLOSZYN, DAVID WILLIAMS, DIOGO COSTA, ANTON ERIKSSON, ALEX CHAU, AMAURI MARTÍNEZ, CORALIE CHEVALLIER, BANGALORE ATHEISTS, LARRY D. LEE JR., OLD HERRINGBONE, MICHAEL BAILEY, DAN SPERBER, ROBERT GRESSIS, IGOR N, JEFF MCMAHAN, JAKE ZUEHL, BARNABAS RADICS, MARK CAMPBELL, TOMAS DAUBNER, LUKE NISSEN, KIMBERLY JOHNSON, JESSICA NOWICKI, LINDA BRANDIN, NIKLAS CARLSSON, GEORGE CHORIATIS, VALENTIN STEINMANN, PER KRAULIS, KATE VON GOELER, ALEXANDER HUBBARD, BR, MASOUD ALIMOHAMMADI, JONAS HERTNER, URSULA GOODENOUGH, DAVID PINSOF, SEAN NELSON, MIKE LAVIGNE, JOS KNECHT, ERIK ENGMAN, LUCY, YHONATAN SHEMESH, MANVIR SINGH, PETRA WEIMANN, PEDRO BONILLA, CAROLA FEEST, STARRY, MAURO JÚNIOR, 航 豊川, TONY BARRETT, AND BENJAMIN GELBART! 
A SPECIAL THANKS TO MY PRODUCERS, YZAR WEHBE, JIM FRANK, ŁUKASZ STAFINIAK, TOM VANEGDOM, BERNARD HUGUENEY, CURTIS DIXON, BENEDIKT MUELLER, THOMAS TRUMBLE, KATHRINE AND PATRICK TOBIN, JONCARLO MONTENEGRO, AL NICK ORTIZ, NICK GOLDEN, AND CHRISTINE GLASS! AND TO MY EXECUTIVE PRODUCERS, MATTHEW LAVENDER, SERGIU CODREANU, BOGDAN KANIVETS, ROSEY, AND GREGORY HASTINGS!

The Gradient Podcast
Kevin Dorst: Against Irrationalist Narratives

The Gradient Podcast

Play Episode Listen Later Jul 18, 2024 135:21


Episode 131I spoke with Professor Kevin Dorst about:* Subjective Bayesianism and epistemology foundations* What happens when you're uncertain about your evidence* Why it's rational for people to polarize on political mattersEnjoy—and let me know what you think!Kevin is an Associate Professor in the Department of Linguistics and Philosophy at MIT. He works at the border between philosophy and social science, focusing on rationality.Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!Subscribe to The Gradient Podcast:  Apple Podcasts  | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Intro* (01:15) When do Bayesians need theorems?* (05:52) Foundations of epistemology, metaethics, formal models, error theory* (09:35) Extreme views and error theory, arguing for/against opposing positions* (13:35) Changing focuses in philosophy — pragmatic pressures* (19:00) Kevin's goals through his research and work* (25:10) Structural factors in coming to certain (political) beliefs* (30:30) Acknowledging limited resources, heuristics, imperfect rationality* (32:51) Hindsight Bias is Not a Bias* (33:30) The argument* (35:15) On eating cereal and symmetric properties of evidence* (39:45) Colloquial notions of hindsight bias, time and evidential support* (42:45) An example* (48:02) Higher-order uncertainty* (48:30) Explicitly modeling higher-order uncertainty* (52:50) Another example (spoons)* (54:55) Game theory, iterated knowledge, even higher order uncertainty* (58:00) Uncertainty and philosophy of mind* (1:01:20) Higher-order evidence about reliability and rationality* (1:06:45) Being Rational and Being Wrong* (1:09:00) Setup on calibration and overconfidence* (1:12:30) The need for average rational credence — normative judgments about confidence and realism/anti-realism* (1:15:25) Quasi-realism about average rational credence?* (1:19:00) Classic epistemological paradoxes/problems — lottery paradox, epistemic luck* (1:25:05) Deference in rational belief formation, uniqueness and permissivism* (1:39:50) Rational Polarization* (1:40:00) Setup* (1:37:05) Epistemic nihilism, expanded confidence akrasia* (1:40:55) Ambiguous evidence and confidence akrasia* (1:46:25) Ambiguity in understanding and notions of rational belief* (1:50:00) Claims about rational sensitivity — what stories we can tell given evidence* (1:54:00) Evidence vs presentation of evidence* (2:01:20) ChatGPT and the case for human irrationality* (2:02:00) Is ChatGPT replicating human biases?* (2:05:15) Simple instruction tuning and an alternate story* (2:10:22) Kevin's aspirations with his work* (2:15:13) OutroLinks:* Professor Dorst's homepage and Twitter* Papers* Modest Epistemology* Hedden: Hindsight bias is not a bias* Higher-order evidence + (Almost) all evidence is higher-order evidence* Being Rational and Being Wrong* Rational Polarization* ChatGPT and human irrationality Get full access to The Gradient at thegradientpub.substack.com/subscribe

The Nonlinear Library
EA - An Epistemic Defense of Rounding Down by Hayley Clatterbuck

The Nonlinear Library

Play Episode Listen Later Jul 15, 2024 54:08


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Epistemic Defense of Rounding Down, published by Hayley Clatterbuck on July 15, 2024 on The Effective Altruism Forum. This post is part of WIT's CRAFT sequence. It examines one of the decision theories included in the Portfolio Builder Tool. Executive summary Expected value maximization (EVM) leads to problems of fanaticism, recommending that you ought to take gambles on actions that have very low probabilities of success if the potential outcomes would be extremely valuable. This has motivated some to adopt alternative decision procedures. One common method for moderating the fanatical effects of EVM is to ignore very low probability outcomes, rounding them down to 0. Then, one maximizes EV across the remaining set of sufficiently probable outcomes. We can distinguish between two types of low probabilities that could be candidates for rounding down. A decision-theoretic defense of rounding down states that we should (or are permitted to) round down low objective chances. An epistemic defense states that we should (or are permitted to) round down low subjective credences that reflect uncertainty about how the world really is. Rounding down faces four key objections: The choice of a threshold for rounding down (i.e., how low a probability must be before we round it to 0) is arbitrary. It implies that normative principles change at some probability threshold, which is implausible. It ignores important outcomes and thus leads to bad decisions. It either gives no or bad advice about how to make decisions among options under the threshold. Epistemic rounding down fares much better with respect to these four objections than does decision-theoretic rounding down. The resolution or specificity of our evidence constrains our ability to distinguish between probabilistic hypotheses. Our evidence does not typically have enough resolution to give us determinate probabilities for very improbable outcomes. In such cases, we sometimes have good reasons for rounding them down to 0. 1. Intro Expected value maximization is the most prominent and well-defended theory about how to make decisions under uncertainty. However, it famously leads to problems of fanaticism: it recommends pursuing actions that have extremely small values of success when the payoffs, if successful, would be astronomically large. Because many people find these recommendations highly implausible, several solutions have been offered that retain many of the attractive features of EVM but rule out fanatical results. One solution is to dismiss outcomes that have very low probabilities - in effect, rounding them down to 0 - and then maximizing EV among the remaining set of sufficiently probable outcomes. This "truncated EVM" strategy yields more intuitive results about what one ought to do in paradigm cases where traditional EVM recommends fanaticism. It also retains many of the virtues of EVM, in that it provides a simple and mathematically tractable way of balancing probabilities and value. However, rounding down faces four key objections.[1] The first two suggest that rounding down will sometimes keep us from making correct decisions, and the second two present problems of arbitrariness: 1. Ignores important outcomes: events that have very low probabilities are sometimes important to consider when making decisions. 2. 
Disallows decisions under the threshold: every event with a probability below the threshold is ignored. Therefore, rounding down precludes us from making rational decisions about events under the threshold, sometimes leading to violations of Dominance. 3. Normative arbitrariness: rounding down implies that normative principles governing rational behavior change discontinuously at some cut-off of probability. This is unparsimonious and unmotivated. 4. Threshold arbitrariness: the choice of a threshold...
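The truncated-EVM procedure summarized above can be written in a few lines. The snippet below is a toy illustration, not anything from the paper: the threshold and payoffs are invented, and whether to renormalize the surviving probabilities is treated as an open design choice since the summary leaves that detail unspecified.

```python
def truncated_ev(outcomes, threshold=1e-6, renormalize=True):
    """Round probabilities below `threshold` down to 0, then take the expected
    value over the remaining outcomes. `outcomes` is a list of (prob, value) pairs."""
    kept = [(p, v) for p, v in outcomes if p >= threshold]
    total = sum(p for p, _ in kept)
    if not kept or total == 0:
        return 0.0  # every outcome fell below the threshold
    scale = 1.0 / total if renormalize else 1.0
    return sum(p * scale * v for p, v in kept)

# Invented numbers: a fanatical gamble vs. a modest sure thing.
gamble = [(1e-10, 1e15), (1 - 1e-10, 0.0)]   # plain EV = 100,000
sure_thing = [(1.0, 10.0)]                   # plain EV = 10

print(truncated_ev(gamble))      # 0.0  -- the tiny-probability jackpot is ignored
print(truncated_ev(sure_thing))  # 10.0 -- so the sure thing is preferred
```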

The Nonlinear Library
LW - Sherlockian Abduction Master List by Cole Wyeth

The Nonlinear Library

Play Episode Listen Later Jul 13, 2024 37:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sherlockian Abduction Master List, published by Cole Wyeth on July 13, 2024 on LessWrong.

[Radically updated with many new entries around 07/10/24]

Epistemic status: The List has been tested in the real world by me (with mixed results) and extensively checked for errors by many commenters. The Background section is mostly speculation and anecdotes; feel free to skip to The List once you understand its reason for existence.

tldr: This is a curated list of observable details about a person's appearance that indicate something useful/surprising about them. Ideally, studying this list will be an efficient way to cultivate more insightful observational/abductive abilities, approaching the fictional example of Sherlock Holmes. Please contribute in the comments section after reading the Rules.

Background

Is it possible to develop observational abilities comparable to Sherlock Holmes? Sir Arthur Conan Doyle's fictional detective has many enviable skills, including mastery of disguise and some expertise at unarmed combat, as well as generally being a genius, but we will focus primarily on his better-known observational power. Though Holmes is often described as a master of logical "deduction," this power is better described as (possibly superhuman) abduction. That is, Holmes perceives tiny details that many people would miss, then constructs explanations for those details. By reasoning through the interacting implications of these explanations, he is able to make inferences that seem impossible to those around him. The final step is actually deductive, but the first two are perhaps more interesting.

Holmes' ability to perceive more than others does seem somewhat realistic; it is always possible to actively improve one's situational awareness, at least on a short-term basis, simply by focusing on one's surroundings. The trick seems to be the second step, where Holmes is able to work backwards from effect to cause, often leveraging slightly obscure knowledge about a wide variety of topics.

I spent several of my naive teenage years trying to become more like Holmes. I carefully examined people's shoes (often I actually requested that the shoes be handed over) for numerous features: mud and dirt from walking outside, the apparent price of the shoe, the level of wear and tear, and more specifically the distribution of wear between heel and toe (hoping to distinguish sprinters and joggers), etc. I "read palms," studying the subtle variations between biking and weightlifting calluses. I looked for ink stains and such on sleeves (this works better in fiction than reality). I'm pretty sure I even smelled people.

None of this worked particularly well. I did come up with some impressive-seeming "deductions," but I made so many mistakes that these may have been entirely chance. There were various obstacles.

First, it is time-consuming and slightly awkward to stare at everyone you meet from head to toe. I think there are real tradeoffs here; you have only so much total attention, and by spending more on observing your surroundings, you have less left over to think. Certainly it is not possible to read a textbook at the same time, so practicing your observational techniques comes at a cost. Perhaps it becomes more habitual and easier over time, but I am not convinced it ever comes for free.
Second, the reliability of inferences decays quickly with the number of steps involved (a small worked example of this decay appears after this excerpt). Many of Holmes' most impressive "deductions" come from combining his projected explanations for several details into one cohesive story (perhaps using some of them to rule out alternative explanations for the others) and drawing highly non-obvious, shocking conclusions from this story. In practice, one of the explanations is usually wrong, the entire story is based on false premises, and the conclusions are only sh...
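As a rough illustration of how quickly a chained "deduction" loses reliability, here is a small sketch; the 80% per-explanation figure is an assumed number chosen for illustration, not a claim from the post.

```python
# If each individual explanation is right about 80% of the time, the chance
# that an entire multi-step story holds falls off geometrically.

per_step_reliability = 0.8  # assumed, for illustration only

for steps in range(1, 6):
    chance_story_holds = per_step_reliability ** steps
    print(f"{steps} linked explanation(s): {chance_story_holds:.0%} chance all are correct")

# 1: 80%, 2: 64%, 3: 51%, 4: 41%, 5: 33% -- a five-step story built from
# individually plausible explanations is more likely wrong than right.
```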

Australia in the World
Ep. 134: Debating the future of Australia-China relations

Australia in the World

Play Episode Listen Later Jul 3, 2024 60:18


China's Premier Li Qiang successfully visited Australia last month. The loan of two more pandas captured headlines, but if one looks closely at how the visit unfolded it's clear Australia faces a very complex strategic landscape. The Albanese government clearly wants to maintain a stabilised relationship with China under the PM's mantra “cooperate where we can, disagree where we must, and engage in the national interest”. But what are the benefits and costs of that strategy, and are the trade-offs worth it? Darren is joined once again by Dr Ben Herscovitch of the ANU for a conversation that lays out alternative framings for the choices the Australian government is, and is not, making in how it manages the China bilateral, and whether these choices are in the national interest.

A little while ago the podcast crossed 500,000 lifetime downloads! Thanks to all of you for giving up your time to listen.

Australia in the World is written, hosted, and produced by Darren Lim, with research and editing for this episode by Corbin Duncan and theme music composed by Rory Stenning.

Relevant links
- Noah Barkin LinkedIn post on German Economy Minister Habeck's visit to China: https://www.linkedin.com/feed/update/urn:li:activity:7210557349697122304/
- “A Sustainable Economic Partnership for Australia and China”, East Asian Bureau of Economic Research, Crawford School of Public Policy, ANU, May 2024: https://eaber.org/wp-content/uploads/2024/05/A-Sustainable-Economic-Partnership-for-Australia-and-China.pdf
- Department of Foreign Affairs and Trade, “Statement regarding recent incidents in the South China Sea”, 18 June 2024: https://www.dfat.gov.au/news/media-release/statement-regarding-recent-incidents-south-china-sea
- Darren Lim and John Ikenberry, “China and the logic of illiberal hegemony”, Security Studies: (ungated) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4244377 || (gated) https://www.tandfonline.com/doi/full/10.1080/09636412.2023.2178963
- Jennifer Hewett, “The contradictions in Australia's China policy”, Australian Financial Review, 12 June 2024: https://www.afr.com/policy/foreign-affairs/the-contradictions-in-australia-s-china-policy-20240612-p5jl66
- Deutschland 83 (TV series): https://tv.apple.com/au/show/deutschland-83/umc.cmc.4tlfhbbwsfeijwbe74so97qv6
- Derisky Business (podcast): https://www.cnas.org/publications/podcast/everyone-loves-tariffs
- “Epistemic humility” mug on Etsy: https://www.etsy.com/au/listing/1751474343/epistemic-humility-ceramic-mug

The Sunday Show
AI and Epistemic Risk: A Coming Crisis?

The Sunday Show

Play Episode Listen Later Jun 10, 2024 45:55


What are the risks to democracy as AI is incorporated more and more into the systems and platforms we use to find and share information and engage in communication? In this episode, Justin Hendrix speaks with Elise Silva, a postdoctoral associate at the University of Pittsburgh Cyber Institute for Law, Policy, and Security, and John Wihbey, an associate professor at Northeastern University in the College of Arts, Media, and Design. Silva is the author of a recent piece in Tech Policy Press titled “AI-Powered Search and the Rise of Google's 'Concierge Wikipedia.'” Wihbey is the author of a paper published last month titled “AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?”

Talkin‘ Politics & Religion Without Killin‘ Each Other
Pete Wehner on Evangelicals' Embrace of Trump: Moral Deformity? Blasphemy? Or the Epistemic Twilight Zone?

Talkin‘ Politics & Religion Without Killin‘ Each Other

Play Episode Listen Later Jun 6, 2024 91:54


We're joined by friend of the pod, Pete Wehner, contributing writer at The Atlantic and senior fellow at the Trinity Forum, to explore how Peter's Christian faith influences his views on suffering, scriptural interpretation, and ethical dilemmas. The conversation addresses the current political climate, particularly focusing on Donald Trump's impact on the Evangelical community and the Republican Party. Emphasizing honest and empathetic dialogue, we discuss the challenges of maintaining faith in divisive times and envision potential paths for the church's and conservative movement's future redemption. Highlighting the power of personal connections, there's a fun story about two New York Rangers fans finding common ground across political differences while watching the game at a bar in LA, illustrating the broader theme of building relationships beyond political and ideological differences.

It would mean so much if you could leave us a review: https://ratethispodcast.com/goodfaithpolitics

And we're on Patreon! Join the community: https://www.patreon.com/politicsandreligion

00:00 Introduction to the Podcast
00:32 Big Announcement: Join Us on Patreon
01:36 Introducing Today's Guest: Pete Wehner
02:46 Pete Wehner on Faith and Doubts
03:33 The Question of Suffering and Hermeneutics
07:32 Literal vs. Figurative Interpretation of Scripture
22:59 The Profundity of the Crucifixion
40:54 Evangelicals' Support for Trump
52:23 Political Reactions to Trump's Conviction
57:40 The Cult of Personality in Modern Republicanism
01:02:46 The Future of the Republican Party
01:05:33 Hope and Redemption in Politics and Faith

You can also find Corey on all the socials @coreysnathan such as www.threads.net/@coreysnathan.

Talkin' Politics & Religion Without Killin' Each Other is part of The Democracy Group, a network of podcasts that examines what's broken in our democracy and how we can work together to fix it.

Very grateful for our sponsor Meza Wealth Management. Reach out to Jorge and his team: www.mezawealth.com

https://www.theatlantic.com/author/peter-wehner/

https://www.nytimes.com/column/peter-wehner

Theories of Everything with Curt Jaimungal
Michael Lynch: AGI, Epistemic Shock, Truth Seeking, AI Risks, Humanity

Theories of Everything with Curt Jaimungal

Play Episode Listen Later May 24, 2024 66:30


This presentation was recorded at MindFest, held at Florida Atlantic University's Center for the Future Mind, spearheaded by Susan Schneider.

Center for the Future Mind (MindFest @ FAU): https://www.fau.edu/future-mind/

Please consider signing up for TOEmail at https://www.curtjaimungal.org

Support TOE:
- Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!)
- Crypto: https://tinyurl.com/cryptoTOE
- PayPal: https://tinyurl.com/paypalTOE
- TOE Merch: https://tinyurl.com/TOEmerch

Follow TOE:
- *NEW* Get my 'Top 10 TOEs' PDF + Weekly Personal Updates: https://www.curtjaimungal.org
- Instagram: https://www.instagram.com/theoriesofeverythingpod
- TikTok: https://www.tiktok.com/@theoriesofeverything_
- Twitter: https://twitter.com/TOEwithCurt
- Discord Invite: https://discord.com/invite/kBcnfNVwqs
- iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802
- Pandora: https://pdora.co/33b9lfP
- Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e
- Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything

Vedanta and Yoga
Sri Ramakrishna and Lifelong Learning

Vedanta and Yoga

Play Episode Listen Later May 2, 2024 60:47


Lecture by Swami Ishadhyanananda of the Vedanta Society of Sacramento, California, given on May 1, 2024, at the Ramakrishna Vedanta Society, Boston, MA.