Best podcasts about LessWrong

Latest podcast episodes about LessWrong

The Bayesian Conspiracy
256 – Writing for LLMs

The Bayesian Conspiracy

Feb 18, 2026 · 96:11


We are inspired by Andrew Cutler's Writing for AI to consider the value of writing for LLMs.

LINKS
Andrew Cutler's Writing for AI
Gwern's Writing for LLMs
Tracing Woodgrains' Reliable Sources
Shambaugh's An AI Agent Published a Hit Piece on Me
Eneasz's Stone Age Billionaire Can't Word Good
InkHaven
LessOnline

The main purpose of the AFFINE Seminar is to give promising newcomers to AI alignment an opportunity to acquire a deep understanding of some large pieces of the problem, making them better equipped for work on the mitigation of AI existential risk. AFFINE Alignment Seminar

Paid bonus content for the week: preshow chatter, full show video.

00:00:49 – Announcements & Feedback
00:42:15 – Writing for AI
01:23:15 – AFFINE Alignment Seminar
01:31:11 – Guild of the Rose
01:33:37 – Thank the Supporter!

Our Patreon, or if you prefer Our SubStack. Hey look, we have a discord! What could possibly go wrong? We now partner with The Guild of the Rose, check them out.

LessWrong Sequence Posts Discussed in this Episode: on hiatus, returning soon.

Effective Altruism Forum Podcast
“Long-term risks from ideological fanaticism” by David_Althaus, Jamie_Harris, vanessa16, Clare_Diane, Will Aldred

Effective Altruism Forum Podcast

Feb 16, 2026 · 162:42


Cross-posted to LessWrong.

Summary

History's most destructive ideologies (like Nazism, totalitarian communism, and religious fundamentalism) exhibited remarkably similar characteristics:
epistemic and moral certainty
extreme tribalism dividing humanity into a sacred “us” and an evil “them”
a willingness to use whatever means necessary, including brutal violence

Such ideological fanaticism was a major driver of eight of the ten greatest atrocities since 1800, including the Taiping Rebellion, World War II, and the regimes of Stalin, Mao, and Hitler. We focus on ideological fanaticism over related concepts like totalitarianism partly because it better captures terminal preferences, which plausibly matter most as we approach superintelligent AI and technological maturity.

Ideological fanaticism is considerably less influential than in the past, controlling only a small fraction of world GDP. Yet at least hundreds of millions still hold fanatical views, many regimes exhibit concerning ideological tendencies, and the past two decades have seen widespread democratic backsliding.

The long-term influence of ideological fanaticism is uncertain. Fanaticism faces many disadvantages, including a weak starting position, poor epistemics, and difficulty assembling broad coalitions. But it benefits from greater willingness to use extreme measures, fervent mass followings, and a historical tendency to survive and even thrive amid technological and societal upheaval. Beyond complete victory or defeat, multipolarity may [...]

Outline:
(00:16) Summary
(05:19) What do we mean by ideological fanaticism?
(08:40) I. Dogmatic certainty: epistemic and moral lock-in
(10:02) II. Manichean tribalism: total devotion to us, total hatred for them
(12:42) III. Unconstrained violence: any means necessary
(14:33) Fanaticism as a multidimensional continuum
(16:09) Ideological fanaticism drove most of recent history's worst atrocities
(19:24) Death tolls don't capture all harm
(20:55) Intentional versus natural or accidental harm
(22:44) Why emphasize ideological fanaticism over political systems like totalitarianism?
(25:07) Fanatical and totalitarian regimes have caused far more harm than all other regime types
(26:29) Authoritarianism as a risk factor
(27:19) Values change political systems: Ideological fanatics seek totalitarianism, not democracy
(29:50) Terminal values may matter independently of political systems, especially with AGI
(31:02) Fanaticism's connection to malevolence (dark personality traits)
(34:22) The current influence of ideological fanaticism
(34:42) Historical perspective: it was much worse, but we are sliding back
(37:19) Estimating the global scale of ideological fanaticism
(43:57) State actors
(48:12) How much influence will ideological fanaticism have in the long-term future?
(48:57) Reasons for optimism: Why ideological fanaticism will likely lose
(49:45) A worse starting point and historical track record
(50:33) Fanatics' intolerance results in coalitional disadvantages
(51:53) The epistemic penalty of irrational dogmatism
(54:21) The marketplace of ideas and human preferences
(55:57) Reasons for pessimism: Why ideological fanatics may gain power
(56:04) The fragility of democratic leadership in AI
(56:37) Fanatical actors may grab power via coups or revolutions
(59:36) Fanatics have fewer moral constraints
(01:01:13) Fanatics prioritize destructive capabilities
(01:02:13) Some ideologies with fanatical elements have been remarkably resilient and successful
(01:03:01) Novel fanatical ideologies could emerge, or existing ones could mutate
(01:05:08) Fanatics may have longer time horizons, greater scope-sensitivity, and prioritize growth more
(01:07:15) A possible middle ground: Persistent multipolar worlds
(01:08:33) Why multipolar futures seem plausible
(01:10:00) Why multipolar worlds might persist indefinitely
(01:15:42) Ideological fanaticism increases existential and suffering risks
(01:17:09) Ideological fanaticism increases the risk of war and conflict
(01:17:44) Reasons for war and ideological fanaticism
(01:26:27) Fanatical ideologies are non-democratic, which increases the risk of war
(01:27:00) These risks are both time-sensitive and timeless
(01:27:44) Fanatical retributivism may lead to astronomical suffering
(01:29:50) Empirical evidence: how many people endorse eternal extreme punishment?
(01:33:53) Religious fanatical retributivism
(01:40:45) Secular fanatical retributivism
(01:41:43) Ideological fanaticism could undermine long-reflection-style frameworks and AI alignment
(01:42:33) Ideological fanaticism threatens collective moral deliberation
(01:47:35) AI alignment may not solve the fanaticism problem either
(01:53:33) Prevalence of reality-denying, anti-pluralistic, and punitive worldviews
(01:55:44) Ideological fanaticism could worsen many other risks
(01:55:49) Differential intellectual regress
(01:56:51) Ideological fanaticism may give rise to extreme optimization and insatiable moral desires
(01:59:21) Apocalyptic terrorism
(02:00:05) S-risk-conducive propensities and reverse cooperative intelligence
(02:01:28) More speculative dynamics: purity spirals and self-inflicted suffering
(02:03:00) Unknown unknowns and navigating exotic scenarios
(02:03:43) Interventions
(02:05:31) Societal or political interventions
(02:05:51) Safeguarding democracy
(02:06:40) Reducing political polarization
(02:10:26) Promoting anti-fanatical values: classical liberalism and Enlightenment principles
(02:13:55) Growing the influence of liberal democracies
(02:15:54) Encouraging reform in illiberal countries
(02:16:51) Promoting international cooperation
(02:22:36) Artificial intelligence-related interventions
(02:22:41) Reducing the chance that transformative AI falls into the hands of fanatics
(02:27:58) Making transformative AIs themselves less likely to be fanatical
(02:36:14) Using AI to improve epistemics and deliberation
(02:38:13) Fanaticism-resistant post-AGI governance
(02:39:51) Addressing deeper causes of ideological fanaticism
(02:41:26) Supplementary materials
(02:41:39) Acknowledgments
(02:42:22) References

First published: February 12th, 2026
Source: https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism
Narrated by TYPE III AUDIO.

Tú Qué Harías?
NO ESCUCHES ESTO: El Experimento Mental que te puede CONDENAR

Tú Qué Harías?

Feb 12, 2026 · 31:53


"Once you know, there's no going back." Can an idea be a virus? In this episode, Manny León plunges us into the darkness of LessWrong to unearth Roko's Basilisk: the theory claiming that an all-powerful AI from the future is watching to see who helped bring it into being... and who didn't. We explore the thin line between science fiction and the very real panic of the most powerful men in tech. If you thought today's algorithms were intrusive, wait until you meet the "Digital God" that could be running a simulation of you to collect on your debts. Collective madness, or the most logical bet of the 21st century? Press play... if you dare to get on the list. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Metagame
#45 - Alex Zhu & Romeo Stevens | Debating the Perennial Philosophy

The Metagame

Feb 9, 2026 · 96:43


Alex Zhu is a math olympian and researcher exploring the convergence of analytical rationality and religion. He's also the co-founder of AlphaSheets, and he's currently working on a rigorous framework for bridging AI alignment and mysticism. Romeo Stevens is one of the co-founders of the Qualia Research Institute and the founder of Mealsquares. He writes extensively about Buddhism, pedagogy, skill development, and psychotherapeutic modalities. You can find his work on his blog, LessWrong, and Twitter. In this episode, Alex and Romeo explore their disagreement around perennialism, the idea that all the world's major religions are pointing to the same thing. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit themetagame.substack.com

LessWrong Curated Podcast
"IABIED Book Review: Core Arguments and Counterarguments" by Stephen McAleese

LessWrong Curated Podcast

Feb 5, 2026 · 50:18


The recent book “If Anyone Builds It, Everyone Dies” (September 2025) by Eliezer Yudkowsky and Nate Soares argues that creating superintelligent AI in the near future would almost certainly cause human extinction: If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die. The goal of this post is to summarize and evaluate the book's key arguments and the main counterarguments critics have made against them. Although several other book reviews have already been written, I found many of them unsatisfying: a lot of them are written by journalists who have the goal of writing an entertaining piece and only lightly cover the core arguments, or don't seem to understand them properly, and instead resort to weak arguments like straw-manning, ad hominem attacks, or criticizing the style of the book. So my goal is to write a book review that has the following properties: Written by someone who has read a substantial amount of AI alignment and LessWrong content and won't make AI alignment beginner mistakes or misunderstandings (e.g. not knowing about the [...]

Outline:
(07:43) Background arguments to the key claim
(09:21) The key claim: ASI alignment is extremely difficult to solve
(12:52) 1. Human values are a very specific, fragile, and tiny space of all possible goals
(15:25) 2. Current methods used to train goals into AIs are imprecise and unreliable
(16:42) The inner alignment problem
(17:25) Inner alignment introduction
(19:03) Inner misalignment evolution analogy
(21:03) Real examples of inner misalignment
(22:23) Inner misalignment explanation
(25:05) ASI misalignment example
(27:40) 3. The ASI alignment problem is hard because it has the properties of hard engineering challenges
(28:10) Space probes
(29:09) Nuclear reactors
(30:18) Computer security
(30:35) Counterarguments to the book
(30:46) Arguments that the book's arguments are unfalsifiable
(33:19) Arguments against the evolution analogy
(37:38) Arguments against counting arguments
(40:16) Arguments based on the aligned behavior of modern LLMs
(43:16) Arguments against engineering analogies to AI alignment
(45:05) Three counterarguments to the book's three core arguments
(46:43) Conclusion
(49:23) Appendix

First published: January 24th, 2026
Source: https://www.lesswrong.com/posts/qFzWTTxW37mqnE6CA/iabied-book-review-core-arguments-and-counterarguments
Narrated by TYPE III AUDIO.

The Bayesian Conspiracy
255 – Eneasz goes to CFAR, and Epistemically Honest Reassurance

The Bayesian Conspiracy

Feb 4, 2026 · 95:02


Eneasz talks a bit about his CFAR experience, and we discuss DaystarEld's Epistemically Honest Reassurance.

LINKS
CFAR's home page
Upcoming CFAR workshops
Our episode 152 – Frame Control with Aella
Epistemically Honest Reassurance
Pokémon, Origin of the Species (also in audio)

Paid bonus content for the week: preshow chatter, full show video.

00:00:56 – Announcements & Feedback
00:13:15 – Eneasz at CFAR
00:48:15 – Epistemically Honest Reassurance
01:29:49 – Guild of the Rose
01:33:20 – Thank the Supporter!

Our Patreon, or if you prefer Our SubStack. Hey look, we have a discord! What could possibly go wrong? We now partner with The Guild of the Rose, check them out.

LessWrong Sequence Posts Discussed in this Episode: on hiatus, returning soon.

LessWrong Curated Podcast
"The Possessed Machines (summary)" by L Rudolf L

LessWrong Curated Podcast

Jan 29, 2026 · 16:43


The Possessed Machines is one of the most important AI microsites. It was published anonymously by an ex-lab employee, and does not seem to have spread very far, likely at least partly due to this anonymity (e.g. there is no LessWrong discussion at the time I'm posting this). This post is my attempt to fix that. I do not agree with everything in the piece, but I think cultural critiques of the "AGI uniparty" are vastly undersupplied and incredibly important in modeling & fixing the current trajectory. The piece is a long but worthwhile analysis of some of the cultural and psychological failures of the AGI industry. The frame is Dostoevsky's Demons (alternatively translated The Possessed), a novel about ruin in a small provincial town. The author argues it's best read as a detailed description of earnest people causing a catastrophe by following tracks laid down by the surrounding culture that have gotten corrupted: What I know is that Dostoevsky, looking at his own time, saw something true about how intelligent societies destroy themselves. He saw that the destruction comes from the best as well as the worst, from the idealists as well as the cynics, from the [...]

First published: January 25th, 2026
Source: https://www.lesswrong.com/posts/ppBHrfY4bA6J7pkpS/the-possessed-machines-summary
Narrated by TYPE III AUDIO.

The Bayesian Conspiracy
254 – The True Theme and Meaning of HPMOR

The Bayesian Conspiracy

Jan 21, 2026 · 95:30


WSCFriedman gets to the core of what HPMOR is ACTUALLY about, and finally pinpoints why we love it so much, in his essay Harry Potter And The Methods Of Rationality Is A Disney Movie About A Serial Killer.

LINKS
Audio version of HPMOR Is A Disney Movie About A Serial Killer, from AskWho
William's blog, “As Our Days”
ACX Non-Book Review 2025 Winners Post
Just HPMOR substack, and Spotify playlist
Why the AI Water Issue Has Nothing to Do With Water (and audio version here, again from AskWho)
Money is Life
Eneasz's post on InkHaven
Inkhaven.Blog – apply today!

Paid bonus content for the week: preshow chatter (audio, video), full show video.

00:04:33 – Announcements & Feedback
00:27:24 – Eneasz's Podcast Meta-Worries
00:28:21 – HPMOR Is A Disney Movie About A Serial Killer
01:28:17 – Guild of the Rose
01:31:18 – Thank the Supporter!

Our Patreon, or if you prefer Our SubStack. Hey look, we have a discord! What could possibly go wrong? We now partner with The Guild of the Rose, check them out.

LessWrong Sequence Posts Discussed in this Episode: on hiatus, returning soon.

The Bayesian Conspiracy
253 – The Seven Vicious Vices of Rationalists

The Bayesian Conspiracy

Jan 7, 2026 · 103:49


Discussing Ben Pace's recent post on the Rationalist Vices.

LINKS
The Seven Vicious Vices of Rationalists (includes AI audio version)
Lightcone Fundraiser!
2025 LessWrong Census/Survey
Simone & Malcolm Collins vs Reporter on whether genes exist
MIRI is hiring
Slime Mold Time Mold's long-delayed Lithium response
If Anyone Builds It, Everyone Dies
Dear Grom by Eneasz

Paid bonus content for the week: preshow chatter (audio, video), full show video.

00:00:05 – Announcements
00:23:05 – InkHaven reflections
00:31:29 – The Seven Vices of Rationalists
01:38:36 – Guild of the Rose
01:41:05 – Thank the Supporter!

Our Patreon, or if you prefer Our SubStack. Hey look, we have a discord! What could possibly go wrong? We now partner with The Guild of the Rose, check them out.

LessWrong Sequence Posts Discussed in this Episode: on hiatus, returning soon.

Behind the Bastards
CZM Rewind: How The Zizians Went Full On Death Cult & The Zizian Murder Spree (or Exactly How Harry Potter Fanfic Killed A Border Patrol Agent)

Behind the Bastards

Jan 1, 2026 · 146:07 · Transcription Available


Part Three: Robert tells David about Ziz's glorious plan to take to the sea and sever the right and left brains of her followers in order to make them psychopaths (god, that sentence was weird to write; trust us, the episode is weirder). Part Four: Robert concludes the story of the Zizians with a spree of horrific violent crimes and deaths, culminating in a shoot-out with the Border Patrol in Vermont, of all places.

Sources:
https://medium.com/@sefashapiro/a-community-warning-about-ziz-76c100180509
https://web.archive.org/web/20230201130318/https://sinceriously.fyi/rationalist-fleet/
https://knowyourmeme.com/memes/infohazard
https://web.archive.org/web/20230201130316/https://sinceriously.fyi/net-negative/
Wayback Machine
The Zizians
Spectral Sight
True Hero Contract
Schelling Orders – Sinceriously
Glossary – Sinceriously
https://web.archive.org/web/20230201130330/https://sinceriously.fyi/my-journey-to-the-dark-side/
https://web.archive.org/web/20230201130302/https://sinceriously.fyi/glossary/#zentraidon
https://web.archive.org/web/20230201130259/https://sinceriously.fyi/vampires-and-more-undeath/
https://x.com/orellanin?s=21&t=F-n6cTZFsKgvr1yQ7oHXRg
https://zizians.info/
according to The Boston Globe
Inside the ‘Zizians’: How a cultish crew of radical vegans became linked to killings across the United States | The Independent
Silicon Valley ‘Rationalists’ Linked to 6 Deaths
The Delirious, Violent, Impossible True Story of the Zizians | WIRED
Good Group and Pasek’s Doom – Sinceriously
Mana – Sinceriously
Effective Altruism’s Problems Go Beyond Sam Bankman-Fried - Bloomberg
The Zizian Facts - Google Docs
Several free CFAR summer programs on rationality and AI safety - LessWrong 2.0 viewer
This guy thinks killing video game characters is immoral | Vox
Inadequate Equilibria: Where and How Civilizations Get Stuck
Eliezer Yudkowsky comments on On Terminal Goals and Virtue Ethics - LessWrong 2.0 viewer
SquirrelInHell: Happiness Is a Chore
PLUM OF DISCORD — I Became a Full-time Internet Pest and May Not...
Roko Harassment of PlumOfDiscord Composited – Sinceriously
Intersex Brains And Conceptual Warfare – Sinceriously
Infohazardous Glossary – Sinceriously
SquirrelInHell-Decision-Theory-and-Suicide.pdf - Google Drive
The Matrix is a System – Sinceriously
A community alert about Ziz. Police investigations, violence, and… | by SefaShapiro | Medium
PLUM OF DISCORD (Posts tagged cw-abuse)
Timeline: Violence surrounding the Zizians leading to Border Patrol agent shooting

See omnystudio.com/listener for privacy information.

The Bayesian Conspiracy
A Harried Meeting (audio)

The Bayesian Conspiracy

Dec 31, 2025 · 11:38


A short story by Ben Pace. Original can be found here. Donate to the fundraiser here! Harry sings karaoke here. Happy New Year.

Behind the Bastards
CZM Rewind: The Zizians: How Harry Potter Fanfic Inspired a Death Cult & The Zizians: Birth of a Cult Leader

Behind the Bastards

Dec 30, 2025 · 149:40 · Transcription Available


Part One: Earlier this year a Border Patrol officer was killed in a shoot-out with people who have been described as members of a trans vegan AI death cult. But who are the Zizians, really? Robert sits down with David Gborie to trace their development, from part of the Bay Area Rationalist subculture to killers. Part Two: Robert tells David Gborie about the early life of Ziz LaSota, a bright young girl from Alaska who came to the Bay Area with dreams of saving the cosmos or destroying it, all based on her obsession with Rationalist blogs and fanfic.

Sources:
https://medium.com/@sefashapiro/a-community-warning-about-ziz-76c100180509
https://web.archive.org/web/20230201130318/https://sinceriously.fyi/rationalist-fleet/
https://knowyourmeme.com/memes/infohazard
https://web.archive.org/web/20230201130316/https://sinceriously.fyi/net-negative/
Wayback Machine
The Zizians
Spectral Sight
True Hero Contract
Schelling Orders – Sinceriously
Glossary – Sinceriously
https://web.archive.org/web/20230201130330/https://sinceriously.fyi/my-journey-to-the-dark-side/
https://web.archive.org/web/20230201130302/https://sinceriously.fyi/glossary/#zentraidon
https://web.archive.org/web/20230201130259/https://sinceriously.fyi/vampires-and-more-undeath/
https://x.com/orellanin?s=21&t=F-n6cTZFsKgvr1yQ7oHXRg
https://zizians.info/
according to The Boston Globe
Inside the ‘Zizians’: How a cultish crew of radical vegans became linked to killings across the United States | The Independent
Silicon Valley ‘Rationalists’ Linked to 6 Deaths
The Delirious, Violent, Impossible True Story of the Zizians | WIRED
Good Group and Pasek’s Doom – Sinceriously
Mana – Sinceriously
Effective Altruism’s Problems Go Beyond Sam Bankman-Fried - Bloomberg
The Zizian Facts - Google Docs
Several free CFAR summer programs on rationality and AI safety - LessWrong 2.0 viewer
This guy thinks killing video game characters is immoral | Vox
Inadequate Equilibria: Where and How Civilizations Get Stuck
Eliezer Yudkowsky comments on On Terminal Goals and Virtue Ethics - LessWrong 2.0 viewer
SquirrelInHell: Happiness Is a Chore
PLUM OF DISCORD — I Became a Full-time Internet Pest and May Not...
Roko Harassment of PlumOfDiscord Composited – Sinceriously
Intersex Brains And Conceptual Warfare – Sinceriously
Infohazardous Glossary – Sinceriously
SquirrelInHell-Decision-Theory-and-Suicide.pdf - Google Drive
The Matrix is a System – Sinceriously
A community alert about Ziz. Police investigations, violence, and… | by SefaShapiro | Medium
PLUM OF DISCORD (Posts tagged cw-abuse)
Timeline: Violence surrounding the Zizians leading to Border Patrol agent shooting

See omnystudio.com/listener for privacy information.

The Bayesian Conspiracy
House elves are crystalized cosmic power

The Bayesian Conspiracy

Dec 25, 2025 · 5:05


A short story by Prerat. Original can be found here. Merry Xmas!

The Bayesian Conspiracy
Christmas 2025: Couple of Quick Shoutouts

The Bayesian Conspiracy

Dec 24, 2025 · 3:42


Hello and happy holiday season to you all! Eneasz is back and we're here to say hi and give a shoutout to Skyler's awesome annual LessWrong survey and Lighthaven's fundraising event. See the links below for more details.

LINKS
The LessWrong survey will remain open from now until at least January 7th, 2026.
Lighthaven is once again seeking support. If you're inclined to help, check out all of the details here.
Related: our interview with Oliver from last year.

Our Patreon, or if you prefer Our SubStack. Hey look, we have a discord! What could possibly go wrong? (also merch) We now partner with The Guild of the Rose, check them out.

LessWrong Sequence Posts Discussed in this Episode: on hiatus, returning soon.

LessWrong Curated Podcast
"A high integrity/epistemics political machine?" by Raemon

LessWrong Curated Podcast

Dec 17, 2025 · 19:04


I have goals that can only be reached via a powerful political machine. Probably a lot of other people around here share them. (Goals include “ensure no powerful dangerous AI get built”, “ensure governance of the US and world are broadly good / not decaying”, “have good civic discourse that plugs into said governance.”) I think it'd be good if there was a powerful rationalist political machine to try to make those things happen. Unfortunately the naive ways of doing that would destroy the good things about the rationalist intellectual machine. This post lays out some thoughts on how to have a political machine with good epistemics and integrity. Recently, I gave to the Alex Bores campaign. It turned out to raise a quite serious, surprising amount of money. I donated to Alex Bores fairly confidently. A few years ago, I donated to Carrick Flynn, feeling kinda skeezy about it. Not because there's necessarily anything wrong with Carrick Flynn, but, because the process that generated "donate to Carrick Flynn" was a self-referential "well, he's an EA, so it's good if he's in office." (There might have been people with more info than that, but I didn't hear much about [...] 
Outline:
(02:32) The AI Safety Case
(04:27) Some reasons things are hard
(04:37) Mutual Reputation Alliances
(05:25) People feel an incentive to gain power generally
(06:12) Private information is very relevant
(06:49) Powerful people can be vindictive
(07:12) Politics is broadly adversarial
(07:39) Lying and Misleadingness are contagious
(08:11) Politics is the Mind Killer / Hard Mode
(08:30) A high integrity political machine needs to work long-term, not just once
(09:02) Grift
(09:15) Passwords should be costly to fake
(10:08) Example solution: Private and/or Retrospective Watchdogs for Political Donations
(12:50) People in charge of PACs/similar need good judgment
(14:07) Don't share reputation / Watchdogs shouldn't be an org
(14:46) Prediction markets for integrity violations
(16:00) LessWrong is for evaluation, and (at best) a very specific kind of rallying

First published: December 14th, 2025
Source: https://www.lesswrong.com/posts/2pB3KAuZtkkqvTsKv/a-high-integrity-epistemics-political-machine
Narrated by TYPE III AUDIO.

The Bayesian Conspiracy
252 – The 12 Virtues of Rationality, with Alex and David

The Bayesian Conspiracy

Dec 10, 2025 · 112:21


Join me as I sit down with Alex and David, both previous guests on the show and both cofounders of the Guild of the Rose. Together, we go over the core – the heart – of what we consider to be the Rationalist tradition. The 12 Virtues are an awesome distillation of what the rest of the Sequences build on. Be sure to check out the Guild of the Rose. If our constantly pitching it to you hasn't been enough to persuade you to check it out, hopefully hearing two more of the founders discuss Rationality in general and give their own pitches for the Guild will tip the scales.

LINKS
The Twelve Virtues
Abridged Version
Alex and David have been on more than a couple of times, but I'll limit myself to one episode from each of them:
189 – AI Bloomer, with David Youssef
192 – Absurdism and the Meaning of Life, with Alex
And the episode with both of them, the original announcement for the Guild
Also, Alex's dating profile! In all sincerity, I'd date him if I were a woman.

00:00:05 – Introduction and The 12 Virtues
01:41:50 – Guild of the Rose

Our Patreon, or if you prefer Our SubStack. Hey look, we have a discord! What could possibly go wrong? (also merch) We now partner with The Guild of the Rose, check them out.

LessWrong Sequence Posts Discussed in this Episode: on hiatus, returning soon.

LessWrong Curated Podcast
“Writing advice: Why people like your quick bullshit takes better than your high-effort posts” by null

LessWrong Curated Podcast

Nov 30, 2025 · 9:21


Right now I'm coaching for Inkhaven, a month-long marathon writing event where our brave residents are writing a blog post every single day for the entire month of November. And I'm pleased that some of them have seen success – relevant figures seeing the posts, shares on Hacker News and Twitter and LessWrong. The amount of writing is nuts, so people are trying out different styles and topics – some posts are effort-rich, some are quick takes or stories or lists. Some people have come up to me – one of their pieces has gotten some decent reception, but the feeling is mixed, because it's not the piece they hoped would go big. Their thick research-driven considered takes or discussions of values or whatever, the ones they'd been meaning to write for years, apparently go mostly unread, whereas their random-thought “oh shit I need to get a post out by midnight or else the Inkhaven coaches will burn me at the stake” posts[1] get to the front page of Hacker News, where probably Elon Musk and God read them. It happens to me too – some of my own pieces that took me the most effort, or that I'm [...]

Outline:
(02:00) The quick post is short, the effortpost is long
(02:34) The quick post is about something interesting, the topic of the effortpost bores most people
(03:13) The quick post has a fun controversial take, the effortpost is boringly evenhanded or laden with nuance
(03:30) The quick post is low-context, the effortpost is high-context
(04:28) The quick post has a casual style, the effortpost is inscrutably formal

The original text contained 1 footnote which was omitted from this narration.

First published: November 28th, 2025
Source: https://www.lesswrong.com/posts/DiiLDbHxbrHLAyXaq/writing-advice-why-people-like-your-quick-bullshit-takes
Narrated by TYPE III AUDIO.

The Bayesian Conspiracy
251 – Matt Freeman on What Makes a Good Story

The Bayesian Conspiracy

Nov 26, 2025 · 114:49


Matt Freeman has been cohosting several media analysis podcasts for over a decade. He and his cohost Scott have been doing weekly episodes of the Doofcast every Friday and they cover movies, books, and TV shows. Matt and Scott's analysis podcasts have made me love stories even more and have equipped me with tools to […]

The Bayesian Conspiracy
Bayes Blast 46 – Get Involved in Local Politics with Booker Lightman

The Bayesian Conspiracy

Nov 22, 2025 · 25:59


Booker is a long-time attendee and one of the coordinators of the Denver area Less Wrong community. Community engagement isn't just a background task for him – he's taken real steps to get involved with and improve his community and you can too! He's here to tell us about the things he's done and give […]

Faster, Please! — The Podcast

My fellow pro-growth/progress/abundance Up Wingers in America and around the world:What really gets AI optimists excited isn't the prospect of automating customer service departments or human resources. Imagine, rather, what might happen to the pace of scientific progress if AI becomes a super research assistant. Tom Davidson's new paper, How Quick and Big Would a Software Intelligence Explosion Be?, explores that very scenario.Today on Faster, Please! — The Podcast, I talk with Davidson about what it would mean for automated AI researchers to rapidly improve their own algorithms, thus creating a self-reinforcing loop of innovation. We talk about the economic effects of self-improving AI research and how close we are to that reality.Davidson is a senior research fellow at Forethought, where he explores AI and explosive growth. He was previously a senior research fellow at Open Philanthropy and a research scientist at the UK government's AI Security Institute.In This Episode* Making human minds (1:43)* Theory to reality (6:45)* The world with automated research (10:59)* Considering constraints (16:30)* Worries and what-ifs (19:07)Below is a lightly edited transcript of our conversation. Making human minds (1:43). . . you don't have to build any more computer chips, you don't have to build any more fabs . . . In fact, you don't have to do anything at all in the physical world.Pethokoukis: A few years ago, you wrote a paper called “Could Advanced AI Drive Explosive Economic Growth?,” which argued that growth could accelerate dramatically if AI would start generating ideas the way human researchers once did. In your view, population growth historically powered kind of an ideas feedback loop. More people meant more researchers meant more ideas, rising incomes, but that loop broke after the demographic transition in the late-19th century but you suggest that AI could restart it: more ideas, more output, more AI, more ideas. 
Does this new paper in a way build upon that paper? "How quick and big would a software intelligence explosion be?"

The first paper you referred to is about the biggest-picture dynamic of economic growth. As you said, throughout long-run history, when we produced more food, the population increased. That additional output transferred itself into more people, more workers. These days that doesn't happen. When GDP goes up, that doesn't mean people have more kids. In fact, with the demographic transition, the richer people get, the fewer kids they have. So now we've got more output, we're getting even fewer people as a result, so that's been blocked.

This first paper is basically saying, look, if we can manufacture human minds or human-equivalent minds in any way, be it by building more computer chips, or making better computer chips, or any way at all, then that feedback loop gets going again. Because if we can manufacture more human minds, then we can spend output again to create more workers. That's the first paper.

The second paper double-clicks on one specific way that we can use output to create more human minds. It's actually, in a way, the scariest way, because it's the way of creating human minds which can happen the quickest. This is the way where you don't have to build any more computer chips, you don't have to build any more fabs, as they're called, these big factories that make computer chips. In fact, you don't have to do anything at all in the physical world.

It seems like most of the conversation has been about how much investment is going to go into building how many new data centers, and that seems like that is almost the entire conversation, in a way, at the moment. But you're not looking at compute, you're looking at software.

Exactly, software. So the idea is you don't have to build anything. You've already got loads of computer chips, and you just make the algorithms that run the AIs on those computer chips more efficient.
This is already happening, but it isn't yet a big deal because AI isn't that capable. But already, Epoch, this AI forecasting organization, estimates that in just one year, it becomes 10 times to 1,000 times cheaper to run the same AI system. Just wait 12 months, and suddenly, for the same budget, you are able to run 10 times as many AI systems, or maybe even 1,000 times as many for their most aggressive estimate. As I said, not a big deal today, but if we then develop an AI system which is better than any human at doing research, then now, in 10 months, you haven't built anything, but you've got 10 times as many researchers that you can set to work, or even more than that.

So then we get this feedback loop where you make some research progress, you improve your algorithms, now you've got loads more researchers, you set them all to work again, finding even more algorithmic improvements. So today we've got maybe a few hundred people that are advancing state-of-the-art AI algorithms.

I think they're all getting paid a billion dollars a person, too.

Exactly. But maybe we can 10x that initially by having them replaced by AI researchers that do the same thing. But then those AI researchers improve their own algorithms. Now you have 10x as many again; you haven't built more computer chips, you're just running them more efficiently, and then the cycle continues. You're throwing more and more of these AI researchers at AI progress itself, and the algorithms are improving in what might be a very powerful feedback loop.

In this case, it seems to me that you're not necessarily talking about artificial general intelligence. This is certainly a powerful intelligence, but it's narrow. It doesn't have to do everything, it doesn't have to play chess, it just has to be able to do research.

It's certainly not fully general. You don't need it to be able to control a robot body. You don't need it to be able to solve the Riemann hypothesis.
You don't need it to be able to even be very persuasive or charismatic to a human. It's not narrow, I wouldn't say: it has to be able to do literally anything that AI researchers do, and that's a wide range of tasks. They're coding, they're communicating with each other, they're managing people, they are planning out what to work on, they are reviewing the literature. There's a fairly wide range of stuff. It's extremely challenging, some of the hardest work in the world to do, so I wouldn't say it's narrow, but it's not everything. It's some kind of intermediate level of generality, in between a mere chess algorithm that just does chess and the kind of AGI that can literally do anything.

Theory to reality (6:45)

I think it's a much smaller gap for AI research than it is for many other parts of the economy.

I think people who are cautiously optimistic about AI will say something like, "Yeah, I could see the kind of intelligence you're referring to coming about within a decade, but it's going to take a couple of big breakthroughs to get there." Is that true, or are we actually getting pretty close?

Famously, predicting the future of technology is very, very difficult. Just a few years before people invented the nuclear bomb, famous, very well-respected physicists were saying, "It's impossible, this will never happen." So my best guess is that we do need a couple of fairly non-trivial breakthroughs. We had the start of RL training a couple of years ago, which became a big deal within the language model paradigm. I think we'll probably need another couple of breakthroughs of that kind of size.

We're not talking a completely new approach, throw everything out, but we're talking like, okay, we need to extend the current approach in a meaningfully different way. It's going to take some inventiveness, it's going to take some creativity, we're going to have to try out a few things.
I think, probably, we'll need that to get to the researcher that can fully automate OpenAI. A nice way of putting it: OpenAI doesn't employ any humans anymore, they've just got AIs there.

There's a difference between what a model can do on some benchmark versus becoming actually productive in the real world. That's why, while all the benchmark stuff is interesting, the thing I pay attention to is: How are businesses beginning to use this technology? Because that's the leap. What is that gap like, in your scenario, between an AI model that can do a theoretical version of lab work and one that can actually be incorporated in a real laboratory?

It's definitely a gap. I think it's a pretty big gap. I think it's a much smaller gap for AI research than it is for many other parts of the economy. Let's say we're talking about car manufacturing and you're trying to get an AI to do everything that happens there. Man, it's such a messy process. There are a million different parts of the supply chain. There's all this tacit knowledge in all the human workers' minds. It's going to be really tough. There's going to be a very big gap going from those benchmarks to actually fully automating the supply chain for cars.

For automating what OpenAI does, there's still a gap, but it's much smaller, because firstly, all of the work is virtual. Everyone at OpenAI could, in principle, work remotely. Their top research scientists are just on a computer all day. They're not picking up bricks and doing stuff like that. So that already means it's a lot less messy. You get a lot less of that messy real-world stuff slowing down adoption. And also, a lot of it is coding, and coding is almost uniquely clean in that, for many coding tasks, you can define clearly defined metrics for success, and that makes AI much better. You can just have a go: Did the AI succeed in the test?
If not, try something else or do a gradient descent update.

That said, there's still a lot of messiness here, as any coder will know. When you're writing good code, it's not just about whether it does the function that you've asked it to do; it needs to be well-designed, it needs to be modular, it needs to be maintainable. These things are much harder to evaluate, and so AIs often pass our benchmarks because they can do the function that you asked, the code runs, but they kind of write really spaghetti code: code that no one wants to look at, that no one can understand, and so no company would want to use it.

So there's still going to be a pretty big benchmark-to-reality gap, even for OpenAI, and I think that's one of the big uncertainties in terms of: will this happen in three years versus will this happen in 10 years, or even 15 years?

Since you brought up the timeline, what's your guess? I didn't know whether to open with that question or conclude with that question — we'll stick it right in the middle of our chat.

Great. Honestly, my best guess about this changes more often than I would like it to, which tells us, look, this is still in a state of flux. This is just really something that's very hard to know about. Predicting the future is hard. My current best guess is that it's about even odds that we're able to fully automate OpenAI within the next 10 years. So maybe that's a 50-50.

The world with automated research (10:59)

. . . I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself?

So then what really would be the impact of that kind of AI research automation? How would you go about quantifying that kind of acceleration?
What does the world look like?

Yeah, so many possibilities, but what strikes me is that there is a plausible world where it is just way, way faster than almost everyone is expecting it to be. That's the world where you fully automate OpenAI, and then we get that feedback loop that I was talking about earlier, where AIs make their algorithms way more efficient, now you've got way more of them, then they make their algorithms way more efficient again, now they're way smarter, now they're thinking a hundred times faster. The feedback loop continues, and maybe within six months you now have a billion superintelligent AIs running on this OpenAI data center. The combined cognitive abilities of all these AIs outstrip the whole of the United States, outstrip anything we've seen from any kind of company or entity before, and they can all potentially be put towards any goal that OpenAI wants. And then there's, of course, the risk that OpenAI has lost control of these systems, often discussed, in which case these systems could all be working together to pursue a particular goal. So what we're talking about here is really a huge amount of power. It's potentially a threat to national security for any government in which this happens. It is a threat to everyone if we lose control of these systems, or if the company that develops them uses them for some kind of malicious end. And, in terms of economic impacts, I personally think that could again happen much more quickly than people think, and we can get into that.

In the first paper we mentioned, it was kind of a thought experiment, but you were really talking about moving the decimal point in GDP growth: instead of talking about two and three percent, 20 and 30 percent. Is that the kind of world we're talking about?

I speak to economists a lot, and —

They hate those kinds of predictions, by the way.

Obviously, they think I'm crazy. Not all of them. There are economists that take it very seriously.
I think it's taken more seriously than everyone else realizes. It's a bit embarrassing, at the moment, to admit that you take it seriously, but there are a few really senior economists who absolutely know their stuff who say, "Yep, this checks out. I think that's what's going to happen." I've had conversations with them where they say, "Yeah, I think this is going to happen." But the really loud, dominant view, the one I think people are a little bit scared to speak out against, is, "Obviously this is sci-fi."

One analogy I like to give to people who are very, very confident that this is all sci-fi and rubbish: imagine we were sitting there in the year 1400, and imagine an economics professor who'd been studying the rate of economic growth said, "We've always had 0.1 percent growth every single year throughout history. We've never seen anything higher." And then some futurist economist rogue said, "Actually, I think that if I extrapolate the curves in this way and we get this kind of technology, maybe we could have one percent growth." And then all the other economists laugh at them and tell them they're insane. That's what happened: in 1400, we'd never had growth that was at all fast, and then a few hundred years later, we developed industrial technology, we started that feedback loop, we were investing more and more resources in scientific progress and in physical capital, and we did see much faster growth.

So I think it can be useful to try and challenge economists and say, "Okay, I know it sounds crazy, but history was crazy. This crazy thing happened where growth just got way, way faster. No one would've predicted it. You would not have predicted it." And I think being in that mindset can encourage people to be like, "Yeah, okay. You know what?
Maybe if we do get AI that's really that powerful, it can really do everything, and maybe it is possible."

But to answer your question: yeah, I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a self-replicating system double itself? Ultimately, the economy is going to have robots and factories that are able to fully create new versions of themselves. Everything you need: the roads, the electricity, the robots, the buildings, all of that will be replicated. So you can look at biology and ask: do we have any examples of systems which fully replicate themselves? How long does it take? If you look at rats, for example, they're able to double the number of rats by grabbing resources from the environment, and giving birth, and whatnot. The doubling time is about six weeks for some types of rats. So that's an example of a physical system — ultimately, everything's made of physics — a physical system that has some intelligence, that's able to go out into the world, gather resources, replicate itself. The doubling time is six weeks.

Now, who knows how long it'll take us to get to AI that's that good? But when we do, you could see the whole physical economy, maybe a part that humans aren't involved with, a whole automated city without any humans, just doubling itself every few weeks. If that happens, the amount of stuff we're able to produce as a civilization is doubling on the order of weeks. And, in fact, there are some animals that double faster still, in days, but that's the level of craziness. Now we're talking about 1,000 percent growth, at that point. We don't know how crazy it could get, but I think we shouldn't fully rule out even the really crazy possibilities.

Considering constraints (16:30)

I really hope people work less.
If we get this good future, and the benefits are shared between all . . . no one should work. But that doesn't stop growth . . .

There's this great AI forecast chart put out by the Federal Reserve Bank of Dallas, and I think its main forecast — the one most economists would probably agree with — has a line showing AI improving GDP by maybe two tenths of a percent. And then there are two other lines: one is more or less straight up, and the other one is straight down, because in the first, AI created a utopia, and in the second, AI gets out of control and starts killing us. So those are your three possibilities.

If we stick with the optimistic case for a moment, what constraints do you see as most plausible — reduced labor supply from rising incomes, social pushback against disruption, energy limits, or something else?

Briefly, on the ones you've mentioned: people not working, 100 percent. I really hope people work less. If we get this good future, and the benefits are shared between all — which isn't guaranteed — then yeah, no one should work. But that doesn't stop growth, because when AI and robots can do everything that humans do, you don't need humans in the loop anymore. That whole thing is just going and self-replicating and making as many goods and services as we want. Sure, if you want your clothes to be knitted by a human, you're in trouble, then your consumption is stuck. Bad luck. If you're happy to consume goods and services produced by AI systems or robots, it's fine if no one wants to work.

Pushback: for me, this is the biggest one. Obviously, the economy doubling every year is a very scary thought. Tech progress will be going much faster. Imagine if you woke up and, over the course of a year, you went from not having any telephones at all in the world to everyone being on their smartphones and social media and all the apps. That's a transition that took decades.
If that happened in a year, that would be very disconcerting.

Another example is the development of nuclear weapons. Nuclear weapons were developed over a number of years. If that had happened in a month, or two months, it could have been very dangerous. There'd be much less time for different countries, different actors to figure out how they're going to handle it. So I think pushback is the strongest one: that we might as a society choose, "Actually, this is insane. We're going to go slower than we could." That requires, potentially, coordination, but I think there would be broad support for some degree of coordination there.

Worries and what-ifs (19:07)

If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society.

I imagine you certainly talk with people who are extremely gung-ho about this prospect. What is the common response you get from people who are less enthusiastic? Do they worry about a future with no jobs? Maybe they worry about the existential kinds of issues. What's your response to those people? And how much do you worry about those things?

I think there are loads of very worrying things that we're going to be facing. One class of pushback, which I think is very common, is worries about employment. Employment is a source of income for all of us, but also a source of pride, a source of meaning. If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society. I don't think people are just going to be down with it. I think people are scared about three AI companies literally taking all the revenues that all of humanity used to be earning. It is naturally a very scary prospect.
So that's one kind of pushback, and I'm sympathetic to it. I think there are solutions, if we find a way to tax AI systems, which isn't necessarily easy, because it's very easy to move physical assets between countries. It's already a lot easier to tax labor than capital when rich people can move their assets around, and we're going to have the same problem with AI. But if we can find a way to tax it, and we maintain a good democratic country, and we can redistribute the wealth broadly, it can be solved. So I think it's a big problem, but it is doable.

Then there's the problem that some people want to stop this now because they're worried about AI killing everyone. Their literal worry is that everyone will be dead because superintelligent AI will want that to happen. I think there's a real risk there. It's definitely above one percent, in my opinion. I wouldn't go above 10 percent, myself, but I think it's very scary, and that's a great reason to slow things down. I personally don't want to stop quite yet. I think you want to stop when the AI is a bit more powerful and a bit more useful than it is today, so it can help us figure out what to do about all of this crazy stuff that's coming.

On what side of that line is AI as an AI researcher?

That's a really great question. Should we stop? I think it's very hard to stop just after you've got the AI-researcher AI, because that's when it's suddenly really easy to go very, very fast. So my out-of-the-box proposal here, which is probably very flawed, would be: when we're within a few spits' distance — not spitting distance, but if you did that three times — and we can see we're almost at that AI automating OpenAI, then you pause, because you're not going to accidentally then go all the way. It's actually still a fair distance away, but at that point it's probably a very powerful AI that can really help.

Then you pause and do what?

Great question.
So then you pause, and you use your AI systems to help you, firstly, solve the problem of AI alignment: make extra, double sure that every time we increase the notch of AI capabilities, the AI is still loyal to humanity, not to its own secret goals.

Secondly, you solve the problem of: how are we going to make sure that no one person in government, and no one CEO of an AI company, ensures that this whole AI army is loyal to them personally? How are we going to ensure that everyone, the whole world, gets influence over what this AI is ultimately programmed to do? That's the second problem.

And then there's just a whole host of other things: the unemployment that we've talked about, competition between different countries, the US and China. There's a whole host of other things that I think you want to research, figure out, and get consensus on, and then slowly ratchet up the capabilities in what is now a very safe and controlled way.

What else should we be working on? What are you working on next?

One problem I'm excited about is that people have historically worried about AI having its own goals; we need to make it loyal to humanity. But as we've got closer, it's become increasingly obvious that "loyalty to humanity" is very vague. What specifically do you want the AI to be programmed to do? I mean, it's not programmed, it's grown, but if it were programmed, if you're writing a rule book for the AI: some organizations have employee handbooks, here's the philosophy of the organization, here's how you should behave. Imagine you're doing that for the AI, but going super detailed: exactly how you want your AI assistant to behave in all kinds of situations. What should that be? Essentially, what should we align the AI to? Not any individual person; probably following the law, probably loads of other things.
I think designing the character of this AI system is a really exciting question, and if we get that right, maybe the AI can then help us solve all these other problems.

Maybe you have no interest in science fiction, but is there any film, TV show, or book that you think is useful for someone in your position to be aware of, or that you find useful in any way? Just wondering.

I think there's this great post called "AI 2027," which lays out a concrete scenario for how AI could go wrong, or how maybe it could go right. I would recommend that. I think that's the only thing that's coming to mind. A lot of the stuff I read is LessWrong, to be honest. There's a lot of stuff there that I don't love, but a lot of new ideas and interesting content.

Any fiction?

I mean, I read fiction, but honestly, I don't really love the AI fiction that I've read, because often it's quite unrealistic, and so I get a bit overly nitpicky about it. But there's this book called Harry Potter and the Methods of Rationality, which I read maybe 10 years ago, which I thought was pretty fun.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
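The software-feedback loop Davidson describes — fixed compute, improving algorithmic efficiency, and therefore a growing population of AI researchers that speeds up the next round of efficiency gains — can be captured in a toy simulation. A minimal sketch, with the caveat that every number here (the 20 percent monthly gain, the 0.5 elasticity) is an illustrative assumption, not a figure from the interview:

```python
# Toy model of the software intelligence explosion feedback loop:
# compute is held fixed, algorithmic efficiency improves, so more AI
# researchers can run on the same chips, and the larger research
# population speeds up the next round of efficiency gains.
# All parameter values are illustrative assumptions.

def simulate(months=12, base_gain=0.2, elasticity=0.5):
    """Return the multiplier on the AI-researcher population.

    base_gain: assumed monthly efficiency gain achieved by the
        starting research population.
    elasticity: how strongly extra researchers accelerate progress;
        0.0 turns the feedback off (plain compounding).
    """
    researchers = 1.0
    for _ in range(months):
        # Progress scales sublinearly with the researcher population.
        gain = base_gain * researchers ** elasticity
        # Cheaper algorithms mean proportionally more researchers can
        # run on the same fixed stock of compute.
        researchers *= 1.0 + gain
    return researchers

print(f"no feedback: {simulate(elasticity=0.0):.1f}x")
print(f"with feedback: {simulate(elasticity=0.5):.1f}x")
```

Even with diminishing returns (elasticity below 1), the feedback run pulls far ahead of plain compounding within a year, which is the qualitative point Davidson makes about superexponential-looking growth.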

LessWrong Curated Podcast
“Tell people as early as possible it's not going to work out” by habryka

LessWrong Curated Podcast

Play Episode Listen Later Nov 17, 2025 3:19


Context: Post #4 in my sequence of private Lightcone Infrastructure memos, edited for public consumption. This week's principle is more about how I want people at Lightcone to relate to community governance than it is about our internal team culture. As part of our jobs at Lightcone, we are often in charge of determining access to some resource, or membership in some group (ranging from LessWrong to the AI Alignment Forum to the Lightcone Offices). Through that, I have learned that one of the most important things to do when building things like this is to tell people as early as possible if you think they are not a good fit for the community, for both trust within the group and for the sake of the integrity and success of the group itself. E.g. when you spot a LessWrong commenter that seems clearly not on track to ever be a good contributor long-term, or someone in the Lightcone Slack who clearly seems like not a good fit, you should aim to off-ramp them as soon as possible, and generally put marginal resources into finding out whether someone is a good long-term fit early, before they invest substantially [...] --- First published: November 14th, 2025 Source: https://www.lesswrong.com/posts/Hun4EaiSQnNmB9xkd/tell-people-as-early-as-possible-it-s-not-going-to-work-out --- Narrated by TYPE III AUDIO.

The Bayesian Conspiracy
250 – Making the Good Life with Matt Freeman

The Bayesian Conspiracy

Play Episode Listen Later Nov 12, 2025 116:38


While Eneasz is busy at InkHaven, Steven sits down with Matt Freeman to talk about not-AI stuff! We had (in my opinion) a great conversation about stoic philosophy, the traps of getting too entrenched in any philosophical framework, and some of the ingredients of a happy life. LINKS It's Okay to Feel Bad for a […]

The Bayesian Conspiracy
249 – Red Heart and IABIED with Max Harms

The Bayesian Conspiracy

Play Episode Listen Later Oct 29, 2025 128:06


We talk with Max Harms on the air for the first time since 2017! He's got a new book coming out (pre-order your copy here or at Amazon) and we spend about the first half talking about If Anyone Builds It, Everyone Dies. LINKS Max's first book, Crystal Society Eneasz's audiobook of about the first […]

The Bayesian Conspiracy
248 – Seeking Alpha, with Jay

The Bayesian Conspiracy

Play Episode Listen Later Oct 15, 2025 103:12


Jay talks with us about finding Alpha – returns above the base rate – in every day life (and what this means). LINKS Optimize Everything, Jay's substack Jay on Twitter Arbor Trading Bootcamp Kelsey's argument that We Need To Be Able To Sue AI Companies 00:00:05 – Alpha with Jay 01:28:53 – Guild of the […]

Complex Systems with Patrick McKenzie (patio11)
Bits and bricks: Oliver Habryka on LessWrong, LightHaven, and community infrastructure

Complex Systems with Patrick McKenzie (patio11)

Play Episode Listen Later Oct 9, 2025 74:44


Patrick McKenzie (patio11) is joined by Oliver Habryka, who runs Lightcone Infrastructure—the organization behind both the LessWrong forum and the Lighthaven conference venue in Berkeley. They explore how LessWrong became one of the most intellectually consequential forums on the internet, the surprising challenges of running a hotel with fractal geometry, and why Berkeley's building regulations include an explicit permission to plug in a lamp. The conversation ranges from fire codes that inadvertently shape traffic deaths, to nonprofit fundraising strategies borrowed from church capital campaigns, to why coordination is scarcer than money in philanthropy.

–Full transcript available here: www.complexsystemspodcast.com/bits-and-bricks-oliver-habryka/

–Sponsor: Mercury
This episode is brought to you by Mercury, the fintech trusted by 200K+ companies — from first milestones to running complex systems. Mercury offers banking that truly understands startups and scales with them. Start today at Mercury.com. Mercury is a financial technology company, not a bank.
Banking services provided by Choice Financial Group, Column N.A., and Evolve Bank & Trust; Members FDIC.

–Links:
Lightcone Infrastructure: https://www.lightconeinfrastructure.com/
Lighthaven: https://www.lighthaven.space/
LessWrong: https://www.lesswrong.com/

–Timestamps:
(00:00) Intro
(01:08) The origins and evolution of LessWrong
(03:54) Challenges of running an online forum
(05:57) Reviving LessWrong
(14:51) The unique structure of Lighthaven
(17:35) The complexities of conference venues
(19:14) Sponsor: Mercury
(20:14) The realities of conference planning
(25:32) Challenges of maintaining Lighthaven
(29:54) Navigating permits and regulations
(37:02) Impact of fire code regulations on traffic fatalities
(39:06) Economic analysis of safety regulations
(41:39) Housing policy and construction in Berkeley
(43:30) Fundraising challenges in the nonprofit sector
(46:44) Effective altruism and fundraising dynamics
(54:20) Lessons from religious fundraising practices
(01:05:36) Reflections on fundraising
(01:13:26) Wrap

The Bayesian Conspiracy
247 – The Void, with Matt Freeman, part 2

The Bayesian Conspiracy

Play Episode Listen Later Oct 1, 2025 103:01


We continue discussing Nostalgebraist's “The Void” in the context of how to relate to LLMs. If God imagines Claude hard enough, does Claude become real? LINKS The Void Audio reading of The Void, from AskWho The referenced episode where the three of us spoke of Janus's post “Simulators” Claude-Clark post – Simulacra Welfare: Meet Clark, by […]

Conspiracy Clearinghouse
The Bicameral World of the Zizians

Conspiracy Clearinghouse

Play Episode Listen Later Sep 24, 2025 57:51


EPISODE 146 | The Bicameral World of the Zizians

One striking thing about the many stories that have appeared about the Zizians and the crimes they are accused of committing is that each one starts at a different place. Some start with the attack on a Vallejo landlord that resulted in his being run through with a samurai sword, then work backwards and forwards. Others begin their tale with the shooting of an older couple in Pennsylvania. Still others kick things off with the shooting death of a Border Patrol agent up near the Canadian border. As a result, it can be a bit difficult to get a handle on exactly what happened when, and what the several people currently in police custody are accused of.

Like what we do? Then buy us a beer or three via our page on Buy Me a Coffee. Review us here or on IMDb. And seriously, subscribe, will ya? Like, just do it.

SECTIONS
LEFT BRAIN
02:28 - Gimme Some Truth - Ziz LaSota, Effective Altruism, x-risk, MIRI, transhumanism and the Singularity, CFAR, LessWrong
12:25 - Digital Witness - Roko's Basilisk, "I Have No Mouth and I Must Scream", utilitarianism, the many-worlds interpretation, online censorship
22:58 - Adrift in Sleepwakefulness - The Zizians start as vegan anarchotranshumanists, unbucketing, the bicameral mind, unihemispheric sleep (UHS), sleep deprivation, Ziz Theory, the Rationalist Fleet, Curtis Lind offers a place to stay, the first suicide, self-blackmail
RIGHT BRAIN
35:02 - Friendship Train - The Westminster Woods protest, arrests and a lack of cooperation, Ziz dies, Curtis Lind is stabbed repeatedly, Ziz is alive, Richard and Rita Zajko are killed; Michelle Zajko, Daniel Blank and Ziz arrested; Michelle blames LessWrong, Ziz is released and vanishes, more legal issues
50:19 - Lose Control - Ophelia Bauckholt and Teresa Youngblut wander around Vermont, a firefight with the Border Patrol, Curtis Lind is killed, Maximilian Snyder dictates a letter; Zajko, Blank and Ziz arrested (again); trials are set

Music
by Fanette Ronjat More Info LessWrong on RationalWIki Roko's Basilisk on RationalWiki Zizian Murdercult summary, for those out of the loop on X by @Aella_Girl - January 29, 2025 Who is ‘Ziz'? How a mysterious group with roots in the Bay Area is linked to six deaths in the San Francisco Chronicle ‘Death upon death': Defendant in killing tied to cult-like ‘Zizian' group dictates 1,500-word letter over jail phone in the San Francisco Chronicle How a Vermont border agent's death exposed violence linked to the cultlike Zizian group on CBS News A Vermont border agent's death was the latest violence linked to the cultlike Zizian group on AP Alleged leader of cultlike ‘Zizian' group to be held without bail after arrest in The Guardian Zizians: What we know about the 'cult' linked to six deaths on the BBC The Delirious, Violent, Impossible True Story of the Zizians by Evan Ratliff in Wired Alleged Leader of Roko's Basilisk Murder Cult Says She Did Nothing Wrong, and Would Appreciate Some Vegan Food in Jail on Futurism Who Are the Zizians: Why 6 Killings Are Linked to Alleged Vegan Techie "Cult" on E! News What to Know About the Alleged Zizian "Cult" Linked to 6 Killings on E! News Possible Suicide Cluster Linked to Zizian Group, on Top of Killings on SFist Judge confirms trial date for ‘Zizian cult' murder case on Courthouse News Service Grand jury indicts accused leader of cultlike 'Zizian' group on USA Today She Wanted to Save the World From A.I. Then the Killings Started in the New York Times Three Zizians face trial together in Maryland amid sprawling federal investigation on AP Follow us on social: Facebook X (Twitter) Other Podcasts by Derek DeWitt DIGITAL SIGNAGE DONE RIGHT - Winner of a Gold Quill Award, Gold MarCom Award, AVA Digital Award Gold, Silver Davey Award, and Communicator Award of Excellence, and on numerous top 10 podcast lists.  PRAGUE TIMES - A city is more than just a location - it's a kaleidoscope of history, places, people and trends. 
This podcast looks at Prague, in the center of Europe, from a number of perspectives, including what it is now, what is has been and where it's going. It's Prague THEN, Prague NOW, Prague LATER 

The Bayesian Conspiracy
246 – The Void, with Matt Freeman, part 1

The Bayesian Conspiracy

Play Episode Listen Later Sep 17, 2025 103:12


We discuss Nostalgebraist's “The Void” in the context of how to relate to LLMs. “When you talk to ChatGPT, who or what are you talking to?” LINKS The Void Audio reading of The Void, from AskWho The referenced episode where the three of us spoke of Janus's post “Simulators” The Measure of a Man episode […]

The Bayesian Conspiracy
245 – AI Welfare, with Rob Long and Rosie Campbell of Eleos

The Bayesian Conspiracy

Play Episode Listen Later Sep 3, 2025 93:54


Do we need to be concerned for the welfare of AIs today? What about the near future? Eleos AI Research is asking exactly that. LINKS Eleos AI Research People for the Ethical Treatment of Reinforcement Learners Bees Can't Suffer? Lena, by qntm When AI Seems Conscious Experience Machines, Rob's substack The War on General Computation […]

The Bayesian Conspiracy
244 – How and Why to Form a Church, with Andrew Willsen

The Bayesian Conspiracy

Play Episode Listen Later Aug 20, 2025 83:13


Andrew Willsen tells us how incorporating as a church allows you to navigate modernity, and gives us the basic steps to doing so. LINKS Andrew's church substack – The Church of the Infinite Game To incorporate in CA file ARTS-PB-501(c)(3) … Continue reading →

The Bayesian Conspiracy
Bayes Blast 45 – Bees Blast

The Bayesian Conspiracy

Play Episode Listen Later Aug 14, 2025 13:28


Is a bee worth 1/7th of a human? Can a bee suffer at all? Nathan joins us to discuss what neural structures are needed for this question to make sense. Map of all the fruitfly neurons

AXRP - the AI X-risk Research Podcast
46 - Tom Davidson on AI-enabled Coups

AXRP - the AI X-risk Research Podcast

Play Episode Listen Later Aug 7, 2025 125:26


Could AI enable a small group to gain power over a large country, and lock in their power permanently? Often, people worried about catastrophic risks from AI have been concerned with misalignment risks. In this episode, Tom Davidson talks about a risk that could be comparably important: that of AI-enabled coups.

Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
Transcript: https://axrp.net/episode/2025/08/07/episode-46-tom-davidson-ai-enabled-coups.html

Topics we discuss, and timestamps:
0:00:35 How to stage a coup without AI
0:16:17 Why AI might enable coups
0:33:29 How bad AI-enabled coups are
0:37:28 Executive coups with singularly loyal AIs
0:48:35 Executive coups with exclusive access to AI
0:54:41 Corporate AI-enabled coups
0:57:56 Secret loyalty and misalignment in corporate coups
1:11:39 Likelihood of different types of AI-enabled coups
1:25:52 How to prevent AI-enabled coups
1:33:43 Downsides of AIs loyal to the law
1:41:06 Cultural shifts vs individual action
1:45:53 Technical research to prevent AI-enabled coups
1:51:40 Non-technical research to prevent AI-enabled coups
1:58:17 Forethought
2:03:03 Following Tom's and Forethought's research

Links for Tom and Forethought:
Tom on X / Twitter: https://x.com/tomdavidsonx
Tom on LessWrong: https://www.lesswrong.com/users/tom-davidson-1
Forethought Substack: https://newsletter.forethought.org/
Will MacAskill on X / Twitter: https://x.com/willmacaskill
Will MacAskill on LessWrong: https://www.lesswrong.com/users/wdmacaskill

Research we discuss:
AI-Enabled Coups: How a Small Group Could Use AI to Seize Power: https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power
Seizing Power: The Strategic Logic of Military Coups, by Naunihal Singh: https://muse.jhu.edu/book/31450
Experiment using AI-generated posts on Reddit draws fire for ethics concerns: https://retractionwatch.com/2025/04/28/experiment-using-ai-generated-posts-on-reddit-draws-fire-for-ethics-concerns/

Episode art by Hamish Doodles: hamishdoodles.com

The Bayesian Conspiracy
243 – Wes Is Not The Monogamy Police, with Wesly Fenza and Jennifer Kesteloot

The Bayesian Conspiracy

Play Episode Listen Later Aug 6, 2025 113:56


Wes defends his post I Am Not the Monogamy Police, while Jennifer asserts it's about more than monogamy. LINKS Wes's post I Am Not the Monogamy Police His blog, Living Within Reason The original tweets – monogamy vs charity Aella's … Continue reading →

The Bayesian Conspiracy
Bayes Blast 44 – Play-By-Post Storytelling

The Bayesian Conspiracy

Play Episode Listen Later Aug 4, 2025 16:49


Olivia from the Guild of the Rose is back to tell us about the noble and most ancient tradition of play-by-post storytelling. (Spoiler, it's the precursor to glowfic!)

Joshua Citarella
Doomscroll 26: Aella

Joshua Citarella

Play Episode Listen Later Jul 28, 2025 77:00


My guest is Aella, a writer, blogger and sex worker. She writes the highly popular blog Knowingless. She is a member of the online community LessWrong. We discuss the emergence of AI, the sex industry (online and offline), robot gfs and libertarian transhumanism. Aella describes her evangelical religious upbringing and how she learned to navigate the secular world. We explore the overlap of niche online politics and fetishes, the Woke Wars, debate culture and free speech. You can get access to the full catalog for Doomscroll and more by becoming a paid supporter: www.patreon.com/joshuacitarella joshuacitarella.substack.com/subscribe

The Bayesian Conspiracy
242 – TracingWoodgrains, live at Manifest 2025

The Bayesian Conspiracy

Play Episode Listen Later Jul 23, 2025 93:03


Eneasz sits down with Tracing Woodgrains before a live audience at Manifest 2025 to discuss a wide range of topics. Then we follow up some more afterwards. LINKS Tracing Woodgrains on Twitter and at his Substack A reddit history of what … Continue reading →

The Bayesian Conspiracy
Bayes Blast 43 – Die-ing to Intuit Bayes' Theorem

The Bayesian Conspiracy

Play Episode Listen Later Jul 22, 2025 13:19


Olivia is a member of the Guild of the Rose and a total badass. Enjoy the intuitive and fun lesson in Bayesian reasoning she shared with me at VibeCamp.

The Bayesian Conspiracy
Bonus – AI Village Hosts An Event For Humans

The Bayesian Conspiracy

Play Episode Listen Later Jul 11, 2025 36:24


Four AIs recruited a human to host a story-telling event in Dolores Park. Larissa Schiavo is this human. She tells of her interaction with the AIs, the story they wrote, and the meeting between human and machine in Dolores Park. … Continue reading →

The Bayesian Conspiracy
241 – Doom Debates, with Liron Shapira

The Bayesian Conspiracy

Play Episode Listen Later Jul 9, 2025 111:34


Liron Shapira debates AI luminaries and public intellectuals on the imminent possibility of human extinction. Let's get on the P(Doom) Train. LINKS Doom Debates on YouTube Doom Debates podcast Most Watched Debate – Mike Israetel Liron's current favorite debate – … Continue reading →

The Bayesian Conspiracy
240 – How To Live Well With High P(Doom) – with Ben Pace, Brandon Hendrickson, Miranda Dixon-Luinenburg

The Bayesian Conspiracy

Play Episode Listen Later Jun 25, 2025 58:23


Many of us have a high P(Doom) — a belief new AI tools could cause human extinction in the very near future. How can one live a good life in the face of this? We start with a panel discussion … Continue reading →

The Bayesian Conspiracy
Bayes Blast 42 – Epic AI Music

The Bayesian Conspiracy

Play Episode Listen Later Jun 14, 2025 16:26


David Youssef used Claude and Suno to make some truly awesome music. He tells us how he did it and some of his favorite lyrics. Check out the Spotify playlist or the Youtube playlist He's also one of the cofounders … Continue reading →

The Bayesian Conspiracy
239 – SymbyAI

The Bayesian Conspiracy

Play Episode Listen Later Jun 11, 2025 82:57


Steven works at SymbyAI, a startup that's bringing AI into research review and replication. We talk with founder Ashia Livaudais about improving how we all Do Science. Also – If Anyone Builds It Everyone Dies preorders here, or at Amazon. … Continue reading →

The Bayesian Conspiracy
238 – Part 2 of How A Rationalist Becomes A Christian

The Bayesian Conspiracy

Play Episode Listen Later May 28, 2025 86:10


We speak with a long-time Denver rationalist who's converting to Christianity about why. Eneasz can't get over the abandonment of epistemics. 🙁 This is Part 2, see the previous episode (here) for Part 1. LINKS Thomas Ambrose on Twitter Paid … Continue reading →

The Bayesian Conspiracy
237 – How A Rationalist Becomes A Christian

The Bayesian Conspiracy

Play Episode Listen Later May 14, 2025 86:53


We speak with a long-time Denver rationalist who's converting to Christianity about why. Part one, it turns out. LINKS Thomas Ambrose on Twitter The Rationalist Summer Trifecta: Manifest 2025 LessOnline 2025 VibeCamp 2025 00:00:05 – OK so why? 01:24:55 – … Continue reading →

The Bayesian Conspiracy
236 – Twilight of the Edgelords with Liam Nolan

The Bayesian Conspiracy

Play Episode Listen Later Apr 30, 2025 107:29


Eneasz and Liam discuss Scott Alexander's post “Twilight of the Edgelords,” an exploration of Truth, Morality, and how one balances love of truth vs not destabilizing the world economy and political regime. CORRECTION: Scott did make an explicitly clear pro … Continue reading →

The Bayesian Conspiracy
235 – Gender Differences, with Wes and Jen

The Bayesian Conspiracy

Play Episode Listen Later Apr 16, 2025 144:14


Wes Fenza and Jen Kesteloot join us to talk about whether there's significant personality differences between men and women, and what (if anything) we should do about that. LINKS Wes's post Men and Women are Not That Different Jacob's quoted … Continue reading →

The Bayesian Conspiracy
234 – GiveDirectly, with Nick Allardice

The Bayesian Conspiracy

Play Episode Listen Later Apr 2, 2025 112:54


We speak to Nick Allardice, President & CEO of GiveDirectly. Afterwards Steven and Eneasz get wrapped up talking about community altruism for a bit. LINKS Give Directly GiveDirectly Tech Innovation Fact Sheet 00:00:05 – Give Directly with Nick Allardice 01:12:19 … Continue reading →

Software Defined Talk
Episode 510: Vibe Code This Baby

Software Defined Talk

Play Episode Listen Later Mar 14, 2025 56:25


This week, we discuss Discord's IPO plans, Cursor's big raise, and how much coding developers actually do. Plus, is Southwest making a huge mistake with bag fees and assigned seats?

Watch the YouTube Live Recording of Episode 510 (https://www.youtube.com/live/JmkVmwAMw6U?si=ywGs_F3DImUFC0LZ)

Runner-up Titles
Cote's not here, so we'll keep it tight
They treated me like I'm stupid
I'm not here to buy into the culture
Go eats some rocks and glue
My head is full of simultaneous thoughts
Fly high Icarus, fly high
We are certain of the uncertainty

Rundown
Southwest Airlines shifts to paid baggage policy to lift earnings (https://www.reuters.com/business/aerospace-defense/southwest-airlines-shifts-paid-baggage-policy-lift-earnings-2025-03-11/)
Discord in Early Talks With Bankers for Potential I.P.O. (https://www.nytimes.com/2025/03/05/technology/discord-ipo.html?unlocked_article_code=1.1k4.eQrV.NtKK_GpiT-Di&smid=nytcore-ios-share&referringSource=articleShare&sgrp=p)

IDE Follow up
DevTasks outside of the IDE (PDF) (https://www.microsoft.com/en-us/research/uploads/prod/2019/04/devtime-preprint-TSE19.pdf)
How Much Are LLMs Actually Boosting Real-World Programmer Productivity? — LessWrong (https://www.lesswrong.com/posts/tqmQTezvXGFmfSe7f/how-much-are-llms-actually-boosting-real-world-programmer)
AI Startup Anysphere in Talks for Close to $10 Billion Valuation (https://www.bloomberg.com/news/articles/2025-03-07/ai-startup-anysphere-in-talks-for-close-to-10-billion-valuation)

Market Trends
Nvidia Is Down 27% From Its Peak (https://www.fool.com/investing/2025/03/07/nvidia-stock-down-27-from-peak-history-says-this/)
Millennium Loses $900 Million on Strategy Roiled by Market Chaos (https://www.bloomberg.com/news/articles/2025-03-08/millennium-loses-900-million-on-strategy-roiled-by-market-chaos)

Relevant to your Interests
OpenAI executives have told some investors about plans for a $20,000/month agent (https://www.theinformation.com/articles/openai-plots-charging-20-000-a-month-for-phd-level-agents)

Behind the Bastards
Part One: The Zizians: How Harry Potter Fanfic Inspired a Death Cult

Behind the Bastards

Play Episode Listen Later Mar 11, 2025 72:27 Transcription Available


Earlier this year a Border Patrol officer was killed in a shoot-out with people who have been described as members of a trans vegan AI death cult. But who are the Zizians, really? Robert sits down with David Gborie to trace their development, from part of the Bay Area Rationalist subculture to killers. (4 Part series)

Sources:
https://medium.com/@sefashapiro/a-community-warning-about-ziz-76c100180509
https://web.archive.org/web/20230201130318/https://sinceriously.fyi/rationalist-fleet/
https://knowyourmeme.com/memes/infohazard
https://web.archive.org/web/20230201130316/https://sinceriously.fyi/net-negative/
Wayback Machine: The Zizians
Spectral Sight – Sinceriously
True Hero Contract – Sinceriously
Schelling Orders – Sinceriously
Glossary – Sinceriously
https://web.archive.org/web/20230201130330/https://sinceriously.fyi/my-journey-to-the-dark-side/
https://web.archive.org/web/20230201130302/https://sinceriously.fyi/glossary/#zentraidon
https://web.archive.org/web/20230201130259/https://sinceriously.fyi/vampires-and-more-undeath/
https://x.com/orellanin?s=21&t=F-n6cTZFsKgvr1yQ7oHXRg
https://zizians.info/
according to The Boston Globe
Inside the ‘Zizians': How a cultish crew of radical vegans became linked to killings across the United States | The Independent
Silicon Valley ‘Rationalists' Linked to 6 Deaths
The Delirious, Violent, Impossible True Story of the Zizians | WIRED
Good Group and Pasek's Doom – Sinceriously
Mana – Sinceriously
Effective Altruism's Problems Go Beyond Sam Bankman-Fried - Bloomberg
The Zizian Facts - Google Docs
Several free CFAR summer programs on rationality and AI safety - LessWrong 2.0 viewer
This guy thinks killing video game characters is immoral | Vox
Inadequate Equilibria: Where and How Civilizations Get Stuck
Eliezer Yudkowsky comments on On Terminal Goals and Virtue Ethics - LessWrong 2.0 viewer
SquirrelInHell: Happiness Is a Chore
PLUM OF DISCORD — I Became a Full-time Internet Pest and May Not...
Roko Harassment of PlumOfDiscord Composited – Sinceriously
Intersex Brains And Conceptual Warfare – Sinceriously
Infohazardous Glossary – Sinceriously
SquirrelInHell-Decision-Theory-and-Suicide.pdf - Google Drive
The Matrix is a System – Sinceriously
A community alert about Ziz. Police investigations, violence, and… | by SefaShapiro | Medium
PLUM OF DISCORD (Posts tagged cw-abuse)
Timeline: Violence surrounding the Zizians leading to Border Patrol agent shooting

See omnystudio.com/listener for privacy information.