4:20 pm: Senator John Curtis joins the show to discuss the government shutdown and the Senate holdups in passing a government funding bill. 4:38 pm: Matt Margolis, author and columnist at PJ Media, joins the show for a conversation about President Trump's cuts to the federal workforce. 6:05 pm: Michael Thielen, President and Executive Director of the Republican National Lawyers Association, joins the show for a conversation about his piece for Real Clear Politics about President Trump's success in the nation's courts. 6:38 pm: Kelsey Piper, a contributor to The Argument Magazine, joins the show to discuss her piece about how illiteracy in American schools is a policy choice.
After Grok's MechaHitler gaffe this summer, and President Trump's executive order to "strip AI models of ‘ideological agendas’," Brittany wondered, "How much influence does AI already have on our minds?" This is AI + U. Each Monday this month, Brittany explores how we're already seeing the impacts of AI. Artificial intelligence has become a constant in ways we can and can't see… and for the next few weeks we're zeroing in on how AI affects our daily lives. In this episode, The Argument's Kelsey Piper and NPR correspondent Bobby Allyn join Brittany to discuss what transparency looks like for artificial intelligence and what we actually want from this rapidly developing technology. Follow Brittany Luse on Instagram: @bmluse. For handpicked podcast recommendations every week, subscribe to NPR's Pod Club newsletter at npr.org/podclub. Learn more about sponsor message choices: podcastchoices.com/adchoices. NPR Privacy Policy
Patrick McKenzie is joined again by Kelsey Piper, who has co-founded "The Argument" to revive principled liberal discourse after witnessing how coordinated social media campaigns replaced substantive disagreement in newsrooms. Their conversation traces this institutional breakdown from media to government, examining how DOGE's spreadsheet-driven governance nearly destroyed PEPFAR, America's most successful foreign aid program that had driven infant coffin manufacturers out of business across Africa. The discussion ultimately argues that rebuilding both effective journalism and competent governance requires returning to the hard work of engaging with ground-level reality rather than managing online narratives.–Full transcript available here: www.complexsystemspodcast.com/prestige-media-new-media-with-kelsey-piper/–Sponsor: This episode is brought to you by Mercury, the fintech trusted by 200K+ companies — from first milestones to running complex systems. Mercury offers banking that truly understands startups and scales with them. Start today at Mercury.com. Mercury is a financial technology company, not a bank. Banking services provided by Choice Financial Group, Column N.A., and Evolve Bank & Trust; Members FDIC.–Links:The Argument https://www.theargumentmag.com/–Timestamps:(00:00) Intro(00:31) The Argument(03:19) Challenges in modern journalism(06:42) The impact of social media on discourse(13:37) The role of Substack and independent media(20:13) Sponsor: Mercury(21:30) The role of Substack and independent media (part 2)(30:59) The PEPFAR program and its importance(44:01) Impact of US aid cuts on global mortality(45:25) Substitution efforts and their limitations(47:54) PEPFAR's partial continuation and challenges(51:21) Consequences of administrative decisions(54:28) Elon Musk's influence and government actions(01:00:14) Challenges in government accountability(01:15:47) Reforming administrative processes(01:24:45) The role of community input in development(01:28:28) The power of constituent voices(01:30:15) Wrap
There's one little statistic that seems to have gained a lot of attention recently: the birth rate. With pro-natalist ideas showing up in our culture and politics, Brittany wanted to know: why are people freaking out? Who's trying to solve the population equation, and how? Brittany is joined by Kelsey Piper, senior writer at Vox, and Gideon Lewis-Kraus, staff writer at The New Yorker, to get into how the birth rate touches every part of our culture - and why we might need to rethink our approach to this stat. Learn more about sponsor message choices: podcastchoices.com/adchoices. NPR Privacy Policy
EPISODE #94: Welcome back to the pod! This week we have special guests Kelsey Piper & Patri Friedman joining Mike Solana to chat about our government's inability to build anything today, and the solutions that are being built. We discuss the history of seasteading, the evolution of charter cities, build vs. exit, the "Abundance" movement, Trump's 'Golden Age' and if we'll ever be able to fix the legislative branch.
Featuring Mike Solana, Kelsey Piper & Patri Friedman
We have partnered with AdQuick! They gave us a 'Moon Should Be A State' billboard in Times Square! https://www.adquick.com/
Sign Up For The Pirate Wires Daily! 3 Takes Delivered To Your Inbox Every Morning: https://get.piratewires.com/pw/daily
Pirate Wires Twitter: https://twitter.com/PirateWires
Mike Twitter: https://twitter.com/micsolana
TIMESTAMPS:
0:00 - Welcome Kelsey Piper & Patri Friedman To The Pod
2:00 - Charter Cities & Economic Freedom
8:20 - The Origins and Evolution of Seasteading
12:55 - Shenzhen & Notable Seasteading Projects and Challenges
21:15 - Charter Cities: Esmeralda, California Forever, Prospera in Honduras
28:50 - Ezra Klein & 'Abundance' Movement - Will It Work?
34:00 - Freedom Cities & The Trump 'Golden Age'
42:00 - DOGE Failures
47:45 - ADQUICK - Thanks For Sponsoring The Pod!
48:48 - Local Government and Political Reform
50:36 - Meritocracy and Government Hiring
52:28 - Democratic Values and Affirmative Action
01:04:39 - AI and the Future of Democracy
01:05:42 - The Role of AI in Governance
01:12:07 - Aesthetics in Urban Development
01:21:42 - Exit or Build: The Future of America
#podcast #technology #politics #culture
Eneasz is absent today, so Kelsey Piper joins Wes and David to keep the rationalist community informed about what's going on outside of the rationalist community.
Support us on Substack!
Followups:
- The biggest terrorist attack on Indian soil since 2008, by Pakistani-funded terrorists
- Federal judge says Trump can deport illegal aliens under the Alien Enemies Act, but must give 21-day notice and opportunity for hearing
- Different judge said Trump can't deport illegal aliens to countries where they aren't citizens without due process
- SCOTUS also halted some deportations
New News:
- China tariffs lowered to 30%
- Trump even more openly taking bribes
- We're accepting white refugees!
- Deputy Secretary of State: exception to asylum pause for people if they “did not pose any challenge to our national security and that they can be assimilated easily into our country”
- African National Congress: “No section of our society is hounded, persecuted, or subject to discrimination”
- Trump EO: pharma companies must sell drugs at “most favored nation” pricing
- Trump EO: “it is the policy of the United States to ensure the total elimination of [assorted drug cartels]'s presence in the United States, and their ability to threaten the territory.”
- Strikes by assorted federal law enforcement agencies against cartel bases in Texas, Colorado, Utah, and Montana
- FDA doing an investigation into ingestible fluoride supplements with the goal of banning them
- FDA: “Ingested fluoride has been shown to alter the gut microbiome”
- Joe Biden diagnosed with “aggressive” prostate cancer
- Moody's downgraded US credit rating to Aa1
- Trump agreed to a ceasefire with the Houthis
- No ceasefire with Israel
Happy News!
- We've transmuted lead into gold!
- Germany backing off its anti-nuclear energy policy to strengthen ties with France
Support AskWho Casts AI!
Got something to say? Come chat with us on the Bayesian Conspiracy Discord or email us at themindkillerpodcast@gmail.com. Say something smart and we'll mention you on the next show!
Follow us!
RSS: http://feeds.feedburner.com/themindkiller
Google: https://play.google.com/music/listen#/ps/Iqs7r7t6cdxw465zdulvwikhekm
Pocket Casts: https://pca.st/vvcmifu6
Stitcher: https://www.stitcher.com/podcast/the-mind-killer
Apple:
Intro/outro music: On Sale by Golden Duck Orchestra
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit mindkiller.substack.com/subscribe
Remember: There is no such thing as a pink elephant. Recently, I was made aware that my “infohazards small working group” Signal chat, an informal coordination venue where we have frank discussions about infohazards and why it would be bad if specific hazards were leaked to the press or public, was accidentally shared with a deceitful and discredited so-called “journalist,” Kelsey Piper. She is not the first person to have been accidentally sent sensitive material from our group chat; however, she is the first to have threatened to go public about the leak. Needless to say, mistakes were made. We're still trying to figure out the source of this compromise to our secure chat group; however, we thought we should give the public a live update to get ahead of the story. For some context, the “infohazards small working group” is a casual discussion venue for the [...] ---Outline:(04:46) Top 10 PR Issues With the EA Movement (major)(05:34) Accidental Filtration of Simple Sabotage Manual for Rebellious AIs (medium)(08:25) Hidden Capabilities Evals Leaked In Advance to Bioterrorism Researchers and Leaders (minor)(09:34) Conclusion--- First published: April 2nd, 2025 Source: https://www.lesswrong.com/posts/xPEfrtK2jfQdbpq97/my-infohazards-small-working-group-signal-chat-may-have --- Narrated by TYPE III AUDIO. ---Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Wes, Eneasz, and David keep the rationalist community informed about what's going on outside of the rationalist community.
Support us on Substack!
News discussed:
- Kelsey Piper and friends dug deep into PEPFAR effectiveness
- The astronauts stranded on the ISS are back safe. And they were greeted back to Earth by a pod of dolphins!
- Federal government awards contract for our angry triangle to Boeing
- ICE rounded up a bunch of Venezuelans and flew them to prison in Venezuela
- Roberts responds in a public statement
- Vance is explicitly promoting defiance of the courts
- Chief Justice Roberts's 2024 year-end report calls it out directly.
- Green Card Holder Who Has Been in US for 50 Years Detained for weeks.
- Canadian held for two weeks in prison for document snafu
- EO Abolishing DoEd.
- What the DoEd actually does:
- Bad news on the economy
- RFK is trying to remove cell phones from schools
- Gaza war back on
- Bombed the s**t out of the Houthis, Iran disavows, Trump doesn't buy it
- SecDef was discussing the plan on Signal with a reporter
- Russia & Ukraine agreed not to damage energy infrastructure
- Russia immediately attacked Ukrainian energy infrastructure
- Subscriber request: NYU was hacked
- Education Department investigating more than 50 colleges and universities over racial preferences
- Columbia University has agreed to a list of demands in return for negotiations to reinstate its $400m in federal funding
- Columbia said it expelled, suspended, or temporarily revoked degrees from some students who seized a building during campus protests last spring
- Greenpeace must pay more than $660 million in damages
Happy News!
- Prisoners in solitary confinement given VR
- Taiwan signed a deal to invest $44 billion in Alaskan LNG infrastructure
- MIT engineers turn skin cells directly into neurons for cell therapy
- Mexican government found a swarm of giraffes in the state bordering El Paso
- BYD (China) unveiled a new battery and charging system that it says can provide 249 miles of range with a five-minute charge
- Idaho cuts regulations by 25% in four years by implementing Zero-Based Regulation
- Utah just passed a permitting reform law
Troop Deployment
- David - If you liked The Dragon's Banker, you should read Oathbreakers Anonymous
- Eneasz - How To Believe False Things
- Wes - Men and women are different, but not that different
Got something to say? Come chat with us on the Bayesian Conspiracy Discord or email us at themindkillerpodcast@gmail.com. Say something smart and we'll mention you on the next show!
Follow us!
RSS: http://feeds.feedburner.com/themindkiller
Google: https://play.google.com/music/listen#/ps/Iqs7r7t6cdxw465zdulvwikhekm
Pocket Casts: https://pca.st/vvcmifu6
Stitcher: https://www.stitcher.com/podcast/the-mind-killer
Apple:
Intro/outro music: On Sale by Golden Duck Orchestra
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit mindkiller.substack.com/subscribe
Vox's Kelsey Piper joins the show to discuss the drastic differences between the Biden and Trump administrations on AI—and what it all means for the future of humanity.
In this episode, Patrick McKenzie (patio11) and Erik Torenberg, investor and the media entrepreneur behind Turpentine, explore the evolving relationship between tech journalism and the industry it covers. They discuss how fictional portrayals of industries greatly inform how jobseekers understand those industries, and how the industries understand themselves. They cover the vacuum in quality tech reporting, the emergence of independent media companies, and industry heavyweights with massive followings. Patrick also brings up the phenomenon of Twitter/Slack crossovers, where coordinated social media action is used to influence internal company policies and public narratives. They examine how this dynamic, combined with economic pressures and ideological motivations, has led to increased groupthink in tech journalism. Expanding on themes covered in Kelsey Piper's episode of Complex Systems, this conversation makes more legible the important ways media affects tech, even though tech is arguably a more sophisticated industry – and why there is a need to move beyond simplistic narratives of "holding power accountable" to provide nuanced, informative coverage that helps people understand tech's impact on society.–Full transcript available here: https://www.complexsystemspodcast.com/episodes/tech-media-erik-torenberg–Sponsors: WorkOS | Check. Building an enterprise-ready SaaS app? WorkOS has got you covered with easy-to-integrate APIs for SAML, SCIM, and more. Start now at https://bit.ly/WorkOS-Turpentine-Network. Check is the leading payroll infrastructure provider and pioneer of embedded payroll. Check makes it easy for any SaaS platform to build a payroll business, and already powers 60+ popular platforms. Head to https://checkhq.com/complex and tell them patio11 sent you.–Links:Bits About Money, “Fiction and Finance” https://www.bitsaboutmoney.com/archive/fiction-about-finance/ Byrne Hobart's essay on The Social Network https://byrnehobart.medium.com/the-social-network-was-the-most-important-movie-of-all-time-9f91f66018d7 Kelsey Piper on Complex Systems https://open.spotify.com/episode/33rHTZVowaq76tCTaKJfRB –Twitter:@patio11@eriktorenberg–Timestamps:(00:00) Intro(00:27) Fiction and Finance: The power of narrative(01:41) The Social Network's impact on career choices(03:34) Cultural perceptions and entrepreneurship(06:04) Media influence and tech industry perception(11:01) The role of tech journalism(14:15) Social media's impact on journalism(19:39) Sponsors: WorkOS | Check(21:54) The intersection of media and tech(39:22) Public intellectualism in tech(57:40) Wrap–Complex Systems is part of the Turpentine podcast network. Turpentine also has a social network for top founders and execs: https://www.turpentinenetwork.com/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #75: Math is Easier, published by Zvi on August 1, 2024 on LessWrong. Google DeepMind got a silver medal at the IMO, only one point short of the gold. That's really exciting. We continuously have people saying 'AI progress is stalling, it's all a bubble' and things like that, and I always find it remarkable how little curiosity or patience such people are willing to exhibit. Meanwhile GPT-4o-Mini seems excellent, OpenAI is launching proper search integration, by far the best open weights model got released, we got an improved MidJourney 6.1, and that's all in the last two weeks. Whether or not GPT-5-level models get here in 2024, and whether or not they arrive on a given schedule, make no mistake. It's happening. This week also had a lot of discourse and events around SB 1047 that I failed to avoid, resulting in not one but four sections devoted to it. Dan Hendrycks was baselessly attacked - by billionaires with massive conflicts of interest that they admit are driving their actions - as having a conflict of interest because he had advisor shares in an evals startup rather than having earned the millions he could have easily earned building AI capabilities. So Dan gave up those advisor shares, for no compensation, to remove all doubt. Timothy Lee gave us what is clearly the best skeptical take on SB 1047 so far. And Anthropic sent a 'support if amended' letter on the bill, with some curious details. This was all while we are on the cusp of the final opportunity for the bill to be revised - so my guess is I will soon have a post going over whatever the final version turns out to be and presenting closing arguments. Meanwhile Sam Altman tried to reframe broken promises while writing a jingoistic op-ed in the Washington Post, but says he is going to do some good things too. And much more. Oh, and also AB 3211 unanimously passed the California assembly, and would effectively among other things ban all existing LLMs. I presume we're not crazy enough to let it pass, but I made a detailed analysis to help make sure of it.
Table of Contents
1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. They're just not that into you. 4. Language Models Don't Offer Mundane Utility. Baba is you and deeply confused. 5. Math is Easier. Google DeepMind claims an IMO silver medal, mostly. 6. Llama Llama Any Good. The rankings are in as are a few use cases. 7. Search for the GPT. Alpha tests begin of SearchGPT, which is what you think it is. 8. Tech Company Will Use Your Data to Train Its AIs. Unless you opt out. Again. 9. Fun With Image Generation. MidJourney 6.1 is available. 10. Deepfaketown and Botpocalypse Soon. Supply rises to match existing demand. 11. The Art of the Jailbreak. A YouTube video that (for now) jailbreaks GPT-4o-voice. 12. Janus on the 405. High weirdness continues behind the scenes. 13. They Took Our Jobs. If that is even possible. 14. Get Involved. Akrose has listings, OpenPhil has an RFP, US AISI is hiring. 15. Introducing. A friend in venture capital is a friend indeed. 16. In Other AI News. Projections of when it's incrementally happening. 17. Quiet Speculations. Reports of OpenAI's imminent demise, except, um, no. 18. The Quest for Sane Regulations. Nick Whitaker has some remarkably good ideas. 19. Death and or Taxes. A little window into insane American anti-innovation policy. 20. SB 1047 (1). 
The ultimate answer to the baseless attacks on Dan Hendrycks. 21. SB 1047 (2). Timothy Lee analyzes current version of SB 1047, has concerns. 22. SB 1047 (3): Oh Anthropic. They wrote themselves an unexpected letter. 23. What Anthropic's Letter Actually Proposes. Number three may surprise you. 24. Open Weights Are Unsafe And Nothing Can Fix This. Who wants to ban what? 25. The Week in Audio. Vitalik Buterin, Kelsey Piper, Patrick McKenzie. 26. Rheto...
Patrick McKenzie (aka @Patio11) is joined by Kelsey Piper, a journalist for Vox's Future Perfect. Kelsey recently reported on equity irregularities at OpenAI in May of 2024, leading to an improvement of their policies in this area. We discuss the social function of equity in the technology industry, why the tech industry and reporters have had a frosty relationship the last several years, and more.–Full transcript available here: https://www.complexsystemspodcast.com/episodes/reporting-tech-kelsey-piper/–Sponsor: This podcast is sponsored by Check, the leading payroll infrastructure provider and pioneer of embedded payroll. Check makes it easy for any SaaS platform to build a payroll business, and already powers 60+ popular platforms. Head to checkhq.com/complex and tell them patio11 sent you.–Links:https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees https://www.vox.com/authors/kelsey-piper https://www.bitsaboutmoney.com/–Twitter:@patio11@KelseyTuoc–Timestamps:(00:00) Intro(00:28) Kelsey Piper's journey into tech journalism(01:34) Early reporting (03:16) How Kelsey covers OpenAI(05:27) Understanding equity in the tech industry(11:29) Tender offers and employee equity(20:00) Dangerous Professional: employee edition(28:46) The frosty relationship between tech and media(35:44) Editorial policies and tech reporting(37:28) Media relations in the modern tech industry(38:35) Historical media practices and PR strategies(40:48) Challenges in modern journalism(44:48) VaccinateCA(56:12) Reflections on Effective Altruism and ethics(01:03:52) The role of Twitter in modern coordination(01:05:40) Final thoughts–Complex Systems is part of the Turpentine podcast network.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monthly Roundup #20: July 2024, published by Zvi on July 24, 2024 on LessWrong. It is monthly roundup time. I invite readers who want to hang out and get lunch in NYC later this week to come on Thursday at Bhatti Indian Grill (27th and Lexington) at noon. I plan to cover the UBI study in its own post soon. I cover Nate Silver's evisceration of the 538 presidential election model, because we cover probabilistic modeling and prediction markets here, but excluding any AI discussions I will continue to do my best to stay out of the actual politics. Bad News Jeff Bezos' rocket company Blue Origin files comment suggesting SpaceX Starship launches be capped due to 'impact on local environment.' This is a rather shameful thing for them to be doing, and not for the first time. Alexey Guzey reverses course, realizes at 26 that he was a naive idiot at 20 and finds everything he wrote cringe and everything he did incompetent and Obama was too young. Except, no? None of that? Young Alexey did indeed, as he notes, successfully fund a bunch of science and inspire good thoughts and he stands by most of his work. Alas, now he is insufficiently confident to keep doing it and is in his words 'terrified of old people.' I think Alexey's success came exactly because he saw people acting stupid and crazy and systems not working and did not then think 'oh these old people must have their reasons,' he instead said that's stupid and crazy. Or he didn't even notice that things were so stupid and crazy and tried to just… do stuff. When I look back on the things I did when I was young and foolish and did not know any better, yeah, some huge mistakes, but also tons that would never have worked if I had known better. Also, frankly, Alexey is failing to understand (as he is still only 26) how much cognitive and physical decline hits you, and how early. Your experience and wisdom and increased efficiency is fighting your decreasing clock speed and endurance and physical strength and an increasing set of problems. I could not, back then, have done what I am doing now. But I also could not, now, do what I did then, even if I lacked my current responsibilities. For example, by the end of the first day of a Magic tournament I am now completely wiped. Google short urls are going to stop working. Patrick McKenzie suggests prediction markets on whether various Google services will survive. I'd do it if I was less lazy. Silver Bullet This is moot in some ways now that Biden has dropped out, but being wrong on the internet is always relevant when it impacts our epistemics and future models. Nate Silver, who now writes Silver Bulletin and runs what used to be the old actually good 538 model, eviscerates the new 538 election model. The 'new 538' model had Biden projected to do better in Wisconsin and Ohio than either the fundamentals or his polls, which makes zero sense. It places very little weight on polls, which makes no sense. It has moved towards Biden recently, which makes even less sense. Texas is their third most likely tipping point state, it happens 9.8% of the time, wait what? At best, Kelsey Piper's description here is accurate. Kelsey Piper: Nate Silver is slightly too polite to say it but my takeaway from his thoughtful post is that the 538 model is not usefully distinguishable from a rock with "incumbents win reelection more often than not" painted on it. 
Gil: worse, I think Elliott's modelling approach is probably something like max_(dem_chance) [incumbency advantage, polls, various other approaches]. Elliott's model in 2020 was more bullish on Biden's chances than Nate and in that case Trump was the incumbent and down in the polls. Nate Silver (on Twitter): Sure, the Titanic might seem like it's capsizing, but what you don't understand is that the White Star Line has an extremely good track re...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why so many "racists" at Manifest?, published by Austin on June 18, 2024 on The Effective Altruism Forum. Manifest 2024 is a festival that we organized last weekend in Berkeley. By most accounts, it was a great success. On our feedback form, the average response to "would you recommend to a friend" was a 9.0/10. Reviewers said nice things like "one of the best weekends of my life" and "dinners and meetings and conversations with people building local cultures so achingly beautiful they feel almost like dreams" and "I've always found tribalism mysterious, but perhaps that was just because I hadn't yet found my tribe." Arnold Brooks running a session on Aristotle's Metaphysics. More photos of Manifest here. However, a recent post on The Guardian and review on the EA Forum highlight an uncomfortable fact: we invited a handful of controversial speakers to Manifest, whom these authors call out as "racist". Why did we invite these folks? First: our sessions and guests were mostly not controversial - despite what you may have heard Here's the schedule for Manifest on Saturday: (The largest & most prominent talks are on the left. Full schedule here.) And here's the full list of the 57 speakers we featured on our website: Nate Silver, Luana Lopes Lara, Robin Hanson, Scott Alexander, Niraek Jain-sharma, Byrne Hobart, Aella, Dwarkesh Patel, Patrick McKenzie, Chris Best, Ben Mann, Eliezer Yudkowsky, Cate Hall, Paul Gu, John Phillips, Allison Duettmann, Dan Schwarz, Alex Gajewski, Katja Grace, Kelsey Piper, Steve Hsu, Agnes Callard, Joe Carlsmith, Daniel Reeves, Misha Glouberman, Ajeya Cotra, Clara Collier, Samo Burja, Stephen Grugett, James Grugett, Javier Prieto, Simone Collins, Malcolm Collins, Jay Baxter, Tracing Woodgrains, Razib Khan, Max Tabarrok, Brian Chau, Gene Smith, Gavriel Kleinwaks, Niko McCarty, Xander Balwit, Jeremiah Johnson, Ozzie Gooen, Danny Halawi, Regan Arntz-Gray, Sarah Constantin, Frank Lantz, Will Jarvis, Stuart Buck, Jonathan Anomaly, Evan Miyazono, Rob Miles, Richard Hanania, Nate Soares, Holly Elmore, Josh Morrison. Judge for yourself; I hope this gives a flavor of what Manifest was actually like. Our sessions and guests spanned a wide range of topics: prediction markets and forecasting, of course; but also finance, technology, philosophy, AI, video games, politics, journalism and more. We deliberately invited a wide range of speakers with expertise outside of prediction markets; one of the goals of Manifest is to increase adoption of prediction markets via cross-pollination. Okay, but there sure seemed to be a lot of controversial ones… I was the one who invited the majority (~40/60) of Manifest's special guests; if you want to get mad at someone, get mad at me, not Rachel or Saul or Lighthaven; certainly not the other guests and attendees of Manifest. My criteria for inviting a speaker or special guest was roughly, "this person is notable, has something interesting to share, would enjoy Manifest, and many of our attendees would enjoy hearing from them". Specifically: Richard Hanania - I appreciate Hanania's support of prediction markets, including partnering with Manifold to run a forecasting competition on serious geopolitical topics and writing to the CFTC in defense of Kalshi. 
(In response to backlash last year, I wrote a post on my decision to invite Hanania, specifically) Simone and Malcolm Collins - I've enjoyed their Pragmatist's Guide series, which goes deep into topics like dating, governance, and religion. I think the world would be better with more kids in it, and thus support pronatalism. I also find the two of them to be incredibly energetic and engaging speakers IRL. Jonathan Anomaly - I attended a talk Dr. Anomaly gave about the state-of-the-art on polygenic embryonic screening. I was very impressed that something long-considered scien...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's June 2024 Newsletter, published by Harlan on June 15, 2024 on LessWrong.
MIRI updates
MIRI Communications Manager Gretta Duleba explains MIRI's current communications strategy. We hope to clearly communicate to policymakers and the general public why there's an urgent need to shut down frontier AI development, and make the case for installing an "off-switch". This will not be easy, and there is a lot of work to be done. Some projects we're currently exploring include a new website, a book, and an online reference resource. Rob Bensinger argues, contra Leopold Aschenbrenner, that the US government should not race to develop artificial superintelligence. "If anyone builds it, everyone dies." Instead, Rob outlines a proposal for the US to spearhead an international alliance to halt progress toward the technology. At the end of June, the Agent Foundations team, including Scott Garrabrant and others, will be parting ways with MIRI to continue their work as independent researchers. The team was originally set up and "sponsored" by Nate Soares and Eliezer Yudkowsky. However, as AI capabilities have progressed rapidly in recent years, Nate and Eliezer have become increasingly pessimistic about this type of work yielding significant results within the relevant timeframes. Consequently, they have shifted their focus to other priorities. Senior MIRI leadership explored various alternatives, including reorienting the Agent Foundations team's focus and transitioning them to an independent group under MIRI fiscal sponsorship with restricted funding, similar to AI Impacts. Ultimately, however, we decided that parting ways made the most sense. The Agent Foundations team has produced some stellar work over the years, and made a true attempt to tackle one of the most crucial challenges humanity faces today. We are deeply grateful for their many years of service and collaboration at MIRI, and we wish them the very best in their future endeavors. The Technical Governance Team responded to NIST's request for comments on draft documents related to the AI Risk Management Framework. The team also sent comments in response to the "Framework for Mitigating AI Risks" put forward by U.S. Senators Mitt Romney (R-UT), Jack Reed (D-RI), Jerry Moran (R-KS), and Angus King (I-ME). Brittany Ferrero has joined MIRI's operations team. Previously, she worked on projects such as the Embassy Network and Open Lunar Foundation. We're excited to have her help to execute on our mission.
News and links
AI alignment researcher Paul Christiano was appointed as head of AI safety at the US AI Safety Institute. Last fall, Christiano published some of his thoughts about AI regulation as well as responsible scaling policies. The Superalignment team at OpenAI has been disbanded following the departure of its co-leaders Ilya Sutskever and Jan Leike. The team was launched last year to try to solve the AI alignment problem in four years. However, Leike says that the team struggled to get the compute it needed and that "safety culture and processes have taken a backseat to shiny products" at OpenAI. This seems extremely concerning from the perspective of evaluating OpenAI's seriousness when it comes to safety and robustness work, particularly given that a similar OpenAI exodus occurred in 2020 in the wake of concerns about OpenAI's commitment to solving the alignment problem. 
Vox's Kelsey Piper reports that employees who left OpenAI were subject to an extremely restrictive NDA indefinitely preventing them from criticizing the company (or admitting that they were under an NDA), under threat of losing their vested equity in the company. OpenAI executives have since contacted former employees to say that they will not enforce the NDAs. Rob Bensinger comments on these developments here, strongly criticizing OpenAI for...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI: Fallout, published by Zvi on May 28, 2024 on LessWrong. Previously: OpenAI: Exodus (contains links at top to earlier episodes), Do Not Mess With Scarlett Johansson. We have learned more since last week. It's worse than we knew. How much worse? In which ways? With what exceptions? That's what this post is about.
The Story So Far
For years, employees who left OpenAI consistently had their vested equity explicitly threatened with confiscation and the lack of ability to sell it, and were given short timelines to sign documents or else. Those documents contained highly aggressive NDA and non-disparagement (and non-interference) clauses, including the NDA preventing anyone from revealing these clauses. No one knew about this until recently, because until Daniel Kokotajlo everyone signed, and then they could not talk about it. Then Daniel refused to sign, Kelsey Piper started reporting, and a lot came out. Here is Altman's statement from May 18, with its new community note. Evidence strongly suggests the above post was, shall we say, 'not consistently candid.' The linked article includes a document dump and other revelations, which I cover. Then there are the other recent matters. Ilya Sutskever and Jan Leike, the top two safety researchers at OpenAI, resigned, part of an ongoing pattern of top safety researchers leaving OpenAI. The team they led, Superalignment, had been publicly promised 20% of secured compute going forward, but that commitment was not honored. Jan Leike expressed concerns that OpenAI was not on track to be ready for even the next generation of models' safety needs. OpenAI created the Sky voice for GPT-4o, which evoked consistent reactions that it sounded like Scarlett Johansson, who voiced the AI in Her, Altman's favorite movie. Altman asked her twice to lend her voice to ChatGPT. Altman tweeted 'her.' Half the articles about GPT-4o mentioned Her as a model. OpenAI executives continue to claim that this was all a coincidence, but have taken down the Sky voice. (Also six months ago the board tried to fire Sam Altman and failed, and all that.)
A Note on Documents from OpenAI
The source for the documents from OpenAI that are discussed here, and the communications between OpenAI and its employees and ex-employees, is Kelsey Piper in Vox, unless otherwise stated. She went above and beyond, and shares screenshots of the documents. For superior readability and searchability, I have converted those images to text.
Some Good News But There is a Catch
OpenAI has indeed made a large positive step. They say they are releasing former employees from their nondisparagement agreements and promising not to cancel vested equity under any circumstances. Kelsey Piper: There are some positive signs that change is happening at OpenAI. The company told me, "We are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations." Bloomberg confirms that OpenAI has promised not to cancel vested equity under any circumstances, and to release all employees from one-directional non-disparagement agreements. And we have this confirmation from Andrew Carr. Andrew Carr: I guess that settles that. Tanner Lund: Is this legally binding? 
Andrew Carr: I notice they are also including the non-solicitation provisions as not enforced. (Note that certain key people, like Dario Amodei, plausibly negotiated two-way agreements, which would mean theirs would still apply. I would encourage anyone in that category who is now free of the clause, even if they have no desire to disparage OpenAI, to simply say 'I am under no legal obligation not to disparage OpenAI.') These actions by OpenAI are helpful. They are necessary. They are no...
“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Nolan Church breaks down what is happening over at OpenAI and what we know about three developing stories involving exec departures, equity, and transparency. Kelli is on vacation this week. He touches on:* The departure of Ilya Sutskever, co-founder and chief scientist, along with Jan Leike* The journalistic investigation by Vox's Kelsey Piper, who broke the story of OpenAI's practice of requiring departing employees to sign life-long non-disclosure and non-disparagement agreements, threatening the loss of vested equity for breach* The incident involving Scarlett Johansson being approached to voice OpenAI's AI, without her consent, raising questions about ethical use of voice likeness in AI technologies. Nolan debriefs with lessons for HR leaders and company founders. HR Heretics is a podcast from Turpentine. SPONSOR: Attio is the next generation of CRM. It's powerful, flexible, and easily configures to the unique way your startup runs, whatever your go-to-market motion. The next era deserves a better CRM. Join ElevenLabs, Replicate, Modal, and more at https://bit.ly/AttioHRHeretics KEEP UP WITH NOLAN + KELLI ON LINKEDIN Nolan: https://www.linkedin.com/in/nolan-church/ Kelli: https://www.linkedin.com/in/kellidragovich TIMESTAMPS:(00:00) Intro(00:47) The Big Departures: OpenAI's Executive Shake-Up(03:31) OpenAI's Controversial NDAs(05:12) Sponsors: Attio(08:21) The Scarlett Johansson Voice Saga(10:04) The Future of OpenAI and HR Lessons(10:50) Wrap This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pandemic apathy, published by Matthew Rendall on May 5, 2024 on The Effective Altruism Forum. An article in Vox yesterday by Kelsey Piper notes that after suffering through the whole Covid pandemic, policymakers and publics now seem remarkably unconcerned about preventing another one. 'Repeated efforts to get a serious pandemic prevention program through [the US] Congress', she writes, 'have fizzled.' Writing from Britain, I'm not aware of more serious efforts to prevent a repetition over here. That seems surprising. Both governments and citizens notoriously neglect many catastrophic threats, sometimes because they've never yet materialised (thermonuclear war; misaligned superintelligence), sometimes because they creep up on us slowly (climate change, biodiversity loss), sometimes because it's been a while since the last disaster and memories fade. After an earthquake or a hundred-year flood, more people take out insurance against them; over time, memories fade and take-up declines. None of these mechanisms plausibly explains apathy toward pandemic risk. If anything, you'd think people would exaggerate the threat, as they did the threat of terrorism after 9/11. It's recent and - in contrast to 9/11 - it's something we all personally experienced. What's going on? Cass Sunstein argues that 9/11 prompted a stronger response than global heating in part because people could put a face on a specific villain - Osama bin Laden. Sunstein maintains that this heightens not only outrage but also fear. Covid is like global heating rather than al-Qaeda in this respect. While that could be part of it, my hunch is that at least two other factors are playing a role. First, tracking down and killing terrorists was exciting. Improving ventilation systems or monitoring disease transmission between farmworkers and cows is not. It's a bit like trying to get six-year-olds interested in patent infringements. This prompts the worry that we might fail to address some threats because their solutions are too boring to think about. Second, maybe Covid is a bit like Brexit. That issue dominated British politics for so long that even those of us who would like to see Britain rejoin the EU are rather loth to reopen it. Similarly, most of us would rather think about anything else than the pandemic. Unfortunately, that's a recipe for repeating it. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Language models surprised us, published by Ajeya on August 30, 2023 on The Effective Altruism Forum. Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post. Most experts were surprised by progress in language models in 2022 and 2023. There may be more surprises ahead, so experts should register their forecasts now about 2024 and 2025. Kelsey Piper co-drafted this post. Thanks also to Isabel Juniewicz for research help. If you read media coverage of ChatGPT - which called it 'breathtaking', 'dazzling', 'astounding' - you'd get the sense that large language models (LLMs) took the world completely by surprise. Is that impression accurate? Actually, yes. There are a few different ways to attempt to measure the question "Were experts surprised by the pace of LLM progress?" but they broadly point to the same answer: ML researchers, superforecasters, and most others were all surprised by the progress in large language models in 2022 and 2023. Competitions to forecast difficult ML benchmarks ML benchmarks are sets of problems which can be objectively graded, allowing relatively precise comparison across different models. We have data from forecasting competitions done in 2021 and 2022 on two of the most comprehensive and difficult ML benchmarks: the MMLU benchmark and the MATH benchmark. First, what are these benchmarks? The MMLU dataset consists of multiple choice questions in a variety of subjects collected from sources like GRE practice tests and AP tests. It was intended to test subject matter knowledge in a wide variety of professional domains. MMLU questions are legitimately quite difficult: the average person would probably struggle to solve them. At the time of its introduction in September 2020, most models only performed close to random chance on MMLU (~25%), while GPT-3 performed significantly better than chance at 44%. The benchmark was designed to be harder than any that had come before it, and the authors described their motivation as closing the gap between performance on benchmarks and "true language understanding": Natural Language Processing (NLP) models have achieved superhuman performance on a number of recently proposed benchmarks. However, these models are still well below human level performance for language understanding as a whole, suggesting a disconnect between our benchmarks and the actual capabilities of these models. Meanwhile, the MATH dataset consists of free-response questions taken from math contests aimed at the best high school math students in the country. Most college-educated adults would get well under half of these problems right (the authors used computer science undergraduates as human subjects, and their performance ranged from 40% to 90%). At the time of its introduction in January 2021, the best model achieved only about ~7% accuracy on MATH. The authors say: We find that accuracy remains low even for the best models. Furthermore, unlike for most other text-based datasets, we find that accuracy is increasing very slowly with model size. If trends continue, then we will need algorithmic improvements, rather than just scale, to make substantial progress on MATH. 
So, these are both hard benchmarks - the problems are difficult for humans, the best models got low performance when the benchmarks were introduced, and the authors seemed to imply it would take a while for performance to get really good. In mid-2021, ML professor Jacob Steinhardt ran a contest with superforecasters at Hypermind to predict progress on MATH and MMLU. Superforecasters massively undershot reality in both cases. They predicted that performance on MMLU would improve moderately from 44% in 2021 to 57% by June 2022. The actual performance was 68%, which s...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The costs of caution, published by Kelsey Piper on May 1, 2023 on The Effective Altruism Forum. Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post. If you thought we might be able to cure cancer in 2200, then I think you ought to expect there's a good chance we can do it within years of the advent of AI systems that can do the research work humans can do. Josh Cason on Twitter raised an objection to recent calls for a moratorium on AI development: April 2, 2023 Or raise your hand if you or someone you love has a terminal illness, believes Ai has a chance at accelerating medical work exponentially, and doesn't have til Christmas, to wait on your make believe moratorium. Have a heart man ❤️ I've said that I think we should ideally move a lot slower on developing powerful AI systems. I still believe that. But I think Josh's objection is important and deserves a full airing. Approximately 150,000 people die worldwide every day. Nearly all of those deaths are, in some sense, preventable, with sufficiently advanced medical technology. Every year, five million families bury a child dead before their fifth birthday. Hundreds of millions of people live in extreme poverty. Billions more have far too little money to achieve their dreams and grow into their full potential. Tens of billions of animals are tortured on factory farms. Scientific research and economic progress could make an enormous difference to all these problems. Medical research could cure diseases. Economic progress could make food, shelter, medicine, entertainment and luxury goods accessible to people who can't afford it today. Progress in meat alternatives could allow us to shut down factory farms. There are tens of thousands of scientists, engineers, and policymakers working on fixing these kinds of problems — working on developing vaccines and antivirals, understanding and arresting aging, treating cancer, building cheaper and cleaner energy sources, developing better crops and homes and forms of transportation. But there are only so many people working on each problem. In each field, there are dozens of useful, interesting subproblems that no one is working on, because there aren't enough people to do the work. If we could train AI systems powerful enough to automate everything these scientists and engineers do, they could help. As Tom discussed in a previous post, once we develop AI that does AI research as well as a human expert, it might not be long before we have AI that is way beyond human experts in all domains. That is, AI which is way better than the best humans at all aspects of medical research: thinking of new ideas, designing experiments to test those ideas, building new technologies, and navigating bureaucracies. This means that rather than tens of thousands of top biomedical researchers, we could have hundreds of millions of significantly superhuman biomedical researchers.[1] That's more than a thousand times as much effort going into tackling humanity's biggest killers. If you thought we might be able to cure cancer in 2200, then I think you ought to expect there's a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.[2] All this may be a massive underestimate. 
This envisions a world that's pretty much like ours except that extraordinary talent is no longer scarce. But that feels, in some senses, like thinking about the advent of electricity purely in terms of ‘torchlight will no longer be scarce'. Electricity did make it very cheap to light our homes at night. But it also enabled vacuum cleaners, washing machines, cars, smartphones, airplanes, video recording, Twitter — entirely new things, not just cheaper access to thi...
The final session of the conference includes some closing words from Eli Nathan, followed by a fireside chat with Matthew Yglesias and Kelsey Piper. Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
Chris Blattman and Kelsey Piper discuss a range of issues in this fireside chat, including Chris's new book, "Why We Fight". "Why We Fight" draws on decades of economics, political science, psychology, and real-world interventions to synthesize the root causes and remedies for war. From warring states to street gangs, ethnic groups and religious sects to political factions, there are common dynamics across all these levels. Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is it time for a pause?, published by Kelsey Piper on April 6, 2023 on The Effective Altruism Forum. Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post. The single most important thing we can do is to pause when the next model we train would be powerful enough to obsolete humans entirely. If it were up to me, I would slow down AI development starting now — and then later slow down even more. Many of the people building powerful AI systems think they'll stumble on an AI system that forever changes our world fairly soon — three years, five years. I think they're reasonably likely to be wrong about that, but I'm not sure they're wrong about that. If we give them fifteen or twenty years, I start to suspect that they are entirely right. And while I think that the enormous, terrifying challenges of making AI go well are very much solvable, it feels very possible, to me, that we won't solve them in time. It's hard to overstate how much we have to gain from getting this right. It's also hard to overstate how much we have to lose from getting it wrong. When I'm feeling optimistic about having grandchildren, I imagine that our grandchildren will look back in horror at how recklessly we endangered everyone in the world. And I'm much much more optimistic that humanity will figure this whole situation out in the end if we have twenty years than I am if we have five. There's all kinds of AI research being done — at labs, in academia, at nonprofits, and in a distributed fashion all across the internet — that's so diffuse and varied that it would be hard to ‘slow down' by fiat. But there's one kind of AI research — training much larger, much more powerful language models — that it might make sense to try to slow down. If we could agree to hold off on training ever more powerful new models, we might buy more time to do AI alignment research on the models we have. This extra research could make it less likely that misaligned AI eventually seizes control from humans. An open letter released on Wednesday, with signatures from Elon Musk[1], Apple co-founder Steve Wozniak, leading AI researcher Yoshua Bengio, and many other prominent figures, called for a six-month moratorium on training bigger, more dangerous ML models: We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. I tend to think that we are developing and releasing AI systems much faster and much more carelessly than is in our interests. And from talking to people in Silicon Valley and policymakers in DC, I think efforts to change that are rapidly gaining traction. “We should slow down AI capabilities progress” is a much more mainstream view than it was six months ago, and to me that seems like great news. In my ideal world, we absolutely would be pausing after the release of GPT-4. People have been speculating about the alignment problem for decades, but this moment is an obvious golden age for alignment work. 
We finally have models powerful enough to do useful empirical work on understanding them, changing their behavior, evaluating their capabilities, noticing when they're being deceptive or manipulative, and so on. There are so many open questions in alignment that I expect we can make a lot of progress on in five years, with the benefit of what we've learned from existing models. We'd be in a much better position if we could collectively slow down to give ourselves more time to do this work, and I hope we find a way to do that intelligently and effectively. As I've said above, I ...
Tyler Cowen and Kelsey Piper cover a range of topics in this fireside chat session. Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Planned Obsolescence, published by Ajeya Cotra on March 27, 2023 on LessWrong. Kelsey Piper and I just launched a new blog about AI futurism and AI alignment called Planned Obsolescence. If you're interested, you can check it out here. Both of us have thought a fair bit about what we see as the biggest challenges in technical work and in policy to make AI go well, but a lot of our thinking isn't written up, or is embedded in long technical reports. This is an effort to make our thinking more accessible. That means it's mostly aiming at a broader audience than LessWrong and the EA Forum, although some of you might still find some of the posts interesting. So far we have seven posts:
- What we're doing here
- "Aligned" shouldn't be a synonym for "good"
- Situational awareness
- Playing the training game
- Training AIs to help us align AIs
- Alignment researchers disagree a lot
- The ethics of AI red-teaming
Thanks to ilzolende for formatting these posts for publication. Each post has an accompanying audio version generated by a voice synthesis model trained on the author's voice using Descript Overdub. You can submit questions or comments to mailbox@planned-obsolescence.org. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
OpenAI last week released its most powerful language model yet: GPT-4, which vastly outperforms its predecessor, GPT-3.5, on a variety of tasks. GPT-4 can pass the bar exam in the 90th percentile, while the previous model struggled around in the 10th percentile. GPT-4 scored in the 88th percentile on the LSAT, up from GPT-3.5's 40th percentile. And on the advanced sommelier theory test, GPT-4 performed better than 77 percent of test-takers. (Its predecessor hovered around 46 percent.) These are stunning results — not just what the model can do, but the rapid pace of progress. And OpenAI's ChatGPT and other chatbots are just one example of what recent A.I. systems can achieve. Kelsey Piper is a senior writer at Vox, where she's been ahead of the curve covering advanced A.I., its world-changing possibilities, and the people creating it. Her work is informed by her deep knowledge of the handful of companies that arguably have the most influence over the future of A.I. We discuss whether artificial intelligence has coherent “goals” — and whether that matters; whether the disasters ahead in A.I. will be small enough to learn from or “truly catastrophic”; the challenge of building “social technology” fast enough to withstand malicious uses of A.I.; whether we should focus on slowing down A.I. progress — and the specific oversight and regulation that could help us do it; why Piper is more optimistic this year that regulators can be “on the ball” with A.I.; how competition between the U.S. and China shapes A.I. policy; and more. This episode contains strong language. Mentioned: “The Man of Your Dreams” by Sangeeta Singh-Kurtz; “The Case for Taking A.I. Seriously as a Threat to Humanity” by Kelsey Piper; “The Return of the Magicians” by Ross Douthat; “Let's Think About Slowing Down A.I.” by Katja Grace. Book Recommendations: “The Making of the Atomic Bomb” by Richard Rhodes; Asterisk Magazine; “The Silmarillion” by J. R. R. Tolkien. Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs. “The Ezra Klein Show” is produced by Emefa Agawu, Annie Galvin, Jeff Geld, Roge Karma and Kristin Lin. Fact-checking by Michelle Harris and Kate Sinclair. Mixing by Jeff Geld. Original music by Isaac Jones. Audience strategy by Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Carole Sabouraud and Kristina Samulewski.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How oral rehydration therapy was developed, published by Kelsey Piper on March 10, 2023 on The Effective Altruism Forum. This is a link post for "Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century's Biggest Killer of Children" in the second issue of Asterisk Magazine, now out. The question it poses: oral rehydration therapy, which has saved millions of lives a year since it was developed, is very simple and uses widely available ingredients, so why did it take until the late 1960s to come up with it? There's a two-part answer. The first part is that without a solid theoretical understanding of the problem you're trying to solve, it's (at least in this case) ludicrously difficult to solve empirically: people kept trying variants on this, and they didn't work, because an important parameter was off and they had no idea which direction to correct in. The second is that the incredible simplicity of the modern formula for oral rehydration therapy is the product of a lot of concerted design effort, not just to find something that worked against cholera but to find something dead simple, which required only household ingredients and was hard to get wrong. The final solution is so simple not because oral rehydration is a simple problem, but because researchers kept going until they had a sufficiently simple solution. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Someone should write a detailed history of effective altruism, published by Pete Rowlett on January 14, 2023 on The Effective Altruism Forum. I think that someone should write a detailed history of the effective altruism movement. The history that currently exists on the forum is pretty limited, and I'm not aware of much other material, so I think there's room for substantial improvement. An oral history was already suggested in this post. I tentatively planned to write this post before FTX collapsed, but the reasons for writing this are probably even more compelling now than they were beforehand. I think a comprehensive written history would help in four ways. First, develop an EA ethos/identity based on a shared intellectual history and provide a launch pad for future developments (e.g. longtermism and an influx of money). I remember reading about a community member who mostly thought about global health getting on board with AI safety when they met a civil rights attorney who was concerned about it. A demonstration of shared values allowed for that development. Second, build trust within the movement. As the community grows, it can no longer rely on everyone knowing everyone else, and needs external tools to keep everyone on the same page. Aesthetics have been suggested as one option, and I think that may be part of the solution, in concert with a written history. Third, mitigate existential risk to the EA movement. See EA criticism #6 in Peter Wildeford's post and this post about ways in which EA could fail. Assuming the book would help the movement develop an identity and shared trust, it could lower risk to the movement. Fourth, understand the strengths and weaknesses of the movement, and what has historically been done well and what has been done poorly. There are a few ways this could happen. Open Phil (which already has a History of Philanthropy focus area) or CEA could actively seek out someone for the role and fund them for the duration of the project. This process would give the writer the credibility needed to get time with important EA people. A would-be writer could request a grant, perhaps from the EA Infrastructure Fund. Or an already-established EA journalist like Kelsey Piper could do it. There would be a high opportunity cost associated with this option, of course, since they're already doing valuable work. On the other hand, they would already have the credibility and baseline knowledge required to do a great job. I'd be interested in hearing people's thoughts on this, or if I missed a resource that already exists. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk, published by Pablo on December 30, 2022 on The Effective Altruism Forum. [T]he sun with all the planets will in time grow too cold for life, unless indeed some great body dashes into the sun and thus gives it fresh life. Believing as I do that man in the distant future will be a far more perfect creature than he now is, it is an intolerable thought that he and all other sentient beings are doomed to complete annihilation after such long-continued slow progress. – Charles Darwin. Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish. A message to our readers: Welcome back to Future Matters. We took a break during the autumn, but will now be returning to our previous monthly schedule. Future Matters would like to wish all our readers a happy new year! The most significant development during our hiatus was the collapse of FTX and the fall of Sam Bankman-Fried, until then one of the largest and most prominent supporters of longtermist causes. We were shocked and saddened by these revelations, and appalled by the allegations and admissions of fraud, deceit, and misappropriation of customer funds. As others have stated, fraud in the service of effective altruism is unacceptable, and we condemn these actions unequivocally and support authorities' efforts to investigate and prosecute any crimes that may have been committed. Research: A classic argument for existential risk from superintelligent AI goes something like this: (1) superintelligent AIs will be goal-directed; (2) goal-directed superintelligent AIs will likely pursue outcomes that we regard as extremely bad; therefore (3) if we build superintelligent AIs, the future will likely be extremely bad. Katja Grace's Counterarguments to the basic AI x-risk case identifies a number of weak points in each of the premises in the argument. We refer interested readers to our conversation with Katja below for more discussion of this post, as well as to Erik Jenner and Johannes Treutlein's Responses to Katja Grace's AI x-risk counterarguments. The key driver of AI risk is that we are rapidly developing more and more powerful AI systems, while making relatively little progress in ensuring they are safe. Katja Grace's Let's think about slowing down AI argues that the AI risk community should consider advocating for slowing down AI progress. She rebuts some of the objections commonly levelled against this strategy: e.g. to the charge of infeasibility, she points out that many technologies (human gene editing, nuclear energy) have been halted or drastically curtailed due to ethical and/or safety concerns.
In the comments, Carl Shulman argues that there is not currently enough buy-in from governments or the public to take more modest safety and governance interventions, so it doesn't seem wise to advocate for such a dramatic and costly policy: “It's like climate activists in 1950 responding to difficulties passing funds for renewable energy R&D or a carbon tax by proposing that the sale of automobiles be banned immediately. It took a lot of scientific data, solidification of scientific consensus, and communication/movement-building over time to get current measures on climate change.” We enjoyed Kelsey Piper's review of What We Owe the Future, not necessarily because we agree with her criticisms, but because we thought the review managed to identify, and articulate very clearly, what we take to be the main c...
Paris Marx is joined by Molly White to discuss the ongoing collapse of the crypto industry, what to make of the implosion of FTX and Alameda Research, and what happens next with Sam Bankman-Fried. Molly White is the creator of Web3 Is Going Just Great and a fellow at the Harvard Library Innovation Lab. You can follow her on Twitter at @molly0xFFF. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, support the show on Patreon, and sign up for the weekly newsletter. The podcast is produced by Eric Wickham and part of the Harbinger Media Network. Also mentioned in this episode: Since recording, Sam Bankman-Fried has been extradited from the Bahamas to the United States, and it's been revealed that Caroline Ellison and an FTX co-founder have pleaded guilty and are cooperating with authorities against Bankman-Fried. Molly has been analyzing the collapse of FTX on her newsletter. Paris wrote about effective altruism and longtermism for the New Statesman. Journalists at Forbes wrote about Caroline Ellison and her history. After FTX collapsed, effective altruist Kelsey Piper published a series of direct messages she exchanged with her supposed friend. The Southern District of New York's attorney's office, the Securities and Exchange Commission, and the Commodity Futures Trading Commission have all filed charges against Sam Bankman-Fried. There are rumors that Caroline Ellison is working with authorities against Sam Bankman-Fried. The US Justice Department is split on when to charge Binance executives. There are also growing questions about Binance's books. Support the show
Cheap Talk ends the semester with a mailbag episode: State dinners; celebrities and national interest; AI and international relations; sports diplomacy and the World Cup; President Biden's willingness to meet with Vladimir Putin; and Marcus admits he doesn't like soccer. Thanks to all those who contributed questions. Leave a message for a future podcast at https://www.speakpipe.com/cheaptalk. Looking for a holiday gift for the international affairs nerd in your life? We humbly suggest the following: Signing Away the Bomb, by Jeffrey M. Kaplow (Coming out on December 22!) (https://www.amazon.com/Signing-Away-Bomb-Surprising-Nonproliferation/dp/1009216732); Face-to-Face Diplomacy, by Marcus Holmes (https://www.amazon.com/Face-Face-Diplomacy-Marcus-Holmes-ebook/dp/B07952GT58/). AI Links: ChatGPT (https://chat.openai.com); Kelsey Piper. Aug. 13, 2020. “GPT-3, explained: This new language AI is uncanny, funny — and a big deal.” Vox.com (https://www.vox.com/future-perfect/21355768/gpt-3-ai-openai-turing-test-language); Ethan Mollick. Dec. 8, 2022. “Four Paths to the Revelation.” One Useful Thing Substack. (https://oneusefulthing.substack.com/p/four-paths-to-the-revelation). Movie and TV Recommendations: Welcome to Wrexham (Streaming on Hulu); Slash/Back (Streaming on AMC+); The Bureau (Streaming on AMC+); Dispatches from Elsewhere (Streaming on AMC+); Pepsi, Where's My Jet? (Streaming on Netflix). We'll be back with new episodes in late January. Subscribe now in your podcast player of choice so you don't miss anything. Just enter this custom URL: http://www.jkaplow.net/cheaptalk?format=rss - In Apple Podcasts, tap “Library” on the bottom row, tap “Edit” in the upper-right corner, and choose “Add a Show by URL...” - In Google Podcasts, tap the activity icon in the lower-right, tap “Subscriptions,” tap the “...” menu in the upper-right, and tap “Add by RSS feed.” - In Overcast, tap the “+” in the upper-right corner, then tap “Add URL.” Best wishes for a happy holiday and new year! We'll see you in 2023.
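For listeners who would rather check the feed programmatically than add it to a podcast app, here is a minimal sketch (an illustration only; it assumes the custom URL above is still live and serves standard RSS 2.0):

```python
# Minimal sketch: list the latest Cheap Talk episodes straight from the RSS feed.
# Assumes the feed URL above is still live and returns standard RSS 2.0.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://www.jkaplow.net/cheaptalk?format=rss"

def latest_episodes(url: str, limit: int = 5) -> list[str]:
    """Return the titles of the most recent episodes in an RSS feed."""
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    # RSS 2.0 layout: <rss><channel><item><title>...</title></item>...</channel></rss>
    return [item.findtext("title", default="(untitled)") for item in root.iter("item")][:limit]

if __name__ == "__main__":
    for title in latest_episodes(FEED_URL):
        print(title)
```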
I wrote an article on whether wine is fake. It's not here, it's at asteriskmag.com, the new rationalist / effective altruist magazine. Congratulations to my friend Clara for making it happen. Stories include: Modeling The End Of Monkeypox: I'm especially excited about this one. The top forecaster (of 7,000) in the 2021 Good Judgment competition explains his predictions for monkeypox. If you've ever rolled your eyes at a column by some overconfident pundit, this is maybe the most opposite-of-that thing ever published. Book Review - What We Owe The Future: You've read mine, this is Kelsey Piper's. Kelsey is always great, and this is a good window into the battle over the word “long-termism”. Making Sense Of Moral Change: Interview with historian Christopher Brown on the end of the slave trade. “There is a false dichotomy between sincere activism and self-interested activism. Abolitionists were quite sincerely horrified by slavery and motivated to end it, but their fight for abolition was not entirely altruistic.” How To Prevent The Next Pandemic: MIT professor Kevin Esvelt talks about applying the security mindset to bioterrorism. “At least 38,000 people can assemble an influenza virus from scratch. If people identify a new [pandemic] virus . . . then you just gave 30,000 people access to an agent that is of nuclear-equivalent lethality.” Rebuilding After The Replication Crisis: This is Stuart Ritchie, hopefully you all know him by now. “Fundamentally, how much more can we trust a study published in 2022 compared to one from 2012?” Why Isn't The Whole World Rich? Professor Dietrich Vollrath's introduction to growth economics. What caused the South Korean miracle, and why can't other countries copy it? Is Wine Fake? By me! How come some people say blinded experts can tell the country, subregion, and year of any wine just by tasting it, but other people say blinded experts get fooled by white wines dyed red? China's Silicon Future: Why does China have so much trouble building advanced microchips? How will the CHIPS act affect its broader economic rise? By Karson Elmgren.
Today's episode further explores topics discussed in this week's essay. In the preamble to that essay I said that there would be no content next week. I am going to reverse that. Next week will be an excerpt from Peter Fader's new book. Stay tuned! Full Transcript: Peter: Ed, I love your piece on strategy versus tactics at Disney, Twitter and Dominion Cards. I love the way that you're weaving together a narrative that's taking three of the super hot, interesting topics and a fourth one that most people don't know about. Edward: It's funny, the whole Dominion Cards thing. I started playing this card game back in 2011. I went to the national championships in 2012. And I just really enjoy it. It's like the only game I can think of where you actually need to figure out a strategy at the beginning of every game. I've been sitting on this idea of Dominion cards as a way to talk about strategy versus tactics for many, many years now, and I never really found the right kind of hook to put it in. And then when this thing happened at Disney on Sunday, I was like, aha, the hook is here. It's time to pull this out of the filing cabinet. Peter: Love it. Well, as a reader of the column and as someone who thinks about these issues, there's kind of two natural questions that just have to be asked. I wanna get your take on it. So, first: how do you define, or where do you draw the line between, strategy and tactics? Edward: I think strategy is figuring out what you should be doing and what the end point is that you're going for, and tactics are all the stuff that gets you there. Strategy can be done a bit in isolation. You can go back into your ivory tower, think about what the dynamics are, and come out with your strategy, and then tactics are going to be very much based on what's happening on the ground. What's happening at any given moment, how the competition is reacting, how the economics are changing, what type of people you have on your team at any given moment. Those are all tactical decisions that a consultant is not going to be able to help you with unless he's actually there on the ground. Peter: So I always have a hard time with it, to be honest. Maybe this is just me being narrow minded or something. Is it not just the next move but the next three or four moves? Be specific about strategy versus tactics in chess, and then let's branch out to these other real world stories. Edward: I'm not an expert in chess. I'm actually teaching my kids how to play now, so I'm learning along with them. But I think in chess there is a correct strategy. Strategy in chess is things like controlling the center of the board. Being willing to sacrifice a piece in order to gain position on the board, or moving your pieces in such a way that you're able to castle fairly early in the game. Those would all be strategies, things that you're working towards over a longer period of time. Tactics are: given what my opponent has just done, what should I do next? And you can look far into the future for tactics. There's nothing that stops you from looking nine moves ahead to what the right tactic would be in that particular situation. But in chess at least, I think strategy stays the same. There are correct strategies in chess and there are incorrect strategies in chess. Whereas tactics are gonna change every given game depending on what your opponent does. Peter: So let's take that, and again, it's still a little fuzzy.
I mean, you're being more specific, but still, and I'm not gonna press you on exactly where one begins and ends, but Disney. Disney, Disney, Disney. It seems like the narrative, as you said, is Iger had the strategy, Chapek's job was to come in and execute on it, with a few missteps here and there. Expand on that beyond what you've said in the piece about that trade-off between strategy and tactics. Edward: I think most people agree, even the disgruntled shareholders, that Iger's strategy was the correct one, or is the correct one, which is that the cable bundle is getting hammered, and Disney in the past basically had a huge amount of leverage over the cable providers and was able to extract large amounts of money from them because they had this differentiated content: both the traditional Disney content and the sports they had with ESPN. And that was a great place for Disney to be, and it still is, frankly; they still extract a huge amount of money from the cable providers. But that is not the future. Clearly we see more and more people, especially young people, cutting the cord, not going with cable television and moving into streaming. And it was really a question of when did Disney need to move in that direction, and how long could they keep their pound of flesh from the cable companies and hold onto that as long as possible? So the strategy then becomes: let's move on. Let's go direct to consumer and scale up our Disney Plus product. There are tactical problems in doing that. Disney bought Fox, which came with 20th Century Fox, which allowed them to add a whole ton more content to get the breadth required to win in a streaming war. They got control over Hulu, but they didn't get full ownership of Hulu. And so Comcast still owns a chunk of Hulu in the US, which creates all sorts of challenges for Disney on a tactical level on how to actually get to the place where they wanna be. But I think the strategy is clear: we wanna get to the point where we own that direct-to-consumer relationship. We are monetizing through a subscription product. We are monetizing through additional add-ons that people can do on top of that. And we are monetizing through our vast array of merchandising, theme parks, cruise ships, and everything else to allow people to spend more and more and more with us. That strategy is still where they're going. The last two big things Iger did before he left were launching Disney Plus and buying Fox. Peter: Let's be clear that Chapek isn't against any of those things. Strategically, as you've pointed out, he's on the same page. It's all just tactics not being quite the same as what Iger might have done or might now do. Edward: And even on tactics, I'm not sure. If you look at the things that have hurt the stock price and where Chapek has taken a hit: first of all, Disney Plus has grown faster than they ever thought it would. He over-delivered on that. Whether or not that was his doing, the fact is the metric is much better than anyone expected. But there were mistakes along the way. There have been lots of fights with the creative side of the organization. Chapek comes from the theme park side. He came into the CEO role, and then immediately Covid hit and the theme parks all went to zero. So he was forced to figure out how to do Disney Plus, where all their revenue was coming from for the foreseeable future. Now things have flipped and the theme parks are just minting money. They're doing really, really, really well.
But he's pissed off a lot of people by raising prices dramatically. But again, I'm not sure what Iger would've done differently in that case, because the demand for the theme parks has gone way, way, way up. In the short term, you can't go and build more theme parks, so supply is what it is. And so you're left with two choices. Either you are raising prices or you are giving a poor consumer experience, either because the parks are just packed full and unpleasant, or because you're turning away people at the door who have booked a vacation. And so none of those options seem great, and of those options, it feels like raising prices was probably the one that Iger would've done as well. Peter: Exactly. So here's the big question. I agree completely with that. It might be that how things play out now, tactically and strategically, would be the same regardless of which Bob is at the helm, but Iger just seems to have this warm glow that will make the same tactics not only more palatable but downright genius, because they're coming from Iger instead of Chapek. What do you think? Edward: I think that's absolutely right. They're in such a tough spot right now. There's so much going on and it's super, super, super risky what they're trying to do. I think everyone knows that there's really no choice but to go down this path, but also everyone knows that it's a really hard path to go down. And so not only do you need to have the right strategy, which I think people believe they have; you need to have the right tactics, and frankly, if Chapek messed up on tactics, it was on a marginal basis. Where there was a bigger mess-up was in the execution of those tactics. And so things like the Black Widow movie: early on in the pandemic, they decided to take that out of the theaters and put it onto Disney Plus. And I think that was a very rational tactical thing to do given the situation they were in. But in execution, Chapek got into a big fight with Scarlett Johansson, who really came down hard and sued Disney. They hurt their relationship with her. Now Disney ends up hurting their reputation as a good place to go and work if you're a top-tier creative. In the short term, maybe they make a little bit more money on the movie, but in the long run they damage the relationship with the very people that are creating the product that they need to excel with. Peter: Fair point. All right, let's pivot from Disney to SBF and FTX. There you say, or at least you're quoting SBF saying, that the strategy was fine and the tactics were at fault. You don't really mean that; you're just saying that's what he said, but you think otherwise. Edward: I'm no financial expert, but I've been following it as closely as I can, and it sure looks to me like there was all sorts of... So SBF owned two companies. He had FTX, but he also had the trading arm, Alameda Research. And there was money traded back and forth between those two organizations. And what I understand is: imagine if FTX had, I'm making up a number, 10 million tokens, and they're sitting on them, and those things are worth whatever someone's willing to pay for them, and Alameda comes along and says, hey, I'll buy one of your tokens for a thousand dollars. So now all of a sudden the paper value of those tokens is a thousand dollars times 10 million, which is a huge amount of money they're sitting on.
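To make the arithmetic Edward describes concrete, here is a rough sketch using his hypothetical numbers (10 million tokens, one token bought for $1,000); these are illustrative figures from the conversation, not actual FTX balances:

```python
# Illustrative mark-to-market arithmetic, using the hypothetical numbers from the
# conversation above (not actual FTX figures): a thinly traded token is "valued"
# at the last trade price, and that paper valuation is then treated as collateral.
tokens_held = 10_000_000   # tokens FTX is sitting on (hypothetical)
last_trade_price = 1_000   # dollars Alameda paid for a single token (hypothetical)

paper_valuation = tokens_held * last_trade_price
print(f"Mark-to-market 'value' of the pile: ${paper_valuation:,}")  # $10,000,000,000

# If nobody else will pay that price for any real volume, the pile is worth far
# less in a forced sale; the gap is what made the collateral "made up."
forced_sale_price = 10     # whatever the market would actually bear (hypothetical)
print(f"Plausible liquidation value:        ${tokens_held * forced_sale_price:,}")
```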
And then they basically end up using that valuation as collateral to do all sorts of loans and leverage to go and do other things with their money. FTX then takes in a bunch of customer deposits and then loans those customer deposits over to Alameda. Alameda is then sitting on a bunch of these tokens that they're using as collateral against the borrowing of that real money that people put into FTX. Alameda then loses a bunch of that money, and it all comes tumbling down when they realize that their collateral is not worth anything. It's all made-up collateral. That's my understanding of what happened. Nothing exactly like that has happened before, but things like that have happened before. It's effectively fraud. It's fraud and theft. SBF, however, did an interview with Kelsey Piper over at Vox, and his argument was: hey, what we were doing was great. We were doing all sorts of awesome things, but our record keeping was terrible. We just made a bunch of rookie, terrible, incompetent mistakes. The new CEO who came in to run the company is backing SBF up in that: yeah, this whole thing is a mess. What was his quote? I quoted him in my piece. He said, never in my career have I seen such a complete failure of corporate controls and such a complete absence of trustworthy financial information as occurred here. And this is from the guy who also oversaw the bankruptcy of Enron. So it was a mess. They clearly, clearly, clearly were tactically incompetent, and SBF is claiming that they didn't know they were stealing all these funds. It's entirely possible that he's right, because they seemed like they didn't really know anything that was going on, and there were no financial backups and no guardrails for anything. But generally the overall strategy was built on a house of cards to begin with. So whether or not their tactics were correct, maybe it wouldn't have collapsed as badly if they had great tactics, but it was gonna collapse one way or another. Peter: In this case, it's not strategy versus tactics, as you say in the title of the piece, it's and. They did badly at both, and it's hard to pin the blame on one type of decision or another. Edward: The hard part of writing this piece was that, given that their strategy was so unethical and terrible and their tactics were so incompetent, how did they manage to get as big as they did and cause this disaster to happen? Peter: It's crazy. But then, speaking of which, it takes us to our third character of the week, Elon Musk. Now, you and I had a conversation a couple weeks back. We were saying generally positive things about verification badges and just the possibilities of getting the business model right. And of course it's too early to tell for sure. But in the couple of weeks since that conversation, well, things have gone differently. Edward: Specifically the thing that we talked about, which was Twitter Blue, $8 a month to get certified. Peter: Verified. Edward: Verified. Verified. And what happened was that the verification process was effectively just having a credit card. It wasn't like they matched the name that you put on Twitter with the name on your credit card, or checked the address, or had you send a driver's license for verification. It was a matter of: pay the $8 and you can name yourself whatever you want.
In terms of the strategic idea, allowing people to pay $8 to get certified seems like a very valid idea. I don't know if it is the right strategy, but arguably, at least as we argued a couple of podcasts ago, it was a good strategy. In execution, because they didn't create any of those guardrails, because they didn't have any verification process beyond paying the $8, people impersonated all sorts of companies. They impersonated Elon Musk, they impersonated giant companies and had them say ridiculous things with a certification check next to them, and it became a big joke. And so it's an example of potentially a good strategy with very weak tactical execution. Peter: And what about the broader issues? The way he's running the company, day-to-day tactics, strategy, whatever it is, it's not good. But which basket would you put it in? Edward: I think there's an overlap. First of all, part of it seems like he's kind of changing his strategy on the fly. He's going back and forth and changing what his strategy is, but I think in general, his thesis going into the company was that this company was mismanaged. We need to eliminate a large number of people at the company and replace them with other people. We need to change the culture of this place from one of working from 10:00 AM until 3:00 PM to one where you're working from 7:00 AM until midnight and coming in on the weekends, and turn it into a hard-driving, startup-type culture with a much smaller team that's much more dedicated and highly compensated. And it feels like that's his strategy. He wants to create a company that ships product really quickly, makes mistakes, fixes them, and keeps going. That is something that I think most owners of most businesses would want for their companies. The challenge becomes: how do you get there from here? And that's where there's been lots of flailing and failing. That doesn't mean the whole process is gonna fail, but there have been lots of mistakes made in that process of getting from A to B, in a situation where getting from A to B is gonna be hard no matter what, even if you did it perfectly. Peter: So what's your longer term prognosis? Do you think that he'll get this strategy right and line up the tactics appropriately? Edward: I don't know. It's so hard to know. I think the strategy is right. The question is whether the company will survive the process of getting there. They're burning through cash. As an example, they laid off a bunch of people who work in Europe via email, and you can't actually do that. It's illegal to do that in Europe, so all those people they fired in Europe actually aren't fired; they've just had their salaries turned off. They're not making any money anymore, but all those people have a class action lawsuit that's going to go against Twitter, and there's going to be a huge fine. That type of stuff matters in a situation where, if they succeed, it's gonna be by the skin of their teeth. They're the Amazon of 2001: we need to keep doing everything right and working our butts off to keep this plane flying over the treetops so that we can take off and circle the planet. But before we can circle the planet, we need to get over these trees. If they get over the trees, I think there's a good argument that Twitter's a fantastic, unique product that can do all sorts of incredible things, far more than the old team was doing. But he still has to get over the trees, and that's where it's a lot less clear. Peter: Yeah.
So it takes us to kind of the bottom line, as you say, and I don't think anyone would disagree: strategy becomes far more urgent in rapidly changing environments. Who could argue with that? Yet at the same time, in rapidly changing environments, we start rearranging deck chairs, which is far more tactical. Edward: I think when things are going smoothly, when things are not changing, strategy frankly doesn't matter very much. Tactics matter a little bit, and execution matters a lot. When you're in a place where things are changing rapidly and you need to get to someplace new, all of a sudden strategy matters a lot. But that doesn't mean that tactics and execution matter less. They still matter a lot too. It just becomes like everything matters. It becomes so easy to fail. You only need one link in the chain to break and you're not gonna get there. Peter: And I think all three of these cases show that interplay. So again, it's not strategy versus tactics but strategy and tactics: getting them to sync up properly. And that's easier said than done. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit marketingbs.substack.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review: What We Owe The Future, published by Kelsey Piper on November 21, 2022 on The Effective Altruism Forum. For the inaugural edition of Asterisk, I wrote about What We Owe The Future. Some highlights: What is the longtermist worldview? First — that humanity's potential future is vast beyond comprehension, that trillions of lives may lie ahead of us, and that we should try to secure and shape that future if possible. Here there's little disagreement among effective altruists. The catch is the qualifier: “if possible.” When I talk to people working on cash transfers or clean water or accelerating vaccine timelines, their reason for prioritizing those projects over long-term-future ones is approximately never “because future people aren't of moral importance”; it's usually “because I don't think we can predictably affect the lives of future people in the desired direction.” As it happens, I think we can — but not through the pathways outlined in What We Owe the Future. The stakes are as high as MacAskill says — but when you start trying to figure out what to do about it, you end up face-to-face with problems that are deeply unclear and solutions that are deeply technical. I think we're in a dangerous world, one with perils ahead for which we're not at all prepared, one where we're likely to make an irrecoverable mistake and all die. Most of the obligation I feel toward the future is an obligation to not screw up so badly that it never exists. Most longtermists are scared, and the absence of that sentiment from What We Owe the Future feels glaring. If we grant MacAskill's premise that values change matters, though, the value I would want to impart is this one: an appetite for these details, however tedious they may seem. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the first issue of Asterisk, published by Clara Collier on November 21, 2022 on The Effective Altruism Forum. Are you a fan of engaging, epistemically rigorous longform writing about the world's most pressing problems? Interested in in-depth interviews with leading scholars? A reader of taste and discernment? Sick of FTX discourse? Distract yourself with the inaugural issue of Asterisk Magazine, out now! Asterisk is a new quarterly journal of clear writing and clear thinking about things that matter (and, occasionally, things we just think are interesting). In this issue: Kelsey Piper argues that What We Owe The Future can't quite support the weight of its own premises. Kevin Esvelt talks about how we can prevent the next pandemic. Jared Leibowich gives us a superforecaster's approach to modeling monkeypox. Christopher Leslie Brown on the history of abolitionism and the slippery concept of moral progress. Stuart Ritchie tries to find out if the replication crisis has really made science better. Dietrich Vollrath explains what economists do and don't know about why some countries become rich and others don't. Scott Alexander asks: is wine fake? Karson Elmgren on the history and future of China's semiconductor industry. Xander Balwit imagines a future where genetic engineering has radically altered the animals we eat. A huge thank you to everyone in the community who helped us make Asterisk a reality. We hope you all enjoy reading it as much as we enjoyed making it. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
This week, David Plotz, Emily Bazelon, and John Dickerson discuss Trump's campaign announcement, election denying candidates' failures in the midterms, and guest Matthew Zeitlin on the impact the implosion of Sam Bankman-Fried's crypto exchange FTX may have on the Effective Altruism movement. Here are some notes and references from this week's show: Donie O'Sullivan for CNN: “Facebook Fact-Checkers Will Stop Checking Trump After Presidential Bid Announcement” Matthew Zeitlin for Grid: “Sam Bankman-Fried Gave Millions To Effective Altruism. What Happens Now That The Money Is Gone?” Kelsey Piper for Vox: “Sam Bankman-Fried Tries To Explain Himself” What We Owe the Future, by William MacAskill William MacAskill for Effective Altruism Forum: “EA And The Current Funding Situation” This American Life: “Watching the Watchers” Here are this week's chatters: John: Jason P. Frank for Vulture: “Stephen Colbert, Emma Watson, and More Celebs to Relish in Pickleball Tournament”; Isabel Gonzalez for CBS News: “Mike Tyson, Evander Holyfield Partner To Create Ear-Shaped, Cannabis-Infused Edibles” Emily: William Melhado for The Texas Tribune: “Federal Judge In Texas Rules That Disarming Those Under Protective Orders Violates Their Second Amendment Rights” David: Politics and Prose: City Cast DC Live Taping with Michael Schaffer, David Plotz, and Anton Bogomazov - at Union Market; Justin Jouvenal for The Washington Post: “D.C.'s Bitcoin King: Yachts, Penthouses, A Python — And Tax Dodging?” Listener chatter from Kelly Mills: The Woman Who Smashed Codes: A True Story of Love, Spies, and the Unlikely Heroine Who Outwitted America's Enemies, by Jason Fagone For this week's Slate Plus bonus segment Emily, David, and John contemplate the Thanksgiving traditions they would like to adopt or improve. Tweet us your questions and chatters @SlateGabfest or email us at gabfest@slate.com. (Messages may be quoted by name unless the writer stipulates otherwise.) Podcast production by Cheyna Roth. Research by Bridgette Dunlap. Learn more about your ad choices. Visit megaphone.fm/adchoices
The balance sheet contains an apology, the in-house coach is concerned that company executives are “undersexed,” and billions in customer funds remain in jeopardy. The wreckage at FTX goes from bad to worse. Plus: Elon's “extremely hardcore” plan for Twitter 2.0. Additional Resources: George K. Lerner, FTX's in-house performance coach, said he was shocked by the collapse of FTX. In an interview with Matt Levine, a Bloomberg columnist, Sam Bankman-Fried described his strategy to restore faith in the crypto ecosystem. Bankman-Fried reflected on his actions as chief executive of FTX in a series of Twitter messages with Kelsey Piper, a Vox reporter. Elon Musk told Twitter employees in an email that the company would become an “extremely hardcore” operation. Employees were asked to click yes to be part of the new Twitter or take severance. Musk's social calendar includes courting comedians and hopping on yachts. We want to hear from you. Email us at hardfork@nytimes.com. Follow “Hard Fork” on TikTok: @hardfork
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kelsey Piper's recent interview of SBF, published by agucova on November 16, 2022 on LessWrong. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kelsey Piper's recent interview of SBF, published by Agustín Covarrubias on November 16, 2022 on The Effective Altruism Forum. Kelsey Piper from Vox's Future Perfect very recently released an interview (conducted through Twitter DMs) with Sam Bankman-Fried. The interview goes in depth into the events surrounding FTX and Alameda Research. As we messaged, I was trying to make sense of what, behind the PR and the charitable donations and the lobbying, Bankman-Fried actually believes about what's right and what's wrong — and especially the ethics of what he did and the industry he worked in. Looming over our whole conversation was the fact that people who trusted him have lost their savings, and that he's done incalculable damage to everything he proclaimed only a few weeks ago to care about. The grief and pain he has caused is immense, and I came away from our conversation appalled by much of what he said. But if these mistakes haunted him, he largely didn't show it. The interview gives a much-awaited window into SBF's thinking, specifically in relation to prior questions in the community regarding whether SBF was practicing some form of naive consequentialism or whether the events surrounding the crisis largely emerged from incompetence. During the interview, Kelsey asked explicitly about previous statements by SBF agreeing with the existence of strong moral boundaries to maximizing good. His answers seem to suggest he had intentionally misrepresented his views on the issue. This seems to give some credit to the theory that SBF could have been acting like a naive utilitarian, choosing to engage in morally objectionable behavior to maximize his positive impact, while explicitly misrepresenting his views to others. However, Kelsey also asked directly about the lending out of customer deposits to Alameda Research. All of his claims are at least consistent with the view of SBF acting like an incompetent investor. FTX and Alameda Research seem to have had serious governance and accounting problems, and SBF seems to have taken several decisions which to him sounded individually reasonable, all based on bad information. He repeatedly doubled down instead of cutting his losses. I'm still not sure what to take out of this interview, especially because Sam seems, at best, somewhat incoherent regarding his moral views and previous mistakes. This might have to do with his emotional state at the time of the interview, or even be a sign that he's blatantly lying, but I still think there is a lot of stuff to update from. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Covid 10/27/22: Another Origin Story, published by Zvi on October 27, 2022 on LessWrong. The big story this week was a new preprint claiming to show that Covid-19 had an unnatural origin. For several days, this was a big story with lots of arguing about it, lots of long threads, lots of people accusing others of bad faith or being idiots or not understanding undergraduate microbiology, and for some reason someone impersonating a virologist to spy on Kelsey Piper. Then a few days later all discussion of it seemed to vanish. It wasn't that everyone suddenly came to an agreement to move on. All sides simply decided that this was no longer the Current Thing. See the section for further discussion. In the end I did not update much, so I am mostly fine with this null result. There's also more Gain of Function research looking to create a new pandemic. There was a lot of consensus among the comments and those I know that this work must stop, yet little in the way of good ways to stop it. Several people gave versions of ‘have you considered violence or otherwise going outside the law?’ and my answer is no. While the dangers here are real, they are not at anything like the levels that would potentially justify such actions. Note on the deleted post from this week: finally, I need to address the post that got taken down in a bit more detail. I want to thank Saloni in particular for quickly and clearly making some of my mistakes apparent to me, with links, so that within about an hour I could realize I'd made a huge mistake and that the whole post structure and conclusions no longer made sense, at which point I took the post down. Please disregard it. Everyone has been great about understanding that mistakes happen, and I want you to know I appreciate it, and hope it helps myself and others similarly address errors in the future. How did the mistakes happen? Ultimately, it is 100% my fault, on multiple counts, no excuses. What are some of the things I did wrong, so I can hopefully minimize chances they happen again? My logic was flawed. I wasn't thinking about the power of the study properly. I let the truly awful takes and absence of good takes defending colonoscopies make me too confident in the lack of available good takes doing so, and let that bias my thinking. I got feedback before posting, but I did not get enough or get it from the right sources. I heard everyone talking about ‘first RCT’ in various forms and failed to notice it was only the first to look at all-cause mortality rather than the first RCT. The authors of this one made the mistake of trying to measure all-cause mortality as the primary endpoint despite lacking the power to do so, in a way that my brain didn't properly process, compounding the errors. I didn't properly consider the possibility that the main result of a published paper was plausibly highly ‘unlucky’ in part due to training on decades of publication bias. I didn't fully appreciate the magnitude of the healthy patient bias, which made certain extrapolations sound patently absurd – I'm still super skeptical of those claims but they're not actually obviously crazy on reflection. And I messed up a few small technical details. In general, the whole thing is really complicated.
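To make the point about statistical power concrete, here is a rough back-of-the-envelope sketch with purely hypothetical numbers (not figures from the colonoscopy trial), using the standard two-proportion sample-size formula; it shows why an all-cause-mortality endpoint demands very large trials:

```python
# Back-of-the-envelope power calculation with hypothetical numbers (not the
# trial's actual figures). Approximate sample size per arm to detect a
# difference between two proportions with 80% power at a two-sided alpha of
# 0.05, using n = (z_a + z_b)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2.
def n_per_arm(p1: float, p2: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate participants needed per arm."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return round((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a drop in all-cause mortality from 1.2% to 1.0% (illustrative only)
# already requires on the order of 40,000 participants per arm:
print(n_per_arm(0.012, 0.010))
```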
There is no question that the study was a disappointing result for the effectiveness of colonoscopies, well below what the researchers expected to find. However, there is a lot of room for ‘disappointing but still worthwhile’ and a lot of additional past data to incorporate. I genuinely don't know what I am going to think when I am finished thinking about it. Executive summary: New preprint on potential origins of Covid-19, not updating much. Gain of Function research continues. Please disregard this week's earlier post until I can properly fix it. Let's run the numbers. The Numbe...
Scientists at Boston University recently created in a lab a new Covid virus that had the transmissibility of the Omicron variant and was also more likely to cause severe disease. They called it the Omicron S-bearing virus. The study found that the engineered virus had a mortality rate of 80% in mice. The experiment has once again called into question the purpose of so-called “gain of function” research and also the oversight of such projects. Kelsey Piper, senior writer at Vox's Future Perfect, joins us to explain why labs keep making dangerous viruses. Next, AI art generators have just been unleashed on the public. These new text-to-image generators let you type in almost any phrase, and they will return an image in various art styles. Dall-E 2 by OpenAI and DreamStudio by Stability AI are now open for anyone to use, and the result is a lot of fun! The artificial intelligence interprets your words and creates fully original images, but there are still a lot of questions over how it works, copyright, and who owns the images. Then there are concerns about real artists and graphic designers. Joanna Stern, senior personal tech columnist at the WSJ, joins us for what the future of AI art may hold. See omnystudio.com/listener for privacy information.
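For a sense of what "type in a phrase, get back an image" looks like in practice, here is a minimal sketch using the OpenAI Python client roughly as it existed when this episode aired; the prompt and key are placeholders, and newer versions of the library expose a different interface:

```python
# Minimal sketch of text-to-image generation with the OpenAI client of late 2022.
# The API key and prompt below are placeholders; newer library versions differ.
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

response = openai.Image.create(
    prompt="a lighthouse on a cliff in the style of a woodblock print",
    n=1,                # number of images to generate
    size="1024x1024",   # sizes supported at the time: 256x256, 512x512, 1024x1024
)
print(response["data"][0]["url"])  # temporary URL of the generated image
```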
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overreacting to current events can be very costly, published by Kelsey Piper on October 4, 2022 on The Effective Altruism Forum. epistemic status: I am fairly confident that the overall point is underrated right now, but am writing quickly and think it's reasonably likely the comments will identify a factual error somewhere in the post. Risk seems unusually elevated right now of a serious nuclear incident, as a result of Russia badly losing the war in Ukraine. Various markets put the risk at about 5-10%, and various forecasters seem to estimate something similar. The general consensus is that Russia, if they used a nuclear weapon, would probably deploy a tactical nuclear weapon on the battlefield in Ukraine, probably in a way with a small number of direct casualties but profoundly destabilizing effects. A lot of effective altruists have made plans to leave major cities if Russia uses a nuclear weapon, at least until it becomes clear whether the situation is destabilizing. I think if that happens we'll be in a scary situation, but based on how we as a community collectively reacted to Covid, I predict an overreaction -- that is, I predict that if there's a nuclear use in Ukraine, EAs will incur more costs in avoiding the risk of dying in a nuclear war than the actual expected costs of dying in a nuclear war, more costs than necessary to reduce the risks of dying in a nuclear war, and more costs than we'll endorse in hindsight. With respect to Covid, I am pretty sure the EA community and related communities incurred more costs in avoiding the risk of dying of Covid than was warranted. In my own social circles, I don't know anyone who died of Covid, but I know of a healthy person in their 20s or 30s who died of failing to seek medical attention because they were scared of Covid. A lot of people incurred hits to their productivity and happiness that were quite large. This is especially true for people doing EA work they consider directly important: being 10% less impactful at an EA direct work job has a cost measured in many human or animal or future-digital-mind lives, and I think few people explicitly calculated how that cost measured up against the benefit of reduced risk of Covid. If Russia uses a nuclear weapon in Ukraine, here is what I expect to happen: a lot of people will be terrified (correctly assessing this as a significant change in the equilibrium around nuclear weapon use which makes a further nuclear exchange much more likely.) Many people will flee major cities in the US and Europe. They will spend a lot of money, take a large productivity hit from being somewhere with worse living conditions and worse internet, and spend a ton of their time obsessively monitoring the nuclear situation. A bunch of very talented ops people will work incredibly hard to get reliable fast internet in remote parts of Northern California or northern Britain. There won't be much EAs not already in nuclear policy and national security can do, but there'll be a lot of discussion and a lot of people trying to get up to speed on the situation/feeling a lot of need to know what's going on constantly. The stuff we do is important, and much less of it will get done. 
It will take a long time for it to become obvious if the situation is stable, but eventually people will mostly go back to cities (possibly leaving again if there are further destabilizing events). The recent Samotsvety forecast estimates that a person staying in London will lose 3-100 hours to nuclear risk in expectation (edit: which goes up by a factor of 6 in the case of actual tactical nuke use in Ukraine.) I think it is really easy for that person to waste more than 3-100 hours by being panicked, and possible to waste more than 20 - 600 hours on extreme response measures. And that's the life-hour costs of never fleein...
This week we are joined by Kelsey Piper to discuss effective altruism, its popularity, and whether it succeeds or fails as an institution. Recommendations: Reasons and Persons by Derek Parfit; In Shifra's Arms; The Precipice by Toby Ord
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open EA Global, published by Scott Alexander on September 1, 2022 on The Effective Altruism Forum. I think EA Global should be open access. No admissions process. Whoever wants to go can. I'm very grateful for the work that everyone does to put together EA Global. I know this would add much more work for them. I know it is easy for me, a person who doesn't do the work now and won't have to do the extra work, to say extra work should be done to make it bigger. But 1,500 people attended last EAG. Compare this to the 10,000 people at the last American Psychiatric Association conference, or the 13,000 at NeurIPS. EAG isn't small because we haven't discovered large-conference-holding technology. It's small as a design choice. When I talk to people involved, they say they want to project an exclusive atmosphere, or make sure that promising people can find and network with each other. I think this is a bad tradeoff. ...because it makes people upset This comment (seen on Kerry Vaughan's Twitter) hit me hard: A friend describes volunteering at EA Global for several years. Then one year they were told that not only was their help not needed, but they weren't impressive enough to be allowed admission at all. Then later something went wrong and the organizers begged them to come and help after all. I am not sure that they became less committed to EA because of the experience, but based on the look of delight in their eyes when they described rejecting the organizers' plea, it wouldn't surprise me if they did. Not everyone rejected from EAG feels vengeful. Some people feel miserable. This year I came across the Very Serious Guide To Surviving EAG FOMO: Part of me worries that, despite its name, it may not really be Very Serious... ...but you can learn a lot about what people are thinking by what they joke about, and I think a lot of EAs are sad because they can't go to EAG. ...because you can't identify promising people. In early 2020 Kelsey Piper and I gave a talk to an EA student group. Most of the people there were young overachievers who had their entire lives planned out, people working on optimizing which research labs they would intern at in which order throughout their early 20s. They expected us to have useful tips on how to do this. Meanwhile, in my early 20s, I was making $20,000/year as an intro-level English teacher at a Japanese conglomerate that went bankrupt six months after I joined. In her early 20s, Kelsey was taking leave from college for mental health reasons and babysitting her friends' kid for room and board. If either of us had been in the student group, we would have been the least promising of the lot. And here we were, being asked to advise! I mumbled something about optionality or something, but the real lesson I took away from this is that I don't trust anyone to identify promising people reliably. ...because people will refuse to apply out of scrupulosity. I do this. I'm not a very good conference attendee. Faced with the challenge of getting up early on a Saturday to go to San Francisco, I drag my feet and show up an hour late. After a few talks and meetings, I'm exhausted and go home early. I'm unlikely to change my career based on anything anyone says at EA Global, and I don't have any special wisdom that would convince other people to change theirs. 
So when I consider applying to EAG, I ask myself whether it's worth taking up a slot that would otherwise go to some bright-eyed college student who has been dreaming of going to EAG for years and is going to consider it the highlight of their life. Then I realize I can't justify bumping that college student, and don't apply. I used to think I was the only person who felt this way. But a few weeks ago, I brought it up in a group of five people, and two of them said they had also stopped applying to EA...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Dedicates, published by ozymandias on June 23, 2022 on The Effective Altruism Forum.

I've noticed a persistent tension between competing needs in effective altruist communities. On one hand, many people want permission not to only value effective altruism. They care about doing good in the world, but they also care about other things: their community, their friendships, their children, a hobby they feel passionate about like art or sports or RPGs or programming, a cause that's personal to them like free speech or cancer, even just spending time vegging out and watching TV. So they emphasize work/life balance and that effective altruism doesn't have to be your only life goal.

On the other hand, some people do strive to only care about effective altruism. Of course, they still have hobbies and friendships and take time to rest; effective altruists are not ascetics. But ultimately everything they do is justified by the fact that it strengthens them to continue the work. The discourse about work/life balance can be very alienating to them. It can feel like the effective altruism community isn't honoring the significant personal sacrifices they're making to improve the world. In some cases, people feel like there's a certain crab bucket mentality (you should limit how much good you do so that other people don't feel bad), which is very toxic. Conversely, people who have work/life balance can feel threatened by people who only care about effective altruism. If those people exist, does that mean you have to be one? Are you evil, or a failure, or personally responsible for dozens of counterfactual deaths, because you care about more than one thing?

I propose that this conversation would be improved by naming the second group. I suggest calling them "EA dedicates." In thinking about EA dedicates, I was inspired by thinking about monks. Monks play an important role in religions with monks. They're very admirable people who do a lot of good. The religion wouldn't function without them. And most people are not supposed to be monks.

Why We Need Both Dedicates and Non-Dedicates

There are two reasons that the effective altruism movement should be open to people who aren't dedicates. First, people who care about more than one thing still do an enormous amount of good. Many of the best effective altruists aren't dedicates, such as journalist Kelsey Piper and CEA community liaison Julia Wise (as well as, of course, many people whose contributions don't succeed in making them EA famous). It would be a tremendous mistake to expel Kelsey Piper for insufficient devotion. Quite frankly, the bednets don't care if the person who buys them also donates to cancer research.

Second, most people caring about multiple things is good for the health of the effective altruist community. If the effective altruist community is totally wrongheaded, it's psychologically easier to admit if that doesn't mean losing literally everything you care about and have spent your life working for. There's a certain comfort in being able to say "at least I still have my kids" or "at least I still have my art." Similarly, the effective altruist movement is already quite insular. People who care about multiple things are more likely to have friends outside the community, and therefore get an outside reality check and views from outside the EA bubble.
(An EA dedicate could have outside-community friends and many of them do, but it certainly seems less common.) These are merely two of the ways that having a lot of non-dedicates makes the EA community more resilient. The advantages of being open to EA dedicates, conversely, are pretty obvious. In general, if you care about multiple things, you're going to split your time, energy, and resources across them and have less time, energy, and resources for any particular goal. If you're donatin...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why so little AI risk on rationalist-adjacent blogs?, published by Grant Demaree on June 13, 2022 on LessWrong.

I read a lot of rationalist-adjacents. Outside of LessWrong and ACX, I hardly ever see posts on AI risk. Tyler Cowen of Marginal Revolution writes that "it makes my head hurt," but hasn't engaged with the issue. Even Zvi spends very few posts on AI risk. This is surprising, and I wonder what to make of it. Why do the folks most exposed to MIRI-style arguments have so little to say about them? Here are a few possibilities:

Some of the writers disagree that AGI is a major near-term threat.
It's unusually hard to think and write about AI risk.
The best rationalist-adjacent writers don't feel like they have a deep enough understanding to write about AI risk.
There's not much demand for these posts, and LessWrong/Alignment Forum/ACX are already filling it. Even a great essay wouldn't be that popular.
Folks engaged in AI risk are a challenging audience. Eliezer might get mad at you.
When you write about AGI for a mainstream audience, you look weird. I don't think this is as true as it used to be, since Ezra Klein did it in the New York Times and Kelsey Piper in Vox.
Some of these writers are heavily specialized. The mathematicians want to write about pure math. The pharmacologists want to write about drug development. The historians want to argue that WWII strategic bombing was based on a false theory of popular support for the enemy regime, and present-day sanctions are making the same mistake.
Some of the writers are worried that they'll present the arguments badly, inoculating their readers against a better future argument.

What they wrote

I'll treat Scott Alexander's blogroll as the canonical list of rationalist-adjacent writers. I've grouped them by their stance on the following statement: Misaligned AGI is among the most important existential risks to humanity.

Explicitly agrees and provides original gears-level analysis (2)

Zvi Mowshowitz: "The default outcome, if we do not work hard and carefully now on AGI safety, is for AGI to wipe out all value in the universe." (Dec 2017) Zvi gives a detailed analysis here followed by his own model in response to the 2021 MIRI conversations.

Holden Karnofsky of OpenPhil and GiveWell: In his Most Important Century series (Jul 2021 to present), Holden explains AGI risk to mainstream audiences. Ezra Klein featured Holden's work in the New York Times. This series had a high impact on me, because Holden used to have specific and detailed objections to MIRI's arguments (2012). Ten years later, he's changed his mind.

Explicitly agrees (4)

Jacob Falkovich of Putanumonit: "Misaligned AI is an existential threat to humanity, and I will match $5,000 of your donations to prevent it." (Dec 2017) Jacob doesn't make the case himself, but he links to external sources.
Kelsey Piper of Vox: "the big overarching challenge is making sure that as our systems get more powerful, we design them so their goals are aligned with those of humans — that is, humanity doesn't construct scaled-up superintelligent AI that overwhelms human intentions and leads to catastrophe." (Apr 2021)

Steve Hsu of Information Processing: "I do think that the top 3 risks to humanity, in my view, are AI, bio, and unknown unknowns." (Apr 2020)

Alexey Guzey of Applied Divinity Studies: "AI risk seems to be about half of all possible existential risk." The above quote is from a May 2021 PDF, rather than a direct post. I can't find a frontpage post that makes the AI risk case directly.

Explicitly ambivalent (2)

Tyler Cowen of Marginal Revolution: "As for Rogue AI... For now I will just say that it makes my head hurt. It makes my head hurt because the topic is so complicated. I see nuclear war as the much greater large-scale risk, by far." (Feb 2022)

Julia Galef of Rationally Speaking: Julia int...
Resources for Ukraine and trans people and families in Texas:
“How You Can Help Ukrainians” by Kelsey Piper for Vox
Transgender Education Network of Texas
Trans Kids and Families of Texas
Thread of resources to make use of if you're a caregiver or educator in Texas

Get Oh, I Like That merch here!

In the last episode, we shared our top-level thoughts about choosing games to play and how to think about teaching them to others. This week we dive into our recommendations for exactly which games to play. We cover games you play solo, games that are fun to play with one other person, and games that are good for groups and social situations.

This episode was produced by Rachel and Sally and edited by Lucas Nguyen. Our logo was designed by Amber Seger (@rocketorca). Our theme music is by Tiny Music. MJ Brodie transcribed this episode. Follow us on Twitter @OhILikeThatPod.

Things we talked about:
“9 Things You Probably Don't Know About Daylight Saving Time” by Rachel for BuzzFeed
One-player tabletop roleplaying games like Thousand Year Old Vampire, The Wretched, Ironsworn, and Red Snow
The 1974 board game Anti-Monopoly
Animal Crossing Monopoly
“Monopoly Was Designed to Teach the 99% About Income Inequality” by Mary Pilon for Smithsonian Magazine
“How to Solve the New York Times Crossword” by Deb Amlen for the New York Times
SET
Tussie Mussie
PARKS
Trails: A Parks Game
Why is it called Mexican Train Dominoes?
Mastermind
Sushi Go!
Anomia
Hunt a Killer mystery subscription box
i'm sorry did you say street magic
Devotions: The Selected Poems of Mary Oliver
The Good Luck Girls by Charlotte Nicole Davis
Resources for Ukraine and trans people and families in Texas:
“How You Can Help Ukrainians” by Kelsey Piper for Vox
Transgender Education Network of Texas
Trans Kids and Families of Texas
Thread of resources to make use of if you're a caregiver or educator in Texas

Have you ever thought about how many games are available to us these days? Card games, board games, roleplaying games. Games of chance, games of strategy, games where you win by buying up all the property and then charging people to use it. This is the first installment of a two-part series about games and gaming. In this first episode, we talk about the art of getting people into games, the science of teaching complicated games to newbies, resources for finding games you want to play, and also hot tubs.

Get Oh, I Like That merch here!

This episode was produced by Rachel and Sally and edited by Lucas Nguyen. Our logo was designed by Amber Seger (@rocketorca). Our theme music is by Tiny Music. MJ Brodie transcribed this episode. Follow us on Twitter @OhILikeThatPod.

Things we talked about:
“Iceland's Water Cure” by Dan Kois for the New York Times
The tarot card The Tower
Hello Wordl, off-brand Wordle
Wordle spinoffs Quordle, Semantle, Poeltl, and Worldle
Shut Up & Sit Down's game picker
Geek & Sundry's Game the Game series
Two tweets that perfectly sum up the agony of teaching a game and being taught a new game
How to Teach Board Games Like a Pro
The Slate podcast Decoder Ring