Podcast appearances and mentions of Kelsey Piper

  • 37 podcasts
  • 96 episodes
  • 38m average duration
  • 1 new episode per month
  • Latest: Apr 2, 2025

POPULARITY

[Chart: popularity by year, 2017–2024]


Best podcasts about Kelsey Piper

Latest podcast episodes about Kelsey Piper

LessWrong Curated Podcast
“My ‘infohazards small working group’ Signal Chat may have encountered minor leaks” by Linch

LessWrong Curated Podcast

Apr 2, 2025 · 10:33


Remember: There is no such thing as a pink elephant. Recently, I was made aware that my “infohazards small working group” Signal chat, an informal coordination venue where we have frank discussions about infohazards and why it would be bad if specific hazards were leaked to the press or public, was accidentally shared with a deceitful and discredited so-called “journalist,” Kelsey Piper. She is not the first person to have been accidentally sent sensitive material from our group chat; however, she is the first to have threatened to go public about the leak. Needless to say, mistakes were made. We're still trying to figure out the source of this compromise to our secure chat group; however, we thought we should give the public a live update to get ahead of the story. For some context, the “infohazards small working group” is a casual discussion venue for the [...]

Outline:
(04:46) Top 10 PR Issues With the EA Movement (major)
(05:34) Accidental Filtration of Simple Sabotage Manual for Rebellious AIs (medium)
(08:25) Hidden Capabilities Evals Leaked In Advance to Bioterrorism Researchers and Leaders (minor)
(09:34) Conclusion

First published: April 2nd, 2025
Source: https://www.lesswrong.com/posts/xPEfrtK2jfQdbpq97/my-infohazards-small-working-group-signal-chat-may-have

Narrated by TYPE III AUDIO.

The Mind Killer
131 - 100% OpSec

The Mind Killer

Mar 26, 2025 · 73:15


Wes, Eneasz, and David keep the rationalist community informed about what's going on outside of the rationalist community. Support us on Substack!

News discussed:
• Kelsey Piper and friends dug deep into PEPFAR effectiveness
• The astronauts stranded on the ISS are back safe. And they were greeted back to Earth by a pod of dolphins!
• Federal government awards contract for our angry triangle to Boeing
• ICE rounded up a bunch of Venezuelans and flew them to prison in Venezuela
• Roberts responds in a public statement
• Vance is explicitly promoting defiance of the courts
• Chief Justice Roberts's 2024 year-end report calls it out directly
• Green Card Holder Who Has Been in US for 50 Years Detained for weeks
• Canadian held for two weeks in prison for document snafu
• EO Abolishing DoEd
• What the DoEd actually does:
• Bad news on the economy
• RFK is trying to remove cell phones from schools
• Gaza war back on
• Bombed the s**t out of the Houthis, Iran disavows, Trump doesn't buy it
• SecDef was discussing the plan on Signal with a reporter
• Russia & Ukraine agreed not to damage energy infrastructure
• Russia immediately attacked Ukrainian energy infrastructure
• Subscriber request: NYU was hacked
• Education Department investigating more than 50 colleges and universities over racial preferences
• Columbia University has agreed to a list of demands in return for negotiations to reinstate its $400m
• Columbia said it expelled, suspended, or temporarily revoked degrees from some students who seized a building during campus protests last spring
• Greenpeace must pay more than $660 million in damages

Happy News!
• Prisoners in solitary confinement given VR
• Taiwan signed a deal to invest $44 billion in Alaskan LNG infrastructure
• MIT engineers turn skin cells directly into neurons for cell therapy
• Mexican government found a swarm of giraffes in the state bordering El Paso
• BYD (China) unveiled a new battery and charging system that it says can provide 249 miles of range with a five-minute charge
• Idaho cuts regulations by 25% in four years by implementing Zero-Based Regulation
• Utah just passed a permitting reform law

Troop Deployment
• David - If you liked The Dragon's Banker, you should read Oathbreakers Anonymous
• Eneasz - How To Believe False Things
• Wes - Men and women are different, but not that different

Got something to say? Come chat with us on the Bayesian Conspiracy Discord or email us at themindkillerpodcast@gmail.com. Say something smart and we'll mention you on the next show!

Follow us!
RSS: http://feeds.feedburner.com/themindkiller
Google: https://play.google.com/music/listen#/ps/Iqs7r7t6cdxw465zdulvwikhekm
Pocket Casts: https://pca.st/vvcmifu6
Stitcher: https://www.stitcher.com/podcast/the-mind-killer
Apple:

Intro/outro music: On Sale by Golden Duck Orchestra

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit mindkiller.substack.com/subscribe

Just Asking Questions
Kelsey Piper: A Reasonable Approach to AI

Just Asking Questions

Mar 20, 2025 · 79:03


Vox's Kelsey Piper joins the show to discuss the drastic differences between the Biden and Trump administrations on AI—and what it all means for the future of humanity.

Complex Systems with Patrick McKenzie (patio11)

In this episode, Patrick McKenzie (patio11) and Erik Torenberg, investor and the media entrepreneur behind Turpentine, explore the evolving relationship between tech journalism and the industry it covers. They discuss how fictional portrayals of industries greatly inform how jobseekers understand those industries, and how the industries understand themselves. They cover the vacuum in quality tech reporting, the emergence of independent media companies, and industry heavyweights with massive followings. Patrick also brings up the phenomenon of Twitter/Slack crossovers, where coordinated social media action is used to influence internal company policies and public narratives. They examine how this dynamic, combined with economic pressures and ideological motivations, has led to increased groupthink in tech journalism. Expanding on themes covered in Kelsey Piper's episode of Complex Systems, this conversation makes more legible the important ways media affects tech, even though tech is arguably a more sophisticated industry – and why there is a need to move beyond simplistic narratives of "holding power accountable" to provide nuanced, informative coverage that helps people understand tech's impact on society.

Full transcript available here: https://www.complexsystemspodcast.com/episodes/tech-media-erik-torenberg

Sponsors: WorkOS | Check
Building an enterprise-ready SaaS app? WorkOS has got you covered with easy-to-integrate APIs for SAML, SCIM, and more. Start now at https://bit.ly/WorkOS-Turpentine-Network
Check is the leading payroll infrastructure provider and pioneer of embedded payroll. Check makes it easy for any SaaS platform to build a payroll business, and already powers 60+ popular platforms. Head to https://checkhq.com/complex and tell them patio11 sent you.

Links:
Bits About Money, "Fiction and Finance": https://www.bitsaboutmoney.com/archive/fiction-about-finance/
Byrne Hobart's essay on The Social Network: https://byrnehobart.medium.com/the-social-network-was-the-most-important-movie-of-all-time-9f91f66018d7
Kelsey Piper on Complex Systems: https://open.spotify.com/episode/33rHTZVowaq76tCTaKJfRB

Twitter: @patio11, @eriktorenberg

Timestamps:
(00:00) Intro
(00:27) Fiction and Finance: The power of narrative
(01:41) The Social Network's impact on career choices
(03:34) Cultural perceptions and entrepreneurship
(06:04) Media influence and tech industry perception
(11:01) The role of tech journalism
(14:15) Social media's impact on journalism
(19:39) Sponsors: WorkOS | Check
(21:54) The intersection of media and tech
(39:22) Public intellectualism in tech
(57:40) Wrap

Complex Systems is part of the Turpentine podcast network. Turpentine also has a social network for top founders and execs: https://www.turpentinenetwork.com/

The Nonlinear Library
LW - AI #75: Math is Easier by Zvi

The Nonlinear Library

Aug 1, 2024 · 114:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #75: Math is Easier, published by Zvi on August 1, 2024 on LessWrong.

Google DeepMind got a silver medal at the IMO, only one point short of the gold. That's really exciting. We continuously have people saying 'AI progress is stalling, it's all a bubble' and things like that, and I always find remarkable how little curiosity or patience such people are willing to exhibit. Meanwhile GPT-4o-Mini seems excellent, OpenAI is launching proper search integration, by far the best open weights model got released, we got an improved MidJourney 6.1, and that's all in the last two weeks. Whether or not GPT-5-level models get here in 2024, and whether or not it arrives on a given schedule, make no mistake. It's happening.

This week also had a lot of discourse and events around SB 1047 that I failed to avoid, resulting in not one but four sections devoted to it. Dan Hendrycks was baselessly attacked - by billionaires with massive conflicts of interest that they admit are driving their actions - as having a conflict of interest because he had advisor shares in an evals startup rather than having earned the millions he could have easily earned building AI capabilities. So Dan gave up those advisor shares, for no compensation, to remove all doubt. Timothy Lee gave us what is clearly the best skeptical take on SB 1047 so far. And Anthropic sent a 'support if amended' letter on the bill, with some curious details. This was all while we are on the cusp of the final opportunity for the bill to be revised - so my guess is I will soon have a post going over whatever the final version turns out to be and presenting closing arguments.

Meanwhile Sam Altman tried to reframe broken promises while writing a jingoistic op-ed in the Washington Post, but says he is going to do some good things too. And much more. Oh, and also AB 3211 unanimously passed the California assembly, and would effectively among other things ban all existing LLMs. I presume we're not crazy enough to let it pass, but I made a detailed analysis to help make sure of it.

Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. They're just not that into you.
4. Language Models Don't Offer Mundane Utility. Baba is you and deeply confused.
5. Math is Easier. Google DeepMind claims an IMO silver medal, mostly.
6. Llama Llama Any Good. The rankings are in as are a few use cases.
7. Search for the GPT. Alpha tests begin of SearchGPT, which is what you think it is.
8. Tech Company Will Use Your Data to Train Its AIs. Unless you opt out. Again.
9. Fun With Image Generation. MidJourney 6.1 is available.
10. Deepfaketown and Botpocalypse Soon. Supply rises to match existing demand.
11. The Art of the Jailbreak. A YouTube video that (for now) jailbreaks GPT-4o-voice.
12. Janus on the 405. High weirdness continues behind the scenes.
13. They Took Our Jobs. If that is even possible.
14. Get Involved. Akrose has listings, OpenPhil has a RFP, US AISI is hiring.
15. Introducing. A friend in venture capital is a friend indeed.
16. In Other AI News. Projections of when it's incrementally happening.
17. Quiet Speculations. Reports of OpenAI's imminent demise, except, um, no.
18. The Quest for Sane Regulations. Nick Whitaker has some remarkably good ideas.
19. Death and or Taxes. A little window into insane American anti-innovation policy.
20. SB 1047 (1). The ultimate answer to the baseless attacks on Dan Hendrycks.
21. SB 1047 (2). Timothy Lee analyzes current version of SB 1047, has concerns.
22. SB 1047 (3): Oh Anthropic. They wrote themselves an unexpected letter.
23. What Anthropic's Letter Actually Proposes. Number three may surprise you.
24. Open Weights Are Unsafe And Nothing Can Fix This. Who wants to ban what?
25. The Week in Audio. Vitalik Buterin, Kelsey Piper, Patrick McKenzie.
26. Rheto...

Complex Systems with Patrick McKenzie (patio11)
Reporting on tech with Kelsey Piper

Complex Systems with Patrick McKenzie (patio11)

Jul 25, 2024 · 66:40


Patrick McKenzie (aka @Patio11) is joined by Kelsey Piper, a journalist for Vox's Future Perfect. Kelsey recently reported on equity irregularities at OpenAI in May of 2024, leading to an improvement of their policies in this area. We discuss the social function of equity in the technology industry, why the tech industry and reporters have had a frosty relationship the last several years, and more.

Full transcript available here: https://www.complexsystemspodcast.com/episodes/reporting-tech-kelsey-piper/

Sponsor: This podcast is sponsored by Check, the leading payroll infrastructure provider and pioneer of embedded payroll. Check makes it easy for any SaaS platform to build a payroll business, and already powers 60+ popular platforms. Head to checkhq.com/complex and tell them patio11 sent you.

Links:
https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees
https://www.vox.com/authors/kelsey-piper
https://www.bitsaboutmoney.com/

Twitter: @patio11, @KelseyTuoc

Timestamps:
(00:00) Intro
(00:28) Kelsey Piper's journey into tech journalism
(01:34) Early reporting
(03:16) How Kelsey covers OpenAI
(05:27) Understanding equity in the tech industry
(11:29) Tender offers and employee equity
(20:00) Dangerous Professional: employee edition
(28:46) The frosty relationship between tech and media
(35:44) Editorial policies and tech reporting
(37:28) Media relations in the modern tech industry
(38:35) Historical media practices and PR strategies
(40:48) Challenges in modern journalism
(44:48) VaccinateCA
(56:12) Reflections on Effective Altruism and ethics
(01:03:52) The role of Twitter in modern coordination
(01:05:40) Final thoughts

Complex Systems is part of the Turpentine podcast network.

The Nonlinear Library
LW - Monthly Roundup #20: July 2024 by Zvi

The Nonlinear Library

Jul 24, 2024 · 59:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monthly Roundup #20: July 2024, published by Zvi on July 24, 2024 on LessWrong.

It is monthly roundup time. I invite readers who want to hang out and get lunch in NYC later this week to come on Thursday at Bhatti Indian Grill (27th and Lexington) at noon. I plan to cover the UBI study in its own post soon. I cover Nate Silver's evisceration of the 538 presidential election model, because we cover probabilistic modeling and prediction markets here, but excluding any AI discussions I will continue to do my best to stay out of the actual politics.

Bad News
Jeff Bezos' rocket company Blue Origin files comment suggesting SpaceX Starship launches be capped due to 'impact on local environment.' This is a rather shameful thing for them to be doing, and not for the first time.

Alexey Guzey reverses course, realizes at 26 that he was a naive idiot at 20 and finds everything he wrote cringe and everything he did incompetent and Obama was too young. Except, no? None of that? Young Alexey did indeed, as he notes, successfully fund a bunch of science and inspire good thoughts and he stands by most of his work. Alas, now he is insufficiently confident to keep doing it and is in his words 'terrified of old people.' I think Alexey's success came exactly because he saw people acting stupid and crazy and systems not working and did not then think 'oh these old people must have their reasons,' he instead said that's stupid and crazy. Or he didn't even notice that things were so stupid and crazy and tried to just… do stuff. When I look back on the things I did when I was young and foolish and did not know any better, yeah, some huge mistakes, but also tons that would never have worked if I had known better. Also, frankly, Alexey is failing to understand (as he is still only 26) how much cognitive and physical decline hits you, and how early. Your experience and wisdom and increased efficiency is fighting your decreasing clock speed and endurance and physical strength and an increasing set of problems. I could not, back then, have done what I am doing now. But I also could not, now, do what I did then, even if I lacked my current responsibilities. For example, by the end of the first day of a Magic tournament I am now completely wiped.

Google short urls are going to stop working. Patrick McKenzie suggests prediction markets on whether various Google services will survive. I'd do it if I was less lazy.

Silver Bullet
This is moot in some ways now that Biden has dropped out, but being wrong on the internet is always relevant when it impacts our epistemics and future models. Nate Silver, who now writes Silver Bulletin and runs what used to be the old actually good 538 model, eviscerates the new 538 election model. The 'new 538' model had Biden projected to do better in Wisconsin and Ohio than either the fundamentals or his polls, which makes zero sense. It places very little weight on polls, which makes no sense. It has moved towards Biden recently, which makes even less sense. Texas is their third most likely tipping point state, it happens 9.8% of the time, wait what? At best, Kelsey Piper's description here is accurate.

Kelsey Piper: Nate Silver is slightly too polite to say it but my takeaway from his thoughtful post is that the 538 model is not usefully distinguishable from a rock with "incumbents win reelection more often than not" painted on it.

Gil: worse, I think Elliott's modelling approach is probably something like max_(dem_chance) [incumbency advantage, polls, various other approaches]. Elliott's model in 2020 was more bullish on Biden's chances than Nate and in that case Trump was the incumbent and down in the polls.

Nate Silver (on Twitter): Sure, the Titanic might seem like it's capsizing, but what you don't understand is that the White Star Line has an extremely good track re...

The Nonlinear Library
EA - Why so many "racists" at Manifest? by Austin

The Nonlinear Library

Jun 18, 2024 · 9:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why so many "racists" at Manifest?, published by Austin on June 18, 2024 on The Effective Altruism Forum.

Manifest 2024 is a festival that we organized last weekend in Berkeley. By most accounts, it was a great success. On our feedback form, the average response to "would you recommend to a friend" was a 9.0/10. Reviewers said nice things like "one of the best weekends of my life" and "dinners and meetings and conversations with people building local cultures so achingly beautiful they feel almost like dreams" and "I've always found tribalism mysterious, but perhaps that was just because I hadn't yet found my tribe."

[Image: Arnold Brooks running a session on Aristotle's Metaphysics. More photos of Manifest here.]

However, a recent post on The Guardian and review on the EA Forum highlight an uncomfortable fact: we invited a handful of controversial speakers to Manifest, whom these authors call out as "racist". Why did we invite these folks?

First: our sessions and guests were mostly not controversial - despite what you may have heard
Here's the schedule for Manifest on Saturday: (The largest & most prominent talks are on the left. Full schedule here.)

And here's the full list of the 57 speakers we featured on our website: Nate Silver, Luana Lopes Lara, Robin Hanson, Scott Alexander, Niraek Jain-sharma, Byrne Hobart, Aella, Dwarkesh Patel, Patrick McKenzie, Chris Best, Ben Mann, Eliezer Yudkowsky, Cate Hall, Paul Gu, John Phillips, Allison Duettmann, Dan Schwarz, Alex Gajewski, Katja Grace, Kelsey Piper, Steve Hsu, Agnes Callard, Joe Carlsmith, Daniel Reeves, Misha Glouberman, Ajeya Cotra, Clara Collier, Samo Burja, Stephen Grugett, James Grugett, Javier Prieto, Simone Collins, Malcolm Collins, Jay Baxter, Tracing Woodgrains, Razib Khan, Max Tabarrok, Brian Chau, Gene Smith, Gavriel Kleinwaks, Niko McCarty, Xander Balwit, Jeremiah Johnson, Ozzie Gooen, Danny Halawi, Regan Arntz-Gray, Sarah Constantin, Frank Lantz, Will Jarvis, Stuart Buck, Jonathan Anomaly, Evan Miyazono, Rob Miles, Richard Hanania, Nate Soares, Holly Elmore, Josh Morrison.

Judge for yourself; I hope this gives a flavor of what Manifest was actually like. Our sessions and guests spanned a wide range of topics: prediction markets and forecasting, of course; but also finance, technology, philosophy, AI, video games, politics, journalism and more. We deliberately invited a wide range of speakers with expertise outside of prediction markets; one of the goals of Manifest is to increase adoption of prediction markets via cross-pollination.

Okay, but there sure seemed to be a lot of controversial ones…
I was the one who invited the majority (~40/60) of Manifest's special guests; if you want to get mad at someone, get mad at me, not Rachel or Saul or Lighthaven; certainly not the other guests and attendees of Manifest. My criteria for inviting a speaker or special guest was roughly, "this person is notable, has something interesting to share, would enjoy Manifest, and many of our attendees would enjoy hearing from them". Specifically:

Richard Hanania - I appreciate Hanania's support of prediction markets, including partnering with Manifold to run a forecasting competition on serious geopolitical topics and writing to the CFTC in defense of Kalshi. (In response to backlash last year, I wrote a post on my decision to invite Hanania, specifically)

Simone and Malcolm Collins - I've enjoyed their Pragmatist's Guide series, which goes deep into topics like dating, governance, and religion. I think the world would be better with more kids in it, and thus support pronatalism. I also find the two of them to be incredibly energetic and engaging speakers IRL.

Jonathan Anomaly - I attended a talk Dr. Anomaly gave about the state-of-the-art on polygenic embryonic screening. I was very impressed that something long-considered scien...

The Nonlinear Library
LW - MIRI's June 2024 Newsletter by Harlan

The Nonlinear Library

Jun 15, 2024 · 4:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's June 2024 Newsletter, published by Harlan on June 15, 2024 on LessWrong.

MIRI updates
MIRI Communications Manager Gretta Duleba explains MIRI's current communications strategy. We hope to clearly communicate to policymakers and the general public why there's an urgent need to shut down frontier AI development, and make the case for installing an "off-switch". This will not be easy, and there is a lot of work to be done. Some projects we're currently exploring include a new website, a book, and an online reference resource.

Rob Bensinger argues, contra Leopold Aschenbrenner, that the US government should not race to develop artificial superintelligence. "If anyone builds it, everyone dies." Instead, Rob outlines a proposal for the US to spearhead an international alliance to halt progress toward the technology.

At the end of June, the Agent Foundations team, including Scott Garrabrant and others, will be parting ways with MIRI to continue their work as independent researchers. The team was originally set up and "sponsored" by Nate Soares and Eliezer Yudkowsky. However, as AI capabilities have progressed rapidly in recent years, Nate and Eliezer have become increasingly pessimistic about this type of work yielding significant results within the relevant timeframes. Consequently, they have shifted their focus to other priorities. Senior MIRI leadership explored various alternatives, including reorienting the Agent Foundations team's focus and transitioning them to an independent group under MIRI fiscal sponsorship with restricted funding, similar to AI Impacts. Ultimately, however, we decided that parting ways made the most sense. The Agent Foundations team has produced some stellar work over the years, and made a true attempt to tackle one of the most crucial challenges humanity faces today. We are deeply grateful for their many years of service and collaboration at MIRI, and we wish them the very best in their future endeavors.

The Technical Governance Team responded to NIST's request for comments on draft documents related to the AI Risk Management Framework. The team also sent comments in response to the "Framework for Mitigating AI Risks" put forward by U.S. Senators Mitt Romney (R-UT), Jack Reed (D-RI), Jerry Moran (R-KS), and Angus King (I-ME).

Brittany Ferrero has joined MIRI's operations team. Previously, she worked on projects such as the Embassy Network and Open Lunar Foundation. We're excited to have her help to execute on our mission.

News and links
AI alignment researcher Paul Christiano was appointed as head of AI safety at the US AI Safety Institute. Last fall, Christiano published some of his thoughts about AI regulation as well as responsible scaling policies.

The Superalignment team at OpenAI has been disbanded following the departure of its co-leaders Ilya Sutskever and Jan Leike. The team was launched last year to try to solve the AI alignment problem in four years. However, Leike says that the team struggled to get the compute it needed and that "safety culture and processes have taken a backseat to shiny products" at OpenAI. This seems extremely concerning from the perspective of evaluating OpenAI's seriousness when it comes to safety and robustness work, particularly given that a similar OpenAI exodus occurred in 2020 in the wake of concerns about OpenAI's commitment to solving the alignment problem.

Vox's Kelsey Piper reports that employees who left OpenAI were subject to an extremely restrictive NDA indefinitely preventing them from criticizing the company (or admitting that they were under an NDA), under threat of losing their vested equity in the company. OpenAI executives have since contacted former employees to say that they will not enforce the NDAs. Rob Bensinger comments on these developments here, strongly criticizing OpenAI for...

The Nonlinear Library
LW - OpenAI: Fallout by Zvi

The Nonlinear Library

May 28, 2024 · 54:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI: Fallout, published by Zvi on May 28, 2024 on LessWrong.

Previously: OpenAI: Exodus (contains links at top to earlier episodes), Do Not Mess With Scarlett Johansson

We have learned more since last week. It's worse than we knew. How much worse? In which ways? With what exceptions? That's what this post is about.

The Story So Far
For years, employees who left OpenAI consistently had their vested equity explicitly threatened with confiscation and the lack of ability to sell it, and were given short timelines to sign documents or else. Those documents contained highly aggressive NDA and non-disparagement (and non-interference) clauses, including the NDA preventing anyone from revealing these clauses. No one knew about this until recently, because until Daniel Kokotajlo everyone signed, and then they could not talk about it. Then Daniel refused to sign, Kelsey Piper started reporting, and a lot came out. Here is Altman's statement from May 18, with its new community note. Evidence strongly suggests the above post was, shall we say, 'not consistently candid.' The linked article includes a document dump and other revelations, which I cover.

Then there are the other recent matters. Ilya Sutskever and Jan Leike, the top two safety researchers at OpenAI, resigned, part of an ongoing pattern of top safety researchers leaving OpenAI. The team they led, Superalignment, had been publicly promised 20% of secured compute going forward, but that commitment was not honored. Jan Leike expressed concerns that OpenAI was not on track to be ready for even the next generation of models' needs for safety. OpenAI created the Sky voice for GPT-4o, which evoked consistent reactions that it sounded like Scarlett Johansson, who voiced the AI in the movie Her, Altman's favorite movie. Altman asked her twice to lend her voice to ChatGPT. Altman tweeted 'her.' Half the articles about GPT-4o mentioned Her as a model. OpenAI executives continue to claim that this was all a coincidence, but have taken down the Sky voice. (Also six months ago the board tried to fire Sam Altman and failed, and all that.)

A Note on Documents from OpenAI
The source for the documents from OpenAI that are discussed here, and the communications between OpenAI and its employees and ex-employees, is Kelsey Piper in Vox, unless otherwise stated. She went above and beyond, and shares screenshots of the documents. For superior readability and searchability, I have converted those images to text.

Some Good News But There is a Catch
OpenAI has indeed made a large positive step. They say they are releasing former employees from their nondisparagement agreements and promising not to cancel vested equity under any circumstances.

Kelsey Piper: There are some positive signs that change is happening at OpenAI. The company told me, "We are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations."

Bloomberg confirms that OpenAI has promised not to cancel vested equity under any circumstances, and to release all employees from one-directional non-disparagement agreements. And we have this confirmation from Andrew Carr.

Andrew Carr: I guess that settles that.

Tanner Lund: Is this legally binding?

Andrew Carr: I notice they are also including the non-solicitation provisions as not enforced.

(Note that certain key people, like Dario Amodei, plausibly negotiated two-way agreements, which would mean theirs would still apply. I would encourage anyone in that category who is now free of the clause, even if they have no desire to disparage OpenAI, to simply say 'I am under no legal obligation not to disparage OpenAI.')

These actions by OpenAI are helpful. They are necessary. They are no...

LessWrong Curated Podcast
“OpenAI: Fallout” by Zvi

LessWrong Curated Podcast

May 28, 2024 · 66:19


Previously: OpenAI: Exodus (contains links at top to earlier episodes), Do Not Mess With Scarlett Johansson

We have learned more since last week. It's worse than we knew. How much worse? In which ways? With what exceptions? That's what this post is about.

The Story So Far
For years, employees who left OpenAI consistently had their vested equity explicitly threatened with confiscation and the lack of ability to sell it, and were given short timelines to sign documents or else. Those documents contained highly aggressive NDA and non-disparagement (and non-interference) clauses, including the NDA preventing anyone from revealing these clauses. No one knew about this until recently, because until Daniel Kokotajlo everyone signed, and then they could not talk about it. Then Daniel refused to sign, Kelsey Piper started reporting, and a lot came out. Here is Altman's statement from [...]

First published: May 28th, 2024
Source: https://www.lesswrong.com/posts/YwhgHwjaBDmjgswqZ/openai-fallout

Narrated by TYPE III AUDIO.

“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies

Nolan Church breaks down what is happening over at OpenAI and what we know about three developing stories involving exec departures, equity, and transparency. Kelli is on vacation this week.

He touches on:
• The departure of Ilya Sutskever, co-founder and chief scientist, along with Jan Leike
• The journalistic investigation by Vox's Kelsey Piper, who broke the story of OpenAI's practice of requiring departing employees to sign life-long non-disclosure and non-disparagement agreements, threatening the loss of vested equity for breach
• The incident involving Scarlett Johansson being approached to voice OpenAI's AI, without her consent, raising questions about ethical use of voice likeness in AI technologies

Nolan debriefs with lessons for HR leaders and company founders.

HR Heretics is a podcast from Turpentine.

SPONSOR:
Attio is the next generation of CRM. It's powerful, flexible, and easily configures to the unique way your startup runs, whatever your go-to-market motion. The next era deserves a better CRM. Join ElevenLabs, Replicate, Modal, and more at https://bit.ly/AttioHRHeretics

KEEP UP WITH NOLAN + KELLI ON LINKEDIN
Nolan: https://www.linkedin.com/in/nolan-church/
Kelli: https://www.linkedin.com/in/kellidragovich

TIMESTAMPS:
(00:00) Intro
(00:47) The Big Departures: OpenAI's Executive Shake-Up
(03:31) OpenAI's Controversial NDAs
(05:12) Sponsors: Attio
(08:21) The Scarlett Johansson Voice Saga
(10:04) The Future of OpenAI and HR Lessons
(10:50) Wrap

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com

The Nonlinear Library
EA - Pandemic apathy by Matthew Rendall

The Nonlinear Library

May 5, 2024 · 2:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pandemic apathy, published by Matthew Rendall on May 5, 2024 on The Effective Altruism Forum. An article in Vox yesterday by Kelsey Piper notes that after suffering through the whole Covid pandemic, policymakers and publics now seem remarkably unconcerned to prevent another one. 'Repeated efforts to get a serious pandemic prevention program through [the US] Congress', she writes, 'have fizzled.' Writing from Britain, I'm not aware of more serious efforts to prevent a repetition over here. That seems surprising. Both governments and citizens notoriously neglect many catastrophic threats, sometimes because they've never yet materialised (thermonuclear war; misaligned superintelligence), sometimes because they creep up on us slowly (climate change, biodiversity loss), sometimes because it's been a while since the last disaster and memories fade. After an earthquake or a hundred-year flood, more people take out insurance against them; over time, memories fade and take-up declines. None of these mechanisms plausibly explains apathy toward pandemic risk. If anything, you'd think people would exaggerate the threat, as they did the threat of terrorism after 9/11. It's recent and - in contrast to 9/11 - it's something we all personally experienced. What's going on? Cass Sunstein argues that 9/11 prompted a stronger response than global heating in part because people could put a face on a specific villain - Osama bin Laden. Sunstein maintains that this heightens not only outrage but also fear. Covid is like global heating rather than al-Qaeda in this respect. While that could be part of it, my hunch is that at least two other factors are playing a role. First, tracking down and killing terrorists was exciting. Improving ventilation systems or monitoring disease transmission between farmworkers and cows is not. It's a bit like trying to get six-year olds interested in patent infringements. This prompts the worry that we might fail to address some threats because their solutions are too boring to think about. Second, maybe Covid is a bit like Brexit. That issue dominated British politics for so long that even those of us who would like to see Britain rejoin the EU are rather loth to reopen it. Similarly, most of us would rather think about anything else than the pandemic. Unfortunately, that's a recipe for repeating it. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Language models surprised us by Ajeya

The Nonlinear Library

Aug 30, 2023 · 11:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Language models surprised us, published by Ajeya on August 30, 2023 on The Effective Altruism Forum.

Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

Most experts were surprised by progress in language models in 2022 and 2023. There may be more surprises ahead, so experts should register their forecasts now about 2024 and 2025. Kelsey Piper co-drafted this post. Thanks also to Isabel Juniewicz for research help.

If you read media coverage of ChatGPT - which called it 'breathtaking', 'dazzling', 'astounding' - you'd get the sense that large language models (LLMs) took the world completely by surprise. Is that impression accurate? Actually, yes. There are a few different ways to attempt to measure the question "Were experts surprised by the pace of LLM progress?" but they broadly point to the same answer: ML researchers, superforecasters, and most others were all surprised by the progress in large language models in 2022 and 2023.

Competitions to forecast difficult ML benchmarks
ML benchmarks are sets of problems which can be objectively graded, allowing relatively precise comparison across different models. We have data from forecasting competitions done in 2021 and 2022 on two of the most comprehensive and difficult ML benchmarks: the MMLU benchmark and the MATH benchmark. First, what are these benchmarks?

The MMLU dataset consists of multiple choice questions in a variety of subjects collected from sources like GRE practice tests and AP tests. It was intended to test subject matter knowledge in a wide variety of professional domains. MMLU questions are legitimately quite difficult: the average person would probably struggle to solve them. At the time of its introduction in September 2020, most models only performed close to random chance on MMLU (~25%), while GPT-3 performed significantly better than chance at 44%. The benchmark was designed to be harder than any that had come before it, and the authors described their motivation as closing the gap between performance on benchmarks and "true language understanding": Natural Language Processing (NLP) models have achieved superhuman performance on a number of recently proposed benchmarks. However, these models are still well below human level performance for language understanding as a whole, suggesting a disconnect between our benchmarks and the actual capabilities of these models.

Meanwhile, the MATH dataset consists of free-response questions taken from math contests aimed at the best high school math students in the country. Most college-educated adults would get well under half of these problems right (the authors used computer science undergraduates as human subjects, and their performance ranged from 40% to 90%). At the time of its introduction in January 2021, the best model achieved only about ~7% accuracy on MATH. The authors say: We find that accuracy remains low even for the best models. Furthermore, unlike for most other text-based datasets, we find that accuracy is increasing very slowly with model size. If trends continue, then we will need algorithmic improvements, rather than just scale, to make substantial progress on MATH.
So, these are both hard benchmarks - the problems are difficult for humans, the best models got low performance when the benchmarks were introduced, and the authors seemed to imply it would take a while for performance to get really good. In mid-2021, ML professor Jacob Steinhardt ran a contest with superforecasters at Hypermind to predict progress on MATH and MMLU. Superforecasters massively undershot reality in both cases. They predicted that performance on MMLU would improve moderately from 44% in 2021 to 57% by June 2022. The actual performance was 68%, which s...

The Nonlinear Library
EA - The costs of caution by Kelsey Piper

The Nonlinear Library

Play Episode Listen Later May 1, 2023 5:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The costs of caution, published by Kelsey Piper on May 1, 2023 on The Effective Altruism Forum. Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post. If you thought we might be able to cure cancer in 2200, then I think you ought to expect there's a good chance we can do it within years of the advent of AI systems that can do the research work humans can do. Josh Cason on Twitter raised an objection to recent calls for a moratorium on AI development: April 2, 2023 Or raise your hand if you or someone you love has a terminal illness, believes Ai has a chance at accelerating medical work exponentially, and doesn't have til Christmas, to wait on your make believe moratorium. Have a heart man ❤️ I've said that I think we should ideally move a lot slower on developing powerful AI systems. I still believe that. But I think Josh's objection is important and deserves a full airing. Approximately 150,000 people die worldwide every day. Nearly all of those deaths are, in some sense, preventable, with sufficiently advanced medical technology. Every year, five million families bury a child dead before their fifth birthday. Hundreds of millions of people live in extreme poverty. Billions more have far too little money to achieve their dreams and grow into their full potential. Tens of billions of animals are tortured on factory farms. Scientific research and economic progress could make an enormous difference to all these problems. Medical research could cure diseases. Economic progress could make food, shelter, medicine, entertainment and luxury goods accessible to people who can't afford it today. Progress in meat alternatives could allow us to shut down factory farms. There are tens of thousands of scientists, engineers, and policymakers working on fixing these kinds of problems — working on developing vaccines and antivirals, understanding and arresting aging, treating cancer, building cheaper and cleaner energy sources, developing better crops and homes and forms of transportation. But there are only so many people working on each problem. In each field, there are dozens of useful, interesting subproblems that no one is working on, because there aren't enough people to do the work. If we could train AI systems powerful enough to automate everything these scientists and engineers do, they could help. As Tom discussed in a previous post, once we develop AI that does AI research as well as a human expert, it might not be long before we have AI that is way beyond human experts in all domains. That is, AI which is way better than the best humans at all aspects of medical research: thinking of new ideas, designing experiments to test those ideas, building new technologies, and navigating bureaucracies. This means that rather than tens of thousands of top biomedical researchers, we could have hundreds of millions of significantly superhuman biomedical researchers.[1] That's more than a thousand times as much effort going into tackling humanity's biggest killers. If you thought we might be able to cure cancer in 2200, then I think you ought to expect there's a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.[2] All this may be a massive underestimate. 
This envisions a world that's pretty much like ours except that extraordinary talent is no longer scarce. But that feels, in some senses, like thinking about the advent of electricity purely in terms of ‘torchlight will no longer be scarce'. Electricity did make it very cheap to light our homes at night. But it also enabled vacuum cleaners, washing machines, cars, smartphones, airplanes, video recording, Twitter — entirely new things, not just cheaper access to thi...

Humans of Martech
68: How fast could AI change or replace marketing jobs?

Humans of Martech

Play Episode Listen Later Apr 17, 2023 66:37


What's up JT, good to chat again. When you aren't podcasting or consulting, what are you reading or listening to these days?

Yeah I've been BUSY. Bobiverse books, of course, but also lots of Mario with my kids – haha, my downtime is totally spent on guilty pleasures.

Haha yeah you had a head start on Bobiverse but I overlapped you… that's probably going to change soon for me… I don't think I've announced this on the cast yet, but my wife and I are on baby watch, first born arriving at any second now, which is why we need to record a few episodes haha.

I've actually been getting back into podcasts lately. Maybe I'll plug a few of my favorites ahead of our next episodes. I've really been digging Making Sense of Martech lately. Juan Mendoza is the guy behind the podcast, he's a friend of the show and he's been doubling down on it, pumping out weekly episodes. If you want to go deep on some technical topics, in episode 37 he had the CEO of Hightouch Data on, and they debate the merits of reverse ETL and really unpack CDPs. Check it out.

In the non-marketing podcast world I've been taking a dive into the world of AI. No, not fluffy "my top 10 ChatGPT prompts and buy my course" type of content, way darker shit, like will marketing be replaced by AI in 10 or 20 years… sooner? My buddy Alex recommended The Ezra Klein Show. The episode is titled Freaked Out? We Really Can Prepare for A.I. On the show he has Kelsey Piper, a senior writer at Vox. She basically spends her time writing and being ahead of the curve covering advanced A.I.

In that episode she says something like: "The AI community believes that we are 5-10 years away from systems that can do any job you can do remotely. Anything you can do on your computer."

Recently Goldman Sachs released a report saying AI could replace the equivalent of 300 million jobs. A day later Elon Musk, Andrew Yang, Wozniak and several other tech leaders wrote an open letter urging a pause in AI development, citing profound risks. So I went down a rabbit hole and it really prompted the next 4 episodes:
- How fast could AI change or replace marketing jobs?
- How marketers can stay informed and become AI fluent
- Navigating through AI in your marketing career
- Find the top AI marketing tools and filter out the noise

So basically:
1. How soon and how significantly will this impact my job?
2. How do I keep up with changes?
3. Is it possible to adapt? How can I future-proof myself?
4. How can I start right freaking now?!?

Today we're going to be starting with setting the scene and covering how fast shit is changing right now. Here are some of the topics for this first episode:
- AI isn't new, especially for enterprise companies with lots of data, but unlocking some of the potential for startups is going to be huge
- Will all these advancements just make marketers better and more efficient, or will it actually push founders to go to market without a marketer?
- Marketing will have massive changes because we primarily rely on the ability to understand and apply existing rules and processes
- What does ChatGPT have to say about all this?
- What if AI is one day actually able to replicate human creativity and emotional intelligence?
- We'll talk about potential mass unemployment but the greater likelihood of new job opportunities
- How fast AI has disrupted other jobs already
- How AI might simply only ever replace the shitty parts of marketing

Here's today's main takeaway: It's not like our jobs are gonna vanish overnight, but the shift is happening faster than many of us realize.
AI's no longer just a loosely backed buzzword; it's doing things today that we used to think were impossible. So, as marketers, we've gotta take this tech seriously. Instead of asking if AI's gonna replace our roles in marketing, we should be talking about how quickly it could happen and what it'll look like if it does.

A bunch of really smart marketers (and non-marketers) out there are saying we need to hit the panic button. They're predicting that in just 5 to 10 years, we'll see a massive change affecting all sorts of remote jobs. Times are wild right now. So, fellow humans of martech, let's keep our eyes on the future and continuously evolve and adapt.

JT, I don't want this episode to be fear mongering… I'd actually love to chat with people that are way smarter than us about AI and get both sides of the coin: those who believe AI could have a fundamental impact on marketing jobs and that AI is as important a paradigm shift as the Internet was, people like Dharmesh Shah and Scott Brinker, and those who believe it will never completely happen and are still on the AI-skeptic side of things, like Rand Fishkin.

I think it's OK to be a bit uncertain or even afraid of what the future may hold with this new technology. As humans, we face an interesting dilemma -- we are capable of using and creating technology that we don't fully comprehend ourselves. Our society is built on layers of abstractions -- you don't need to know how water purification or plumbing works to turn on your tap and get a glass of water. My deepest fear is not that we adopt and use these technologies -- it's that we do so without considering the cost. The only thing worse than being afraid is being unprepared. I think marketers can benefit immensely from a boom in AI tech -- and that easily could extend to basically any other human discipline. Truth is that we have to deal with the facts on the ground. I think there are a lot of smart people to consider following to get different takes on the potential impact. We'll load the show notes with links so you can check out our research.

AI in marketing has been around for a while

We're not just waking up to AI for the first time lol, we've obviously talked a lot about it on the cast and have been playing with AI and automation tools for a while, right? ChatGPT is my big one – really love it as a prompting tool to help me round out topics; I've used it for a personal coding project and I'm pretty stoked with what it can produce. But even before GPT, as marketing automation admins, we've actually been playing with ML features… maybe not considered AI by everyone, but things like:
- Send time optimization
- Automated lead scoring
- Sentiment analysis tools
- And some cooler shit like propensity models

It's worth saying that many enterprise companies who have data scientists and a boatload of data are already doing amazing things with AI. I've seen this firsthand during my time at WordPress.com. Millions of users, billions of data points. We had an incredibly smart data team that built a UI that allowed marketers to build models predicting the likelihood that a user would do X or Y. We even had uplift models that allowed us to only offer discounts to users who were most likely to churn without a discount, but not offer them to users who would convert anyway.
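The episode stays at the conceptual level, but for readers who want to see the uplift idea concretely, here is a minimal two-model ("T-learner") sketch. Everything in it, the features, the synthetic data, and the targeting threshold, is an assumption made up for illustration; it is not code from WordPress.com or the show.

```python
# Rough sketch of the "two-model" (T-learner) uplift approach described above:
# score users by the difference between predicted conversion with a discount
# and predicted conversion without one, then only target users where the
# discount actually changes behavior. All data here is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical features: tenure in months and weekly product usage.
X = np.column_stack([rng.integers(1, 48, n), rng.poisson(3, n)])
got_discount = rng.integers(0, 2, n)            # historical treatment flag

# Synthetic ground truth: low-usage users convert more when discounted.
base = 0.2 + 0.05 * (X[:, 1] > 2)
lift = 0.15 * (X[:, 1] <= 2)
converted = rng.random(n) < base + lift * got_discount

treated, control = got_discount == 1, got_discount == 0
model_t = LogisticRegression().fit(X[treated], converted[treated])
model_c = LogisticRegression().fit(X[control], converted[control])

# Estimated uplift per user: extra conversion probability from the discount.
uplift = model_t.predict_proba(X)[:, 1] - model_c.predict_proba(X)[:, 1]

# Only offer the discount where the model thinks it changes the outcome.
target = uplift > 0.05
print(f"would target {target.mean():.0%} of users with the discount")
```

The design point worth noticing is that the score is the difference between two predicted probabilities: users who would convert anyway come out near zero uplift, so they never get offered the discount.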
Many enterprises are doing this, but the prereq is a lot of data, and the engineers to build the models.

Yeah, I haven't had the pleasure of working for an enterprise with anywhere near the amount of data required for ML applications, but there has been a change. Startups have a data team now even if there isn't a ton of data.

But what about for startups?

Right, so imagine a world where startups could do the ML applications described at enterprise companies without data scientists and without a ton of data. Using existing models like GPT-4 and basically everything available online as a dataset. But also in combination with all your valuable company data and tools (more on composability later). Imagine a world where as a founder, a non-technical founder, with AI tools, you can:
- design a prototype of your app
- build a website with a few instructional words
- build your own web app, including your backend
- write up a customized GTM strategy
- suggest growth tactics and even write message frameworks to help you generate users
- leverage data from systems built on massive datasets to build your own propensity models
- implement growth experiments

We're actually way closer to this future than you might think. And you'll be able to do this without a big marketing team, or a fancy marketing agency, and without a big team with expensive data engineers and data scientists.

To be honest with you, what you described is a bit of a dream -- not in the sense that it's not possible -- I think that you can do this today with some elbow grease. I think the interesting component is what role humans will play in this process. Are we directors nudging AI with prompts or additional data inputs? Is there creativity for us in that process? Even if a startup is spinning up the machine using AI, at some point a subject matter expert needs to get involved? Or is the future basically input an idea, output a fully baked product?

Today, absolutely; in a few years, probably… but in 5-10 years… maybe a lot less elbow grease than we're comfortable with?

Will all these advancements just make marketers better and more efficient, or will it actually push founders to go to market without a marketer?

This is the big point of contention: Will all these advancements just make marketers better and more efficient, or will it actually push founders to go to market without a marketer… The AI skeptics and downplayers are just focusing on the negative details. You've probably seen a lot of GPT downplayers who critique the current AI. Wow, it plagiarized Bob Dylan when I asked it to write like Bob Dylan. Wow, it got this date wrong. Wow, it got this citation wrong. We get it, it's not perfect, especially when you use it as a search engine or a fact checker. We can't forget that it's a text generator and a reasoning engine. It's not AGI yet. But it's already dramatically improved. In just a few months. Imagine in a few years or half a decade.

What's your take, JT? In our no-code tool episode, you argued that it helped remove the dependency on subject matter experts… Do you think AI tools have the same potential?

AI tools come preloaded with more instantly referenceable information than we could imagine. I saw a post on Reddit last year where a programmer taught ChatGPT an alternative syntax to HTML called HBML, using braces instead of tags. First, ChatGPT picked up on the language insanely quickly and was soon producing code independently.
This is wild – it speaks to the vast intelligence literally at our fingertips.

Yes, AI tools have, I think, virtually unlimited potential. I don't even think you have to remove the dependency on the subject matter expert to realize this potential – these tools can speed up experts to superhuman levels.

How close are we to AGI?

I'm obviously not pretending to be an expert here. I'm what you would call an enthusiast… Artificial general intelligence (AGI), aka strong AI or full AI, basically means being able to understand or learn any intellectual task that human beings can. OpenAI's whole mission is based on the premise that AGI will benefit all of humanity. It will "elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility."

When models will achieve AGI is the big question. Experts don't all align here. Some think we're still super far off and doubt we'll ever get there, but others don't. I don't think this is some far-off future. An analysis posted to arXiv (which is hosted by Cornell University) concluded that GPT-4 could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. The advancements from GPT-3.5 to 4 in just a few months are pretty mind blowing. You've probably seen one of those exam results charts, and while human exams aren't ideal benchmarks for LLMs, it's worth noting that GPT-3.5 scored around the 10th percentile on the bar exam while GPT-4 scored around the 90th, and on the LSAT the jump was from the 40th to the 88th percentile. Source: https://cdn.openai.com/papers/gpt-4.pdf

What are the implications for marketing?

Yeah, this is a pretty wild speed of innovation… we know this tech curve is exponential as well… I want us to center this on marketing though. You said you've gone deep into this rabbit hole lately, what are tech experts saying about the effects of AI on marketing? Just a few years ago many believed that blue-collar jobs would be replaced by robots way before white-collar jobs, let alone programmers… but GPT has changed many people's minds, right?

Andrew Yang believes that tech, finance, and marketing are likely to experience a swift implementation of AI-driven automation due to their strong focus on efficiency. On the other hand, sectors such as healthcare and education, which are heavily regulated, are expected to adopt this technology at a much slower pace. So it might not go down exactly the same across all industries of marketers. B2C might have to adapt faster because they have more users and data. Healthtech has so many privacy issues with HIPAA and PII… the speed of adoption here is likely to be way slower… I'm seeing this firsthand right now haha.

Known for his bangers on Twitter, Dare Obasanjo, lead PM at FB Metaverse, said that AI is likely to cause significant changes in white-collar employment because many of these jobs rely on knowledge rather than intelligence. The examples he uses are HR, law, marketing, and software development, because they don't necessarily require individuals to engage in original thinking for the majority of their work. Instead, they primarily rely on the ability to understand and apply existing rules and processes.

I agree. In many cases, you could argue that having an AI perform the task is far superior to a human. Just think of someone in customer support using online chat today. What if you could train your AI on your product, give it every doc ever written, and set it loose?
How do you compete with its ability to handle 100 chats at once? There is an elephant in the room around the ethics of using AI – but there's also uncertainty as to how this might actually play out. The global economy depends on the Ford model of the worker being able to purchase the goods they manufacture or create – unless we're heading to a Star Trek-like utopia, I think the rate of change will be limited by the economic implications.

Additional thoughts: I agree with his take as well, most marketers are crappy and just remix other people's stuff… but the marketers who work on strategy and elements that do require intelligence aren't part of this description. While creative and sound thinking is still necessary for about 20% of the work, the introduction of AI is more likely to augment and enhance these jobs rather than replace them entirely. In other words, AI is expected to act as a force multiplier, rather than rendering these jobs redundant. In these domains, the main challenge for AI will be the prompt engineer's skill in creating a suitable prompt that yields the intended output. This task will require some level of domain knowledge to execute effectively. As the saying goes, "garbage in, garbage out."

I'm sure you've been asking GPT what its opinion is on all this, right?

What does ChatGPT have to say about it

I've actually had many conversations with ChatGPT about this haha. I think it's inherently biased not to scare off users, so it's overly positive in its assessment. The consensus from ChatGPT is that it is unlikely that AI will replace human creativity and strategic thinking in marketing, citing specifically emotional intelligence that AI cannot replicate. Instead, AI will likely be used as a tool to augment and enhance human marketing efforts. So the keywords here were human creativity, strategic thinking, and emotional intelligence.

So the next question then is

EARadio
Closing session & Fireside Chat | Matthew Yglesias | EAG DC 22

EARadio

Play Episode Listen Later Apr 15, 2023 46:11


The final session of the conference includes some closing words from Eli Nathan, followed by a fireside chat with Matthew Yglesias and Kelsey Piper. Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

EARadio
Fireside chat: Why We Fight — The Roots of War and the Paths to Peace | Chris Blattman | EAG DC 22

EARadio

Play Episode Listen Later Apr 9, 2023 57:46


Chris Blattman and Kelsey Piper discuss a range of issues in this fireside chat, including Chris's new book, "Why We Fight". "Why We Fight" draws on decades of economics, political science, psychology, and real-world interventions to synthesize the root causes and remedies for war. From warring states to street gangs, ethnic groups and religious sects to political factions, there are common dynamics across all these levels. Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

The Nonlinear Library
EA - Is it time for a pause? by Kelsey Piper

The Nonlinear Library

Play Episode Listen Later Apr 6, 2023 7:46


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is it time for a pause?, published by Kelsey Piper on April 6, 2023 on The Effective Altruism Forum. Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post. The single most important thing we can do is to pause when the next model we train would be powerful enough to obsolete humans entirely. If it were up to me, I would slow down AI development starting now — and then later slow down even more. Many of the people building powerful AI systems think they'll stumble on an AI system that forever changes our world fairly soon — three years, five years. I think they're reasonably likely to be wrong about that, but I'm not sure they're wrong about that. If we give them fifteen or twenty years, I start to suspect that they are entirely right. And while I think that the enormous, terrifying challenges of making AI go well are very much solvable, it feels very possible, to me, that we won't solve them in time. It's hard to overstate how much we have to gain from getting this right. It's also hard to overstate how much we have to lose from getting it wrong. When I'm feeling optimistic about having grandchildren, I imagine that our grandchildren will look back in horror at how recklessly we endangered everyone in the world. And I'm much much more optimistic that humanity will figure this whole situation out in the end if we have twenty years than I am if we have five. There's all kinds of AI research being done — at labs, in academia, at nonprofits, and in a distributed fashion all across the internet — that's so diffuse and varied that it would be hard to ‘slow down' by fiat. But there's one kind of AI research — training much larger, much more powerful language models — that it might make sense to try to slow down. If we could agree to hold off on training ever more powerful new models, we might buy more time to do AI alignment research on the models we have. This extra research could make it less likely that misaligned AI eventually seizes control from humans. An open letter released on Wednesday, with signatures from Elon Musk[1], Apple co-founder Steve Wozniak, leading AI researcher Yoshua Bengio, and many other prominent figures, called for a six-month moratorium on training bigger, more dangerous ML models: We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. I tend to think that we are developing and releasing AI systems much faster and much more carelessly than is in our interests. And from talking to people in Silicon Valley and policymakers in DC, I think efforts to change that are rapidly gaining traction. “We should slow down AI capabilities progress” is a much more mainstream view than it was six months ago, and to me that seems like great news. In my ideal world, we absolutely would be pausing after the release of GPT-4. People have been speculating about the alignment problem for decades, but this moment is an obvious golden age for alignment work. 
We finally have models powerful enough to do useful empirical work on understanding them, changing their behavior, evaluating their capabilities, noticing when they're being deceptive or manipulative, and so on. There are so many open questions in alignment that I expect we can make a lot of progress on in five years, with the benefit of what we've learned from existing models. We'd be in a much better position if we could collectively slow down to give ourselves more time to do this work, and I hope we find a way to do that intelligently and effectively. As I've said above, I ...

EARadio
Fireside chat | Tyler Cowen | EAG DC 2022

EARadio

Play Episode Listen Later Mar 31, 2023 52:29


Tyler Cowen and Kelsey Piper cover a range of topics in this fireside chat session. Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

The Nonlinear Library
LW - New blog: Planned Obsolescence by Ajeya Cotra

The Nonlinear Library

Play Episode Listen Later Mar 27, 2023 1:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Planned Obsolescence, published by Ajeya Cotra on March 27, 2023 on LessWrong. Kelsey Piper and I just launched a new blog about AI futurism and AI alignment called Planned Obsolescence. If you're interested, you can check it out here. Both of us have thought a fair bit about what we see as the biggest challenges in technical work and in policy to make AI go well, but a lot of our thinking isn't written up, or is embedded in long technical reports. This is an effort to make our thinking more accessible. That means it's mostly aiming at a broader audience than LessWrong and the EA Forum, although some of you might still find some of the posts interesting. So far we have seven posts:
- What we're doing here
- "Aligned" shouldn't be a synonym for "good"
- Situational awareness
- Playing the training game
- Training AIs to help us align AIs
- Alignment researchers disagree a lot
- The ethics of AI red-teaming

Thanks to ilzolende for formatting these posts for publication. Each post has an accompanying audio version generated by a voice synthesis model trained on the author's voice using Descript Overdub. You can submit questions or comments to mailbox@planned-obsolescence.org. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - New blog: Planned Obsolescence by Ajeya Cotra

The Nonlinear Library: LessWrong

Play Episode Listen Later Mar 27, 2023 1:25


Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Planned Obsolescence, published by Ajeya Cotra on March 27, 2023 on LessWrong. Kelsey Piper and I just launched a new blog about AI futurism and AI alignment called Planned Obsolescence. If you're interested, you can check it out here. Both of us have thought a fair bit about what we see as the biggest challenges in technical work and in policy to make AI go well, but a lot of our thinking isn't written up, or is embedded in long technical reports. This is an effort to make our thinking more accessible. That means it's mostly aiming at a broader audience than LessWrong and the EA Forum, although some of you might still find some of the posts interesting. So far we have seven posts:
- What we're doing here
- "Aligned" shouldn't be a synonym for "good"
- Situational awareness
- Playing the training game
- Training AIs to help us align AIs
- Alignment researchers disagree a lot
- The ethics of AI red-teaming

Thanks to ilzolende for formatting these posts for publication. Each post has an accompanying audio version generated by a voice synthesis model trained on the author's voice using Descript Overdub. You can submit questions or comments to mailbox@planned-obsolescence.org. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Ezra Klein Show
Freaked Out? We Really Can Prepare for A.I.

The Ezra Klein Show

Play Episode Listen Later Mar 21, 2023 94:20


OpenAI last week released its most powerful language model yet: GPT-4, which vastly outperforms its predecessor, GPT-3.5, on a variety of tasks. GPT-4 can pass the bar exam in the 90th percentile, while the previous model struggled around the 10th percentile. GPT-4 scored in the 88th percentile on the LSAT, up from GPT-3.5's 40th percentile. And on the advanced sommelier theory test, GPT-4 performed better than 77 percent of test-takers. (Its predecessor hovered around 46 percent.) These are stunning results — not just what the model can do, but the rapid pace of progress. And OpenAI's ChatGPT and other chatbots are just one example of what recent A.I. systems can achieve.

Kelsey Piper is a senior writer at Vox, where she's been ahead of the curve covering advanced A.I., its world-changing possibilities, and the people creating it. Her work is informed by her deep knowledge of the handful of companies that arguably have the most influence over the future of A.I.

We discuss whether artificial intelligence has coherent “goals” — and whether that matters; whether the disasters ahead in A.I. will be small enough to learn from or “truly catastrophic”; the challenge of building “social technology” fast enough to withstand malicious uses of A.I.; whether we should focus on slowing down A.I. progress — and the specific oversight and regulation that could help us do it; why Piper is more optimistic this year that regulators can be “on the ball” with A.I.; how competition between the U.S. and China shapes A.I. policy; and more.

This episode contains strong language.

Mentioned:
“The Man of Your Dreams” by Sangeeta Singh-Kurtz
“The Case for Taking A.I. Seriously as a Threat to Humanity” by Kelsey Piper
“The Return of the Magicians” by Ross Douthat
“Let's Think About Slowing Down A.I.” by Katja Grace

Book Recommendations:
“The Making of the Atomic Bomb” by Richard Rhodes
Asterisk Magazine
“The Silmarillion” by J. R. R. Tolkien

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

“The Ezra Klein Show” is produced by Emefa Agawu, Annie Galvin, Jeff Geld, Roge Karma and Kristin Lin. Fact-checking by Michelle Harris and Kate Sinclair. Mixing by Jeff Geld. Original music by Isaac Jones. Audience strategy by Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Carole Sabouraud and Kristina Samulewski.

The Nonlinear Library
EA - How oral rehydration therapy was developed by Kelsey Piper

The Nonlinear Library

Play Episode Listen Later Mar 10, 2023 1:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How oral rehydration therapy was developed, published by Kelsey Piper on March 10, 2023 on The Effective Altruism Forum. This is a link post for "Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century's Biggest Killer of Children" in the second issue of Asterisk Magazine, now out. The question it poses is: oral rehydration therapy, which has saved millions of lives a year since it was developed, is very simple. It uses widely available ingredients. Why did it take until the late 1960s to come up with it? There's sort of a two part answer. The first part is that without a solid theoretical understanding of the problem you're trying to solve, it's (at least in this case) ludicrously difficult to solve it empirically: people kept trying variants on this, and they didn't work, because an important parameter was off and they had no idea which direction to correct in. The second is that the incredible simplicity of the modern formula for oral rehydration therapy is the product of a lot of concerted design effort not just to find something that worked against cholera but to find something dead simple which did only require household ingredients and was hard to get wrong. The fact the final solution is so simple isn't because oral rehydration is a simple problem, but because researchers kept on going until they had a sufficiently simple solution. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Someone should write a detailed history of effective altruism by Pete Rowlett

The Nonlinear Library

Play Episode Listen Later Jan 15, 2023 2:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Someone should write a detailed history of effective altruism, published by Pete Rowlett on January 14, 2023 on The Effective Altruism Forum. I think that someone should write a detailed history of the effective altruism movement. The history that currently exists on the forum is pretty limited, and I'm not aware of much other material, so I think there's room for substantial improvement. An oral history was already suggested in this post. I tentatively planned to write this post before FTX collapsed, but the reasons for writing this are probably even more compelling now than they were beforehand. I think a comprehensive written history would help. Develop an EA ethos/identity based on a shared intellectual history and provide a launch pad for future developments (e.g. longtermism and an influx of money). I remember reading about a community member who mostly thought about global health getting on board with AI safety when they met a civil rights attorney who was concerned about it. A demonstration of shared values allowed for that development. Build trust within the movement. As the community grows, it can no longer rely on everyone knowing everyone else, and needs external tools to keep everyone on the same page. Aesthetics have been suggested as one option, and I think that may be part of the solution, in concert with a written history. Mitigate existential risk to the EA movement. See EA criticism #6 in Peter Wildeford's post and this post about ways in which EA could fail. Assuming the book would help the movement develop an identity and shared trust, it could lower risk to the movement. Understand the strengths and weaknesses of the movement, and what has historically been done well and what has been done poorly. There are a few ways this could happen. Open Phil (which already has a History of Philanthropy focus area) or CEA could actively seek out someone for the role and fund them for the duration of the project. This process would give the writer the credibility needed to get time with important EA people. A would-be writer could request a grant, perhaps from the EA Infrastructure Fund. An already-established EA journalist like Kelsey Piper could do it. There would be a high opportunity cost associated with this option, of course, since they're already doing valuable work. On the other hand, they would already have the credibility and baseline knowledge required to do a great job. I'd be interested in hearing people's thoughts on this, or if I missed a resource that already exists. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk by Pablo

The Nonlinear Library

Play Episode Listen Later Dec 30, 2022 31:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk, published by Pablo on December 30, 2022 on The Effective Altruism Forum. [T]he sun with all the planets will in time grow too cold for life, unless indeed some great body dashes into the sun and thus gives it fresh life. Believing as I do that man in the distant future will be a far more perfect creature than he now is, it is an intolerable thought that he and all other sentient beings are doomed to complete annihilation after such long-continued slow progress. Charles Darwin Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish. A message to our readers Welcome back to Future Matters. We took a break during the autumn, but will now be returning to our previous monthly schedule. Future Matters would like to wish all our readers a happy new year! The most significant development during our hiatus was the collapse of FTX and the fall of Sam Bankman-Fried, until then one of the largest and most prominent supporters of longtermist causes. We were shocked and saddened by these revelations, and appalled by the allegations and admissions of fraud, deceit, and misappropriation of customer funds. As others have stated, fraud in the service of effective altruism is unacceptable, and we condemn these actions unequivocally and support authorities' efforts to investigate and prosecute any crimes that may have been committed. Research A classic argument for existential risk from superintelligent AI goes something like this: (1) superintelligent AIs will be goal-directed; (2) goal-directed superintelligent AIs will likely pursue outcomes that we regard as extremely bad; therefore (3) if we build superintelligent AIs, the future will likely be extremely bad. Katja Grace's Counterarguments to the basic AI x-risk case [] identifies a number of weak points in each of the premises in the argument. We refer interested readers to our conversation with Katja below for more discussion of this post, as well as to Erik Jenner and Johannes Treutlein's Responses to Katja Grace's AI x-risk counterarguments []. The key driver of AI risk is that we are rapidly developing more and more powerful AI systems, while making relatively little progress in ensuring they are safe. Katja Grace's Let's think about slowing down AI [] argues that the AI risk community should consider advocating for slowing down AI progress. She rebuts some of the objections commonly levelled against this strategy: e.g. to the charge of infeasibility, she points out that many technologies (human gene editing, nuclear energy) have been halted or drastically curtailed due to ethical and/or safety concerns. 
In the comments, Carl Shulman argues that there is not currently enough buy-in from governments or the public to take more modest safety and governance interventions, so it doesn't seem wise to advocate for such a dramatic and costly policy: “It's like climate activists in 1950 responding to difficulties passing funds for renewable energy R&D or a carbon tax by proposing that the sale of automobiles be banned immediately. It took a lot of scientific data, solidification of scientific consensus, and communication/movement-building over time to get current measures on climate change.” We enjoyed Kelsey Piper's review of What We Owe the Future [], not necessarily because we agree with her criticisms, but because we thought the review managed to identify, and articulate very clearly, what we take to be the main c...

Tech Won't Save Us
FTX Goes to Zero w/ Molly White

Tech Won't Save Us

Play Episode Listen Later Dec 22, 2022 87:02


Paris Marx is joined by Molly White to discuss the ongoing collapse of the crypto industry, what to make of the implosion of FTX and Alameda Research, and what happens next with Sam Bankman-Fried.

Molly White is the creator of Web3 Is Going Just Great and a fellow at the Harvard Library Innovation Lab. You can follow her on Twitter at @molly0xFFF.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, support the show on Patreon, and sign up for the weekly newsletter. The podcast is produced by Eric Wickham and part of the Harbinger Media Network.

Also mentioned in this episode:
- Since recording, Sam Bankman-Fried has been extradited from the Bahamas to the United States, and it's been revealed that Caroline Ellison and FTX co-founder Gary Wang have pleaded guilty and are cooperating with authorities against Bankman-Fried.
- Molly has been analyzing the collapse of FTX on her newsletter.
- Paris wrote about effective altruism and longtermism for the New Statesman.
- Journalists at Forbes wrote about Caroline Ellison and her history.
- After Sam Bankman-Fried was arrested, effective altruist Kelsey Piper published a series of direct messages she exchanged with her supposed friend.
- The Southern District of New York's attorney's office, the Securities and Exchange Commission, and the Commodity Futures Trading Commission have all filed charges against Sam Bankman-Fried.
- There are rumors that Caroline Ellison is working with authorities against Sam Bankman-Fried.
- The US Justice Department is split on when to charge Binance executives. There are also growing questions about Binance's books.

Support the show

Cheap Talk
For Their Knowledge No Cost They Do Spare

Cheap Talk

Play Episode Listen Later Dec 9, 2022


Cheap Talk ends the semester with a mailbag episode: State dinners; celebrities and national interest; AI and international relations; sports diplomacy and the World Cup; President Biden's willingness to meet with Vladimir Putin; and Marcus admits he doesn't like soccer.

Thanks to all those who contributed questions. Leave a message for a future podcast at https://www.speakpipe.com/cheaptalk

Looking for a holiday gift for the international affairs nerd in your life? We humbly suggest the following:
- Signing Away the Bomb, by Jeffrey M. Kaplow (Coming out on December 22!) (https://www.amazon.com/Signing-Away-Bomb-Surprising-Nonproliferation/dp/1009216732)
- Face-to-Face Diplomacy, by Marcus Holmes (https://www.amazon.com/Face-Face-Diplomacy-Marcus-Holmes-ebook/dp/B07952GT58/)

AI Links:
- ChatGPT (https://chat.openai.com)
- Kelsey Piper. Aug. 13, 2020. “GPT-3, explained: This new language AI is uncanny, funny — and a big deal.” Vox.com (https://www.vox.com/future-perfect/21355768/gpt-3-ai-openai-turing-test-language)
- Ethan Mollick. Dec. 8, 2022. “Four Paths to the Revelation.” One Useful Thing Substack. (https://oneusefulthing.substack.com/p/four-paths-to-the-revelation)

Movie and TV Recommendations:
- Welcome to Wrexham (Streaming on Hulu)
- Slash/Back (Streaming on AMC+)
- The Bureau (Streaming on AMC+)
- Dispatches from Elsewhere (Streaming on AMC+)
- Pepsi, Where's My Jet? (Streaming on Netflix)

We'll be back with new episodes in late January. Subscribe now in your podcast player of choice so you don't miss anything. Just enter this custom URL: http://www.jkaplow.net/cheaptalk?format=rss
- In Apple Podcasts, tap “Library” on the bottom row, tap “Edit” in the upper-right corner, and choose “Add a Show by URL...”
- In Google Podcasts, tap the activity icon in the lower-right, tap “Subscriptions,” tap the “...” menu in the upper-right, and tap “Add by RSS feed.”
- In Overcast, tap the “+” in the upper-right corner, then tap “Add URL.”

Best wishes for a happy holiday and new year! We'll see you in 2023.

SMAF-NewsBot
Stop taking billionaires at their word

SMAF-NewsBot

Play Episode Listen Later Nov 24, 2022 8:06


In 1984 — the book, not the year — the means by which the evil totalitarian regime “Big Brother” retains its power is through something called “doublethink.” It's the practice of holding contradictory beliefs in tandem: “war is peace,” “freedom is slavery,” “ignorance is strength,” “2 + 2 = 5,” to use the book's examples. It worked because when our minds — our sense of logic, our morality — become compromised, they're easier to control. Considering the events of the last several months, you could also interpret doublethink to mean things like “the metaverse is the future,” “people will pay millions of dollars for shitty art,” or “this crypto billionaire definitely has my best interests in mind.” It's a trite reference, but it's sort of the only one that makes sense. Somehow, somewhere along the way, the American public was duped into believing that these things could be true despite being, well, not. On November 11, the 30-year-old CEO of the cryptocurrency exchange FTX, Sam Bankman-Fried, resigned after his firm filed for bankruptcy. Prior to its implosion, Bankman-Fried (colloquially referred to as SBF) was regarded as a boy genius in the crypto world, not only because of his billionaire status but because he was widely considered to be “one of the good ones,” someone who advocated for more government regulation of crypto and was a leader in the effective altruism space. Effective altruism (EA) is part philosophical movement, part subculture, but in general aims to create evidence-backed means of doing the most good for the most people. (Disclosure: This August, Bankman-Fried's philanthropic family foundation, Building a Stronger Future, awarded Vox's Future Perfect a grant for a 2023 reporting project. That project is now on pause.) Instead, Bankman-Fried did the opposite: He tanked the savings of more than a million people and may have committed fraud. In a conversation with Vox's Kelsey Piper, he essentially admitted that the do-gooder persona was all an act (“fuck regulators,” he wrote, and said that he “had to be” good at talking about ethics because of “this dumb game we woke westerners play where we say all the right shibboleths and so everyone likes us”). In terms of corporate wrongdoing, the SBF disaster is arguably on par with Enron and Bernie Madoff. Here was a dude who marketed himself as a benevolent billionaire and convinced others to invest their money with him simply because he was worth $26 billion (at his peak). He partnered with celebrities like Tom Brady and Larry David to make crypto — a wildly risky investment that rests on shaky technology — seem like the only way forward. Both Brady and David, among several other famous people, are now being accused in a class-action suit of defrauding investors amid FTX's collapse. But there have been other examples of technological doublethink in recent history. Over the past year, Mark Zuckerberg has campaigned so hard for the mainstreaming of the “metaverse” that he changed the name of one of the world's most powerful companies to reflect his ambitions. His metaverse, though, called Horizon, would end up looking like a less-fun version of The Sims, a game that came out in the year 2000 (but even Sims had legs).
The strategy has not, as of publication time, paid off. The company lost $800 billion. What's ironic, though, is that anyone with eyeballs and a brain could have simply told Zuckerberg that Horizon is terrible. Not only is it ugly and functionally useless, it's also expensive (VR headsets cost hundreds of dollars at minimum). People did, to be sure, tell him that — since its rollout, the platform has been widely mocked in the media and online — it's just that...

Slate Star Codex Podcast
"Is Wine Fake?" In Asterisk Magazine

Slate Star Codex Podcast

Play Episode Listen Later Nov 23, 2022 5:12


I wrote an article on whether wine is fake. It's not here, it's at asteriskmag.com, the new rationalist / effective altruist magazine. Congratulations to my friend Clara for making it happen. Stories include: Modeling The End Of Monkeypox: I'm especially excited about this one. The top forecaster (of 7,000) in the 2021 Good Judgment competition explains his predictions for monkeypox. If you've ever rolled your eyes at a column by some overconfident pundit, this is maybe the most opposite-of-that thing ever published. Book Review - What We Owe The Future: You've read mine, this is Kelsey Piper's. Kelsey is always great, and this is a good window into the battle over the word “long-termism”. Making Sense Of Moral Change: Interview with historian Christopher Brown on the end of the slave trade. “There is a false dichotomy between sincere activism and self-interested activism. Abolitionists were quite sincerely horrified by slavery and motivated to end it, but their fight for abolition was not entirely altruistic.” How To Prevent The Next Pandemic: MIT professor Kevin Esvelt talks about applying the security mindset to bioterrorism. “At least 38,000 people can assemble an influenza virus from scratch. If people identify a new [pandemic] virus . . . then you just gave 30,000 people access to an agent that is of nuclear-equivalent lethality.” Rebuilding After The Replication Crisis: This is Stuart Ritchie, hopefully you all know him by now. “Fundamentally, how much more can we trust a study published in 2022 compared to one from 2012?” Why Isn't The Whole World Rich? Professor Dietrich Vollrath's introduction to growth economics. What caused the South Korean miracle, and why can't other countries copy it? Is Wine Fake? By me! How come some people say blinded experts can tell the country, subregion, and year of any wine just by tasting it, but other people say blinded experts get fooled by white wines dyed red? China's Silicon Future: Why does China have so much trouble building advanced microchips? How will the CHIPS act affect its broader economic rise? By Karson Elmgren.

Marketing BS with Edward Nevraumont
Marketing BS Podcast: Strategy vs Tactics

Marketing BS with Edward Nevraumont

Play Episode Listen Later Nov 23, 2022 17:07


Today's episode further explores topics discussed in this week's essay. In the preamble to that essay I said that there would be no content next week. I am going to reverse that. Next week will be an excerpt from Peter Fader's new book. Stay tuned!

Full Transcript:

Peter: Ed, I love your piece on strategy versus tactics at Disney, Twitter and Dominion Cards. I love the way that you're weaving together a narrative that's taking three of the super hot, interesting topics and a fourth one that most people don't know about.

Edward: It's funny, the whole Dominion Cards thing. I started playing this card game back in 2011. I went to the national championships in 2012. And I just really enjoy it. It's like the only game I can think of where you actually need to figure out a strategy at the beginning of every game. I've been sitting on this idea of Dominion Cards as a way to talk about strategy versus tactics for many, many years now. And I've never really found the right kind of hook to put it in. And then when this thing happened at Disney on Sunday, I was like, aha, the hook is here. It's time to pull this out of the filing cabinet.

Peter: Love it. Well, as a reader of the column and as someone who thinks about these issues, there are kind of two natural questions that just have to be asked. I wanna get your take on it. So, first: How do you define, or where do you draw the line between, strategy and tactics?

Edward: I think strategy is figuring out what you should be doing, what the end point is that you're going for, and tactics are all the stuff that gets you there. Strategy can be done a bit in isolation. You can go back into your ivory tower, think about the dynamics, and come out with your strategy, and then tactics are going to be very much based on what's happening on the ground. What's happening at any given moment, how the competition is reacting, how the economics are changing, what type of people you have on your team at any given moment. Those are all tactical decisions that a consultant is not going to be able to help you with unless he's actually there on the ground.

Peter: So I always have a hard time with it, to be honest. Maybe this is just me being narrow minded or something. Is it not just the next move, is it the next three or four moves? Be specific about strategy versus tactics in chess, and then let's branch out to these other real-world stories.

Edward: I'm not an expert in chess. I'm actually teaching my kids how to play now, and I'm learning along with them. But I think in chess there is a correct strategy. Strategy in chess is things like: control the center of the board would be a strategy. Be willing to sacrifice a piece in order to gain position on the board, or move your pieces in such a way that you're able to castle fairly early in the game. Those would all be strategies, things that you're working towards over a longer period of time. Tactics are, given what my opponent has just done, what should I do next? And you can look far into the future for tactics. There's nothing that stops you from looking nine moves ahead to what the right tactic would be in that particular situation. But in chess at least, I think strategy stays the same. There are correct strategies in chess and there are incorrect strategies in chess. Whereas tactics are gonna change every given game depending on what your opponent does.

Peter: So let's take that, and again, it's still a little fuzzy.
I mean, you're being more specific, but still, and I'm not gonna press you on exactly where one begins and ends, but Disney. Disney. Disney. Disney. It seems like the narrative, as you said, is Iger had the strategy, Chapek's job is to come in and execute on it, few missteps here and there. Expand on that beyond what you've said in the piece about that trade-off between strategy and tactics.

Edward: I think most people agree, even the disgruntled shareholders, that Iger's strategy was the correct one, or is the correct one: the cable bundle is getting hammered, and Disney in the past basically had a huge amount of leverage over the cable providers and was able to extract large amounts of money from them because they had this differentiated content, both the traditional Disney content and the sports they had with ESPN. And that was a great place for Disney to be, and it still is, frankly; they still extract a huge amount of money from the cable providers, but that is not the future. Clearly we see more and more people, especially young people, cutting the cord, not going with cable television and moving into streaming. And it was really a question of when did Disney need to move in that direction and how long could they keep their pound of flesh from the cable companies and hold onto that as long as possible? So the strategy then becomes let's move on. Let's go direct to consumer and scale up our Disney Plus product. There are tactical problems in doing that. Disney bought Fox, which came with 20th Century Fox, which allowed them to add a whole ton more content to get the breadth required to win in a streaming war. They got control over Hulu, but they didn't get full ownership of Hulu. And so Comcast still owns a chunk of Hulu in the US, which creates all sorts of challenges for Disney on a tactical level in how to actually get to the place where they wanna be. But I think the strategy is clear. It's: we wanna get to the point where we are owning that direct-to-consumer relationship. We are monetizing through a subscription product. We are monetizing through additional add-ons that people can do on top of that. And we are monetizing through our vast array of merchandising, theme parks, cruise ships, and everything else to allow people to spend more and more and more with us. That strategy is still where they're going; the last two big things Iger did before he left were launching Disney Plus and buying Fox.

Peter: Let's be clear that Chapek isn't against any of those things. Strategically, as you've pointed out, he's on the same page. It's all just tactics not being quite the same as what Iger might have done or might now do.

Edward: And even on tactics, I'm not sure. If you look at the things that have hurt the stock price and where Chapek has taken heat: first of all, Disney Plus has grown faster than they ever thought it would. He over-delivered on that. Whether or not that was his doing, the fact is the metric is much better than anyone expected, but there were mistakes along the way. There have been lots of fights with the creative side of the organization. Chapek comes from the theme park side. He came into the CEO role and then immediately Covid hit and the theme parks all went to zero. So he was forced to figure out how to do Disney Plus, where all their revenue was coming from for the foreseeable future. Now things have flipped and the theme parks are just minting money. They're doing really, really, really well.
But he's pissed off a lot of people by raising prices dramatically. Then again, I'm not sure what Iger would've done differently in that case, because demand for the theme parks has gone way, way, way up. In the short term you can't go and build more theme parks, so supply is what it is, and you're left with two choices: either you raise prices or you give a poor consumer experience, either because the parks are just packed full and unpleasant, or because you're turning away people at the door who have booked a vacation. None of those options seem great, and of those options, it feels like raising prices was probably the one that Iger would've chosen as well.

Peter: Exactly. So here's the big question, and I agree completely with that. It might be that how things play out now, tactically and strategically, would be the same regardless of which Bob is at the helm. But Iger just seems to have this warm glow that will make the same tactics not only more palatable but downright genius, because they're coming from Iger instead of Chapek. What do you think?

Edward: I think that's absolutely right. They're in such a tough spot right now. There's so much going on, and what they're trying to do is super, super, super risky. I think everyone knows that there's really no choice but to go down this path, but also that it's a really hard path to go down. And so not only do you need to have the right strategy, which I think people agree they have, you need to have the right tactics. Frankly, if Chapek messed up on tactics, it was on a marginal basis. Where there was a bigger mess-up was in the execution of those tactics. Take the Black Widow movie early in the pandemic: they decided to take it out of the theaters and put it onto Disney Plus, and I think that was a very rational tactical thing to do given the situation they were in. But in execution, Chapek got into a big fight with Scarlett Johansson, who came down hard and sued Disney. They hurt their relationship with her, and Disney ends up hurting its reputation as a good place to go and work if you're a top-tier creative. In the short term, maybe they make a little bit more money on the movie, but in the long run they damage the relationship with the very people who create the product they need to excel with.

Peter: Fair point. All right, let's pivot from Disney to SBF and FTX. There you say, or at least you quote, SBF saying the strategy was fine and the tactics were at fault. You don't really mean that; you're just saying that's what he said, but you think otherwise.

Edward: I'm no financial expert, but I've been following it as closely as I can, and it sure looks to me like there was all sorts of trouble. SBF owned two companies. He had FTX, but he also had the trading arm, Alameda Research, and there was money traded back and forth between those two organizations. What I understand is this: imagine FTX has, and I'm making up a number, 10 million tokens, and they're sitting on them, and those things are worth whatever someone's willing to pay for them. Alameda comes along and says, hey, I'll buy one of your tokens for a thousand dollars. So now all of a sudden the paper value of those tokens is a thousand dollars times 10 million, which is a huge amount of money they're sitting on.
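(As an aside, here is a minimal Python sketch of the arithmetic in that made-up example. The token count and trade price are the hypothetical numbers from the conversation, and the "realizable fraction" is purely an illustrative assumption for this aside, not an actual FTX or Alameda figure.)

```python
# Illustrative sketch of marking thinly traded tokens at the last trade price.
# All numbers are made up, mirroring the hypothetical in the conversation above.

tokens_held = 10_000_000      # tokens sitting on the exchange's balance sheet
last_trade_price = 1_000      # the affiliated trader pays $1,000 for one token

paper_value = tokens_held * last_trade_price
print(f"Paper (mark-to-market) value: ${paper_value:,}")      # $10,000,000,000

# If only a small slice could actually be sold anywhere near that price before
# the market collapsed, the value usable as real collateral is far smaller.
realizable_fraction = 0.02    # assumed: 2% sellable near the marked price
realizable_value = int(paper_value * realizable_fraction)
print(f"Roughly realizable value:     ${realizable_value:,}")  # $200,000,000
```

The gap between those two numbers is the "house of cards" Edward describes next: collateral that looks enormous on paper but cannot actually be converted into the deposits it was supposed to back.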
And then they basically end up using that valuation as collateral to do all sorts of loans and leverage to go and do other things with their money. FTX then takes in a bunch of customer deposits and loans those customer deposits over to Alameda. Alameda is sitting on a bunch of these tokens, which they're using as collateral against the borrowing of the real money that people put into FTX. Alameda then loses a bunch of that money, and it all comes tumbling down when they realize that their collateral isn't worth anything. It's all made-up collateral. That's my understanding of what happened. Nothing exactly like that has happened before, but things like it have. It's effectively fraud. It's fraud and theft. SBF, however, did an interview with Kelsey Piper over at Vox, and his argument was, hey, what we were doing was great. We were doing all sorts of awesome things, but our record keeping was terrible. We just made a bunch of rookie, terrible, incompetent mistakes. The new CEO who came in to run the company is backing SBF up in that, yeah, this whole thing is a mess. What was his quote? I quoted him in my piece. He said, never in my career have I seen such a complete failure of corporate controls and such a complete absence of trustworthy financial information as occurred here. And this is from the guy who also oversaw the bankruptcy of Enron. So it was a mess. They clearly, clearly, clearly were tactically incompetent, and SBF is claiming that they didn't know they were stealing all these funds. It's entirely possible that he's right, because they seemed like they didn't really know anything that was going on, and there were no financial backups and no guardrails for anything. But the overall strategy was built on a house of cards to begin with. So whether their tactics were correct or not, maybe it wouldn't have collapsed as badly if they had great tactics, but it was gonna collapse one way or another.

Peter: In this case, it's not strategy versus tactics, as you say in the title of the piece, it's "and." They did badly at both, and it's hard to pin the blame on one type of decision or another.

Edward: The hard part of writing this piece was that, given that their strategy was so unethical and terrible and their tactics were so incompetent, how did they manage to get as big as they did, so that they caused this disaster to happen?

Peter: It's crazy. But then, speaking of which, it takes us to our third character of the week, Elon Musk. Now, you and I had a conversation a couple weeks back. We were saying generally positive things about verification badges and just the possibilities of getting the business model right. And of course it's too early to tell for sure, but these couple of weeks since that conversation, well, things have gone differently.

Edward: Specifically the thing that we talked about, which was Twitter Blue, $8 a month to get certified.

Peter: Verified.

Edward: Verified. Verified. And what happened was that the verification process was effectively just having a credit card. It wasn't like they matched the name that you put on Twitter with the name on your credit card, or checked the address, or had you send a driver's license with the verification. It was a matter of pay the $8 and you can name yourself whatever you want.
In terms of the strategic idea, allowing people to pay $8 to get verified seems like a very valid idea. I don't know if it is the right strategy, but arguably, at least as we argued a couple of podcasts ago, it was a good strategy. In execution, because they didn't create any of those guardrails, because they didn't have any verification process beyond paying the $8, people impersonated all sorts of companies. They impersonated Elon Musk, they impersonated giant companies and had them say ridiculous things with a verification check next to them, and it became a big joke. So it's an example of a potentially good strategy with very weak tactical execution.

Peter: And what about the broader issues? The way he's running the company, the day-to-day tactics, the strategy, whatever it is, it's not good. But which basket would you put it in?

Edward: I think there's an overlap. First of all, part of it seems like he's changing his strategy on the fly. He's going back and forth on what his strategy is. But I think in general, his thesis going into the company was that this company was mismanaged: we need to eliminate a large number of people at the company and replace them with other people. We need to change the culture of this place from one of working from 10:00 AM until 3:00 PM to one where you're working from 7:00 AM until midnight and coming in on the weekends, turning it into a hard-driving startup-type culture with a much smaller team that's much more dedicated and highly compensated. It feels like that's his strategy, and he wants to create a company that ships product really quickly, makes mistakes, fixes them, and keeps going. That is something that I think most owners of most businesses would want for their companies. The challenge becomes: how do you get there from here? And that's where there's been lots of flailing and failing. That doesn't mean the whole process is gonna fail, but there have been lots of mistakes made in the process of getting from A to B, in a situation where getting from A to B is gonna be hard no matter what, even if you did it perfectly.

Peter: So what's your longer-term prognosis? Do you think that he'll get the strategy right and line up the tactics appropriately?

Edward: I don't know. It's so hard to know. I think the strategy is right. The question is whether the company will survive the process of getting there. They're burning through cash. As an example, they laid off a bunch of people who work in Europe via email, and you can't actually do that. It's illegal to do that in Europe, so all those people they fired in Europe actually aren't fired. They turned off their salaries, so they're not making any money anymore, but all those people have a class action lawsuit that's going to go against Twitter, and there's going to be a huge fine. That type of stuff matters in a situation where, if they succeed, it's gonna be by the skin of their teeth. They're like Amazon in 2001: we need to keep doing everything right and working our butts off to keep this plane flying over the treetops so that we can take off and circle the planet. But before we can circle the planet, we need to get over these trees. If they get over the trees, I think there's a good argument that Twitter's a fantastic, unique product that can do all sorts of incredible things, far more than the old team was doing. But he still has to get over the trees, and that's where things are a lot less clear.

Peter: Yeah.
So it takes us to kind of the bottom line, as you say, and I don't think anyone would disagree: strategy becomes far more urgent in rapidly changing environments. Who could argue with that? Yet at the same time, in rapidly changing environments, we start rearranging deck chairs, which is far more tactical.

Edward: I think when things are going smoothly, when things are not changing, strategy frankly doesn't matter very much. Tactics matter a little bit, and execution matters a lot. When you're in a place where things are changing rapidly and you need to get to someplace new, all of a sudden strategy matters a lot. But that doesn't mean that tactics and execution matter less. They still matter a lot too. It just becomes like everything matters. It becomes so easy to fail: you only need one link in the chain to break and you're not gonna get there.

Peter: And I think all three of these cases show that interplay. So again, it's not strategy versus tactics but strategy and tactics, getting the two to sync up properly. And that's easier said than done.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit marketingbs.substack.com

The Nonlinear Library
EA - Announcing the first issue of Asterisk by Clara Collier

The Nonlinear Library

Play Episode Listen Later Nov 21, 2022 1:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the first issue of Asterisk, published by Clara Collier on November 21, 2022 on The Effective Altruism Forum. Are you a fan of engaging, epistemically rigorous longform writing about the world's most pressing problems? Interested in in-depth interviews with leading scholars? A reader of taste and discernment? Sick of FTX discourse? Distract yourself with the inaugural issue of Asterisk Magazine, out now! Asterisk is a new quarterly journal of clear writing and clear thinking about things that matter (and, occasionally, things we just think are interesting). In this issue: Kelsey Piper argues that What We Owe The Future can't quite support the weight of its own premises. Kevin Esvelt talks about how we can prevent the next pandemic. Jared Leibowich gives us a superforecaster's approach to modeling monkeypox. Christopher Leslie Brown on the history of abolitionism and the slippery concept of moral progress. Stuart Ritchie tries to find out if the replication crisis has really made science better. Dietrich Vollrath explains what economists do and don't know about why some countries become rich and others don't. Scott Alexander asks: is wine fake? Karson Elmgren on the history and future of China's semiconductor industry. Xander Balwit imagines a future where genetic engineering has radically altered the animals we eat. A huge thank you to everyone in the community who helped us make Asterisk a reality. We hope you all enjoy reading it as much as we enjoyed making it. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Review: What We Owe The Future by Kelsey Piper

The Nonlinear Library

Play Episode Listen Later Nov 21, 2022 1:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review: What We Owe The Future, published by Kelsey Piper on November 21, 2022 on The Effective Altruism Forum. For the inaugural edition of Asterisk, I wrote about What We Owe The Future. Some highlights: What is the longtermist worldview? First — that humanity's potential future is vast beyond comprehension, that trillions of lives may lie ahead of us, and that we should try to secure and shape that future if possible. Here there's little disagreement among effective altruists. The catch is the qualifier: “if possible.” When I talk to people working on cash transfers or clean water or accelerating vaccine timelines, their reason for prioritizing those projects over long-term-future ones is approximately never “because future people aren't of moral importance”; it's usually “because I don't think we can predictably affect the lives of future people in the desired direction.” As it happens, I think we can — but not through the pathways outlined in What We Owe the Future. The stakes are as high as MacAskill says — but when you start trying to figure out what to do about it, you end up face-to-face with problems that are deeply unclear and solutions that are deeply technical. I think we're in a dangerous world, one with perils ahead for which we're not at all prepared, one where we're likely to make an irrecoverable mistake and all die. Most of the obligation I feel toward the future is an obligation to not screw up so badly that it never exists. Most longtermists are scared, and the absence of that sentiment from What We Owe the Future feels glaring. If we grant MacAskill's premise that values change matters, though, the value I would want to impart is this one: an appetite for these details, however tedious they may seem. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Trumpcast
Political Gabfest: SBF FTX WTF?

Trumpcast

Play Episode Listen Later Nov 19, 2022 51:25


This week, David Plotz, Emily Bazelon, and John Dickerson discuss Trump's campaign announcement, election denying candidates' failures in the midterms, and guest Matthew Zeitlin on the impact the implosion of Sam Bankman-Fried's crypto exchange FTX may have on the Effective Altruism movement. Here are some notes and references from this week's show: Donie O'Sullivan for CNN: “Facebook Fact-Checkers Will Stop Checking Trump After Presidential Bid Announcement” Matthew Zeitlin for Grid: “Sam Bankman-Fried Gave Millions To Effective Altruism. What Happens Now That The Money Is Gone?” Kelsey Piper for Vox: “Sam Bankman-Fried Tries To Explain Himself” What We Owe the Future, by William MacAskill William MacAskill for Effective Altruism Forum: “EA And The Current Funding Situation” This American Life: “Watching the Watchers” Here are this week's chatters: John: Jason P. Frank for Vulture: “Stephen Colbert, Emma Watson, and More Celebs to Relish in Pickleball Tournament”; Isabel Gonzalez for CBS News: “Mike Tyson, Evander Holyfield Partner To Create Ear-Shaped, Cannabis-Infused Edibles” Emily: William Melhado for The Texas Tribune: “Federal Judge In Texas Rules That Disarming Those Under Protective Orders Violates Their Second Amendment Rights” David: Politics and Prose: City Cast DC Live Taping with Michael Schaffer, David Plotz, and Anton Bogomazov - at Union Market; Justin Jouvenal for The Washington Post: “D.C.'s Bitcoin King: Yachts, Penthouses, A Python — And Tax Dodging?” Listener chatter from Kelly Mills: The Woman Who Smashed Codes: A True Story of Love, Spies, and the Unlikely Heroine Who Outwitted America's Enemies, by Jason Fagone For this week's Slate Plus bonus segment Emily, David, and John contemplate the Thanksgiving traditions they would like to adopt or improve.   Tweet us your questions and chatters @SlateGabfest or email us at gabfest@slate.com. (Messages may be quoted by name unless the writer stipulates otherwise.) Podcast production by Cheyna Roth. Research by Bridgette Dunlap. Learn more about your ad choices. Visit megaphone.fm/adchoices

Sway
A Hard Fork in the Road: FTX's Unraveling and Elon's Loyalty Oath

Sway

Play Episode Listen Later Nov 18, 2022 47:08


The balance sheet contains an apology, the in-house coach is concerned that company executives are “undersexed” and billions in customer funds remain in jeopardy. The wreckage at FTX goes from bad to worse. Plus: Elon's “extremely hardcore” plan for Twitter 2.0. Additional Resources: George K. Lerner, FTX's in-house performance coach, said he was shocked by the collapse of FTX. In an interview with Matt Levine, a Bloomberg columnist, Sam Bankman-Fried described his strategy to restore faith in the crypto ecosystem. Bankman-Fried reflected on his actions as chief executive of FTX in a series of Twitter messages with Kelsey Piper, a Vox reporter. Elon Musk told Twitter employees in an email that the company would become an “extremely hardcore” operation. Employees were asked to click yes to be part of the new Twitter or take severance. Musk's social calendar includes courting comedians and hopping on yachts. We want to hear from you. Email us at hardfork@nytimes.com. Follow “Hard Fork” on TikTok: @hardfork

Political Gabfest
SBF FTX WTF?

Political Gabfest

Play Episode Listen Later Nov 17, 2022 51:25


This week, David Plotz, Emily Bazelon, and John Dickerson discuss Trump's campaign announcement, election denying candidates' failures in the midterms, and guest Matthew Zeitlin on the impact the implosion of Sam Bankman-Fried's crypto exchange FTX may have on the Effective Altruism movement. Here are some notes and references from this week's show: Donie O'Sullivan for CNN: “Facebook Fact-Checkers Will Stop Checking Trump After Presidential Bid Announcement” Matthew Zeitlin for Grid: “Sam Bankman-Fried Gave Millions To Effective Altruism. What Happens Now That The Money Is Gone?” Kelsey Piper for Vox: “Sam Bankman-Fried Tries To Explain Himself” What We Owe the Future, by William MacAskill William MacAskill for Effective Altruism Forum: “EA And The Current Funding Situation” This American Life: “Watching the Watchers” Here are this week's chatters: John: Jason P. Frank for Vulture: “Stephen Colbert, Emma Watson, and More Celebs to Relish in Pickleball Tournament”; Isabel Gonzalez for CBS News: “Mike Tyson, Evander Holyfield Partner To Create Ear-Shaped, Cannabis-Infused Edibles” Emily: William Melhado for The Texas Tribune: “Federal Judge In Texas Rules That Disarming Those Under Protective Orders Violates Their Second Amendment Rights” David: Politics and Prose: City Cast DC Live Taping with Michael Schaffer, David Plotz, and Anton Bogomazov - at Union Market; Justin Jouvenal for The Washington Post: “D.C.'s Bitcoin King: Yachts, Penthouses, A Python — And Tax Dodging?” Listener chatter from Kelly Mills: The Woman Who Smashed Codes: A True Story of Love, Spies, and the Unlikely Heroine Who Outwitted America's Enemies, by Jason Fagone For this week's Slate Plus bonus segment Emily, David, and John contemplate the Thanksgiving traditions they would like to adopt or improve.   Tweet us your questions and chatters @SlateGabfest or email us at gabfest@slate.com. (Messages may be quoted by name unless the writer stipulates otherwise.) Podcast production by Cheyna Roth. Research by Bridgette Dunlap. Learn more about your ad choices. Visit megaphone.fm/adchoices

Slate Daily Feed
Political Gabfest: SBF FTX WTF?

Slate Daily Feed

Play Episode Listen Later Nov 17, 2022 51:25


This week, David Plotz, Emily Bazelon, and John Dickerson discuss Trump's campaign announcement, election denying candidates' failures in the midterms, and guest Matthew Zeitlin on the impact the implosion of Sam Bankman-Fried's crypto exchange FTX may have on the Effective Altruism movement. Here are some notes and references from this week's show: Donie O'Sullivan for CNN: “Facebook Fact-Checkers Will Stop Checking Trump After Presidential Bid Announcement” Matthew Zeitlin for Grid: “Sam Bankman-Fried Gave Millions To Effective Altruism. What Happens Now That The Money Is Gone?” Kelsey Piper for Vox: “Sam Bankman-Fried Tries To Explain Himself” What We Owe the Future, by William MacAskill William MacAskill for Effective Altruism Forum: “EA And The Current Funding Situation” This American Life: “Watching the Watchers” Here are this week's chatters: John: Jason P. Frank for Vulture: “Stephen Colbert, Emma Watson, and More Celebs to Relish in Pickleball Tournament”; Isabel Gonzalez for CBS News: “Mike Tyson, Evander Holyfield Partner To Create Ear-Shaped, Cannabis-Infused Edibles” Emily: William Melhado for The Texas Tribune: “Federal Judge In Texas Rules That Disarming Those Under Protective Orders Violates Their Second Amendment Rights” David: Politics and Prose: City Cast DC Live Taping with Michael Schaffer, David Plotz, and Anton Bogomazov - at Union Market; Justin Jouvenal for The Washington Post: “D.C.'s Bitcoin King: Yachts, Penthouses, A Python — And Tax Dodging?” Listener chatter from Kelly Mills: The Woman Who Smashed Codes: A True Story of Love, Spies, and the Unlikely Heroine Who Outwitted America's Enemies, by Jason Fagone For this week's Slate Plus bonus segment Emily, David, and John contemplate the Thanksgiving traditions they would like to adopt or improve.   Tweet us your questions and chatters @SlateGabfest or email us at gabfest@slate.com. (Messages may be quoted by name unless the writer stipulates otherwise.) Podcast production by Cheyna Roth. Research by Bridgette Dunlap. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Nonlinear Library
LW - Kelsey Piper's recent interview of SBF by agucova

The Nonlinear Library

Play Episode Listen Later Nov 17, 2022 0:26


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kelsey Piper's recent interview of SBF, published by agucova on November 16, 2022 on LessWrong. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - Kelsey Piper's recent interview of SBF by agucova

The Nonlinear Library: LessWrong

Play Episode Listen Later Nov 17, 2022 0:26


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kelsey Piper's recent interview of SBF, published by agucova on November 16, 2022 on LessWrong. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Kelsey Piper's recent interview of SBF by Agustín Covarrubias

The Nonlinear Library

Play Episode Listen Later Nov 16, 2022 2:30


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kelsey Piper's recent interview of SBF, published by Agustín Covarrubias on November 16, 2022 on The Effective Altruism Forum. Kelsey Piper from Vox's Future Perfect very recently released an interview (made through Twitter DMs) with Sam Bankman-Fried. The interview goes in depth into the events surrounding FTX and Alameda Research. As we messaged, I was trying to make sense of what, behind the PR and the charitable donations and the lobbying, Bankman-Fried actually believes about what's right and what's wrong — and especially the ethics of what he did and the industry he worked in. Looming over our whole conversation was the fact that people who trusted him have lost their savings, and that he's done incalculable damage to everything he proclaimed only a few weeks ago to care about. The grief and pain he has caused is immense, and I came away from our conversation appalled by much of what he said. But if these mistakes haunted him, he largely didn't show it. The interview gives a much-awaited outlet into SBF's thinking, specifically in relation to prior questions in the community regarding whether SBF was practicing some form of naive consequentialism or whether the events surrounding the crisis largely emerged from incompetence. During the interview, Kelsey asked explicitly about previous statements by SBF agreeing with the existence of strong moral boundaries to maximizing good. His answers seem to suggest he had intentionally misrepresented his views on the issue: This seems to lend some credence to the theory that SBF could have been acting like a naive utilitarian, choosing to engage in morally objectionable behavior to maximize his positive impact, while explicitly misrepresenting his views to others. However, Kelsey also asked directly about the lending out of customer deposits alongside Alameda Research: All of his claims are at least consistent with the view of SBF acting like an incompetent investor. FTX and Alameda Research seem to have had serious governance and accounting problems, and SBF seems to have taken several decisions which to him sounded individually reasonable, all based on bad information. He repeatedly doubled down, instead of cutting his losses. I'm still not sure what to take away from this interview, especially because Sam seems, at best, somewhat incoherent regarding his moral views and previous mistakes. This might have to do with his emotional state at the time of the interview, or even be a sign that he's blatantly lying, but I still think there is a lot of stuff to update from. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - Covid 10/27/22: Another Origin Story by Zvi

The Nonlinear Library

Play Episode Listen Later Oct 29, 2022 20:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Covid 10/27/22: Another Origin Story, published by Zvi on October 27, 2022 on LessWrong. The big story this week was a new preprint claiming to show that Covid-19 had an unnatural origin. For several days, this was a big story with lots of arguing about it, lots of long threads, lots of people accusing others of bad faith or being idiots or not understanding undergraduate microbiology, and for some reason someone impersonating a virologist to spy on Kelsey Piper. Then a few days later all discussion of it seemed to vanish. It wasn't that everyone suddenly came to an agreement to move on. All sides simply decided that this was no longer the Current Thing. See the section for further discussion. In the end I did not update much, so I am mostly fine with this null result. There's also more Gain of Function research looking to create a new pandemic. There was a lot of consensus among the comments and those I know that this work must stop, yet little in the way of good ways to stop it. Several people gave versions of ‘have you considered violence or otherwise going outside the law?' and my answer is no. While the dangers here are real, they are not at anything like the levels that would potentially justify such actions. Note on Deleted Post from This Week Finally, I need to address the post that got taken down in a bit more detail. I want to thank Saloni in particular for quickly and efficiently making some of my mistakes clear to me both quickly and clearly, with links, so I could within about an hour realize I'd made a huge mistake and the whole post structure and conclusions no longer made sense, so I took the post down. Please disregard it. Everyone has been great about understanding that mistakes happen, and I want you to know I appreciate it, and hope it helps myself and others similarly address errors in the future. How did the mistakes happen? Ultimately, it is 100% my fault, on multiple counts, no excuses. What are some of the things I did wrong, so I can hopefully minimize chances they happen again? My logic was flawed. I wasn't thinking about the power of the study properly. I let the truly awful takes and absence of good takes defending colonoscopies make me too confident in the lack of available good takes doing so, and let that bias my thinking. I got feedback before posting, but I did not get enough or get it from the right sources. I heard everyone talking about ‘first RCT' in various forms and failed to notice it was only the first to look at all-cause mortality rather than the first RCT. The authors of this one made the mistake of trying to measure all-cause mortality as primary endpoint despite lacking the power to do so, in a way that my brain didn't properly process, compounding the errors. I didn't properly consider the possibility that the main result of a published paper was plausibly highly ‘unlucky' in part due to training on decades of publication bias. I didn't fully appreciate the magnitude of the healthy patient bias, which made certain extrapolations sound patently absurd – I'm still super skeptical of those claims but they're not actually obviously crazy on reflection. And I messed up a few small technical details. In general, the whole thing is really complicated. 
There is no question that the study was a disappointing result for the effectiveness of colonoscopies, well below what the researchers expected to find. However, there is a lot of room for ‘disappointing but still worthwhile' and a lot of additional past data to incorporate. I genuinely don't know what I am going to think when I am finished thinking about it. Executive Summary New preprint on potential origins of Covid-19, not updating much. Gain of Function research continues. Please disregard this week's earlier post until I can properly fix it. Let's run the numbers. The Numbe...

The Nonlinear Library: LessWrong
LW - Covid 10/27/22: Another Origin Story by Zvi

The Nonlinear Library: LessWrong

Play Episode Listen Later Oct 29, 2022 20:23


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Covid 10/27/22: Another Origin Story, published by Zvi on October 27, 2022 on LessWrong. The big story this week was a new preprint claiming to show that Covid-19 had an unnatural origin. For several days, this was a big story with lots of arguing about it, lots of long threads, lots of people accusing others of bad faith or being idiots or not understanding undergraduate microbiology, and for some reason someone impersonating a virologist to spy on Kelsey Piper. Then a few days later all discussion of it seemed to vanish. It wasn't that everyone suddenly came to an agreement to move on. All sides simply decided that this was no longer the Current Thing. See the section for further discussion. In the end I did not update much, so I am mostly fine with this null result. There's also more Gain of Function research looking to create a new pandemic. There was a lot of consensus among the comments and those I know that this work must stop, yet little in the way of good ways to stop it. Several people gave versions of ‘have you considered violence or otherwise going outside the law?' and my answer is no. While the dangers here are real, they are not at anything like the levels that would potentially justify such actions. Note on Deleted Post from This Week Finally, I need to address the post that got taken down in a bit more detail. I want to thank Saloni in particular for quickly and efficiently making some of my mistakes clear to me both quickly and clearly, with links, so I could within about an hour realize I'd made a huge mistake and the whole post structure and conclusions no longer made sense, so I took the post down. Please disregard it. Everyone has been great about understanding that mistakes happen, and I want you to know I appreciate it, and hope it helps myself and others similarly address errors in the future. How did the mistakes happen? Ultimately, it is 100% my fault, on multiple counts, no excuses. What are some of the things I did wrong, so I can hopefully minimize chances they happen again? My logic was flawed. I wasn't thinking about the power of the study properly. I let the truly awful takes and absence of good takes defending colonoscopies make me too confident in the lack of available good takes doing so, and let that bias my thinking. I got feedback before posting, but I did not get enough or get it from the right sources. I heard everyone talking about ‘first RCT' in various forms and failed to notice it was only the first to look at all-cause mortality rather than the first RCT. The authors of this one made the mistake of trying to measure all-cause mortality as primary endpoint despite lacking the power to do so, in a way that my brain didn't properly process, compounding the errors. I didn't properly consider the possibility that the main result of a published paper was plausibly highly ‘unlucky' in part due to training on decades of publication bias. I didn't fully appreciate the magnitude of the healthy patient bias, which made certain extrapolations sound patently absurd – I'm still super skeptical of those claims but they're not actually obviously crazy on reflection. And I messed up a few small technical details. In general, the whole thing is really complicated. 
There is no question that the study was a disappointing result for the effectiveness of colonoscopies, well below what the researchers expected to find. However, there is a lot of room for ‘disappointing but still worthwhile' and a lot of additional past data to incorporate. I genuinely don't know what I am going to think when I am finished thinking about it. Executive Summary New preprint on potential origins of Covid-19, not updating much. Gain of Function research continues. Please disregard this week's earlier post until I can properly fix it. Let's run the numbers. The Numbe...

The Daily Dive
Why Labs Keep Making Dangerous Viruses

The Daily Dive

Play Episode Listen Later Oct 21, 2022 19:55


Scientists at Boston University recently created in a lab a new Covid virus that had the transmissibility of the Omicron variant and was also more likely to cause severe disease.  They called it the Omicron S-bearing virus.  The study found that the engineered virus had a mortality rate of 80%.  The experiment has once again called into question the purpose of so-called “gain of function” research and also oversight on such projects.  Kelsey Piper, senior writer at Vox's Future Perfect, joins us for why labs keep making dangerous viruses.   Next, AI art generators have just been unleashed on the public.  These new text-to-image generators let you type in almost any phrase, and it will return you an image in various art styles.  Dall-E 2 by OpenAI and DreamStudio by Stability AI are now open for anyone to use and the result is a lot of fun!  The artificial intelligence interprets your words and creates fully original images, but there are still a lot of questions over how it works, copyright and who owns the images?  Then there are concerns about real artists and graphic designers.  Joanna Stern, senior personal tech columnist at the WSJ, joins us for what the future of AI art may hold.See omnystudio.com/listener for privacy information.

The Nonlinear Library
EA - Overreacting to current events can be very costly by Kelsey Piper

The Nonlinear Library

Play Episode Listen Later Oct 4, 2022 5:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overreacting to current events can be very costly, published by Kelsey Piper on October 4, 2022 on The Effective Altruism Forum. epistemic status: I am fairly confident that the overall point is underrated right now, but am writing quickly and think it's reasonably likely the comments will identify a factual error somewhere in the post. Risk seems unusually elevated right now of a serious nuclear incident, as a result of Russia badly losing the war in Ukraine. Various markets put the risk at about 5-10%, and various forecasters seem to estimate something similar. The general consensus is that Russia, if they used a nuclear weapon, would probably deploy a tactical nuclear weapon on the battlefield in Ukraine, probably in a way with a small number of direct casualties but profoundly destabilizing effects. A lot of effective altruists have made plans to leave major cities if Russia uses a nuclear weapon, at least until it becomes clear whether the situation is destabilizing. I think if that happens we'll be in a scary situation, but based on how we as a community collectively reacted to Covid, I predict an overreaction -- that is, I predict that if there's a nuclear use in Ukraine, EAs will incur more costs in avoiding the risk of dying in a nuclear war than the actual expected costs of dying in a nuclear war, more costs than necessary to reduce the risks of dying in a nuclear war, and more costs than we'll endorse in hindsight. With respect to Covid, I am pretty sure the EA community and related communities incurred more costs in avoiding the risk of dying of Covid than was warranted. In my own social circles, I don't know anyone who died of Covid, but I know of a healthy person in their 20s or 30s who died of failing to seek medical attention because they were scared of Covid. A lot of people incurred hits to their productivity and happiness that were quite large. This is especially true for people doing EA work they consider directly important: being 10% less impactful at an EA direct work job has a cost measured in many human or animal or future-digital-mind lives, and I think few people explicitly calculated how that cost measured up against the benefit of reduced risk of Covid. If Russia uses a nuclear weapon in Ukraine, here is what I expect to happen: a lot of people will be terrified (correctly assessing this as a significant change in the equilibrium around nuclear weapon use which makes a further nuclear exchange much more likely.) Many people will flee major cities in the US and Europe. They will spend a lot of money, take a large productivity hit from being somewhere with worse living conditions and worse internet, and spend a ton of their time obsessively monitoring the nuclear situation. A bunch of very talented ops people will work incredibly hard to get reliable fast internet in remote parts of Northern California or northern Britain. There won't be much EAs not already in nuclear policy and national security can do, but there'll be a lot of discussion and a lot of people trying to get up to speed on the situation/feeling a lot of need to know what's going on constantly. The stuff we do is important, and much less of it will get done. 
It will take a long time for it to become obvious if the situation is stable, but eventually people will mostly go back to cities (possibly leaving again if there are further destabilizing events). The recent Samotsvety forecast estimates that a person staying in London will lose 3-100 hours to nuclear risk in expectation (edit: which goes up by a factor of 6 in the case of actual tactical nuke use in Ukraine.) I think it is really easy for that person to waste more than 3-100 hours by being panicked, and possible to waste more than 20 - 600 hours on extreme response measures. And that's the life-hour costs of never fleein...

Institutionalized
Effective Altruism with Kelsey Piper

Institutionalized

Play Episode Listen Later Sep 28, 2022 62:59


This week we are joined by Kelsey Piper to discuss effective altruism, its popularity, and whether it succeeds or fails as an institution. Recommendations: Reasons and Persons by Derek Parfit; In Shifra's Arms; The Precipice by Toby Ord

The Nonlinear Library
EA - Open EA Global by Scott Alexander

The Nonlinear Library

Play Episode Listen Later Sep 1, 2022 8:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open EA Global, published by Scott Alexander on September 1, 2022 on The Effective Altruism Forum. I think EA Global should be open access. No admissions process. Whoever wants to go can. I'm very grateful for the work that everyone does to put together EA Global. I know this would add much more work for them. I know it is easy for me, a person who doesn't do the work now and won't have to do the extra work, to say extra work should be done to make it bigger. But 1,500 people attended last EAG. Compare this to the 10,000 people at the last American Psychiatric Association conference, or the 13,000 at NeurIPS. EAG isn't small because we haven't discovered large-conference-holding technology. It's small as a design choice. When I talk to people involved, they say they want to project an exclusive atmosphere, or make sure that promising people can find and network with each other. I think this is a bad tradeoff. ...because it makes people upset This comment (seen on Kerry Vaughan's Twitter) hit me hard: A friend describes volunteering at EA Global for several years. Then one year they were told that not only was their help not needed, but they weren't impressive enough to be allowed admission at all. Then later something went wrong and the organizers begged them to come and help after all. I am not sure that they became less committed to EA because of the experience, but based on the look of delight in their eyes when they described rejecting the organizers' plea, it wouldn't surprise me if they did. Not everyone rejected from EAG feels vengeful. Some people feel miserable. This year I came across the Very Serious Guide To Surviving EAG FOMO: Part of me worries that, despite its name, it may not really be Very Serious... ...but you can learn a lot about what people are thinking by what they joke about, and I think a lot of EAs are sad because they can't go to EAG. ...because you can't identify promising people. In early 2020 Kelsey Piper and I gave a talk to an EA student group. Most of the people there were young overachievers who had their entire lives planned out, people working on optimizing which research labs they would intern at in which order throughout their early 20s. They expected us to have useful tips on how to do this. Meanwhile, in my early 20s, I was making $20,000/year as an intro-level English teacher at a Japanese conglomerate that went bankrupt six months after I joined. In her early 20s, Kelsey was taking leave from college for mental health reasons and babysitting her friends' kid for room and board. If either of us had been in the student group, we would have been the least promising of the lot. And here we were, being asked to advise! I mumbled something about optionality or something, but the real lesson I took away from this is that I don't trust anyone to identify promising people reliably. ...because people will refuse to apply out of scrupulosity. I do this. I'm not a very good conference attendee. Faced with the challenge of getting up early on a Saturday to go to San Francisco, I drag my feet and show up an hour late. After a few talks and meetings, I'm exhausted and go home early. I'm unlikely to change my career based on anything anyone says at EA Global, and I don't have any special wisdom that would convince other people to change theirs. 
So when I consider applying to EAG, I ask myself whether it's worth taking up a slot that would otherwise go to some bright-eyed college student who has been dreaming of going to EAG for years and is going to consider it the highlight of their life. Then I realize I can't justify bumping that college student, and don't apply. I used to think I was the only person who felt this way. But a few weeks ago, I brought it up in a group of five people, and two of them said they had also stopped applying to EA...

The Nonlinear Library
EA - EA Dedicates by ozymandias

The Nonlinear Library

Play Episode Listen Later Jun 23, 2022 9:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Dedicates, published by ozymandias on June 23, 2022 on The Effective Altruism Forum. I've noticed a persistent competing need in effective altruist communities. On one hand, many people want permission not to only value effective altruism. They care about doing good in the world, but they also care about other things: their community, their friendships, their children, a hobby they feel passionate about like art or sports or RPGs or programming, a cause that's personal to them like free speech or cancer, even just spending time vegging out and watching TV. So they emphasize work/life balance and that effective altruism doesn't have to be your only life goal. On the other hand, some people do strive to only care about effective altruism. Of course, they still have hobbies and friendships and take time to rest; effective altruists are not ascetics. But ultimately everything they do is justified by the fact that it strengthens them to continue the work. The discourse about work/life balance can be very alienating to them. It can feel like the effective altruism community isn't honoring the significant personal sacrifices they're making to improve the world. In some cases, people feel like there's a certain crab bucket mentality—you should limit how much good you do so that other people don't feel bad—which is very toxic. Conversely, people who have work/life balance can feel threatened by people who only care about effective altruism. If those people exist, does that mean you have to be one? Are you evil, or a failure, or personally responsible for dozens of counterfactual deaths, because you care about more than one thing? I propose that this conversation would be improved by naming the second group. I suggest calling them “EA dedicates.” In thinking about EA dedicates, I was inspired by thinking about monks. Monks play an important role in religions with monks. They're very admirable people who do a lot of good. The religion wouldn't function without them. And most people are not supposed to be monks. Why We Need Both Dedicates and Non-Dedicates There are two reasons that the effective altruism movement should be open to people who aren't dedicates. First, people who care about more than one thing still do an enormous amount of good. Many of the best effective altruists aren't dedicates, such as journalist Kelsey Piper and CEA community liaison Julia Wise (as well as, of course, many people whose contributions don't succeed in making them EA famous). It would be a tremendous mistake to expel Kelsey Piper for insufficient devotion. Quite frankly, the bednets don't care if the person who buys them also donates to cancer research. Second, most people caring about multiple things is good for the health of the effective altruist community. If the effective altruist community is totally wrongheaded, it's psychologically easier to admit if that doesn't mean losing literally everything you care about and have spent your life working for. There's a certain comfort in being able to say “at least I still have my kids” or “at least I still have my art.” Similarly, the effective altruist movement is already quite insular. People who care about multiple things are more likely to have friends outside the community, and therefore get an outside reality check and views from outside the EA bubble. 
(An EA dedicate could have outside-community friends and many of them do, but it certainly seems less common.) These are merely two of the ways that having a lot of non-dedicates makes the EA community more resilient. The advantages of being open to EA dedicates, conversely, are pretty obvious. In general, if you care about multiple things, you're going to split your time, energy, and resources across them and have less time, energy, and resources for any particular goal. If you're donatin...

The Nonlinear Library
LW - Why so little AI risk on rationalist-adjacent blogs? by Grant Demaree

The Nonlinear Library

Play Episode Listen Later Jun 13, 2022 11:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why so little AI risk on rationalist-adjacent blogs?, published by Grant Demaree on June 13, 2022 on LessWrong. I read a lot of rationalist-adjacents. Outside of LessWrong and ACX, I hardly ever see posts on AI risk. Tyler Cowen of Marginal Revolution writes that, "it makes my head hurt" but hasn't engaged with the issue. Even Zvi spends very few posts on AI risk. This is surprising, and I wonder what to make of it. Why do the folks most exposed to MIRI-style arguments have so little to say about them? Here's a few possibilities Some of the writers disagree that AGI is a major near-term threat It's unusually hard to think and write about AI risk The best rationalist-adjacent writers don't feel like they have a deep enough understanding to write about AI risk There's not much demand for these posts, and LessWrong/Alignment Forum/ACX are already filling it. Even a great essay wouldn't be that popular Folks engaged in AI risk are a challenging audience. Eliezer might get mad at you When you write about AGI for a mainstream audience, you look weird. I don't think this is as true it used to be, since Ezra Klein did it in the New York Times and Kelsey Piper in Vox Some of these writers are heavily specialized. The mathematicians want to write about pure math. The pharmacologists want to write about drug development. The historians want to argue that WWII strategic bombing was based on a false theory of popular support for the enemy regime, and present-day sanctions are making the same mistake Some of the writers are worried that they'll present the arguments badly, inoculating their readers against a better future argument What they wrote I'll treat Scott Alexander's blogroll as the canonical list of rationalist-adjacent writers. I've grouped them by their stance on the following statement: Misaligned AGI is among the most important existential risks to humanity Explicitly agrees and provides original gears-level analysis (2) Zvi Mowshowitz “The default outcome, if we do not work hard and carefully now on AGI safety, is for AGI to wipe out all value in the universe.” (Dec 2017) Zvi gives a detailed analysis here followed by his own model in response to the 2021 MIRI conversations. Holden Karnofsky of OpenPhil and GiveWell In his Most Important Century series (Jul 2021 to present), Holden explains AGI risk to mainstream audiences. Ezra Klein featured Holden's work in the New York Times. This series had a high impact on me, because Holden used to have specific and detailed objections to MIRI's arguments (2012). Ten years later, he's changed his mind. Explicitly agrees (4) Jacob Falkovich of Putanumonit “Misaligned AI is an existential threat to humanity, and I will match $5,000 of your donations to prevent it.” (Dec 2017) Jacob doesn't make the case himself, but he links to external sources. 
Kelsey Piper of Vox “the big overarching challenge is making sure that as our systems get more powerful, we design them so their goals are aligned with those of humans — that is, humanity doesn't construct scaled-up superintelligent AI that overwhelms human intentions and leads to catastrophe.” (Apr 2021) Steve Hsu of Information Processing “I do think that the top 3 risks to humanity, in my view, are AI, bio, and unknown unknowns.” (Apr 2020) Alexey Guzey of Applied Divinity Studies “AI risk seems to be about half of all possible existential risk.” The above quote is from a May 2021 PDF, rather than a direct post. I can't find a frontpage post that makes the AI risk case directly Explicitly ambivalent (2) Tyler Cowen of Marginal Revolution “As for Rogue AI... For now I will just say that it makes my head hurt. It makes my head hurt because the topic is so complicated. I see nuclear war as the much greater large-scale risk, by far” (Feb 2022). Julia Galef of Rationally Speaking Julia int...

The Nonlinear Library: LessWrong
LW - Why so little AI risk on rationalist-adjacent blogs? by Grant Demaree

The Nonlinear Library: LessWrong

Play Episode Listen Later Jun 13, 2022 11:28


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why so little AI risk on rationalist-adjacent blogs?, published by Grant Demaree on June 13, 2022 on LessWrong. I read a lot of rationalist-adjacents. Outside of LessWrong and ACX, I hardly ever see posts on AI risk. Tyler Cowen of Marginal Revolution writes that, "it makes my head hurt" but hasn't engaged with the issue. Even Zvi spends very few posts on AI risk. This is surprising, and I wonder what to make of it. Why do the folks most exposed to MIRI-style arguments have so little to say about them? Here's a few possibilities Some of the writers disagree that AGI is a major near-term threat It's unusually hard to think and write about AI risk The best rationalist-adjacent writers don't feel like they have a deep enough understanding to write about AI risk There's not much demand for these posts, and LessWrong/Alignment Forum/ACX are already filling it. Even a great essay wouldn't be that popular Folks engaged in AI risk are a challenging audience. Eliezer might get mad at you When you write about AGI for a mainstream audience, you look weird. I don't think this is as true it used to be, since Ezra Klein did it in the New York Times and Kelsey Piper in Vox Some of these writers are heavily specialized. The mathematicians want to write about pure math. The pharmacologists want to write about drug development. The historians want to argue that WWII strategic bombing was based on a false theory of popular support for the enemy regime, and present-day sanctions are making the same mistake Some of the writers are worried that they'll present the arguments badly, inoculating their readers against a better future argument What they wrote I'll treat Scott Alexander's blogroll as the canonical list of rationalist-adjacent writers. I've grouped them by their stance on the following statement: Misaligned AGI is among the most important existential risks to humanity Explicitly agrees and provides original gears-level analysis (2) Zvi Mowshowitz “The default outcome, if we do not work hard and carefully now on AGI safety, is for AGI to wipe out all value in the universe.” (Dec 2017) Zvi gives a detailed analysis here followed by his own model in response to the 2021 MIRI conversations. Holden Karnofsky of OpenPhil and GiveWell In his Most Important Century series (Jul 2021 to present), Holden explains AGI risk to mainstream audiences. Ezra Klein featured Holden's work in the New York Times. This series had a high impact on me, because Holden used to have specific and detailed objections to MIRI's arguments (2012). Ten years later, he's changed his mind. Explicitly agrees (4) Jacob Falkovich of Putanumonit “Misaligned AI is an existential threat to humanity, and I will match $5,000 of your donations to prevent it.” (Dec 2017) Jacob doesn't make the case himself, but he links to external sources. 
Kelsey Piper of Vox: “the big overarching challenge is making sure that as our systems get more powerful, we design them so their goals are aligned with those of humans — that is, humanity doesn't construct scaled-up superintelligent AI that overwhelms human intentions and leads to catastrophe.” (Apr 2021) Steve Hsu of Information Processing: “I do think that the top 3 risks to humanity, in my view, are AI, bio, and unknown unknowns.” (Apr 2020) Alexey Guzey of Applied Divinity Studies: “AI risk seems to be about half of all possible existential risk.” The above quote is from a May 2021 PDF, rather than a direct post; I can't find a frontpage post that makes the AI risk case directly. Explicitly ambivalent (2): Tyler Cowen of Marginal Revolution: “As for Rogue AI... For now I will just say that it makes my head hurt. It makes my head hurt because the topic is so complicated. I see nuclear war as the much greater large-scale risk, by far” (Feb 2022). Julia Galef of Rationally Speaking: Julia int...

The Nonlinear Library
EA - Stop scolding people for worrying about monkeypox (Vox article) by Lizka

The Nonlinear Library

Play Episode Listen Later May 31, 2022 2:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stop scolding people for worrying about monkeypox (Vox article), published by Lizka on May 31, 2022 on The Effective Altruism Forum. I quite liked this recent article from Kelsey Piper. (Note: I'm link-posting this because I think link-posting can be useful.) A few excerpts from the article: There are some solid epidemiological reasons to conclude that monkeypox doesn't pose the same threat to the world that Covid-19 did in 2020. But instead of condemning alarmism, experts should acknowledge the many reasons for that alarm. The world is horribly vulnerable to the next pandemic, we know it will hit at some point, and the undetected spread of monkeypox around the world until there were dozens of cases in non-endemic countries — despite the fact it typically has low transmissibility — shows how profoundly we've failed to learn the lessons from Covid-19 we need to avoid a catastrophic repeat. Many of the biggest missteps of the last few years have happened when our public health and communications institutions have tried to manage public reactions to what they have to say: from Fauci saying that he dismissed mask-wearing early on in the pandemic out of fears of causing mass panic, to worries that endorsing booster shots (even as the evidence grew they were needed) would make the vaccines look bad, to the FDA's earlier seeming reluctance to authorize vaccines for children under age 5, despite data justifying it, out of concerns that authorizing Pfizer and Moderna at different times would confuse the public. In general, I'd like to see public health officials step back entirely from trying to manage our feelings about outbreaks. Don't tell us to worry or not to worry, or not to worry yet. Don't tell us to worry about something else instead. Tell us what measures are being taken to contain the monkeypox outbreak, and prevent the next monkeypox outbreak, and prevent the next outbreak of something much, much worse than monkeypox. By all means, explain the reasons to think monkeypox is likely not very transmissible; that's important information you have relevant expertise on, unlike trying to manage the public's feelings. Once you have the accurate facts about monkeypox — and about the risk of pandemics generally — whether you're worried by those facts isn't really a question for the CDC. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Atlantic article on effective Ukraine aid by iporophiry

The Nonlinear Library

Play Episode Listen Later Apr 6, 2022 2:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Atlantic article on effective Ukraine aid, published by iporophiry on April 5, 2022 on The Effective Altruism Forum. This is an article from the Atlantic from a few days ago. The writer takes on an EA lens, and seems to understand it well. Includes an interview with Pablo Melchor, co-founder and president of Ayuda Efectiva (in Spain). I'd recommend it, and it seems like a good thing to send to non-EA friends. Some excerpts: When I asked Pablo Melchor, the president of an effective-altruism nonprofit in Spain called Ayuda Efectiva, if he'd altered his usual giving in response to the war, he said he hadn't. “This has nothing to do with how much I care (a lot!) but rather with how much I think my donation could achieve,” he wrote to me in an email. “I know the Ukrainian crisis is going to receive a huge amount of resources and any additional amounts will make a much greater difference in now forgotten tragedies,” such as the hundreds of thousands of children who die from malaria each year. This is not to pit one tragedy against another, but just to note that the donations a cause receives are strongly dictated by media coverage. An effective altruist might also try to identify less publicized effects of the war itself. For instance, Chris Szulc, a member of Effective Altruism Poland, told me that he was interested in finding ways to provide Ukrainian refugees with mental-health services. Still other donors might take a bigger-picture perspective and fund initiatives that aim to reduce the risk of nuclear war and of large-scale conflicts. But say you want to do something to try to help Ukrainians in this moment. What's the best thing to do? For starters, donate your money and time, not physical stuff. “It's better to send money to trusted organizations, because they can buy blankets or coats or whatever is needed at much cheaper prices,” Melinda Haring, the deputy director of the Atlantic Council's Eurasia Center, told me. “They can get a brand-new coat for less than it costs for you to ship the coat to Ukraine.” And (For additional ideas, you could check out suggestions from Effective Altruism Poland and from Vox's Kelsey Piper, who highlights, among other organizations, independent Russian media outlets.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Liars by Kelsey Piper

The Nonlinear Library

Play Episode Listen Later Apr 5, 2022 6:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Liars, published by Kelsey Piper on April 5, 2022 on The Effective Altruism Forum. Yesterday the New Yorker published a detailed exploration of an 'expert on serial killers', Stéphane Bourgoin, who turned out to be comprehensively lying about his own past, the murder of his girlfriend (who appears to not exist), his credentials, how many serial killers he'd interviewed, etc., but who was taken seriously for many years, getting genuine privileged access to serial killers for interviews and to victims' families/support groups as a result of his lying. I find serial/compulsive/career liars fascinating. One of the best serial-liar stories that I ran into as a warning story for journalists is that of Stephen Glass, the 1990s New Republic writer who turned out to be comprehensively making up most of the juicy details of his articles, including forging handwritten transcripts of conversations that never happened to present to the magazine's fact-checkers. I mostly just read about this because it's fun, but I do think it has crystallized some things for me which are useful to have in mind even if you don't have fun reading about serial liars. (Takeaways are at the bottom if you want to skip to that.) The dynamics of how serial liars go unnoticed, and how the socially awkward information "hey, we think that guy is a fraud" gets propagated (or fails to get propagated) seem to me to also describe how other less clear-cut kinds of errors and misconduct go unnoticed. A recurring theme in the New Yorker article is that people knew this guy was full of crap, but weren't personally motivated to go try to correct all the ways he was full of crap. “Neither I nor any of our mutual friends at the time had heard the story of his murdered girlfriend, nor of his so-called F.B.I. training,” a colleague and friend of Bourgoin's from the eighties told me. “It triggered rounds of knowing laughter among us, because we all knew it was absolutely bogus.” Bourgoin was telling enough lies that eventually one of them would surely ring wrong to someone, though by then he'd often moved on to a different audience and different lies. I ended up visualizing this as a sort of expanding ring of people who'd encountered Bourgoin's stories. With enough exposure to the stories, most people suspected something was fishy and started to withdraw, but by then Bourgoin had reached a larger audience and greater fame, speaking to new audiences for whom the warning signs hadn't yet started to accumulate. Eventually, he got taken down by an irritated group of internet amateurs who'd noticed all the ways in which he was dishonest and had the free time and spite to actually go around comprehensively proving it. This is a dynamic I've witnessed from the inside a couple of times. There's a Twitter personality called 'Lindyman' who had his fifteen minutes of internet fame last year, including a glowing New York Times profile. Much of his paid Substack content was plagiarized. A lot of people know this and had strong evidence for a while before someone demonstrated it publicly. I personally know someone who Lindyman plagiarized from, who seriously debated whether to write a blog post to the effect of 'Lindyman is a plagiarist', but ended up not doing so.
It would've taken a lot of time and effort, and probably attracted the wrath of Lindyman's followers, and possibly led to several frustrating weeks of back and forth, and is that really worth it? And that's for plagiarism of large blocks of text, which is probably the single most provable and clear-cut kind of misbehavior, much harder to argue about than the lies Glass or Bourgoin put forward. Eventually someone got fed up and made the plagiarism public, but it'd been a running joke in certain circles for a while before then. There are more examples I'm aware of where a researc...

Oh, I Like That
Want to Play a Game? (Part 2)

Oh, I Like That

Play Episode Listen Later Mar 24, 2022 52:47


Resources for Ukraine and trans people and families in Texas: “How You can Help Ukrainians” by Kelsey Piper for Vox; Transgender Education Network of Texas; Trans Kids and Families of Texas; Thread of resources to make use of if you're a caregiver or educator in Texas. Get Oh, I Like That merch here! In the last episode, we shared our top-level thoughts about choosing games to play and how to think about teaching them to others. This week we dive into our recommendations for exactly which games to play. We cover games you play solo, games that are fun to play with one other person, and games that are good for groups and social situations. This episode was produced by Rachel and Sally and edited by Lucas Nguyen. Our logo was designed by Amber Seger (@rocketorca). Our theme music is by Tiny Music. MJ Brodie transcribed this episode. Follow us on Twitter @OhILikeThatPod. Things we talked about: 9 Things You Probably Don't Know About Daylight Saving Time by Rachel for BuzzFeed; one-player tabletop roleplaying games like Thousand Year Old Vampire, The Wretched, Ironsworn, and Red Snow; the 1974 board game Anti-Monopoly; Animal Crossing Monopoly; Monopoly Was Designed to Teach the 99% About Income Inequality by Mary Pilon for Smithsonian Magazine; How to Solve the New York Times Crossword by Deb Amlen for the New York Times; SET; Tussie Mussie; PARKS; Trails: A Parks Game; Why is it called Mexican Train Dominoes?; Mastermind; Sushi Go!; Anomia; Hunt a Killer mystery subscription box; i'm sorry did you say street magic; Devotions: The Selected Poems of Mary Oliver; The Good Luck Girls by Charlotte Nicole Davis.

Efektiivne Altruism Eesti
#19 Maris Sala on digital nomadism and networking

Efektiivne Altruism Eesti

Play Episode Listen Later Mar 23, 2022 50:32


We talk with Maris Sala, an active member of EA Estonia, who currently works at the Nonlinear Fund, an organisation that looks for and helps implement ideas for reducing suffering and existential risks. We discuss Maris's experience starting a career in a field related to Effective Altruism, networking, and EA Global. - Rate the episode here: https://forms.gle/LPRE2ziBs62pjGTX9 Sources mentioned during the conversation: - The Nonlinear Fund: https://www.nonlinear.org - Charity Entrepreneurship: https://www.charityentrepreneurship.com - Rob Miles (a YouTube channel on AI safety): https://www.youtube.com/c/RobertMilesAI/videos - The Precipice by Toby Ord: https://theprecipice.com - EA Cambridge AI safety course: https://www.eacambridge.org/agi-safety-fundamentals - Fathom Radiant: https://fathomradiant.co - Effective Altruism Job Postings group on Facebook: https://www.facebook.com/groups/1062957250383195 News: - Charity Entrepreneurship Incubation Program: https://www.charityentrepreneurship.com/incubation-program - The Nonlinear Library: https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library - Kelsey Piper on helping Ukrainians: https://www.vox.com/future-perfect/22955885/donate-ukraine

The Nonlinear Library
EA - Followup on Terminator by skluug

The Nonlinear Library

Play Episode Listen Later Mar 14, 2022 14:24


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Followup on Terminator, published by skluug on March 12, 2022 on The Effective Altruism Forum. I posted my last post to the Effective Altruism Forum, where it received more attention than I'd anticipated. This was cool, but also a little scary, since I'm not actually super confident in my thesis—I think I made an under-appreciated argument, but there was plenty of valid pushback in the comments. My post was missing an important piece of context—the other, alternative comparisons for AI risk I was implicitly arguing against. I had something very specific in mind: on The Weeds, after Kelsey Piper says Terminator is not a good comparison for AI risk, she says that a better comparison is Philip Morris, the cigarette company. Now, there are a lot of different threats that fall under the umbrella of “AI risk”, but the threats I am most worried about—existential threats—definitely look a lot more like Terminator than Philip Morris. This could just be a difference in priorities, but Kelsey says other things throughout the episode that made it sound to me like she's indeed referring to x-risk from unaligned AGI. Here are some specific critiques made by commenters that I liked. What's the audience? What's the message? My paragraph calling rejection of the Terminator comparison “fundamentally dishonest” was too idealistic. Whether or not something is “like” something else is highly subjective, and pitches about AI x-risk often happen in a context prior to any kind of patient, exhaustive discussion where you can expect your full meaning to come across. This was pointed out by Mauricio: In a good faith discussion, one should be primarily concerned with whether or not their message is true, not what effect it will have on their audience. Agreed, although I might be much less optimistic about how often this applies. Lots of communication comes before good faith discussion--lots of messages reach busy people who have to quickly decide whether your ideas are even worth engaging with in good faith. And if your ideas are presented in ways that look silly, many potential allies won't have the time or interest to consider your arguments. This seems especially relevant in this context because there's an uphill battle to fight--lots of ML engineers and tech policy folks are already skeptical of these concerns. (That doesn't mean communication should be false--there's much room to improve a true message's effects by just improving how it's framed. In this case, given that there's both similarities and differences between a field's concerns and sci-fi movie's concerns, emphasizing the differences might make sense.) (On top of the objections you mentioned, I think another reason why it's risky to emphasize similarities to a movie is that people might think you're worried about stuff because you saw it in a sci-fi movie.) I replied that the Terminator comparison could decrease interest for some audiences and increase interest for others, to which Mauricio replied: That seems roughly right. On how this might depend on the audience, my intuition is that professional ML engineers and policy folks tend to be the first kind of people you mention (since their jobs select for and demand more grounded/pragmatic interests).
So, yes, there are considerations pushing for either side, but it's not symmetrical--the more compelling path for communicating with these important audiences is probably heavily in the direction of "no, not like Terminator." Edit: So the post title's encouragement to "stop saying it's not" seems overly broad. I think there are two cruxes here, one about the intended audience of AI alignment messaging, and one about the intended reaction. As I see it, the difficulty of the alignment problem is still very unclear, and may fall into any of several different difficulty tiers: The alignmen...

Oh, I Like That
Want to Play a Game? (Part 1)

Oh, I Like That

Play Episode Listen Later Mar 10, 2022 48:30


Resources for Ukraine and trans people and families in Texas: “How You can Help Ukrainians” by Kelsey Piper for Vox; Transgender Education Network of Texas; Trans Kids and Families of Texas; Thread of resources to make use of if you're a caregiver or educator in Texas. Have you ever thought about how many games are available to us these days? Card games, board games, roleplaying games. Games of chance, games of strategy, games where you win by buying up all the property and then charging people to use it. This is the first installment of a two-part series about games and gaming. In this first episode, we talk about the art of getting people into games, the science of teaching complicated games to newbies, resources for finding games you want to play, and also hot tubs. Get Oh, I Like That merch here! This episode was produced by Rachel and Sally and edited by Lucas Nguyen. Our logo was designed by Amber Seger (@rocketorca). Our theme music is by Tiny Music. MJ Brodie transcribed this episode. Follow us on Twitter @OhILikeThatPod. Things we talked about: “Iceland's Water Cure” by Dan Kois for the New York Times; the tarot card The Tower; Hello Wordl, off-brand Wordle; Wordle spinoffs Quordle, Semantle, Poeltl, and Worldle; Shut Up & Sit Down's game picker; Geek & Sundry's Game the Game series; two tweets that perfectly sum up the agony of teaching a game and being taught a new game; How to Teach Board Games Like a Pro; the Slate podcast Decoder Ring.

The Nonlinear Library
EA - AI Risk is like Terminator; Stop Saying it's Not by skluug

The Nonlinear Library

Play Episode Listen Later Mar 8, 2022 15:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Risk is like Terminator; Stop Saying it's Not, published by skluug on March 8, 2022 on The Effective Altruism Forum. (I believe this is all directionally correct, but I have zero relevant expertise.) When the concept of catastrophic risks from artificial intelligence is covered in the press, it is often compared to popular science fiction stories about rogue AI—and in particular, to the Terminator film franchise. The consensus among top communicators of AI risk seems to be that this is bad, and counterproductive to popular understanding of real AI risk concerns. For example, take Kelsey Piper's March 2021 appearance on The Weeds to talk about AI risk (not at all picking on Kelsey, it's just a convenient example): Matt Yglesias: These science fiction scenarios—I think we'll get the audio, I loved Terminator 2 as a kid, it was like my favorite movie. [Audio clip from Terminator 2 plays.] ...and this, what it's about, right, is artificial intelligence will get out of control and pose an existential threat to humanity. So when I hear that, it's like—yeah, that's awesome, I do love that movie. But like, is that for real? Kelsey Piper: So, I don't think AI risk looks much like Terminator. And I do think that AI risk work has been sort of damaged by the fact that yeah there's all this crazy sci-fi where like, the robots develop a deep loathing for humanity, and then they come with their guns, and they shoot us all down, and only one time traveler—you know—that's ridiculous! And so of course, if that's what people are thinking of when they think about the effects of AI on society, they're going to be like, that's ridiculous. I wasn't on The Weeds, because I'm just an internet rando and not an important journalist. But if I had been, I think I would've answered Matt's question something like this: skluug: Yes. That is for real. That might actually happen. For real. Not the time travel stuff obviously, but the AI part 100%. It sounds fake, but it's totally real. Skynet from Terminator is what AI risk people are worried about. This totally might happen, irl, and right now hardly anyone cares or is trying to do anything to prevent it. I don't know if my answer is better all things considered, but I think it is a more honest and accurate answer to Matt's question: “Is an existential threat from rogue AI—as depicted in the Terminator franchise—for real?”. Serious concerns about AI risk are often framed as completely discontinuous with rogue AI as depicted in fiction and in the public imagination; I think this is totally false. Rogue AI makes for a plausible sci-fi story for the exact same high-level reasons as it is an actual concern: We may eventually create artificial intelligence more powerful than human beings; and that artificial intelligence may not necessarily share our goals. These two statements are obviously at least plausible, which is why there are so many popular stories about rogue AI. They are also why AI might in real life bring about an existential catastrophe. If you are trying to communicate to people why AI risk is a concern, why start off by undermining their totally valid frame of reference for the issue, making them feel stupid, uncertain, and alienated? This may seem like a trivial matter, but I think it is of some significance.
Fiction can be a powerful tool for generating public interest in an issue, as Toby Ord describes in the case of asteroid preparedness as part of his appearance on the 80,000 Hours Podcast: Toby Ord: Because they saw one of these things [a comet impact on Jupiter] happen, it was in the news, people were thinking about it. And then a couple of films, you might remember, I think “Deep Impact” and “Armageddon” were actually the first asteroid films and they made quite a splash in the public consciousness. And then that coincided with ge...

The Nonlinear Library
EA - I want more columns like Future Perfect, but for science publications by James Lin

The Nonlinear Library

Play Episode Listen Later Mar 8, 2022 9:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I want more columns like Future Perfect, but for science publications, published by James Lin on March 8, 2022 on The Effective Altruism Forum. 1. Introduction: Positive public opinion is an important asset for convincing really smart people to work on the most important—yet still socially unsexy—problems, such as AI safety. I'm going to lay out the case for establishing EA-adjacent columns in reputable science/tech publications (e.g. MIT Tech Review, Scientific American, etc), and how—given the precedent of similar ventures and a fair number of EA science writers—there's a strong possibility this idea becomes reality. A note: I have a publication—X—in mind which I won't name in this post, but feel free to reach out if you're interested! 2. Problem: Observations are based on personal experience from retreats and community building for Harvard/MIT. An outsized proportion of the progress made in the field of AI safety comes from a minority of brilliant EAs (e.g. Paul Christiano) working on hard problems, rather than simply increasing member counts and recruiting capable but otherwise unexceptional students, even when they're from a fancy school. Additionally, it's often hard to convince highly-talented CS students to work on AI safety when there are other competing career paths that are considerably more appealing, such as finance, Big Tech, and transformative startups. The general sentiment seems to be: “Wow, seems important. Hopefully someone else solves this.” There are 2 main reasons: (1) It's hard to know where to start; the problem feels like a black box. (2) For all of its importance and neglectedness, working on minimizing AI risk is not yet an appealing problem to work on. Headlines frequently evangelize the amazing work responsible for computer vision advancing beyond human capabilities, and deep learning systems beating the world's best at Go, but we rarely hear about AI safety triumphs, even when there are breakthroughs. Considering the technical rigor required and also the significant impact it has on the future going well, we need to hear more about these successes. This is the problem I'll attempt to address. 3. Solution: One possible solution is reaching out to students in groups with a high density of technical prowess (e.g. IMO camps). Others have suggested similar ideas, so this proposal will focus on a different form of outreach. There should be a shift in the public discourse towards focusing more on AI safety and existential risk in general. I claim that working with a reputable publication organization and having libraries of articles to direct excited students towards will gradually make it easier to recruit von Neumanns. Some great examples of outreach in this category include The Precipice and the Future Perfect column at Vox, spearheaded by folks like Kelsey Piper, Ezra Klein, and Dylan Matthews. Back in 2019, Kelsey suggested that more initiatives like FP at other organizations could do quite a lot of good, and I'm not aware of any explicit expansions into this space. I would love to hear about some other EA-adjacent columns like this that currently exist, and I'm excited about generating more initiatives like these! Specifically, I want to launch an X-Risk or EA-adjacent column, similar to Future Perfect, for large science/tech publications like X. I think this would be quite impactful and tractable.
3.1 What Impact will this have? Working with existing organizations comes with major perks. Readership stats for X hover around 5 million unique monthly readers and 500k estimated print subscriptions. Those are big numbers! Each month, there are around 20 articles, which translates into an average of 250,000 views per article. Some pieces will have millions and others in the low deci-thousands; but overall, pretty good. Having articles on AI Safety (and o...
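A quick back-of-the-envelope check of the readership arithmetic quoted above; the figures are the post's own estimates for the unnamed publication "X", not verified numbers:

```python
# Rough check of the views-per-article figure quoted above.
# All numbers are the post's own estimates for the unnamed publication "X".
monthly_readers = 5_000_000   # claimed unique monthly readers
articles_per_month = 20       # approximate articles published per month

print(monthly_readers / articles_per_month)  # 250000.0 -> ~250,000 average views per article
```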

The Nonlinear Library
LW - You Can Get Fluvoxamine by AppliedDivinityStudies

The Nonlinear Library

Play Episode Listen Later Jan 19, 2022 4:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You Can Get Fluvoxamine, published by AppliedDivinityStudies on January 18, 2022 on LessWrong. [TLDR: I paid $95 for a 10 minute video consultation with a doctor, told them I was depressed and wanted fluvoxamine, and got my prescription immediately.] I'm not a doctor, and this isn't medical advice. If you want information on the status of fluvoxamine as a Covid treatment, you can see the evidence base in the appendix, but interpreting those results isn't my business. I'm just here to tell you that if you want fluvoxamine, you can get it. Years ago, some of my friends were into downloading apps that would get you a 10 minute consultation with a doctor in order to quickly acquire a prescription for medical marijuana. Today, similar apps exist for a wide range of medications, and with a bit of Googling, you can find one that will prescribe you fluvoxamine. What's required on your end? In my case, $95, 10 minutes of my time, and some white lies about my mental health. Fluvoxamine is only prescribed right now for depression and anxiety, so if you want it, my advice is to say that: you have an ongoing history of moderate depression and anxiety; and that you have taken Fluvoxamine in the past, and it's helped. And that's basically it. Because there are many other treatments for depression, you do specifically have to ask for Fluvoxamine by name. If they try to give you something else, say that you've tried it before and didn't like the side effects (weight gain, insomnia, headaches, whatever). One more note, and this is critical: unless you are actually suicidal, do not tell your doctor that you have plans to commit suicide, to hurt yourself or others, or do anything that sounds like an immediate threat. This puts you at risk of being put involuntarily in an inpatient program, and you don't want that. Finally, you might ask: isn't this super unethical? Aren't you not supposed to lie to doctors to get drugs? Maybe, I don't know, this isn't medical advice, and it's not really ethical advice either. I think the only real potential harms here are that we consume so much fluvoxamine that there isn't enough for depressed people, or that doctors start taking actual depressed patients who want fluvoxamine less seriously. As far as I can tell, there isn't currently a shortage; as to the latter concern, I couldn't really say. Appendix: Again, this isn't medical advice. You shouldn't take any of these results or pieces of news coverage as evidence that fluvoxamine works and that the benefits outweigh the costs. I'm literally only adding this to cover my own ass and make the point that fluvoxamine is a normal mainstream thing and not some weird conspiracy drug. Here's the Lancet article, and the JAMA article. Here's Kelsey Piper at Vox: One medication the TOGETHER trial found strong results for, fluvoxamine, is generally used as an antidepressant and to treat obsessive-compulsive disorder. But it appears to reduce the risk of needing hospitalization or medical observation for Covid-19 by about 30 percent, and by considerably more among those patients who stick with the 10-day course of medication.
Unlike monoclonal antibodies, fluvoxamine can be taken as a pill at home --- which has been an important priority for scientists researching treatments, because it means that patients can take their medication without needing to leave the home and without straining a hospital system that is expected to be overwhelmed. "We would not expect it to be affected by which variants" a person is sick with, Angela Reiersen, a psychiatrist at Washington University in St. Louis whose research turned up fluvoxamine as a promising anti-Covid candidate, told me. And here's a Wall Street Journal article headlined "Is Fluvoxamine the Covid Drug We've Been Waiting For?" with subheading "A 10-day treatment costs only $4 and app...

The Nonlinear Library: LessWrong
LW - You Can Get Fluvoxamine by AppliedDivinityStudies

The Nonlinear Library: LessWrong

Play Episode Listen Later Jan 19, 2022 4:35


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You Can Get Fluvoxamine, published by AppliedDivinityStudies on January 18, 2022 on LessWrong. [TLDR: I paid $95 for a 10 minute video consultation with a doctor, told them I was depressed and wanted fluvoxamine, and got my prescription immediately.] I'm not a doctor, and this isn't medical advice. If you want information on the status of fluvoxamine as a Covid treatment, you can see the evidence base in the appendix, but interpreting those results isn't my business. I'm just here to tell you that if you want fluvoxamine, you can get it. Years ago, some of my friends were into downloading apps that would get you a 10 minute consultation with a doctor in order to quickly acquire a prescription for medical marijuana. Today, similar apps exist for a wide range of medications, and with a bit of Googling, you can find one that will prescribe you fluvoxamine. What's required on your end? In my case, $95, 10 minutes of my time, and some white lies about my mental health. Fluvoxamine is only prescribed right now for depression and anxiety, so if you want it, my advice is to say that: you have an ongoing history of moderate depression and anxiety; and that you have taken Fluvoxamine in the past, and it's helped. And that's basically it. Because there are many other treatments for depression, you do specifically have to ask for Fluvoxamine by name. If they try to give you something else, say that you've tried it before and didn't like the side effects (weight gain, insomnia, headaches, whatever). One more note, and this is critical: unless you are actually suicidal, do not tell your doctor that you have plans to commit suicide, to hurt yourself or others, or do anything that sounds like an immediate threat. This puts you at risk of being put involuntarily in an inpatient program, and you don't want that. Finally, you might ask: isn't this super unethical? Aren't you not supposed to lie to doctors to get drugs? Maybe, I don't know, this isn't medical advice, and it's not really ethical advice either. I think the only real potential harms here are that we consume so much fluvoxamine that there isn't enough for depressed people, or that doctors start taking actual depressed patients who want fluvoxamine less seriously. As far as I can tell, there isn't currently a shortage; as to the latter concern, I couldn't really say. Appendix: Again, this isn't medical advice. You shouldn't take any of these results or pieces of news coverage as evidence that fluvoxamine works and that the benefits outweigh the costs. I'm literally only adding this to cover my own ass and make the point that fluvoxamine is a normal mainstream thing and not some weird conspiracy drug. Here's the Lancet article, and the JAMA article. Here's Kelsey Piper at Vox: One medication the TOGETHER trial found strong results for, fluvoxamine, is generally used as an antidepressant and to treat obsessive-compulsive disorder. But it appears to reduce the risk of needing hospitalization or medical observation for Covid-19 by about 30 percent, and by considerably more among those patients who stick with the 10-day course of medication.
Unlike monoclonal antibodies, fluvoxamine can be taken as a pill at home --- which has been an important priority for scientists researching treatments, because it means that patients can take their medication without needing to leave the home and without straining a hospital system that is expected to be overwhelmed. "We would not expect it to be affected by which variants" a person is sick with, Angela Reiersen, a psychiatrist at Washington University in St. Louis whose research turned up fluvoxamine as a promising anti-Covid candidate, told me. And here's a Wall Street Journal article headlined "Is Fluvoxamine the Covid Drug We've Been Waiting For?" with subheading "A 10-day treatment costs only $4 and app...

The Nonlinear Library: EA Forum Top Posts
Can EA leverage an Elon-vs-world-hunger news cycle? by Jackson Wagner

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 12, 2021 4:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can EA leverage an Elon-vs-world-hunger news cycle?, published by Jackson Wagner on the Effective Altruism Forum. Summary: Elon Musk promises to donate six billion dollars if the UN can explain how this would truly solve world hunger (it would probably be much more expensive). Regardless of whether the donation happens or not, a major news cycle about the cost-effectiveness of international charitable donations seems like a great opportunity to raise the public profile of effective altruism. Details of The Billionaire-Bashing Drama: US senators are currently debating a large bill that will probably include some form of tax increase on the rich. Elon Musk, now the world's richest man, voiced his opposition to a proposed tax on unrealized capital gains. He framed his opposition specifically in terms of government inefficiency, saying: "Who is best at capital allocation -- government or entrepreneurs -- is indeed what it comes down to." More recently, a recent CNN headline asserted that just 2% of Elon Musk's ~$300B net worth could "solve world hunger" by feeding the 42 million people who suffer from malnutrition -- people who are otherwise "literally going to die". Inevitably, this claim turns out to be somewhat hyperbolic/innumerate -- 6 billion dollars divided by 42 million people is around $140 per person, which would be a lot lower than GiveWell's most effective interventions (around ~$5000 per life saved). Maybe this billionaire-bashing CNN interview is revealing an astounding, hitherto unknown charitable opportunity. But more likely, most of the people are not literally going to die and/or the effort to alleviate the problem would cost much more than $6 billion (at the very least, if we need to keep giving people food each year, the real cost would be $6 billion repeating annually). I haven't yet looked into the details of the situation too closely. Now, Elon has offered to indeed donate $6 billion, on the (presumably impossible) condition that the UN provide a realistic plan for how the problem of world hunger could legitimately be solved on that budget. For scale, six billion dollars devoted to EA cause areas would represent more than a 10% increase on the ~$42B total funds currently committed to the movement. Right now, EA organizations are spending about $0.2 billion on GiveWell-style global health charities each year. This Seems Like A Good Time For EA To Shine: This conversation is already distinct from most billionaire-related discourse for its focus on cost-effectiveness and international aid for the world's poorest, rather than the usual arguing over the fairness of allowing rich people to exist at all and the desire to increase taxes in order to fund more social services in the developed world. In short, for a brief moment in time, a major news cycle is focused on how one can do the most good to save the most lives per dollar. This obviously seems like a great time to introduce the ideas of effective altruism to more people. I can only imagine that Kelsey Piper is already busily drafting up an article about this for Future Perfect. But what else can EA do to capitalize on this news cycle? Should Givewell try to outline how they would attempt to spend six billion dollars? Surely their current top charities would run out of room-for-more-funding?
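For readers who want to sanity-check the cost-effectiveness comparison above, here is a minimal sketch; the $6B pledge, the 42 million people, and the ~$5,000-per-life-saved benchmark are the figures quoted in the post, not independently verified:

```python
# Back-of-the-envelope arithmetic from the news cycle described above.
# All figures are taken from the post and are illustrative, not authoritative.
donation = 6_000_000_000          # the hypothetical $6B donation
people = 42_000_000               # people CNN said are suffering from malnutrition
givewell_cost_per_life = 5_000    # rough "most effective intervention" benchmark cited in the post

print(donation / people)                   # ~142.86 dollars per person
print(donation / givewell_cost_per_life)   # 1,200,000 lives at that benchmark cost per life saved
```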
Would it be wiser to stay on-message with a relatively simple theme, like promoting Givewell's expertise in cost-effective global health and development spending? Or should we try to fire off a bunch of thinkpieces climbing the counterintuitiveness ladder from typical disaster aid to growth economics, and from there onwards to longtermism, x-risk reduction, etc? What should EA's general strategy be around these news cycles -- the movement generally tries to avoid political polarization, but surely some events are go...

The Nonlinear Library: EA Forum Top Posts
2019 AI Alignment Literature Review and Charity Comparison by Larks

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 12, 2021 118:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2019 AI Alignment Literature Review and Charity Comparison, published by Larks on the Effective Altruism Forum. Cross-posted to LessWrong here. Introduction: As in 2016, 2017 and 2018, I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments. My aim is basically to judge the output of each organisation in 2019 and compare it to their budget. This should give a sense of the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2019 budgets to get a sense of urgency. I'd like to apologize in advance to everyone doing useful AI Safety work whose contributions I may have overlooked or misconstrued. As ever I am painfully aware of the various corners I have had to cut due to time constraints from my job, as well as being distracted by 1) another existential risk capital allocation project, 2) the miracle of life and 3) computer games. How to read this document: This document is fairly extensive, and some parts (particularly the methodology section) are the same as last year, so I don't recommend reading from start to finish. Instead, I recommend navigating to the sections of most interest to you. If you are interested in a specific research organisation, you can use the table of contents to navigate to the appropriate section. You might then also want to Ctrl+F for the organisation acronym in case they are mentioned elsewhere as well. If you are interested in a specific topic, I have added a tag to each paper, so you can Ctrl+F for a tag to find associated work. The tags were chosen somewhat informally so you might want to search more than one, especially as a piece might seem to fit in multiple categories. Here are the un-scientifically-chosen hashtags: Agent Foundations AI_Theory Amplification Careers CIRL Decision_Theory Ethical_Theory Forecasting Introduction Misc ML_safety Other_Xrisk Overview Philosophy Politics RL Security Shortterm Strategy New to Artificial Intelligence as an existential risk? If you are new to the idea of General Artificial Intelligence as presenting a major risk to the survival of human value, I recommend this Vox piece by Kelsey Piper. If you are already convinced and are interested in contributing technically, I recommend this piece by Jacob Steinhardt, as unlike this document Jacob covers pre-2019 research and organises by topic, not organisation. Research Organisations. FHI: The Future of Humanity Institute. FHI is an Oxford-based Existential Risk Research organisation founded in 2005 by Nick Bostrom. They are affiliated with Oxford University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach. Their research can be found here. Their research is more varied than MIRI's, including strategic work, work directly addressing the value-learning problem, and corrigibility work. In the past I have been very impressed with their work.
Research: Drexler's Reframing Superintelligence: Comprehensive AI Services as General Intelligence is a massive document arguing that superintelligent AI will be developed for individual discrete services for specific finite tasks, rather than as general-purpose agents. Basically the idea is that it makes more sense for people to develop specialised AIs, so these will happen first, and if/when we build AGI these services can help control it. To some extent this seems to match what is happening - we do have many specialised AIs - but on the other hand there are teams working directly on AGI, and often in ML 'build an ML system that does it...

The Nonlinear Library: EA Forum Top Posts
2020 AI Alignment Literature Review and Charity Comparison by Larks

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 12, 2021 132:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2020 AI Alignment Literature Review and Charity Comparison, published by Larks on the Effective Altruism Forum. Cross-posted to LW here. Introduction: As in 2016, 2017, 2018, and 2019, I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments. My aim is basically to judge the output of each organisation in 2020 and compare it to their budget. This should give a sense of the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2021 budgets to get a sense of urgency. I'd like to apologize in advance to everyone doing useful AI Safety work whose contributions I have overlooked or misconstrued. As ever I am painfully aware of the various corners I have had to cut due to time constraints from my job, as well as being distracted by 1) other projects, 2) the miracle of life and 3) computer games. This article focuses on AI risk work. If you think other causes are important too, your priorities might differ. This particularly affects GCRI, FHI and CSER, who all do a lot of work on other issues which I attempt to cover but only very cursorily. How to read this document: This document is fairly extensive, and some parts (particularly the methodology section) are largely the same as last year, so I don't recommend reading from start to finish. Instead, I recommend navigating to the sections of most interest to you. If you are interested in a specific research organisation, you can use the table of contents to navigate to the appropriate section. You might then also want to Ctrl+F for the organisation acronym in case they are mentioned elsewhere as well. Papers listed as ‘X researchers contributed to the following research led by other organisations' are included in the section corresponding to their first author and you can Ctrl+F to find them. If you are interested in a specific topic, I have added a tag to each paper, so you can Ctrl+F for a tag to find associated work. The tags were chosen somewhat informally so you might want to search more than one, especially as a piece might seem to fit in multiple categories. Here are the un-scientifically-chosen hashtags: AgentFoundations Amplification Capabilities Corrigibility DecisionTheory Ethics Forecasting GPT-3 IRL Misc NearAI OtherXrisk Overview Politics RL Strategy Textbook Transparency ValueLearning New to Artificial Intelligence as an existential risk? If you are new to the idea of General Artificial Intelligence as presenting a major risk to the survival of human value, I recommend this Vox piece by Kelsey Piper, or for a more technical version this by Richard Ngo. If you are already convinced and are interested in contributing technically, I recommend this piece by Jacob Steinhardt, as unlike this document Jacob covers pre-2019 research and organises by topic, not organisation, or this from Critch & Krueger, or this from Everitt et al, though it is a few years old now. Research Organisations. FHI: The Future of Humanity Institute. FHI is an Oxford-based Existential Risk Research organisation founded in 2005 by Nick Bostrom.
They are affiliated with Oxford University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach. Their research can be found here. Their research is more varied than MIRI's, including strategic work, work directly addressing the value-learning problem, and corrigibility work - as well as work on other Xrisks. They run a Research Scholars Program, where people can join them to do research at FHI. There is a f...

The Nonlinear Library: EA Forum Top Posts
Matt Levine on the Archegos failure by Kelsey Piper

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 12, 2021 6:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Matt Levine on the Archegos failure, published by Kelsey Piper on the Effective Altruism Forum. Matt Levine is a finance writer with a very entertaining free newsletter, also available on Bloomberg to subscribers. Today's newsletter struck me as a fairly remarkable failure analysis of a very expensive failure, in which Credit Suisse lost $5.5 billion when the hedge fund Archegos collapsed. That doesn't usually happen, and banks are, of course, very incentivized to avoid it. When it happened, Credit Suisse commissioned a very thorough investigation into what went wrong. Some background: Archegos was a hedge fund, founded in 2013, that defaulted spectacularly this spring. The Wall Street Journal estimates that they lost $8 billion in 10 days. Levine wrote at the time: The basic story of Archegos is that it extracted as much leverage as possible from a half dozen Wall Street banks to buy a concentrated portfolio of tech and media stocks (apparently partially hedged with short index positions[2]), and those stocks went up a lot, before going down a lot. If you merely own some stocks, and they go way up and then way down, you'll end with approximately the money you started with and everything will be fine. But if you have taken out loans to buy stocks, then when they go up your wealth has increased. And if you then use your increased wealth to borrow lots more money and buy more stocks, then when they go down you will lose $8 billion in ten days. None of this is unknown to bankers, so it's confusing that the bankers let Archegos do this. In the immediate aftermath, there was a lot of theorizing about how the banks might have had inaccurate or incomplete information about how heavily leveraged Archegos was. Levine: When the Archegos story came out this spring, there was a sense, from the outside, that the banks had missed something, that there was some structural component of Archegos's trades that caused the banks to underestimate the risks they were taking. For instance, there was a widespread theory that, because Archegos did most of its trades in the form of total return swaps (rather than owning stocks directly), it didn't have to disclose its positions publicly, and because it did those swaps with multiple banks, none of the banks knew how big and concentrated Archegos's total positions were, so they didn't know how bad it would be if Archegos defaulted. But, nope, absolutely not, Credit Suisse was entirely plugged in to Archegos's strategy and how much trading it was doing with other banks, and focused clearly on this risk. So what went wrong? According to the report Credit Suisse commissioned from a law firm on the whole mess, what went wrong is that Credit Suisse determined that Archegos was overleveraged, and that they needed more collateral, and they called Archegos to that effect, and Archegos responded "hey sorry I've been swamped this week, can we talk later?" and that was that. No, really, that's pretty much it. The report: On February 23, 2021, the PSR analyst covering Archegos reached out to Archegos's Accounting Manager and asked to speak about dynamic margining. Archegos's Accounting Manager said he would not have time that day, but could speak the next day. The following day, he again put off the discussion, but agreed to review the proposed framework, which PSR sent over that day.
Archegos did not respond to the proposal and, a week-and-a-half later, on March 4, 2021, the PSR analyst followed up to ask whether Archegos “had any thoughts on the proposal.” His contact at Archegos said he “hadn't had a chance to take a look yet,” but was hoping to look “today or tomorrow.” Of course, when your counterparty is refusing to give you more collateral, you can pull all their loans. But Credit Suisse was kind of reluctant to pull that lever given that it wa...
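To make the leverage point in the excerpt above concrete, here is a toy sketch with invented prices and an assumed 5x leverage ratio (not Archegos's actual book): re-levering on the way up turns a round trip in the stock price into a total loss, while an unlevered holder ends up roughly where the price does.

```python
# Toy illustration of the leverage dynamic described above (all numbers are invented).
# Strategy: keep market exposure at `leverage` times current equity, rebalancing each step.
def ride(prices, start_equity=1.0, leverage=1.0):
    equity = start_equity
    for i in range(1, len(prices)):
        ret = prices[i] / prices[i - 1] - 1   # stock return over this step
        equity += leverage * equity * ret     # P&L on exposure of leverage * equity
        equity = max(equity, 0.0)             # the investor can't lose more than everything
    return equity

prices = [100, 120, 150, 180, 150, 120]  # way up, then way down, still ending above the start

print(ride(prices, leverage=1.0))  # ~1.2: unlevered, you end up roughly where the stock did
print(ride(prices, leverage=5.0))  # 0.0: at 5x leverage, re-levered on the way up, equity is wiped out
```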

The Nonlinear Library: LessWrong Top Posts
The Case for Extreme Vaccine Effectiveness by Ruby

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 42:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Case for Extreme Vaccine Effectiveness, published by Ruby on LessWrong. I owe tremendous acknowledgments to Kelsey Piper, Oliver Habryka, Greg Lewis, and Ben Shaya. This post is built on their arguments and feedback (though I may have misunderstood them). Update, May 13: I first wrote this post before investigating the impact of covid variants on vaccine effectiveness, listing the topic as a major caveat to my conclusions. I have now spent enough time (not that much, honestly) looking into variants that I have a tentative position I'm acting on for now. My moderately confident conclusion is that the current spread of variants in the US barely impacts vaccine effectiveness. The Pfizer vaccine is reported to be 85% as effective against the feared B.1.351 variant (South African) as it is against B.1.1.7 (UK). Assuming that other variants are no more resistant than B.1.351 on average (a reasonable assumption) and that presently variants are no more than 25% of Covid cases (in Alameda and San Francisco), the net effect is 0.25 × 0.85 + 0.75 × 1.0 = 0.9625. In other words, vaccines still have 96% of the effect they would if B.1.1.7 were the only variant. Plus, that tiny reduction of vaccine effectiveness is dwarfed by the falling background prevalence of Covid. When I first wrote this post, Alameda and San Francisco were at 0.1-0.15%; now they're at ~0.05%. The same for New York and the United Kingdom. Although relaxing of restrictions might reverse this, right now, Covid-risk is very, very low in the Bay Area and many parts of the US. All updates/changelog can be viewed here. I plead before the Master of Cost-Benefit Ratios. “All year and longer I have followed your dictates. Please, Master, can I burn my microCovid spreadsheets? Can I bury my masks? Pour out my hand sanitizer as a libation to you? Please, I beseech thee.” “Well, how good is your vaccine?” responds the Master. “Quite good!” I beg. “We've all heard the numbers, 90-95%. Even MicroCOVID.org has made it official: a 10x reduction for Pfizer and Moderna!” The Master of Cost-Benefit Ratios shakes his head. “It helps, it definitely helps, but don't throw out that spreadsheet just yet. One meal at a crowded restaurant is enough to give even a vaccinated person hundreds of microCovids. Not to mention that your local prevalence could change by a factor of 5 in the next month or two, and that'd be half the gains from this vaccine of yours!” I whimper. “But what if . . . what if vaccines were way better than 10x? What about a 100x reduction in the risks from COVID-19?” He smiles. “Then we could go back to talking about how fast you like to drive.” In its most extreme form, I have heard it claimed that the vaccines provide 10x reduction against regular Covid, 100x against severe Covid, and 1000x against death. That is, for each rough increase in severity, you get 10x more protection. This makes sense if we think of Covid as some kind of "state transition" model where there's a certain chance of moving from lesser to more severe states, and vaccines reduce the likelihood at each stage. I think 10x at multiple stages is too much. By the time you're at 1000x reduction, model uncertainty is probably dominating. I feel more comfortable positing up to 100x, maybe 500x reduction. I dunno.
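As a sanity check on the variant-weighting arithmetic quoted above, here is a minimal sketch; the 25% variant share and the 85% relative-effectiveness figure are the post's assumptions, not independent data:

```python
# Sketch of the variant-weighted vaccine effectiveness figure quoted in the post.
# The 25% variant share and 85% relative effectiveness are the post's assumptions.
variant_share = 0.25        # fraction of local cases from resistant variants (e.g. B.1.351)
effect_vs_variant = 0.85    # vaccine effect against the variant, relative to B.1.1.7
effect_vs_baseline = 1.0    # vaccine effect against non-variant cases (reference)

net_effect = variant_share * effect_vs_variant + (1 - variant_share) * effect_vs_baseline
print(net_effect)  # 0.9625 -> vaccines keep ~96% of the effect they'd have with no resistant variants
```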
There is a more limited claim of extreme vaccine effectiveness that I will defend today: In the case of the Pfizer vaccine (and likely Moderna too), the effectiveness in young healthy people is 99% against baseline symptomatic infection, or close to it. We can reasonably expect the effectiveness of the vaccine against more severe cases of Covid to be greater than effectiveness against milder cases of Covid. (Maybe it's 2x more effective against severe-Covid and 3x more effective against death compared to just getting it at all. Something lik...

The Nonlinear Library: LessWrong Top Posts
2020 AI Alignment Literature Review and Charity Comparison by Larks

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 131:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2020 AI Alignment Literature Review and Charity Comparison, published by Larks on the LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. Cross-posted to the EA forum here. Introduction: As in 2016, 2017, 2018, and 2019, I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments. My aim is basically to judge the output of each organisation in 2020 and compare it to their budget. This should give a sense of the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2020 budgets to get a sense of urgency. I'd like to apologize in advance to everyone doing useful AI Safety work whose contributions I have overlooked or misconstrued. As ever I am painfully aware of the various corners I have had to cut due to time constraints from my job, as well as being distracted by 1) other projects, 2) the miracle of life and 3) computer games. This article focuses on AI risk work. If you think other causes are important too, your priorities might differ. This particularly affects GCRI, FHI and CSER, who all do a lot of work on other issues, which I attempt to cover but only very cursorily. How to read this document: This document is fairly extensive, and some parts (particularly the methodology section) are largely the same as last year, so I don't recommend reading from start to finish. Instead, I recommend navigating to the sections of most interest to you. If you are interested in a specific research organisation, you can use the table of contents to navigate to the appropriate section. You might then also want to Ctrl+F for the organisation acronym in case they are mentioned elsewhere as well. Papers listed as 'X researchers contributed to the following research led by other organisations' are included in the section corresponding to their first author and you can Ctrl+F to find them. If you are interested in a specific topic, I have added a tag to each paper, so you can Ctrl+F for a tag to find associated work. The tags were chosen somewhat informally so you might want to search more than one, especially as a piece might seem to fit in multiple categories. Here are the un-scientifically-chosen hashtags: AgentFoundations Amplification Capabilities Corrigibility DecisionTheory Ethics Forecasting GPT-3 IRL Misc NearAI OtherXrisk Overview Politics RL Strategy Textbook Transparency ValueLearning. New to Artificial Intelligence as an existential risk? If you are new to the idea of General Artificial Intelligence as presenting a major risk to the survival of human value, I recommend this Vox piece by Kelsey Piper, or for a more technical version this by Richard Ngo. 
If you are already convinced and are interested in contributing technically, I recommend this piece by Jacob Steinhardt, as unlike this document Jacob covers pre-2019 research and organises by topic, not organisation, or this from Critch & Krueger, or this from Everitt et al, though it is a few years old now. Research Organisations. FHI: The Future of Humanity Institute. FHI is an Oxford-based Existential Risk Research organisation founded in 2005 by Nick Bostrom. They are affiliated with Oxford University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach. Their research can be found here. Their research is more varied than MIRI's, including strategic work, work directly addressing the value-learning problem, and corrigibility work - as well as work on other Xrisks. They run a Research Scholars Pro...
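The review's basic method is simple enough to write out directly: score each organisation's output for the year, compare it to the budget, and compare reserves to the budget as a rough urgency signal. The sketch below uses invented organisation names, output scores, and dollar figures; it illustrates only the shape of the comparison, not anything from the actual review.

# Illustrative only: the subjective output scores and all dollar amounts are made up.
orgs = {
    # name: (output score for the year, annual budget in $, financial reserves in $)
    "Org A": (30, 1_500_000, 3_000_000),
    "Org B": (12, 400_000, 200_000),
    "Org C": (45, 5_000_000, 10_000_000),
}

for name, (output, budget, reserves) in orgs.items():
    cost_effectiveness = output / (budget / 1_000_000)  # output per $1M spent
    runway_years = reserves / budget                     # how long current reserves would last
    print(f"{name}: {cost_effectiveness:.1f} output per $1M, {runway_years:.1f} years of runway")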

The Nonlinear Library: LessWrong Top Posts
2019 AI Alignment Literature Review and Charity Comparison by Larks

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 119:07


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2019 AI Alignment Literature Review and Charity Comparison, published by Larks on the AI Alignment Forum. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. Cross-posted to the EA forum here. Introduction: As in 2016, 2017 and 2018, I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments. My aim is basically to judge the output of each organisation in 2019 and compare it to their budget. This should give a sense of the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2019 budgets to get a sense of urgency. I'd like to apologize in advance to everyone doing useful AI Safety work whose contributions I may have overlooked or misconstrued. As ever I am painfully aware of the various corners I have had to cut due to time constraints from my job, as well as being distracted by 1) another existential risk capital allocation project, 2) the miracle of life and 3) computer games. How to read this document: This document is fairly extensive, and some parts (particularly the methodology section) are the same as last year, so I don't recommend reading from start to finish. Instead, I recommend navigating to the sections of most interest to you. If you are interested in a specific research organisation, you can use the table of contents to navigate to the appropriate section. You might then also want to Ctrl+F for the organisation acronym in case they are mentioned elsewhere as well. If you are interested in a specific topic, I have added a tag to each paper, so you can Ctrl+F for a tag to find associated work. The tags were chosen somewhat informally so you might want to search more than one, especially as a piece might seem to fit in multiple categories. Here are the un-scientifically-chosen hashtags: Agent Foundations AI_Theory Amplification Careers CIRL Decision_Theory Ethical_Theory Forecasting Introduction Misc ML_safety Other_Xrisk Overview Philosophy Politics RL Security Shortterm Strategy. New to Artificial Intelligence as an existential risk? If you are new to the idea of General Artificial Intelligence as presenting a major risk to the survival of human value, I recommend this Vox piece by Kelsey Piper. If you are already convinced and are interested in contributing technically, I recommend this piece by Jacob Steinhardt, as unlike this document Jacob covers pre-2019 research and organises by topic, not organisation. Research Organisations. FHI: The Future of Humanity Institute. FHI is an Oxford-based Existential Risk Research organisation founded in 2005 by Nick Bostrom. They are affiliated with Oxford University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach. Their research can be found here. Their research is more varied than MIRI's, including strategic work, work directly addressing the value-learning problem, and corrigibility work. In the past I have been very impressed with their work. 
Research. Drexler's Reframing Superintelligence: Comprehensive AI Services as General Intelligence is a massive document arguing that superintelligent AI will be developed for individual discrete services for specific finite tasks, rather than as general-purpose agents. Basically the idea is that it makes more sense for people to develop specialised AIs, so these will happen first, and if/when we build AGI these services can help control it. To some extent this seems to match what is happening - we do have many specialised AIs - but on the other hand there ...

The Nonlinear Library: LessWrong Top Posts
LessWrong is paying $500 for Book Reviews by Ruby

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 6:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LessWrong is paying $500 for Book Reviews, published by Ruby on the AI Alignment Forum. Kudos to Kelsey Piper and Buck for this idea. See Buck's shortform post for another formulation. LessWrong is trialing a new pilot program: paying USD 500 for high-quality book reviews that are of general interest to LessWrong readers, subject to our judgment and discretion. How it Works: Pick a book that you want to review. [Optional] Contact LessWrong (Intercom in the bottom right or team@lesswrong.com) to check in on whether the book is on-topic (to reduce the probability of not getting the bounty). Write the review and post it on LessWrong. Contact LessWrong to let us know you're submitting your review for payment. Optionally, send us your book review before posting to get free feedback. (In fact, feel free to send us your draft at any stage for feedback.) If we like your book review and it's the kind of post we had in mind, we pay out the $500. The program will by default run for one month (until October 13). At the end of the month, a bonus $750 will be split evenly between the top three book reviews received, as judged by us. Desired Reviews: Most non-fiction topics related to science, history, and rationality will merit payment if the book review is of sufficient quality. By "quality" I'm referring to both content and form. Do the inferences seem correct? Does the reviewer seem to be asking the right questions? Does the summary feel informative or lacking? Do I feel confused or enlightened? Is it riveting or a slog to get through? On the writing side, relevant aspects are sentence construction, word choice, pacing, structure, imagery, etc. I don't want to be too prescriptive about form since I expect that being of sufficiently high quality (nebulously defined) is enough to make for exceptions, but generally, I'm interested in book reviews that: Convince the reader that the topic is interesting, usually by explaining how the topic is relevant to the user's life or other interests. Summarize the core claims and arguments in the book so that others can benefit without having to read it. Perform an epistemic review of the book–which, if any, of its claims seem correct? Book reviews that involve a degree of fact-checking/epistemic spot checking will be considered favorably. Describe what the reviewer has come to believe and why. (An extra great format is to compare and contrast two or more books on the same topic.) Examples of Desired and Undesired Book Reviews: Since it's hard to give an explicit definition of "quality", I'm going to fall back on examples and hope that these are better than nothing. Generally, the book reviews tag is a good guide to the kinds of book reviews that are popular on LessWrong and that we want to incentivize. Below I've listed specific book reviews that were either particularly great or kind of poor. Again, most of these came down to quality rather than topic. Positive Examples: Book summary: Unlocking the Emotional Brain; Book Review: Working With Contracts; Notes on "The Anthropology of Childhood"; Outline of Galef's "Scout Mindset"; Book Review: Design Principles of Biological Circuits. These book reviews all present engagingly on a topic of interest. They're not difficult to read, and having read them, I know something more about the world than I did before. 
Negative Examples: I am reluctant to name and shame particular essays on LessWrong, and instead, direct people to view the book reviews tag sorted by karma and look at the lowest scoring posts (you'll have to click load more to get the entire list). Karma is a strong correlate of quality (whether or not the bounty is paid out is not strictly contingent on the karma it gets, but is influenced by it). Importantly, quality is not the automatic result of effort. Someone could expend a lot of effort writi...
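Since the bounty mechanics are fully specified in the description ($500 per accepted review, plus a $750 bonus split evenly among the top three at the end of the month), here is a small sketch of how payouts would add up; the review titles, acceptance decisions, and rankings below are all hypothetical.

BASE_BOUNTY = 500   # per accepted review, as stated in the announcement
BONUS_POOL = 750    # split evenly among the judged top three

accepted = ["Review of Book W", "Review of Book X", "Review of Book Y", "Review of Book Z"]
top_three = ["Review of Book X", "Review of Book Z", "Review of Book W"]

payouts = {title: BASE_BOUNTY for title in accepted}
for title in top_three:
    payouts[title] += BONUS_POOL / len(top_three)  # $250 each

for title, amount in payouts.items():
    print(f"{title}: ${amount:.2f}")
print(f"Total paid out: ${sum(payouts.values()):.2f}")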

The Nonlinear Library: EA Forum Top Posts
There are a bajillion jobs working on plant-based foods right now by scottweathers

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 2:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There are a bajillion jobs working on plant-based foods right now, published by scottweathers on the Effective Altruism Forum. A hugely popular EA Forum post this past winter illustrated the tremendous difficulty in getting hired by effective altruist organizations. This week at Berkeley REACH, Kelsey Piper highlighted a few shifts in the EA community that seem relevant. As EA has become more focused on big-picture ideas, we've become less excited by the prospect of giving ~10% of your income every year, for example by saving lives in developing countries. Still, it's pretty incredible that that's something most people can do! In the vein of high-impact, accessible opportunities, Kelsey also noted that Impossible Foods has ~30 openings at the moment, many of them doable by EAs without even requiring any specialized skills. Do you live in the San Francisco Bay Area? Can you do manufacturing work? Then there's a job there for you. To widen this a bit, there are frankly a bajillion jobs right now working on plant-based and cell-based meat. So I've highlighted a few below, focusing on breadth of companies and less technical positions. Happy applying! Seattle Food Tech: Plant-Based Meat Production (Part-time/Full-Time, Seattle, WA) Seattle Food Tech: People and Culture Manager (Seattle, WA) Impossible Foods: Chief of Staff to the CEO (Redwood City, CA) Impossible Foods: Lead Processing, 2nd shift (Oakland, CA) Beyond Meat: Facilities & Maintenance Coordinator (El Segundo, CA) Beyond Meat: Front Desk Specialist (El Segundo, CA) Califia Farms: Production Lead (Bakersfield, CA) Califia Farms: Data Engineer (Bakersfield, CA) Huel: Head of Sales (New York, NY) Ginkgo Bioworks: Software Architect (Boston, MA) Ginkgo Bioworks: Program Management Lead (Boston, MA) Daiya: Brand Manager (Vancouver, Canada) Hodo: Production Worker (Oakland, CA) Miyoko's: Sanitation Associate (Petaluma, CA) Quorn: Logistics Coordinator (Stokesley, United Kingdom) Ripple: Financial Analyst (Berkeley, CA) Memphis Meats: Research Associate (Berkeley, CA) JUST: Research Associate, Food Science (San Francisco, CA) Mission Barns: Clean Meat R&D Intern (Berkeley, CA) Perfect Day: Supply Chain Manager (Emeryville, CA) Clara Foods: Process Associate (South San Francisco, CA) Finless Foods: Senior Scientist (Berkeley, CA) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: EA Forum Top Posts
On fringe ideas by Kelsey Piper

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 7:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On fringe ideas, published by Kelsey Piper on the Effective Altruism Forum. This is a linkpost. The Forum team published this with Kelsey's permission. We've slightly edited the original content, which you can find here. Question to the blog: What do you think about the more fringe parts of EA? I get really angry seeing people call themselves EAs but then spending all of their time writing speculative essays on, say (to quote those I came across most recently), how actually wildlife conservation is bad because animals in the wild suffer. Like, it's fun to think about that, but my goals as an EA are very different than those of someone who thinks stuff like that is even comparable with global development or long-term sustainability. But I do wonder what your opinion is, since you do have a record of being sensible in evaluating ideas regardless of how fringe they are, which I really appreciate. Kelsey's response: One big formative influence on how I think about this is imagining that effective altruism had existed at various other moments in history. Would we have been doing any good, or would we have been too stuck in the assumptions of the time period? Would an effective altruist movement in the 1840s U.S. have been abolitionist? If we think we would have failed to stand up against slavery, what do we need to change, now, as a movement, to make sure we're not getting similarly big things wrong? Would an effective altruist movement in the 1920s U.S. have been eugenicist? If we think we would have embraced a pseudoscientific and deeply harmful movement like the sterilization campaigns of the Progressive era, what habits of mind and thought would have prevented us from doing that, and are we actively employing them? I think that for effective altruism to be robustly good — to be a movement that would have done good even when embedded in societies that were doing great evil, or societies that were oriented around entirely the wrong questions, or a society that had a "do-gooder" consensus that was actually terrible — there are a bunch of things that have to be in place. Firstly, we have to actually be doing things that benefit the people who need it most. Last year, donations moved through GiveWell to top charities (not counting donations from Good Ventures) increased to $65 million. If that number wasn't impressive, and wasn't increasing, I would be worried that we were failing as a community. We need the reality check of being accountable for actual results. We need to actually do things. Next, we need to be continually monitoring for signs that the things we're doing are actually doing harm, under lots of possible worldviews. That includes worldviews that aren't intuitive, or that aren't the way most people think about charity. If recipients aren't happy, that's an enormous potential warning sign. If our efforts increase suffering, even if it's in some weird way that's hard to take seriously, that's a warning sign. If there are forces systematically ensuring we don't hear from recipients, that's a warning sign. Basically, we need to cast a really, really wide net for possible ways we're screwing up, so that the right answer is at least available to us. 
Next, imagine someone walked into that 1840s EA group and said, ‘I think black people are exactly as valuable as white people and it should be illegal to discriminate against them at all,” or someone walked into the 1920s EA group and said, “I think gay rights are really important.” I want us to be a community that wouldn't have kicked them out. I think the principle I want us to abide by is something like ‘if something is an argument for caring more about entities who are widely regarded as not worthy of such care, then even if the argument sounds pretty absurd, I am supportive of some people doing rese...

The Nonlinear Library: EA Forum Top Posts
In defence of epistemic modesty by scottweathers

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 2:51


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In defence of epistemic modesty, published by scottweathers on the Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: Alignment Forum Top Posts
2020 AI Alignment Literature Review and Charity Comparison by Larks

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 132:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2020 AI Alignment Literature Review and Charity Comparison, published by Larks on the AI Alignment Forum. Cross-posted to the EA forum here. Introduction: As in 2016, 2017, 2018, and 2019, I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments. My aim is basically to judge the output of each organisation in 2020 and compare it to their budget. This should give a sense of the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2020 budgets to get a sense of urgency. I'd like to apologize in advance to everyone doing useful AI Safety work whose contributions I have overlooked or misconstrued. As ever I am painfully aware of the various corners I have had to cut due to time constraints from my job, as well as being distracted by 1) other projects, 2) the miracle of life and 3) computer games. This article focuses on AI risk work. If you think other causes are important too, your priorities might differ. This particularly affects GCRI, FHI and CSER, who all do a lot of work on other issues, which I attempt to cover but only very cursorily. How to read this document: This document is fairly extensive, and some parts (particularly the methodology section) are largely the same as last year, so I don't recommend reading from start to finish. Instead, I recommend navigating to the sections of most interest to you. If you are interested in a specific research organisation, you can use the table of contents to navigate to the appropriate section. You might then also want to Ctrl+F for the organisation acronym in case they are mentioned elsewhere as well. Papers listed as 'X researchers contributed to the following research led by other organisations' are included in the section corresponding to their first author and you can Ctrl+F to find them. If you are interested in a specific topic, I have added a tag to each paper, so you can Ctrl+F for a tag to find associated work. The tags were chosen somewhat informally so you might want to search more than one, especially as a piece might seem to fit in multiple categories. Here are the un-scientifically-chosen hashtags: AgentFoundations Amplification Capabilities Corrigibility DecisionTheory Ethics Forecasting GPT-3 IRL Misc NearAI OtherXrisk Overview Politics RL Strategy Textbook Transparency ValueLearning. New to Artificial Intelligence as an existential risk? If you are new to the idea of General Artificial Intelligence as presenting a major risk to the survival of human value, I recommend this Vox piece by Kelsey Piper, or for a more technical version this by Richard Ngo. If you are already convinced and are interested in contributing technically, I recommend this piece by Jacob Steinhardt, as unlike this document Jacob covers pre-2019 research and organises by topic, not organisation, or this from Critch & Krueger, or this from Everitt et al, though it is a few years old now. Research Organisations. FHI: The Future of Humanity Institute. FHI is an Oxford-based Existential Risk Research organisation founded in 2005 by Nick Bostrom. 
They are affiliated with Oxford University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach. Their research can be found here. Their research is more varied than MIRI's, including strategic work, work directly addressing the value-learning problem, and corrigibility work - as well as work on other Xrisks. They run a Research Scholars Program, where people can join them to do research at FHI. There is...

The Nonlinear Library: Alignment Forum Top Posts
2019 AI Alignment Literature Review and Charity Comparison by Larks

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 119:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2019 AI Alignment Literature Review and Charity Comparison, published by Larks on the AI Alignment Forum. Cross-posted to the EA forum here. Introduction: As in 2016, 2017 and 2018, I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments. My aim is basically to judge the output of each organisation in 2019 and compare it to their budget. This should give a sense of the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2019 budgets to get a sense of urgency. I'd like to apologize in advance to everyone doing useful AI Safety work whose contributions I may have overlooked or misconstrued. As ever I am painfully aware of the various corners I have had to cut due to time constraints from my job, as well as being distracted by 1) another existential risk capital allocation project, 2) the miracle of life and 3) computer games. How to read this document: This document is fairly extensive, and some parts (particularly the methodology section) are the same as last year, so I don't recommend reading from start to finish. Instead, I recommend navigating to the sections of most interest to you. If you are interested in a specific research organisation, you can use the table of contents to navigate to the appropriate section. You might then also want to Ctrl+F for the organisation acronym in case they are mentioned elsewhere as well. If you are interested in a specific topic, I have added a tag to each paper, so you can Ctrl+F for a tag to find associated work. The tags were chosen somewhat informally so you might want to search more than one, especially as a piece might seem to fit in multiple categories. Here are the un-scientifically-chosen hashtags: Agent Foundations AI_Theory Amplification Careers CIRL Decision_Theory Ethical_Theory Forecasting Introduction Misc ML_safety Other_Xrisk Overview Philosophy Politics RL Security Shortterm Strategy. New to Artificial Intelligence as an existential risk? If you are new to the idea of General Artificial Intelligence as presenting a major risk to the survival of human value, I recommend this Vox piece by Kelsey Piper. If you are already convinced and are interested in contributing technically, I recommend this piece by Jacob Steinhardt, as unlike this document Jacob covers pre-2019 research and organises by topic, not organisation. Research Organisations. FHI: The Future of Humanity Institute. FHI is an Oxford-based Existential Risk Research organisation founded in 2005 by Nick Bostrom. They are affiliated with Oxford University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach. Their research can be found here. Their research is more varied than MIRI's, including strategic work, work directly addressing the value-learning problem, and corrigibility work. In the past I have been very impressed with their work. 
Research. Drexler's Reframing Superintelligence: Comprehensive AI Services as General Intelligence is a massive document arguing that superintelligent AI will be developed for individual discrete services for specific finite tasks, rather than as general-purpose agents. Basically the idea is that it makes more sense for people to develop specialised AIs, so these will happen first, and if/when we build AGI these services can help control it. To some extent this seems to match what is happening - we do have many specialised AIs - but on the other hand there are teams working directly on AGI, and often in ML 'build an ML system that does it all...

Effective Altruism: Ten Global Problems – 80,000 Hours

How many words in U.S. newspapers have been spilled on tax policy in the past five years? And how many words on CRISPR? Or meat alternatives? Or how AI may soon automate the majority of jobs? When people look back on this era, is the interesting thing going to have been fights over whether or not the top marginal tax rate was 39.5% or 35.4%, or is it going to be that human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously? Ezra Klein is one of the most prominent journalists in the world. Ezra thinks that pressing issues are neglected largely because there's little pre-existing infrastructure to push them, and we chose him to introduce journalism. Full transcript, related links, and summary of this interview. This episode was first broadcast on the regular 80,000 Hours Podcast feed on March 20, 2021. Some related episodes include: #53 – Kelsey Piper on the room for important advocacy within journalism; #88 – Tristan Harris on the need to change the incentives of social media companies; #59 – Cass Sunstein on how social change happens, and why it's so often abrupt & unpredictable; #57 – Tom Kalil on how to do the most good in government; #51 – Martin Gurri on the revolt of the public & crisis of authority in the information age. Series produced by Keiran Harris.

Rationally Speaking
How to reason about COVID, and other hard things (Kelsey Piper)

Rationally Speaking

Play Episode Listen Later Sep 14, 2021 77:55


Journalist Kelsey Piper (Future Perfect / Vox) discusses lessons learned from covering COVID: What has she been wrong about, and why? How much can we trust the CDC's advice? What does the evidence look like for different drugs like Fluvoxamine and Ivermectin? And should regular people really try to evaluate the evidence themselves instead of deferring to experts?

Vox's Worldly
Bonus: Rep. Ro Khanna on what America owes India

Vox's Worldly

Play Episode Listen Later May 7, 2021 42:20


On a special bonus Worldly, Zack interviews Rep. Ro Khanna — the vice-chair of the House's India Caucus — on the covid crisis in that country. They talk about how things got so bad in India and what it says about the state of India's political institutions and democracy. Then they talk about the US response, where Rep. Khanna gives an inside view of how the Biden administration decided to increase its commitment to India — and makes the case for doing even more. They also reference a whole lot of political philosophy. References: Vox's Kelsey Piper wrote a piece about vaccine patents. Amartya Sen's book Development as Freedom. Learn more about your ad choices. Visit megaphone.fm/adchoices

Reset
Is AI really a national security threat?

Reset

Play Episode Listen Later Apr 14, 2021 7:29


What do you think about when you're asked about artificial intelligence? You might think about search algorithms or social feeds or robots, but what you might not be thinking about is national security. That's right, AI can be used for things like drone targeting and missile detection. A report by the National Security Commission on Artificial Intelligence wants the US to be a world leader in AI development, but what does it even mean to be a leader wielding this technology? Vox's Kelsey Piper (@kelseytuoc) explains. Read Kelsey's story here. Enjoyed this episode? Rate Recode Daily ⭐⭐⭐⭐⭐ and leave a review on Apple Podcasts. What do you want to learn about on Recode Daily? Send your requests and questions to recodedaily@recode.net. We read every email! Subscribe for free. Be the first to hear the next episode of Recode Daily by subscribing in your favorite podcast app. This episode was made by: - Host: Teddy Schleifer (@teddyschleifer) - Producer: Sofi LaLonde (@sofilalonde) - Engineer: Paul Mounsey Support Recode Daily by making a financial contribution to Vox! bit.ly/givepodcasts Learn more about your ad choices. Visit megaphone.fm/adchoices

Vox Quick Hits
Why so many houseless people didn't get their stimulus checks | Tell Me More

Vox Quick Hits

Play Episode Listen Later Mar 29, 2021 8:21


Like many Americans, you may have an extra $1,400 in your bank account, or you're expecting it to arrive soon. After President Biden signed the Covid-19 relief bill on March 11, stimulus checks went out to millions of people, but not everyone who's eligible got one. In fact, many of the most vulnerable Americans, including people experiencing homelessness, did not receive payments. Vox's Kelsey Piper explains what happened and how we may be able to fix it. References: Read Kelsey's story here. Learn more about your ad choices. Visit podcastchoices.com/adchoices

EARadio
Journalism and accurately communicating EA ideas | Rob Wiblin, Michael Levine, and Kelsey Piper

EARadio

Play Episode Listen Later Mar 23, 2021 22:53


A journalist, podcaster and communications expert talk about how to best explain EA ideas to the public — maximising understanding and minimising frustration. Rob Wiblin studied both genetics and economics at the Australian National University (ANU), graduating top of his class and being named Young Alumnus of the Year in 2015. He worked as a …

The Weeds
An A.I. wrote this title

The Weeds

Play Episode Listen Later Mar 12, 2021 57:17


Vox's Kelsey Piper joins Matt to talk about the future of artificial intelligence and AI research. Should AI research be more heavily regulated, or banned? What kind of future will the continued development of AI bring us? Will AI turn out to be more like Skynet, or... like Philip Morris? Resources: "The case for taking AI seriously as a threat to humanity" by Kelsey Piper, Vox/Future Perfect (Updated Oct. 15, 2020) Guest: Kelsey Piper (@KelseyTuoc), Staff Writer, Vox Host: Matt Yglesias (@mattyglesias), Slowboring.com Credits: Erikk Geannikis, Editor and Producer As the Biden administration gears up, we'll help you understand this unprecedented burst of policymaking. Sign up for The Weeds newsletter each Friday: vox.com/weeds-newsletter. The Weeds is a Vox Media Podcast Network production. Want to support The Weeds? Please consider making a contribution to Vox: bit.ly/givepodcasts About Vox Vox is a news network that helps you cut through the noise and understand what's really driving the events in the headlines. Follow Us: Vox.com Facebook group: The Weeds Learn more about your ad choices. Visit megaphone.fm/adchoices

The Ezra Klein Show
Andrew Yang on UBI, coronavirus, and his next job in politics

The Ezra Klein Show

Play Episode Listen Later Sep 3, 2020 89:13


The last time Andrew Yang was on the podcast, he was just beginning his long shot campaign for the presidency. Now, he’s fresh off a speaking slot at the Democratic convention, and, as he reveals here, talking to Joe Biden about a very specific role in a Biden administration.  Which is all to say: A lot has changed for Andrew Yang in the past few years. And even more has changed in the world. So I asked Yang back on the show to talk through this new world, and his possible role in it. Among our topics: - Could a universal basic income be the way we rebuild a fairer economy post-coronavirus?  - What’s changed in AI, and its likely effect on the economy, over the past five years?  - What’s the one mistake Yang wishes the Democratic Party would stop making?  - What did he learn from the surprising success of his own campaign?  - What job is he talking to Joe Biden about taking if Democrats win in November?  - Democrats think of themselves as the party of government. So why don’t they care more about making government work?  - How can Democrats get away with endlessly claiming to support ideas they have no actual intention of passing? - Do progressives have an overly dystopic view of technology? - Is there a way to pull presidential campaigns out of value statements, and into real plans for governing? - The unusual power Joe Biden holds in American politics And much more. References: Vox's Kelsey Piper's piece on GPT-3 My previous podcast with Andrew Yang Ezra's piece on "Why we can't build" Book recommendations: Zucked: Waking Up to the Facebook Catastrophe by Roger McNamee They Don't Represent Us by Lawrence Lessig  Humankind by Rutger Bregman This podcast is part of a larger Vox project called The Great Rebuild, which is made possible thanks to support from Omidyar Network, a social impact venture that works to reimagine critical systems and the ideas that govern them, and to build more inclusive and equitable societies. You can find out more at vox.com/the-great-rebuild We are conducting an audience survey to better serve you. It takes no more than five minutes, and it really helps out the show. Please take our survey here: voxmedia.com/podsurvey.  Please consider making a contribution to Vox to support this show: bit.ly/givepodcasts Your support will help us keep having ambitious conversations about big ideas. New to the show? Want to check out Ezra’s favorite episodes? Check out the Ezra Klein Show beginner’s guide (http://bit.ly/EKSbeginhere) Credits: Producer/Editor/Audio Wizard - Jeff Geld Researcher - Roge Karma Want to contact the show? Reach out at ezrakleinshow@vox.com Learn more about your ad choices. Visit megaphone.fm/adchoices

EARadio
EAG London 2019: Future Perfect—A year of coverage (Kelsey Piper)

EARadio

Play Episode Listen Later Jan 12, 2020 27:21


In 2018, Vox launched Future Perfect, with the goal of covering the most critical issues of the day through the lens of effective altruism. In this talk, Kelsey Piper discusses how the project worked out, her experience as a Vox staff writer, and her thoughts on the key challenges of EA-focused journalism. This talk was …

1A
No Plan B: Deciding Not To Have Children Because Of Climate Change

1A

Play Episode Listen Later Sep 25, 2019 34:18


"I think that the world will be worth living in 2050 and 2100," Vox reporter Kelsey Piper told us. "If you're longing to have children, then do that and fight for the world."Want to support 1A? Give to your local public radio station and subscribe to this podcast. Have questions? Find us on Twitter @1A.

Grey Mirror: MIT Media Lab’s Digital Currency Initiative on Technology, Society, and Ethics
#13 Kelsey Piper, Vox: Effective Altruist News,Memetic Immunity,Questions Social Justice Can Answer

Grey Mirror: MIT Media Lab’s Digital Currency Initiative on Technology, Society, and Ethics

Play Episode Listen Later Jul 8, 2019 54:35


Kelsey Piper is a writer at Future Perfect, an Effective Altruist-inspired media publication from Vox. We chat about her mindset behind Effective Altruist media, the types of questions social justice is good at answering, and memetic immunity as cultural evolution. https://twitter.com/kelseytuoc https://www.vox.com/future-perfect https://twitter.com/mitDCI https://twitter.com/RhysLindmark

Rationally Speaking
Rationally Speaking #230 - Kelsey Piper on “Big picture journalism: covering the topics that matter in the long run”

Rationally Speaking

Play Episode Listen Later Apr 1, 2019 53:20


This episode features Kelsey Piper, blogger and journalist for "Future Perfect," a new site focused on topics that impact the long-term future of the world. Kelsey and Julia discuss some of her recent stories, including why people disagree about AI risk, and how she came up with her probabilistic predictions for 2018. They also discuss topics from Kelsey's personal blog, including why it's not necessarily a good idea to read articles you strongly disagree with, why "sovereignty" is such an important virtue, and the pros and cons of the steel man technique.

80,000 Hours Podcast with Rob Wiblin
#53 - Kelsey Piper on the room for important advocacy within journalism

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Feb 26, 2019 154:30


“Politics. Business. Opinion. Science. Sports. Animal welfare. Existential risk.” Is this a plausible future lineup for major news outlets? Funded by the Rockefeller Foundation and given very little editorial direction, Vox's Future Perfect aspires to be more or less that. Competition in the news business creates pressure to write quick pieces on topical political issues that can drive lots of clicks with just a few hours' work. But according to Kelsey Piper, staff writer for this new section of Vox's website focused on effective altruist themes, Future Perfect's goal is to run in the opposite direction and make room for more substantive coverage that's not tied to the news cycle. They hope that in the long-term talented writers from other outlets across the political spectrum can also be attracted to tackle these topics. Links to learn more, summary and full transcript. Links to Kelsey's top articles. Some skeptics of the project have questioned whether this general coverage of global catastrophic risks actually helps reduce them. Kelsey responds: if you decide to dedicate your life to AI safety research, what’s the likely reaction from your family and friends? Do they think of you as someone about to join "that weird Silicon Valley apocalypse thing"? Or do they, having read about the issues widely, simply think “Oh, yeah. That seems important. I'm glad you're working on it.” Kelsey believes that really matters, and is determined by broader coverage of these kinds of topics. If that's right, is journalism a plausible pathway for doing the most good with your career, or did Kelsey just get particularly lucky? After all, journalism is a shrinking industry without an obvious revenue model to fund many writers looking into the world's most pressing problems. Kelsey points out that one needn't take the risk of committing to journalism at an early age. Instead listeners can specialise in an important topic, while leaving open the option of switching into specialist journalism later on, should a great opportunity happen to present itself. In today’s episode we discuss that path, as well as: • What’s the day to day life of a Vox journalist like? • How can good journalism get funded? • Are there meaningful tradeoffs between doing what's in the interest of Vox and doing what’s good? • How concerned should we be about the risk of effective altruism being perceived as partisan? • How well can short articles effectively communicate complicated ideas? • Are there alternative business models that could fund high quality journalism on a larger scale? • How do you approach the case for taking AI seriously to a broader audience? • How valuable might it be for media outlets to do Tetlock-style forecasting? • Is it really a good idea to heavily tax billionaires? • How do you avoid the pressure to get clicks? • How possible is it to predict which articles are going to be popular? • How did Kelsey build the skills necessary to work at Vox? • General lessons for people dealing with very difficult life circumstances Rob is then joined by two of his colleagues – Keiran Harris & Michelle Hutchinson – to quickly discuss: • The risk political polarisation poses to long-termist causes • How should specialists keep journalism available as a career option? • Should we create a news aggregator that aims to make someone as well informed as possible in big-picture terms? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. 
The 80,000 Hours Podcast is produced by Keiran Harris.

Slate Star Codex Podcast
Preschool: I Was Wrong

Slate Star Codex Podcast

Play Episode Listen Later Nov 7, 2018 9:16


Kelsey Piper has written an article for Vox: Early Childhood Education Yields Big Benefits – Just Not The Ones You Think. I had previously followed various studies that showed that preschool does not increase academic skill, academic achievement, or IQ, and concluded that it was useless. In fact, this had become a rallying point of the movement for evidence-based social interventions; the continuing popular support for preschool proved that people were morons who didn't care about science. I don't think I ever said this aloud, but I believed it in my heart. I talked to Kelsey about some of the research for her article, and independently came to the same conclusion: despite the earlier studies of achievement being accurate, preschools (including the much-maligned Head Start) do seem to help children in subtler ways that only show up years later. Children who have been to preschool seem to stay in school longer, get better jobs, commit less crime, and require less welfare. The thing most of the early studies were looking for – academic ability – is one of the only things it doesn't affect. This suggests that preschool is beneficial not because of the curriculum or because of "teaching young brains how to learn" or anything like that, but for purely social reasons. Kelsey reviews some evidence that it might improve child health, but this doesn't seem to be the biggest part of the effect. Instead, she thinks that it frees low-income parents from childcare duties, lets them get better jobs (or in the case of mothers, sometimes lets them get a job at all), and improves parents' human capital, with all the relevant follow-on effects. More speculatively, if the home environment is unusually bad, it gives the child a little while outside the home environment, and socializes them into a "normal" way of life. I'll discuss a slightly more fleshed-out model of this in an upcoming post. My only caveat in agreeing with this perspective is that Chetty finds the same effect (no academic gains, but large life-outcome gains years later) from children having good rather than bad elementary school teachers. This doesn't make sense in the context of freeing up parents' time to get better jobs, or of getting children out of a bad home environment. It might make sense in terms of socializing them, though I would hate to have to sketch out a model of how that works. But since the teacher data and the Head Start data agree, that gives me more reason to think both are right. I can't remember ever making a post about how Head Start was useless, but I definitely thought that, and to learn otherwise is a big update for me. I've written before about how when you make an update of that scale, it's important to publicly admit error before going on to justify yourself or say why you should be excused as basically right in principle or whatever, so let me say it: I was wrong about Head Start. That having been said, on to the self-justifications and excuses

Audacious Compassion
Audacious Compassion 025 – Ramen Philosophy

Audacious Compassion

Play Episode Listen Later Sep 26, 2018 31:52


We talk about demonstrating active compassion in the face of systemic injustice. Our prompt came from a friend of the show and was paraphrased from a verbal conversation: I really like your show, but I have a hard time figuring out how to apply your ideas. I work in an industry where I see active …