Podcast appearances and mentions of Ben Singer

  • 85 podcasts
  • 153 episodes
  • 39m avg. duration
  • 1 new episode per month
  • Latest: Mar 10, 2025

POPULARITY (2017–2024)


Best podcasts about Ben Singer

Latest podcast episodes about Ben Singer

Alert and Oriented
#50 - Doctor's Playbook - Medical Jeopardy Champion Turned Master Physician-Scientist: Dr. Benjamin Singer, MD

Mar 10, 2025 · 44:04


In this episode of The Doctor's Playbook, we sit down with Dr. Ben Singer—pulmonary critical care physician, scientist, and two-time Medical Jeopardy national champion. A leader in ICU and pulmonary medicine, he was a trusted voice during the COVID-19 pandemic and continues to be the go-to expert for tackling the toughest clinical cases. We explore the art of clinical reasoning in an era of rapid recall, strategies to sharpen diagnostic skills, and how to balance efficiency with deep thinking. Dr. Singer also shares his insights on vulnerability and humility in medicine—why true confidence comes from knowing your limits, challenging assumptions, and embracing uncertainty. Plus, we dive into his experience leading Northwestern's Socrates Project, solving medical mysteries, and the strategy behind his Jeopardy wins. Whether you're a medical student, resident, or seasoned physician, this episode is packed with wisdom to refine your clinical approach and enhance patient care.

Lead Host: Andrew Mohama
Supporting Host: Kevin Grudzinski, MD
Guest: Benjamin Singer, MD
Produced By: Andrew Mohama

Alert & Oriented is a medical student-run clinical reasoning podcast dedicated to providing a unique platform for early learners to practice their skills as a team in real time. Through our podcast, we strive to foster a learning environment where medical students can engage with one another, share knowledge, and gain valuable experience in clinical reasoning. We aim to provide a comprehensive and supportive platform for early learners to develop their clinical reasoning skills, build confidence in their craft, and become the best clinicians they can be.

Follow the team on X: A&O, Andrew Mohama, Rich Abrams, NU Internal Med. Connect on LinkedIn: Andrew Mohama.

A fantastic resource, by learners, for learners in Internal Medicine, Family Medicine, Pediatrics, Primary Care, Emergency Medicine, and Hospital Medicine.

Chicago's Afternoon News with Steve Bertrand

Dr. Ben Singer, pulmonary and critical care specialist at the Northwestern Medicine Canning Thoracic Institute, joins Lisa Dent to discuss hantavirus. In the wake of the deaths of Gene Hackman and his wife, Betsy Arakawa, hantavirus was determined to be Arakawa's cause of death.

nevermind.
066: Learning to Beatbox. (ft. Jack Bensinger) | nevermind. with Veronika Slowikowska & Kyle Chase

Feb 10, 2025 · 65:54


veronika and kyle are back again and wow wow wow they're joined by jack bensinger! the three of them get into topics such as wind, beatboxing, making movies, and much much more. NEVERMIND MERCH: https://nevermindpod.com/ LIVE SHOWS!!: https://linktr.ee/veronika_iscool KYLE'S STUFF: https://trampolinewear.com/ Patreon: https://bit.ly/nevermindpatreon    jack: @jackbensinger https://www.instagram.com/jackbensinger/ joy tactics:  @joytactics   https://www.instagram.com/joytactics veronika: @veronika_iscool https://www.instagram.com/veronika_iscool/ kyle: @kylefornow https://www.instagram.com/kylefornow/ nevermind: @nevermindpod https://www.instagram.com/nevermindpod/ we're still getting good at this, but it's about to get even better. 00:00 Intro! 00:12 Spice Lords and Drug Tests 10:04 JACK BENSINGER! 11:26 How to Record a Podcast (Joy Tactics Style) 13:39 Jack Doxes Us?! 14:30 Telling The Same Jokes 17:38 1738! 17:40 I Don't Give a Fuckology 22:10 Miley Impressions 23:20 The Bond With Mr. Eric Rahill 25:02 Connor E? 26:27 Enemies to Lovers! 28:54 Kyle's Old Folks Home Story 30:37 Bulking Season 33:45 EPISODE 66! 34:40 Joy Tactic Merch? 36:59 Commenting on Youtube 44:08 Vin Diesel and Making Authentic Movies 49:44 Fave Comedians 52:00 Would Seth Rogen Be Good at Tiktok? 01:00:58 The Knights Bit!

The Adventures of a Hotwife
Season 3, Episode 1: Maya Bensinger

Jan 6, 2025 · 75:26


It's 2025 and the start of Season 3 of The Adventures of a Hotwife! We are so excited for this new season and what we are going to bring you… And nobody better to kick it off than my guest cohost RealHotWife and the super sexy Maya Bensinger! Cum hear how this ex volleyball player turned hotwife. From her extremely hot first date with her husband (a little CNC, anyone?) to her 7 sexual experiences, sharing her husband with her sister, and a starlit gangbang on the beach with as many BBC's as she could find, this bombshell blonde has got stories to turn everyone on. You don't want to miss this crazy start to Season 3. We're going to have some fun!

Give Maya a follow: https://x.com/MayaBensinger
Check out RealHotWife here: https://linktr.ee/realhotwife
Support the show
Visit https://linktr.ee/sexxxysoccermom to see a whole lot more of Sexxxy Soccer Mom!

Crimelines True Crime
Michelle Bensinger and Jubilee Lum

Dec 15, 2024 · 21:23


Two women disappear from the streets of Honolulu within a month of each other and are later found dead. Are the cases related and will they ever be solved? If you know anything about the murders of Michelle Bensinger and/or Jubilee Lum, you can call CrimeStoppers at 808 955-8300. https://www.p3tips.com/tipform.aspx?ID=606&CX=23539F  This case is *unsolved*   Come to Chile and Argentina with me! True Crime & Fine Wine w/ Josh Hallmark, Charlie Worroll & Lanie Hobbs  Support the show! Get the exclusive show Beyond the Files plus Crimelines episodes ad free on Supercast: https://crimelines.supercast.com/ Patreon: https://www.patreon.com/crimelines Apple Subscriptions: https://podcasts.apple.com/us/podcast/crimelines-true-crime/id1112004494  For one time support: https://www.basementfortproductions.com/support Links to all my socials and more: https://linktr.ee/crimelines   Sources: 2024 Crimelines Podcast Source List   Events: Feb 27-Mar 5 2025 True Crime & Fine Wine w/ Josh Hallmark, Charlie Worroll & Lanie Hobbs Transcript: https://app.podscribe.ai/series/3790 If an exact transcript is needed, please request at crimelinespodcast@gmail.com   Licensing and credits: Theme music by Scott Buckley https://www.scottbuckley.com.au/ Cover Art by Lars Hacking from Rusty Hinges   Crimelines is a registered trademark of Crimelines LLC.

Chameleon: Hollywood Con Queen
"Inside the Tent" with The Michigan Plot hosts Ken Bensinger & Jessica Garrison

Oct 16, 2024 · 22:56


Campside was born to tell stories: big, surprising, original stories that can only originate from the beats of the world's best journalists. We've made dozens of hit podcasts and we're now welcoming you inside the tent. In this episode, Campside Co-founder Josh Dean talks with The Michigan Plot co-hosts, Ken Bensinger and Jessica Garrison, about reporting during the pandemic, why most big stories are way more complicated than you think, and the fine line between 'setting up a bunch of hateful stoners' and 'preventing an actual dangerous plot.' The Michigan Plot is also nominated for a Signal Award! Vote here! And to see what else Campside is nominated for, click here.  Let us know what you think at questions@campsidemedia.com! We hope to see you inside the tent again soon. Go to joincampside.com or click here for updates on Michigan Plot and all of Campside's hit shows. Learn more about your ad choices. Visit podcastchoices.com/adchoices

LessWrong Curated Podcast
“When is a mind me?” by Rob Bensinger

Jul 8, 2024 · 27:00


xlr8harder writes: In general I don't think an uploaded mind is you, but rather a copy. But one thought experiment makes me question this. A Ship of Theseus concept where individual neurons are replaced one at a time with a nanotechnological functional equivalent. Are you still you? Presumably the question xlr8harder cares about here isn't the semantic question of how linguistic communities use the word "you", or predictions about how whole-brain emulation tech might change the way we use pronouns. Rather, I assume xlr8harder cares about more substantive questions like: If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self? Should I anticipate experiencing what my upload experiences? If the scanning and uploading process requires destroying my biological brain, should I say yes to the procedure? My answers: [...] The original text contained 1 footnote which was omitted from this narration. The original text contained 7 images which were described by AI. --- First published: April 17th, 2024 Source: https://www.lesswrong.com/posts/zPM5r3RjossttDrpw/when-is-a-mind-me --- Narrated by TYPE III AUDIO.

LessWrong Curated Podcast
“Response to Aschenbrenner's ‘Situational Awareness’” by Rob Bensinger

Jun 7, 2024 · 5:29


(Cross-posted from Twitter.) My take on Leopold Aschenbrenner's new report: I think Leopold gets it right on a bunch of important counts. Three that I especially care about: (1) Full AGI and ASI soon. (I think his arguments for this have a lot of holes, but he gets the basic point that superintelligence looks 5 or 15 years off rather than 50+.) (2) This technology is an overwhelmingly huge deal, and if we play our cards wrong we're all dead. (3) Current developers are indeed fundamentally unserious about the core risks, and need to make IP security and closure a top priority. I especially appreciate that the report seems to get it when it comes to our basic strategic situation: it gets that we may only be a few years away from a truly world-threatening technology, and it speaks very candidly about the implications of this, rather than soft-pedaling [...] --- First published: June 6th, 2024 Source: https://www.lesswrong.com/posts/Yig9oa4zGE97xM2os/response-to-aschenbrenner-s-situational-awareness --- Narrated by TYPE III AUDIO.

The Nonlinear Library
LW - Response to Aschenbrenner's "Situational Awareness" by Rob Bensinger

Jun 6, 2024 · 4:49


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Response to Aschenbrenner's "Situational Awareness", published by Rob Bensinger on June 6, 2024 on LessWrong. (Cross-posted from Twitter.) My take on Leopold Aschenbrenner's new report: I think Leopold gets it right on a bunch of important counts. Three that I especially care about: 1. Full AGI and ASI soon. (I think his arguments for this have a lot of holes, but he gets the basic point that superintelligence looks 5 or 15 years off rather than 50+.) 2. This technology is an overwhelmingly huge deal, and if we play our cards wrong we're all dead. 3. Current developers are indeed fundamentally unserious about the core risks, and need to make IP security and closure a top priority. I especially appreciate that the report seems to get it when it comes to our basic strategic situation: it gets that we may only be a few years away from a truly world-threatening technology, and it speaks very candidly about the implications of this, rather than soft-pedaling it to the degree that public writings on this topic almost always do. I think that's a valuable contribution all on its own. Crucially, however, I think Leopold gets the wrong answer on the question "is alignment tractable?". That is: OK, we're on track to build vastly smarter-than-human AI systems in the next decade or two. How realistic is it to think that we can control such systems? Leopold acknowledges that we currently only have guesswork and half-baked ideas on the technical side, that this field is extremely young, that many aspects of the problem look impossibly difficult (see attached image), and that there's a strong chance of this research operation getting us all killed. "To be clear, given the stakes, I think 'muddling through' is in some sense a terrible plan. But it might be all we've got." Controllable superintelligent AI is a far more speculative idea at this point than superintelligent AI itself. I think this report is drastically mischaracterizing the situation. 'This is an awesome exciting technology, let's race to build it so we can reap the benefits and triumph over our enemies' is an appealing narrative, but it requires the facts on the ground to shake out very differently than how the field's trajectory currently looks. The more normal outcome, if the field continues as it has been, is: if anyone builds it, everyone dies. This is not a national security issue of the form 'exciting new tech that can give a country an economic or military advantage'; it's a national security issue of the form 'we've found a way to build a doomsday device, and as soon as anyone starts building it the clock is ticking on how long before they make a fatal error and take themselves out, and take the rest of the world out with them'. Someday superintelligence could indeed become more than a doomsday device, but that's the sort of thing that looks like a realistic prospect if ASI is 50 or 150 years away and we fundamentally know what we're doing on a technical level - not if it's more like 5 or 15 years away, as Leopold and I agree. The field is not ready, and it's not going to suddenly become ready tomorrow. We need urgent and decisive action, but to indefinitely globally halt progress toward this technology that threatens our lives and our children's lives, not to accelerate ourselves straight off a cliff. 
Concretely, the kinds of steps we need to see ASAP from the USG are: Spearhead an international alliance to prohibit the development of smarter-than-human AI until we're in a radically different position. The three top-cited scientists in AI (Hinton, Bengio, and Sutskever) and the three leading labs (Anthropic, OpenAI, and DeepMind) have all publicly stated that this technology's trajectory poses a serious risk of causing human extinction (in the CAIS statement). It is absurd on its face to let any private company...

Scriptnotes Podcast
642 - It's Brutal Out Here

May 7, 2024 · 68:05


Why are things so rough in Hollywood right now? John and Craig look at the industry's current contraction, its historical analogues, and offer suggestions for what might fix it. We also follow up on streaming ad breaks and New York accents, before answering listener questions on being paralyzed, whether it's by your second draft or writing professional emails. In our bonus segment for premium members, John and Craig wonder what to do with their digital lives once they've shuffled off this mortal coil, and how do you keep it from getting creepy? Links: The Life and Death of Hollywood by Daniel Bessner for Harpers One weird trick for fixing Hollywood by Max Read Moloch Trap Lola Dupre Codenames Duet 2 Kings 2:23-24 Get a Scriptnotes T-shirt! Check out the Inneresting Newsletter Gift a Scriptnotes Subscription or treat yourself to a premium subscription! Craig Mazin on Threads and Instagram John August on Threads, Instagram and Twitter John on Mastodon Outro by Ben Singer (send us yours!) Scriptnotes is produced by Drew Marquardt and edited by Matthew Chilelli. Email us at ask@johnaugust.com You can download the episode here.

The Nonlinear Library
LW - When is a mind me? by Rob Bensinger

Apr 17, 2024 · 23:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: When is a mind me?, published by Rob Bensinger on April 17, 2024 on LessWrong. xlr8harder writes: In general I don't think an uploaded mind is you, but rather a copy. But one thought experiment makes me question this. A Ship of Theseus concept where individual neurons are replaced one at a time with a nanotechnological functional equivalent. Are you still you? Presumably the question xlr8harder cares about here isn't semantic question of how linguistic communities use the word "you", or predictions about how whole-brain emulation tech might change the way we use pronouns. Rather, I assume xlr8harder cares about more substantive questions like: If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self? Should I anticipate experiencing what my upload experiences? If the scanning and uploading process requires destroying my biological brain, should I say yes to the procedure? My answers: Yeah. Yep. Yep, this is no big deal. A productive day for me might involve doing some work in the morning, getting a sandwich at Subway, destructively uploading my brain, then texting some friends to see if they'd like to catch a movie after I finish answering e-mails. _(ツ)_/ If there's an open question here about whether a high-fidelity emulation of me is "really me", this seems like it has to be a purely verbal question, and not something that I would care about at reflective equilibrium. Or, to the extent that isn't true, I think that's a red flag that there's a cognitive illusion or confusion still at work. There isn't a special extra "me" thing separate from my brain-state, and my precise causal history isn't that important to my values. I'd guess that this illusion comes from not fully internalizing reductionism and naturalism about the mind. I find it pretty natural to think of my "self" as though it were a homunculus that lives in my brain, and "watches" my experiences in a Cartesian theater. On this intuitive model, it makes sense to ask, separate from the experiences and the rest of the brain, where the homunculus is. ("OK, there's an exact copy of my brain-state there, but where am I?") E.g., consider a teleporter that works by destroying your body, and creating an exact atomic copy of it elsewhere. People often worry about whether they'll "really experience" the stuff their brain undergoes post-teleport, or whether a copy will experience it instead. "Should I anticipate 'waking up' on the other side of the teleporter? Or should I anticipate Oblivion, and it will be Someone Else who has those future experiences?" This question doesn't really make sense from a naturalistic perspective, because there isn't any causal mechanism that could be responsible for the difference between "a version of me that exists at 3pm tomorrow, whose experiences I should anticipate experiencing" and "an exact physical copy of me that exists at 3pm tomorrow, whose experiences I shouldn't anticipate experiencing". Imagine that the teleporter is located on Earth, and it sends you to a room on a space station that looks and feels identical to the room you started in. This means that until you exit the room and discover whether you're still on Earth, there's no way for you to tell whether the teleporter worked. 
But more than that, there will be nothing about your brain that tracks whether or not the teleporter sent you somewhere (versus doing nothing). There isn't an XML tag in the brain saying "this is a new brain, not the original"! There isn't a Soul or Homunculus that exists in addition to the brain, that could be the causal mechanism distinguishing "a brain that is me" from "a brain that is not me". There's just the brain-state, with no remainder. All of the same functional brain-states occur whether yo...

Scriptnotes Podcast
639 - Intrinsic Motivation

Apr 16, 2024 · 65:33


John and Craig can't help but look at intrinsic motivations — those specific internal drives that guide characters' behavior. They discuss how to structure and expose that internal drive, the importance of an innate irritability, how it can stop your characters from becoming flat, and rewarding that intrinsic motivation with choice. But first, we follow up on AI training, blueprints and “important” movies. We also weigh in on a high-school senior's college dilemma and answer a listener question on writing with your trailer in mind. In our bonus segment for premium members, John and Craig parse out their reasons for why humans may – or may not – ever leave the solar system. Links: My Pal Foot Foot by The Shaggs Braid by Jonathan Blow Connections from the New York Times Q: Who Found a Way to Crack the U.K.'s Premier Quiz Show? by David Segal for The New York Times On what motivates us: a detailed review of intrinsic v. extrinsic motivation by Laurel S. Morris, Mora M. Grehl, Sarah B. Rutter, Marishka Mehta, and Margaret L. Westwater Why are there so many illegal weed stores in New York City? by PJ Vogt Shōgun on FX Get a Scriptnotes T-shirt! Check out the Inneresting Newsletter Gift a Scriptnotes Subscription or treat yourself to a premium subscription! Craig Mazin on Threads and Instagram John August on Threads, Instagram and Twitter John on Mastodon Outro by Ben Singer (send us yours!) Scriptnotes is produced by Drew Marquardt and edited by Matthew Chilelli. Email us at ask@johnaugust.com You can download the episode here.

Too Far with Rachel Kaly and Robby Hoffman
"THE GIRLS" TALK TO "THE BOYS" ERIC RAHILL AND JACK BENSINGER

Mar 19, 2024 · 56:35


Ya so we invited our rival boy podcast Joy Tactics to our live show to duke it out for top podcast across genders and sexuality. We discussed interesting things like why is school shooting for boys only. And listen, if you want the other half of this live show THEN SUBSCRIBE TO THE PATREON at www.patreon.com/toofarpod Hosted on Acast. See acast.com/privacy for more information.

The Brandon Jamel Show
The Joy Tactics x BJS Oscars Special (feat. Jack Bensinger, Eric Rahill, & Nate Varrone)

Mar 11, 2024 · 68:10


Jack Bensinger, Eric Rahill, and Nate Varrone from Joy Tactics join us in the BJS studio for a long-awaited crossover everyone in the industry has been begging for. More heat on the patreon: https://www.patreon.com/thebrandonjamelshow Come see Brandon in Seattle 3/29-3/30 at Laughs Comedy Club: https://laughscomedyclub.com/eventbrite-event/comedian-brandon-wardell/ Jamel's gonna be in Louisiana and Colorado in April too! https://linktr.ee/broccolihouse

The Nonlinear Library
LW - On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche by Zack M Davis

Jan 10, 2024 · 7:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche, published by Zack M Davis on January 10, 2024 on LessWrong. Rob Bensinger argues that "ITT-passing and civility are good; 'charity' is bad; steelmanning is niche". The ITT - Ideological Turing Test - is an exercise in which one attempts to present one's interlocutor's views as persuasively as the interlocutor themselves can, coined by Bryan Caplan in analogy to the Turing Test for distinguishing between humans and intelligent machines. (An AI that can pass as human must presumably possess human-like understanding; an opponent of an idea that can pass as an advocate for it presumably must possess an advocate's understanding.) "Steelmanning" refers to the practice of addressing a stronger version of an interlocutor's argument, coined in disanalogy to "strawmanning", the crime of addressing a weaker version of an interlocutor's argument in the hopes of fooling an audience (or oneself) that the original argument has been rebutted. Bensinger describes steelmanning as "a useful niche skill", but thinks it isn't "a standard thing you bring out in most arguments." Instead, he writes, discussions should be structured around object-level learning, trying to pass each other's Ideological Turing Test, or trying resolve cruxes. I think Bensinger has it backwards: the Ideological Turing Test is a useful niche skill, but it doesn't belong on a list of things to organize a discussion around, whereas something like steelmanning naturally falls out of object-level learning. Let me explain. The ITT is a test of your ability to model someone else's models of some real-world phenomena of interest. But usually, I'm much more interested in modeling the real-world phenomena of interest directly, rather than modeling someone else's models of it. I couldn't pass an ITT for advocates of Islam or extrasensory perception. On the one hand, this does represent a distinct deficit in my ability to model what the advocates of these ideas are thinking, a tragic gap in my comprehension of reality, which I would hope to remedy in the Glorious Transhumanist Future if that were a real thing. On the other hand, facing the constraints of our world, my inability to pass an ITT for Islam or ESP seems ... basically fine? I already have strong reasons to doubt the existence of ontologically fundamental mental entities. I accept my ignorance of the reasons someone might postulate otherwise, not out of contempt, but because I just don't have the time. Or think of it this way: as a selfish seeker of truth speaking to another selfish seeker of truth, when would I want to try to pass my interlocutor's ITT, or want my interlocutor to try to pass my ITT? In the "outbound" direction, I'm not particularly selfishly interested in passing my interlocutor's ITT because, again, I usually don't care much about other people's beliefs, as contrasted to the reality that those beliefs are reputedly supposed to track. I listen to my interlocutor hoping to learn from them, but if some part of what they say seems hopelessly wrong, it doesn't seem profitable to pretend that it isn't until I can reproduce the hopeless wrongness in my own words. Crucially, the same is true in the "inbound" direction. I don't expect people to be able to pass my ITT before criticizing my ideas. 
That would make it harder for people to inform me about flaws in my ideas! But if I'm not particularly interested in passing my interlocutor's ITT or in my interlocutor passing mine, and my interlocutor presumably (by symmetry) feels the same way, why would we bother? All this having been said, I absolutely agree that, all else being equal, the ability to pass ITTs is desirable. It's useful as a check that you and your interlocutor are successfully communicating, rather than talking past each other. I...

Stavvy's World
#58 - Sarah Sherman and Jack Bensinger

Jan 8, 2024 · 98:38


Sarah Sherman and Jack Bensinger join the pod to discuss why Sarah was late, how she needs to find a better therapist, chiropractors, hall passes, glass eyes, and much more. Sarah, Jack and Stav help callers including a guy who's self-conscious about having dentures while dating, and a guy weirded out by the girl he's dating's attraction to anime characters. Download the DraftKings Sportsbook app and use code STAVVY to score $200 IN BONUS BETS INSTANTLY when you bet just $5. Also check out DraftKings Fantasy Sports! For more info, visit https://www.draftkings.com/ Follow Sarah Sherman on social media: https://www.sarahsquirm.com/ https://www.instagram.com/sarahsquirm/ https://twitter.com/SarahSquirm Follow Jack Bensinger on social media: https://www.jackbensinger.com/ https://www.instagram.com/jackbensinger https://www.youtube.com/c/jackbensinger https://twitter.com/JackBensinger https://www.tiktok.com/@jackbensinger Unlock exclusive, Patreon-only episodes at https://www.patreon.com/stavvysworld Wanna be part of the show? Call 904-800-STAV and leave a voicemail to get advice!

Headgum Happy Hour
Neighborhood Boys (w/ Janeane Garofalo, Eric Rahill, Jack Bensinger, Charlie Bardey, & Natalie Rotter-Laitman)

Dec 20, 2023 · 67:03


Jake and Amir are back hosting another Headgum Happy Hour where they introduce their new podcast, Segments, and sample hit segments such as ‘How can we get exactly ten people to raise their hands?’, ‘Which two word phrase will make the other person break?’, and a competition for most embarrassing photo! We're joined by Janeane Garofalo, Eric Rahill, Jack Bensinger, Charlie Bardey, and Natalie Rotter-Laitman to discuss the beauty of gifting your grandmother an Aura Frame, getting the surgery, the ethics of halloween costumes, the origins of Natalie's scarf, and more! Watch the video version on YouTube. Like the show? Rate and review it on Spotify and Apple Podcasts. Listen to Segments and watch video episodes on YouTube. Check out Charlie and Natalie's podcast Exploration: LIVE! on Headgum. Advertise on Headgum Happy Hour via Gumball.fm. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Nonlinear Library
LW - AI Views Snapshots by Rob Bensinger

Dec 13, 2023 · 1:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Views Snapshots, published by Rob Bensinger on December 13, 2023 on LessWrong. (Cross-posted from Twitter, and therefore optimized somewhat for simplicity.) Recent discussions of AI x-risk in places like Twitter tend to focus on "are you in the Rightthink Tribe, or the Wrongthink Tribe?". Are you a doomer? An accelerationist? An EA? A techno-optimist? I'm pretty sure these discussions would go way better if the discussion looked less like that. More concrete claims, details, and probabilities; fewer vague slogans and vague expressions of certainty. As a start, I made this image (also available as a Google Drawing): I obviously left out lots of other important and interesting questions, but I think this is OK as a conversation-starter. I've encouraged Twitter regulars to share their own versions of this image, or similar images, as a nucleus for conversation (and a way to directly clarify what people's actual views are, beyond the stereotypes and slogans). If you want to see a filled-out example, here's mine (though you may not want to look if you prefer to give answers that are less anchored): Google Drawing link. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Inside Julia's Kitchen
Meet Chris Keyser and Emily Bensinger

Dec 1, 2023 · 61:08


To celebrate season two of “Julia,” now streaming on Max, this week's Inside Julia's Kitchen features series writers Chris Keyser and Emily Bensinger. Host Todd Schulkin talks to Chris and Emily about what interested them in Julia's story and why they chose to focus on her groundbreaking television series, “The French Chef.” They also share a sneak peek of what's to come in season two, and explain why they decided to open this season in France. Plus, we get a double Julia Moment. Heritage Radio Network is a listener supported nonprofit podcast network. Support Inside Julia's Kitchen by becoming a member! Inside Julia's Kitchen is Powered by Simplecast.

Dishing on Julia, the Official Julia Companion Podcast
S2 Ep. 5 - “Bûche de Noël” with Erica Dunton, Emily Bensinger, and Grace Young

Nov 30, 2023 · 48:02


In Episode 5 of Dishing On Julia, host Kerry Diamond takes a deep dive with director Erica Dunton and writer Emily Bensinger. They share a peek behind the scenes of this very special episode and reflect on what makes Julia so enduring. In the second half of the episode, Grace Young joins Kerry to share the importance of Chinatowns and her fondest memories with Julia.  Learn more about your ad choices. Visit megaphone.fm/adchoices

In Depth With Graham Bensinger
Graham Bensinger - The Story of In Depth

Oct 30, 2023 · 62:49


This week on the In Depth podcast we turn the tables and interview the guy who's normally asking the questions. Graham opens up about a childhood chasing down celebrities, the interview that landed him on The Tonight Show, and his most embarrassing moment in college. We also talk with the St. Louis native about why he might be a wanted man in Iceland and his penchant for asking inappropriate questions. Graham even peels back the curtain on his personal life with some help from his now-fiancee.

Podcast About List
Ep. 263 - Six Fingers of Music ft. Jack Bensinger

Oct 18, 2023 · 77:41


The Nonlinear Library
LW - An artificially structured argument for expecting AGI ruin by Rob Bensinger

May 8, 2023 · 50:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An artificially structured argument for expecting AGI ruin, published by Rob Bensinger on May 7, 2023 on LessWrong. Philosopher David Chalmers asked: [I]s there a canonical source for "the argument for AGI ruin" somewhere, preferably laid out as an explicit argument with premises and a conclusion? Unsurprisingly, the actual reason people expect AGI ruin isn't a crisp deductive argument; it's a probabilistic update based on many lines of evidence. The specific observations and heuristics that carried the most weight for someone will vary for each individual, and can be hard to accurately draw out. That said, Eliezer Yudkowsky's So Far: Unfriendly AI Edition might be a good place to start if we want a pseudo-deductive argument just for the sake of organizing discussion. People can then say which premises they want to drill down on. In The Basic Reasons I Expect AGI Ruin, I wrote: When I say "general intelligence", I'm usually thinking about "whatever it is that lets human brains do astrophysics, category theory, etc. even though our brains evolved under literally zero selection pressure to solve astrophysics or category theory problems". It's possible that we should already be thinking of GPT-4 as "AGI" on some definitions, so to be clear about the threshold of generality I have in mind, I'll specifically talk about "STEM-level AGI", though I expect such systems to be good at non-STEM tasks too. STEM-level AGI is AGI that has "the basic mental machinery required to do par-human reasoning about all the hard sciences", though a specific STEM-level AGI could (e.g.) lack physics ability for the same reasons many smart humans can't solve physics problems, such as "lack of familiarity with the field". A simple way of stating the argument in terms of STEM-level AGI is: Substantial Difficulty of Averting Instrumental Pressures: As a strong default, absent alignment breakthroughs, STEM-level AGIs that understand their situation and don't value human survival as an end will want to kill all humans if they can. Substantial Difficulty of Value Loading: As a strong default, absent alignment breakthroughs, STEM-level AGI systems won't value human survival as an end. High Early Capabilities. As a strong default, absent alignment breakthroughs or global coordination breakthroughs, early STEM-level AGIs will be scaled to capability levels that allow them to understand their situation, and allow them to kill all humans if they want. Conditional Ruin. If it's very likely that there will be no alignment breakthroughs or global coordination breakthroughs before we invent STEM-level AGI, then given 1+2+3, it's very likely that early STEM-level AGI will kill all humans. Inadequacy. It's very likely that there will be no alignment breakthroughs or global coordination breakthroughs before we invent STEM-level AGI. Therefore it's very likely that early STEM-level AGI will kill all humans. (From 1–5) I'll say that the "invention of STEM-level AGI" is the first moment when an AI developer (correctly) recognizes that it can build a working STEM-level AGI system within a year. I usually operationalize "early STEM-level AGI" as "STEM-level AGI that is built within five years of the invention of STEM-level AGI". I think humanity is very likely to destroy itself within five years of the invention of STEM-level AGI. 
And plausibly far sooner — e.g., within three months or a year of the technology's invention. A lot of the technical and political difficulty of the situation stems from this high level of time pressure: if we had decades to work with STEM-level AGI before catastrophe, rather than months or years, we would have far more time to act, learn, try and fail at various approaches, build political will, craft and implement policy, etc. This argument focuses on "human survival", but from my perspec...

The Nonlinear Library
AF - An artificially structured argument for expecting AGI ruin by Rob Bensinger

May 7, 2023 · 50:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An artificially structured argument for expecting AGI ruin, published by Rob Bensinger on May 7, 2023 on The AI Alignment Forum. Philosopher David Chalmers asked: [I]s there a canonical source for "the argument for AGI ruin" somewhere, preferably laid out as an explicit argument with premises and a conclusion? Unsurprisingly, the actual reason people expect AGI ruin isn't a crisp deductive argument; it's a probabilistic update based on many lines of evidence. The specific observations and heuristics that carried the most weight for someone will vary for each individual, and can be hard to accurately draw out. That said, Eliezer Yudkowsky's So Far: Unfriendly AI Edition might be a good place to start if we want a pseudo-deductive argument just for the sake of organizing discussion. People can then say which premises they want to drill down on. In The Basic Reasons I Expect AGI Ruin, I wrote: When I say "general intelligence", I'm usually thinking about "whatever it is that lets human brains do astrophysics, category theory, etc. even though our brains evolved under literally zero selection pressure to solve astrophysics or category theory problems". It's possible that we should already be thinking of GPT-4 as "AGI" on some definitions, so to be clear about the threshold of generality I have in mind, I'll specifically talk about "STEM-level AGI", though I expect such systems to be good at non-STEM tasks too. STEM-level AGI is AGI that has "the basic mental machinery required to do par-human reasoning about all the hard sciences", though a specific STEM-level AGI could (e.g.) lack physics ability for the same reasons many smart humans can't solve physics problems, such as "lack of familiarity with the field". A simple way of stating the argument in terms of STEM-level AGI is: Substantial Difficulty of Averting Instrumental Pressures: As a strong default, absent alignment breakthroughs, STEM-level AGIs that understand their situation and don't value human survival as an end will want to kill all humans if they can. Substantial Difficulty of Value Loading: As a strong default, absent alignment breakthroughs, STEM-level AGI systems won't value human survival as an end. High Early Capabilities. As a strong default, absent alignment breakthroughs or global coordination breakthroughs, early STEM-level AGIs will be scaled to capability levels that allow them to understand their situation, and allow them to kill all humans if they want. Conditional Ruin. If it's very likely that there will be no alignment breakthroughs or global coordination breakthroughs before we invent STEM-level AGI, then given 1+2+3, it's very likely that early STEM-level AGI will kill all humans. Inadequacy. It's very likely that there will be no alignment breakthroughs or global coordination breakthroughs before we invent STEM-level AGI. Therefore it's very likely that early STEM-level AGI will kill all humans. (From 1–5) I'll say that the "invention of STEM-level AGI" is the first moment when an AI developer (correctly) recognizes that it can build a working STEM-level AGI system within a year. I usually operationalize "early STEM-level AGI" as "STEM-level AGI that is built within five years of the invention of STEM-level AGI". I think humanity is very likely to destroy itself within five years of the invention of STEM-level AGI. 
And plausibly far sooner — e.g., within three months or a year of the technology's invention. A lot of the technical and political difficulty of the situation stems from this high level of time pressure: if we had decades to work with STEM-level AGI before catastrophe, rather than months or years, we would have far more time to act, learn, try and fail at various approaches, build political will, craft and implement policy, etc. This argument focuses on "human survival", but fr...

The Nonlinear Library
LW - AGI ruin mostly rests on strong claims about alignment and deployment, not about society by Rob Bensinger

Apr 24, 2023 · 9:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI ruin mostly rests on strong claims about alignment and deployment, not about society, published by Rob Bensinger on April 24, 2023 on LessWrong. Dustin Moskovitz writes on Twitter: My intuition is that MIRI's argument is almost more about sociology than computer science/security (though there is a relationship). People won't react until it is too late, they won't give up positive rewards to mitigate risk, they won't coordinate, the govt is feckless, etc. And that's a big part of why it seems overconfident to people, bc sociology is not predictable, or at least isn't believed to be. And Stefan Schubert writes: I think it's good @robbensinger wrote a list of reasons he expects AGI ruin. It's well-written. But it's notable and symptomatic that 9/10 reasons relate to the nature of AI systems and only 1/10 (discussed in less detail) to the societal response. Whatever one thinks the societal response will be, it seems like a key determinant of whether there'll be AGI ruin. Imo the debate on whether AGI will lead to ruin systematically underemphasises this factor, focusing on technical issues. It's useful to distinguish between warnings and all-things-considered predictions in this regard. When issuing warnings, it makes sense to focus on the technology itself. Warnings aim to elicit a societal response, not predict it. But when you actually try to predict what'll happen all-things-considered, you need to take the societal response into account in a big way As such I think Rob's list is better as a list of reasons we ought to take AGI risk seriously, than as a list of reasons it'll lead to ruin My reply is: It's true that in my "top ten reasons I expect AGI ruin" list, only one of the sections is about the social response to AGI risk, and it's a short section. But the section links to some more detailed discussions (and quotes from them in a long footnote): Four mindset disagreements behind existential risk disagreements in ML The inordinately slow spread of good AGI conversations in ML Inadequate Equilibria Also, discussing the adequacy of society's response before I've discussed AGI itself at length doesn't really work, I think, because I need to argue for what kind of response is warranted before I can start arguing that humanity is putting insufficient effort into the problem. If you think the alignment problem itself is easy, then I can cite all the evidence in the world regarding "very few people are working on alignment" and it won't matter. If you think a slowdown is unnecessary or counterproductive, then I can point out that governments haven't placed a ceiling on large training runs and you'll just go "So? Why should they?" Society's response can only be inadequate given some model of what's required for adequacy. That's a lot of why I factor out that discussion into other posts. More importantly, contra Dustin, I don't see myself as having strong priors or complicated models regarding the social situation. 
Eliezer Yudkowsky similarly says he doesn't have strong predictions about what governments or communities will do in this or that situation (beyond anti-predictions like "they probably won't do specific thing X that's wildly different from anything they've done before"): [Ngo][12:26] The other thing is that, for pedagogical purposes, I think it'd be useful for you to express some of your beliefs about how governments will respond to AI I think I have a rough guess about what those beliefs are, but even if I'm right, not everyone who reads this transcript will be [Yudkowsky][12:28] Why would I be expected to know that? I could talk about weak defaults and iterate through an unending list of possibilities. Thinking that Eliezer thinks he knows that to any degree of specificity feels like I'm being weakmanned! [Ngo][12:28] I'm not claiming you have any specifi...

The Nonlinear Library
LW - The basic reasons I expect AGI ruin by Rob Bensinger

Apr 18, 2023 · 33:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The basic reasons I expect AGI ruin, published by Rob Bensinger on April 18, 2023 on LessWrong. I've been citing AGI Ruin: A List of Lethalities to explain why the situation with AI looks lethally dangerous to me. But that post is relatively long, and emphasizes specific open technical problems over "the basics". Here are 10 things I'd focus on if I were giving "the basics" on why I'm so worried: 1. General intelligence is very powerful, and once we can build it at all, STEM-capable artificial general intelligence (AGI) is likely to vastly outperform human intelligence immediately (or very quickly). When I say "general intelligence", I'm usually thinking about "whatever it is that lets human brains do astrophysics, category theory, etc. even though our brains evolved under literally zero selection pressure to solve astrophysics or category theory problems". It's possible that we should already be thinking of GPT-4 as "AGI" on some definitions, so to be clear about the threshold of generality I have in mind, I'll specifically talk about "STEM-level AGI", though I expect such systems to be good at non-STEM tasks too. Human brains aren't perfectly general, and not all narrow AI systems or animals are equally narrow. (E.g., AlphaZero is more general than AlphaGo.) But it sure is interesting that humans evolved cognitive abilities that unlock all of these sciences at once, with zero evolutionary fine-tuning of the brain aimed at equipping us for any of those sciences. Evolution just stumbled into a solution to other problems, that happened to generalize to millions of wildly novel tasks. More concretely: AlphaGo is a very impressive reasoner, but its hypothesis space is limited to sequences of Go board states rather than sequences of states of the physical universe. Efficiently reasoning about the physical universe requires solving at least some problems that are different in kind from what AlphaGo solves. These problems might be solved by the STEM AGI's programmer, and/or solved by the algorithm that finds the AGI in program-space; and some such problems may be solved by the AGI itself in the course of refining its thinking. Some examples of abilities I expect humans to only automate once we've built STEM-level AGI (if ever): The ability to perform open-heart surgery with a high success rate, in a messy non-standardized ordinary surgical environment. The ability to match smart human performance in a specific hard science field, across all the scientific work humans do in that field. In principle, I suspect you could build a narrow system that is good at those tasks while lacking the basic mental machinery required to do par-human reasoning about all the hard sciences. In practice, I very strongly expect humans to find ways to build general reasoners to perform those tasks, before we figure out how to build narrow reasoners that can do them. (For the same basic reason evolution stumbled on general intelligence so early in the history of human tech development.) When I say "general intelligence is very powerful", a lot of what I mean is that science is very powerful, and that having all of the sciences at once is a lot more powerful than the sum of each science's impact. 
Another large piece of what I mean is that (STEM-level) general intelligence is a very high-impact sort of thing to automate because STEM-level AGI is likely to blow human intelligence out of the water immediately, or very soon after its invention. 80,000 Hours gives the (non-representative) example of how AlphaGo and its successors compared to humanity: In the span of a year, AI had advanced from being too weak to win a single [Go] match against the worst human professionals, to being impossible for even the best players in the world to defeat. I expect general-purpose science AI to blow human science...

Dodge Movie Podcast
Isn't This Simply A Wonderful World

Dodge Movie Podcast

Play Episode Listen Later Apr 16, 2023 26:51


In Wonderful World, Matthew Broderick portrays Ben Singer, a once-successful children's singer who now has a very cynical view of the world. Things get worse when his roommate Ibu is suddenly hospitalized, bringing Ibu's sister Khadi into Ben's life; she might be able to help him see a better worldview. The film was released in 2009 and was written and directed by Joshua Goldin. Listen to our spoiler-filled discussion of the film and our take on it.
Timecodes:
00:00 - Introduction
0:17 - The film stats
2:43 - The Pickup Line
6:14 - Why is Ben so bummed out?
14:47 - The scene that confused us both
17:51 - More clock talk
19:52 - Christi geeks out to sound
21:12 - Head Trauma
21:27 - Smoochie, Smoochie, Smoochie
21:34 - Driving Review
22:26 - To the Numbers
To guess the theme of this month's films, you can call or text us at 971-245-4148 or email christi@dodgemediaproductions.com. You can guess as many times as you would like. Guess the Monthly Theme for 2023 Contest - More Info Here
Next week's film will be Identity Thief (2013). Subscribe, Rate & Share Your Favorite Episodes! Thanks for tuning in to today's episode of Dodge Movie Podcast with your hosts, Mike and Christi Dodge. If you enjoyed this episode, please head over to Apple Podcasts to subscribe and leave a rating and review. Special thanks to Melissa Villagrana for our social media posts. Don't forget to visit our website, connect with us on Instagram, Facebook, LinkedIn, and share your favorite episodes across social media. Give us a call at 971-245-4148 or email christi@dodgemediaproductions.com.

En consulta privada con Pilar Cortés
T2-Ep.18 - Una sociedad hipersexualizada

En consulta privada con Pilar Cortés

Play Episode Listen Later Mar 27, 2023 28:46


Girls with low self-esteem, impoverished relationships, lack of empathy, loneliness, feelings of inadequacy: all of this is part of the legacy being left to us by the bombardment of sexual messages that reach us from everywhere: songs, TV series, advertising, social media, video games... Ironically, this hypersexualized culture is also leading many young people and adults to have a very poor sex life. In this episode we talk about the causes and consequences of the hypersexualized society we live in, and we explore ways to protect our children and ourselves from the destructive effects we are seeing in children, adolescents, young people, and adults.
References:
Kammeyer, K. C. W. (2008). The Hypersexual Society. In: A Hypersexual Society. Palgrave Macmillan, New York.
"A systematic review of body dissatisfaction and sociocultural messages related to the body among preschool children": https://www.sciencedirect.com/science/article/pii/S1740144515300061
Grubbs, J. B., Exline, J. J., Pargament, K. I., Volk, F., & Lindberg, M. J. (2017). Internet pornography use, perceived addiction, and religious/spiritual struggles. Archives of Sexual Behavior, 46(6), 1733–1745.
Seabrook, R. C., Ward, L. M., & Giaccardi, S. (2019). Less than human? Media use, objectification of women, and men's acceptance of sexual aggression. Psychology of Violence, 9(5), 536–545.
Graydon, Shari. "The Portrayal of Women in the Media: The Good, the Bad and the Beautiful," chapter in Communications in Canadian Society, 5th edition, Ben Singer, ed., Nelson 2001.
Beyens, I., Vandenbosch, L., & Eggermont, S. (2015). Early adolescent boys' exposure to Internet pornography: Relationships to pubertal timing, sensation seeking, and academic performance. The Journal of Early Adolescence, 35(8), 1045–1068.

The Nonlinear Library
LW - "Rationalist Discourse" Is Like "Physicist Motors" by Zack M Davis

The Nonlinear Library

Play Episode Listen Later Feb 26, 2023 15:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Rationalist Discourse" Is Like "Physicist Motors", published by Zack M Davis on February 26, 2023 on LessWrong. Imagine being a student of physics, and coming across a blog post proposing a list of guidelines for "physicist motors"—motor designs informed by the knowledge of physicists, unlike ordinary motors. Even if most of the things on the list seemed like sensible advice to keep in mind when designing a motor, the framing would seem very odd. The laws of physics describe how energy can be converted into work. To the extent that any motor accomplishes anything, it happens within the laws of physics. There are theoretical ideals describing how motors need to work in principle, like the Carnot engine, but you can't actually build an ideal Carnot engine; real-world electric motors or diesel motors or jet engines all have their own idiosyncratic lore depending on the application and the materials at hand; an engineer who worked on one might not be the best person to work on another. You might appeal to principles of physics to explain why some particular motor is inefficient or poorly-designed, but you would not speak of physicist motors as if that were a distinct category of thing—and if someone did, you might quietly begin to doubt how much they really knew about physics. As a student of rationality, I feel the same way about guidelines for "rationalist discourse." The laws of probability and decision theory describe how information can be converted into optimization power. To the extent that any discourse accomplishes anything, it happens within the laws of rationality. Rob Bensinger proposes "Elements of Rationalist Discourse" as a companion to Duncan Sabien's earlier "Basics of Rationalist Discourse". Most of the things on both lists are, indeed, sensible advice that one might do well to keep in mind when arguing with people, but as Bensinger notes, "Probably this new version also won't match 'the basics' as other people perceive them." But there's a reason for that: a list of guidelines has the wrong type signature for being "the basics". The actual basics are the principles of rationality one would appeal to in order to explain which guidelines are a good idea: principles like how evidence is the systematic correlation between possible states of your observations and possible states of reality, how you need evidence to locate the correct hypothesis in the space of possibilities, how the quality of your conclusion can only be improved by arguments that have the power to change that conclusion. Contemplating these basics, it should be clear that there's just not going to be anything like a unique style of "rationalist discourse", any more than there is a unique "physicist motor." There are theoretical ideals describing how discourse needs to work in principle, like Bayesian reasoners with common priors exchanging probability estimates, but you can't actually build an ideal Bayesian reasoner. 
Rather, different discourse algorithms (the collective analogue of "cognitive algorithm") leverage the laws of rationality to convert information into optimization in somewhat different ways, depending on the application and the population of interlocutors at hand, much as electric motors and jet engines both leverage the laws of physics to convert energy into work without being identical to each other, and with each requiring their own engineering sub-specialty to design. Or to use another classic metaphor, there's also just not going to be a unique martial art. Boxing and karate and ju-jitsu all have their own idiosyncratic lore adapted to different combat circumstances, and a master of one would easily defeat a novice of the other. One might appeal to the laws of physics and the properties of the human body to explain why some particular martial arts school was not teaching their st...
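The "Bayesian reasoners with common priors exchanging probability estimates" ideal that Davis invokes can be made concrete in a toy case. The sketch below is my own illustration, not something from the post: two agents start from the same prior over a binary hypothesis, each privately observes some coin flips, and exchanging likelihood ratios (rather than bare verdicts) lands both of them on the posterior that a single agent who saw all the data would hold. The hypothesis, bias values, and flip sequences are invented for the example.

```python
from math import prod

# Toy sketch (illustrative assumptions only): hypothesis H = "the coin lands
# heads 70% of the time"; the alternative is a fair coin. Both agents share
# the same prior and the same likelihood model.
PRIOR = 0.5                          # common prior P(H)
P_HEADS = {"H": 0.7, "not H": 0.5}   # P(heads | hypothesis)

def likelihood_ratio(flips):
    """P(flips | H) / P(flips | not H) for independent flips ('h'/'t')."""
    num = prod(P_HEADS["H"] if f == "h" else 1 - P_HEADS["H"] for f in flips)
    den = prod(P_HEADS["not H"] if f == "h" else 1 - P_HEADS["not H"] for f in flips)
    return num / den

def posterior(prior, lr):
    """Convert prior + likelihood ratio into a posterior via odds form."""
    odds = (prior / (1 - prior)) * lr
    return odds / (1 + odds)

alice_flips = list("hhth")
bob_flips = list("hthh")

lr_alice = likelihood_ratio(alice_flips)
lr_bob = likelihood_ratio(bob_flips)

print("Alice alone:", round(posterior(PRIOR, lr_alice), 3))
print("Bob alone:  ", round(posterior(PRIOR, lr_bob), 3))
# After exchanging likelihood ratios, both update on the product and agree:
print("Pooled:     ", round(posterior(PRIOR, lr_alice * lr_bob), 3))
```

Real discourse, of course, is exactly the regime where these idealized assumptions (shared priors, independent evidence, honest reporting of likelihoods) break down, which is the essay's point about different "discourse algorithms" fitting different populations of interlocutors.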

The Nonlinear Library
LW - Elements of Rationalist Discourse by Rob Bensinger

The Nonlinear Library

Play Episode Listen Later Feb 12, 2023 7:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Elements of Rationalist Discourse, published by Rob Bensinger on February 12, 2023 on LessWrong. I liked Duncan Sabien's Basics of Rationalist Discourse, but it felt somewhat different from what my brain thinks of as "the basics of rationalist discourse". So I decided to write down my own version (which overlaps some with Duncan's). Probably this new version also won't match "the basics" as other people perceive them. People may not even agree that these are all good ideas! Partly I'm posting these just out of curiosity about what the delta is between my perspective on rationalist discourse and y'alls perspectives. The basics of rationalist discourse, as I understand them: 1. Truth-Seeking. Try to contribute to a social environment that encourages belief accuracy and good epistemic processes. Try not to “win” arguments using asymmetric weapons (tools that work similarly well whether you're right or wrong). Indeed, try not to treat arguments like soldiers at all. 2. Non-Violence: Argument gets counter-argument. Argument does not get bullet. Argument does not get doxxing, death threats, or coercion. 3. Non-Deception. Never try to steer your conversation partners (or onlookers) toward having falser models. Where possible, avoid saying stuff that you expect to lower the net belief accuracy of the average reader; or failing that, at least flag that you're worried about this happening. As a corollary: 3.1. Meta-Honesty. Make it easy for others to tell how honest, literal, PR-y, etc. you are (in general, or in particular contexts). This can include everything from "prominently publicly discussing the sorts of situations in which you'd lie" to "tweaking your image/persona/tone/etc. to make it likelier that people will have the right priors about your honesty". 4. Localizability. Give people a social affordance to decouple / evaluate the local validity of claims. Decoupling is not required, and indeed context is often important and extremely worth talking about! But it should almost always be OK to locally address a specific point or subpoint, without necessarily weighing in on the larger context or suggesting you'll engage further. 5. Alternative-Minding. Consider alternative hypotheses, and ask yourself what Bayesian evidence you have that you're not in those alternative worlds. This mostly involves asking what models retrodict. Cultivate the skills of original seeing and of seeing from new vantage points. As a special case, try to understand and evaluate the alternative hypotheses that other people are advocating. Paraphrase stuff back to people to see if you understood, and see if they think you pass their Ideological Turing Test on the relevant ideas. Be a fair bit more willing to consider nonstandard beliefs, frames/lenses, and methodologies, compared to (e.g.) the average academic. Keep in mind that inferential gaps can be large, most life-experience is hard to transmit in a small number of words (or in words at all), and converging on the truth can require a long process of cultivating the right mental motions, doing exercises, gathering and interpreting new data, etc. Be careful to explicitly distinguish "what this person literally said" from "what I think this person means". Be careful to explicitly distinguish "what I think this person means" from "what I infer about this person as a result". 6. Reality-Minding. 
Keep your eye on the ball, hug the query, and don't lose sight of object-level reality. Make it a habit to flag when you notice ways to test an assertion. Make it a habit to actually test claims, when the value-of-information is high enough. Reward scholarship, inquiry, betting, pre-registered predictions, and sticking your neck out, especially where this is time-consuming, effortful, or socially risky. 7. Reducibility. Err on the side of using simple, conc...

The Articulate Fly
S5, Ep 3: The Spin with Jim Bensinger

The Articulate Fly

Play Episode Listen Later Jan 10, 2023 9:07


On this episode, I am joined by Jim Bensinger. Jim and I take a deep dive into dubbing. Thanks to our friends at Norvise for sponsoring the series.
Check Out Jim's Video
All Things Social Media: Follow Jim on Facebook. Follow Norvise on Facebook, Instagram and YouTube. Follow us on Facebook, Instagram, Twitter and YouTube.
Support the Show: Shop on Amazon. Become a Patreon Patron.
Subscribe to the Podcast or, Even Better, Download Our App: Download our mobile app for free from the Apple App Store, the Google Play Store or the Amazon Android Store. Subscribe to the podcast in the podcatcher of your choice.

The Nonlinear Library
AF - Thoughts on AGI organizations and capabilities work by Rob Bensinger

The Nonlinear Library

Play Episode Listen Later Dec 7, 2022 11:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on AGI organizations and capabilities work, published by Rob Bensinger on December 7, 2022 on The AI Alignment Forum. (Note: This essay was largely written by Rob, based on notes from Nate. It's formatted as Rob-paraphrasing-Nate because (a) Nate didn't have time to rephrase everything into his own words, and (b) most of the impetus for this post came from Eliezer wanting MIRI to praise a recent OpenAI post and Rob wanting to share more MIRI-thoughts about the space of AGI organizations, so it felt a bit less like a Nate-post than usual.) Nate and I have been happy about the AGI conversation seeming more honest and “real” recently. To contribute to that, I've collected some general Nate-thoughts in this post, even though they're relatively informal and disorganized. AGI development is a critically important topic, and the world should obviously be able to hash out such topics in conversation. (Even though it can feel weird or intimidating, and even though there's inevitably some social weirdness in sometimes saying negative things about people you like and sometimes collaborate with.) My hope is that we'll be able to make faster and better progress if we move the conversational norms further toward candor and substantive discussion of disagreements, as opposed to saying everything behind a veil of collegial obscurity. Capabilities work is currently a bad idea Nate's top-level view is that ideally, Earth should take a break on doing work that might move us closer to AGI, until we understand alignment better. That move isn't available to us, but individual researchers and organizations who choose not to burn the timeline are helping the world, even if other researchers and orgs don't reciprocate. You can unilaterally lengthen timelines, and give humanity more chances of success, by choosing not to personally shorten them. Nate thinks capabilities work is currently a bad idea for a few reasons: He doesn't buy that current capabilities work is a likely path to ultimately solving alignment. Insofar as current capabilities work does seem helpful for alignment, it strikes him as helping with parallelizable research goals, whereas our bottleneck is serial research goals. (See A note about differential technological development.) Nate doesn't buy that we need more capabilities progress before we can start finding a better path. This is not to say that capabilities work is never useful for alignment, or that alignment progress is never bottlenecked on capabilities progress. As an extreme example, having a working AGI on hand tomorrow would indeed make it easier to run experiments that teach us things about alignment! But in a world where we build AGI tomorrow, we're dead, because we won't have time to get a firm understanding of alignment before AGI technology proliferates and someone accidentally destroys the world. Capabilities progress can be useful in various ways, while still being harmful on net. (Also, to be clear: AGI capabilities are obviously an essential part of humanity's long-term path to good outcomes, and it's important to develop them at some point — the sooner the better, once we're confident this will have good outcomes — and it would be catastrophically bad to delay realizing them forever.) On Nate's view, the field should do experiments with ML systems, not just abstract theory. 
But if he were magically in charge of the world's collective ML efforts, he would put a pause on further capabilities work until we've had more time to orient to the problem, consider the option space, and think our way to some sort of plan-that-will-actually-probably-work. It's not as though we're hurting for ML systems to study today, and our understanding already lags far behind today's systems' capabilities. Publishing capabilities advances is even more obviously bad Fo...

Podcast About List
TEASER Premium #160 - The Jack Frost Interview ft. Jack Bensinger

Podcast About List

Play Episode Listen Later Dec 3, 2022 2:30


Subscribe to our Patreon to listen to the whole episode: http://patreon.com/PodcastAboutList
Subscribe to our YouTube channel for video episodes: https://youtube.com/@PodcastAboutList

The Nonlinear Library
AF - A challenge for AGI organizations, and a challenge for readers by Rob Bensinger

The Nonlinear Library

Play Episode Listen Later Dec 1, 2022 3:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A challenge for AGI organizations, and a challenge for readers, published by Rob Bensinger on December 1, 2022 on The AI Alignment Forum. (Note: This post is a write-up by Rob of a point Eliezer wanted to broadcast. Nate helped with the editing, and endorses the post's main points.) Eliezer Yudkowsky and Nate Soares (my co-workers) want to broadcast strong support for OpenAI's recent decision to release a blog post ("Our approach to alignment research") that states their current plan as an organization. Although Eliezer and Nate disagree with OpenAI's proposed approach — a variant of "use relatively unaligned AI to align AI" — they view it as very important that OpenAI has a plan and has said what it is. We want to challenge Anthropic and DeepMind, the other major AGI organizations with a stated concern for existential risk, to do the same: come up with a plan (possibly a branching one, if there are crucial uncertainties you expect to resolve later), write it up in some form, and publicly announce that plan (with sensitive parts fuzzed out) as the organization's current alignment plan. Currently, Eliezer's impression is that neither Anthropic nor DeepMind has a secret plan that's better than OpenAI's, nor a secret plan that's worse than OpenAI's. His impression is that they don't have a plan at all. Having a plan is critically important for an AGI project, not because anyone should expect everything to play out as planned, but because plans force the project to concretely state their crucial assumptions in one place. This provides an opportunity to notice and address inconsistencies, and to notice updates to the plan (and fully propagate those updates to downstream beliefs, strategies, and policies) as new information comes in. It's also healthy for the field to be able to debate plans and think about the big picture, and for orgs to be in some sense "competing" to have the most sane and reasonable plan. We acknowledge that there are reasons organizations might want to be abstract about some steps in their plans — e.g., to avoid immunizing people to good-but-weird ideas, in a public document where it's hard to fully explain and justify a chain of reasoning; or to avoid sharing capabilities insights, if parts of your plan depend on your inside-view model of how AGI works. We'd be happy to see plans that fuzz out some details, but are still much more concrete than (e.g.) “figure out how to build AGI and expect this to go well because we'll be particularly conscientious about safety once we have an AGI in front of us". Eliezer also hereby gives a challenge to the reader: Eliezer and Nate are thinking about writing up their thoughts at some point about OpenAI's plan of using AI to aid AI alignment. We want you to write up your own unanchored thoughts on the OpenAI plan first, focusing on the most important and decision-relevant factors, with the intent of rendering our posting on this topic superfluous. Our hope is that challenges like this will test how superfluous we are, and also move the world toward a state where we're more superfluous / there's more redundancy in the field when it comes to generating ideas and critiques that would be lethal for the world to never notice. We didn't run a draft of this post by DM or Anthropic (or OpenAI), so this information may be mistaken or out-of-date. My hope is that we're completely wrong! 
Nate's personal guess is that the situation at DM and Anthropic may be less “yep, we have no plan yet”, and more “various individuals have different plans or pieces-of-plans, but the organization itself hasn't agreed on a plan and there's a lot of disagreement about what the best approach is”. In which case Nate expects it to be very useful to pick a plan now (possibly with some conditional paths in it), and make it a priority to hash out and...

Frank Buckley Interviews
Re-release: Ken Bensinger, FIFA Corruption Scandal

Frank Buckley Interviews

Play Episode Listen Later Nov 23, 2022 42:25


Ken Bensinger is an award-winning reporter with the BuzzFeed News investigations team. He is the author of “Red Card: How the U.S. Blew the Whistle on the World's Biggest Sports Scandal.” The book takes readers inside the investigations into the FIFA corruption scandal. It revealed how hundreds of millions of dollars in bribes drove much of the decision-making surrounding the world's most popular sport, including which countries would host the World Cup. During this podcast (which happened as the 2018 World Cup was getting underway), Ken discusses the people at the center of the investigations, including the federal agents who relentlessly pursued the corrupt soccer officials and the methods they used to bring them down. He also takes listeners on a journey with those investigators from their early suspicions to the satisfying convictions they secured of more than 40 individuals and corporations.

The Burn Bag Podcast
FIFA's Pay-to-Play: Inside the Corruption of the World Cup with Ken Bensinger

The Burn Bag Podcast

Play Episode Listen Later Nov 23, 2022 45:40


In this episode, A'ndre and Ryan talk to NYT reporter and author Ken Bensinger about  FIFA and the corruption surrounding the World Cup. Ken's 2018 book, Red Card,  explores the U.S.-led corruption case into FIFA that upended the world of international football. The conversation begins with Ken explaining the significance of football and the power of FIFA. We then dig into the rise of former FIFA President Sepp Blatter and how commercial success led to the growth of corruption. The episode concludes with Ken's perspective on the 2022 World Cup in Qatar and why FIFA's efforts to increase transparency and accountability are just window dressing.   To learn more, check out Red Card: How the U.S. Blew the Whistle on the World's Biggest Sports Scandal and the Netflix docuseries FIFA Uncovered. Disclaimer: The opinions expressed in this episode are those of the hosts. They do not purport to reflect the opinions or views of any host's employer.

Bribe, Swindle or Steal
FIFA's Red Card: Ken Bensinger

Bribe, Swindle or Steal

Play Episode Listen Later Nov 9, 2022 34:19


As we approach the 2022 World Cup, we're revisiting our 2018 episode with Ken Bensinger, who discusses his fascinating book, Red Card, and the decades of misconduct by FIFA eventually uncovered by the FBI. We play “violation bingo” as Ken describes the bribery, self-dealing, conflicts of interest and money laundering that were business as usual at FIFA.

The Nonlinear Library
LW - A common failure for foxes by Rob Bensinger

The Nonlinear Library

Play Episode Listen Later Oct 15, 2022 3:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A common failure for foxes, published by Rob Bensinger on October 14, 2022 on LessWrong. A common failure mode for people who pride themselves in being foxes (as opposed to hedgehogs): Paying more attention to easily-evaluated claims that don't matter much, at the expense of hard-to-evaluate claims that matter a lot. E.g., maybe there's an RCT that isn't very relevant, but is pretty easily interpreted and is conclusive evidence for some claim. At the same time, maybe there's an informal argument that matters a lot more, but it takes some work to know how much to update on it, and it probably won't be iron-clad evidence regardless. I think people who think of themselves as being "foxes" often spend too much time thinking about the RCT and not enough time thinking about the informal argument, for a few reasons: 1. A desire for cognitive closure, confidence, and a feeling of "knowing things" — of having authoritative Facts on hand rather than mere Opinions. A proper Bayesian cares about VOI, and assigns probabilities rather than having separate mental buckets for Facts vs. Opinions. If activity A updates you from 50% to 95% confidence in hypothesis H1, and activity B updates you from 50% to 60% confidence in hypothesis H2, then your assessment of whether to do more A-like activities or more B-like activities going forward should normally depend a lot on how useful it is to know about H1 versus H2. But real-world humans (even if they think of themselves as aspiring Bayesians) are often uncomfortable with uncertainty. We prefer sharp thresholds, capital-k Knowledge, and a feeling of having solid ground to rest on. 2. Hyperbolic discounting of intellectual progress. With unambiguous data, you get a fast sense of progress. With fuzzy arguments, you might end up confident after thinking about it a while, or after reading another nine arguments; but it's a long process, with uncertain rewards. 3. Social modesty and a desire to look un-arrogant. It can feel socially low-risk and pleasantly virtuous to be able to say "Oh, I'm not claiming to have good judgment or to be great at reasoning or anything; I'm just deferring to the obvious clear-cut data, and outside of that, I'm totally uncertain." Collecting isolated facts increases the pool of authoritative claims you can make, while protecting you from having to stick your neck out and have an Opinion on something that will be harder to convince others of, or one that rests on an implicit claim about your judgment. But in fact it often is better to make small or uncertain updates about extremely important questions, than to collect lots of high-confidence trivia. It keeps your eye on the ball, where you can keep building up confidence over time; and it helps build reasoning skill. High-confidence trivia also often poses a risk: either consciously or unconsciously, you can end up updating about the More Important Questions you really care about, because you're spending all your time thinking about trivia. Even if you verbally acknowledge that updating from the superficially-related RCT to the question-that-actually-matters would be a non sequitur, there's still a temptation to substitute the one question for the other. Because it's still the Important Question that you actually care about. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
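Bensinger's 50%→95% versus 50%→60% example can be put into rough numbers. The sketch below is my own toy framing rather than anything from the post: it measures "uncertainty resolved" as the drop in binary entropy and multiplies by a made-up importance weight for each question. The specific weights are assumptions chosen only to illustrate the point that a small update on a high-stakes question can be worth more than a decisive update on a minor one.

```python
from math import log2

def entropy(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def info_value(p_before, p_after, importance):
    """Uncertainty resolved (bits), weighted by how much the question matters."""
    return importance * (entropy(p_before) - entropy(p_after))

# Activity A: a clean RCT settles a minor question H1 (50% -> 95%).
# Activity B: a fuzzy argument nudges a major question H2 (50% -> 60%).
# Importance weights are invented for illustration.
print(round(info_value(0.50, 0.95, importance=1), 2))    # ~0.71 weighted bits
print(round(info_value(0.50, 0.60, importance=50), 2))   # ~1.45 weighted bits
```

On these made-up weights the fuzzy argument "wins", which is exactly the post's claim about where self-identified foxes tend to misallocate their attention.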

The Pressbox
Graham Bensinger - Segment 4 - 9/6/22

The Pressbox

Play Episode Listen Later Sep 6, 2022 20:39


Graham Bensinger joins the show

The Kubik Report
Inside the Mysteries of Myanmar - with David Bensinger

The Kubik Report

Play Episode Listen Later Aug 18, 2022 39:14


David Bensinger gives us insight into the mysterious and exotic nation of Myanmar. We talk about its history, its people, and our decades of work with the Church and their physical needs. For years Myanmar hid itself from the world scene and has been in continual conflict and civil war since the British gave it its full independence in January 1948. For a few years there was a pullback from this hermit-like approach, and visitors from the West were welcome. Aaron and Michelle Dean visited in 2017. In 2018 Austin and Aaron Jennings from Australia visited and helped conduct the Feast of Tabernacles. They made a short video that can be seen at https://youtu.be/8gZlq9P5ES0. We discuss some of the current needs and optimistic plans. Posted August 18, 2022

Midlife Pilot Podcast
EP20 - Advice to newer pilots from more experienced pilots - with Ben Singer aka The Sage

Midlife Pilot Podcast

Play Episode Listen Later Aug 11, 2022 62:10


Chris and Brian are joined by the sage, Ben Singer, to talk about words of advice from more experienced pilots that have resonated with us over our time as pilots, and we take input from the live chat as well. --- Subscribe to the audio podcast! (and leave a review) Follow us on Instagram: https://www.instagram.com/midlife_pilot/ https://www.instagram.com/brian.siskind/ Subscribe to Brian's YouTube channel: https://bit.ly/briansiskind

The Nonlinear Library
LW - ITT-passing and civility are good; "charity" is bad; steelmanning is niche by Rob Bensinger

The Nonlinear Library

Play Episode Listen Later Jul 5, 2022 9:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ITT-passing and civility are good; "charity" is bad; steelmanning is niche, published by Rob Bensinger on July 5, 2022 on LessWrong. I often object to claims like "charity/steelmanning is an argumentative virtue". This post collects a few things I and others have said on this topic over the last few years. My current view is: Steelmanning ("the art of addressing the best form of the other person's argument, even if it's not the one they presented") is a useful niche skill, but I don't think it should be a standard thing you bring out in most arguments, even if it's an argument with someone you strongly disagree with. Instead, arguments should mostly be organized around things like: Object-level learning and truth-seeking, with the conversation as a convenient excuse to improve your own model of something you're curious about. Trying to pass each other's Ideological Turing Test (ITT), or some generalization thereof. The ability to pass ITTs is the ability "to state opposing views as clearly and persuasively as their proponents". The version of "ITT" I care about is one where you understand the substance of someone's view well enough to be able to correctly describe their beliefs and reasoning; I don't care about whether you can imitate their speech patterns, jargon, etc. Trying to identify and resolve cruxes: things that would make one or the other of you (or both) change your mind about the topic under discussion. Argumentative charity is a complete mess of a concept⁠—people use it to mean a wide variety of things, and many of those things are actively bad, or liable to cause severe epistemic distortion and miscommunication. Some version of civility and/or friendliness and/or a spirit of camaraderie and goodwill seems like a useful ingredient in many discussions. I'm not sure how best to achieve this in ways that are emotionally honest ("pretending to be cheerful and warm when you don't feel that way" sounds like the wrong move to me), or how to achieve this without steering away from candor, openness, "realness", etc. I've said that I think people should be "nicer and also ruder". And: The sweet spot for EA PR is something like: 'friendly, nuanced, patient, and totally unapologetic about being a fire hose of inflammatory hot takes'.

The Nonlinear Library
LW - The inordinately slow spread of good AGI conversations in ML by Rob Bensinger

The Nonlinear Library

Play Episode Listen Later Jun 21, 2022 13:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The inordinately slow spread of good AGI conversations in ML, published by Rob Bensinger on June 21, 2022 on LessWrong. Spencer Greenberg wrote on Twitter: Recently @KerryLVaughan has been critiquing groups trying to build AGI, saying that by being aware of risks but still trying to make it, they're recklessly putting the world in danger. I'm interested to hear your thought/reactions to what Kerry says and the fact he's saying it. Michael Page replied: I'm pro the conversation. That said, I think the premise -- that folks are aware of the risks -- is wrong. Honestly, I think the case for the risks hasn't been that clearly laid out. The conversation among EA-types typically takes that as a starting point for their analysis. The burden for the we're-all-going-to-die-if-we-build-x argument is -- and I think correctly so -- quite high. Oliver Habryka then replied: I find myself skeptical of this. Like, my sense is that it's just really hard to convince someone that their job is net-negative. "It is difficult to get a man to understand something when his salary depends on his not understanding it" And this barrier is very hard to overcome with just better argumentation. My reply: I disagree with "the case for the risks hasn't been that clearly laid out". I think there's a giant, almost overwhelming pile of intro resources at this point, any one of which is more than sufficient, written in all manner of style, for all manner of audience. (I do think it's possible to create a much better intro resource than any that exist today, but 'we can do much better' is compatible with 'it's shocking that the existing material hasn't already finished the job'.) I also disagree with "The burden for the we're-all-going-to-die-if-we-build-x argument is -- and I think correctly so -- quite high." If you're building a machine, you should have an at least somewhat lower burden of proof for more serious risks. It's your responsibility to check your own work to some degree, and not impose lots of micromorts on everyone else through negligence. But I don't think the latter point matters much, since the 'AGI is dangerous' argument easily meets higher burdens of proof as well. I do think a lot of people haven't heard the argument in any detail, and the main focus should be on trying to signal-boost the arguments and facilitate conversations, rather than assuming that everyone has heard the basics. A lot of the field is very smart people who are stuck in circa-1995 levels of discourse about AGI. I think 'my salary depends on not understanding it' is only a small part of the story. ML people could in principle talk way more about AGI, and understand the problem way better, without coming anywhere close to quitting their job. The level of discourse is by and large too low for 'I might have to leave my job' to be the very next obstacle on the path. Also, many ML people have other awesome job options, have goals in the field other than pure salary maximization, etc. More of the story: Info about AGI propagates too slowly through the field, because when one ML person updates, they usually don't loudly share their update with all their peers. This is because: 1. AGI sounds weird, and they don't want to sound like a weird outsider. 2. Their peers and the community as a whole might perceive this information as an attack on the field, an attempt to lower its status, etc. 
3. Tech forecasting, differential technological development, long-term steering, exploratory engineering, 'not doing certain research because of its long-term social impact', prosocial research closure, etc. are very novel and foreign to most scientists. EAs exert effort to try to dig up precedents like Asilomar partly because Asilomar is so unusual compared to the norms and practices of the vast majority of science. Scientists generally ...

The Nonlinear Library
LW - On saving one's world by Rob Bensinger

The Nonlinear Library

Play Episode Listen Later May 17, 2022 2:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On saving one's world, published by Rob Bensinger on May 17, 2022 on LessWrong. If the world is likeliest to be saved by sober scholarship, then let us be sober scholars in the face of danger. If the world is likeliest to be saved by playful intellectual exploration, then let us be playful in the face of danger. Strategic, certainly; aware of our situation, of course; but let us not throw away the one mental mode that can actually save us, if that's in fact our situation. If the world is likeliest to be saved by honest, trustworthy, and high-integrity groups, who by virtue of their trustworthiness can much more effectively collaborate and much more quickly share updates; then let us be trustworthy. What is the path to good outcomes otherwise? CFAR has a notion of "flailing". Alone on a desert island, if you injure yourself, you're likelier to think fast about how to solve the problem. Whereas injuring yourself around friends, you're more likely to "flail": lean into things that demonstrate your pain/trouble to others. To my eye, a lot of proposals that we set aside sober scholarship, or playful intellectual exploration, or ethical integrity, look like flailing. I don't see an argument that this setting-aside actually chains forward into good outcomes; it seems performative to me, like hoping that if our reaction "feels extreme" enough, some authority somewhere will take notice and come to the rescue. Who is that authority? If you have a coherent model of this, we can talk about it and figure out if that's really the best strategy for eliciting their aid. But if no one comes to mind, consider the possibility that you're executing a social instinct that's adaptive to threats like tigers and broken legs, but maladaptive to threats like Unfriendly AI. If you feel scared about something, I generally think it's good to be honest about that fact and discuss it soberly, rather than hiding it. I don't think this is incompatible with rigorous scholarship or intellectual play. But I would clearly distinguish "being honest about your world-models and feelings, because honesty is legitimately a good idea" from "making it your main strategy to do whatever action sequence feels emotionally resonant with the problem". An "extreme" key doesn't necessarily open an "extreme" lock. A dire-sounding key doesn't necessarily open a dire-feeling lock. A fearful or angry key doesn't necessarily open a lock that makes you want to express fear or anger. Rather, the lock's exact physical properties determine which exact key (or set of keys) opens it, and we need to investigate the physical world in order to find the right key. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
AF - (Part III) Christiano, Cotra, and Yudkowsky on AI progress, by Eliezer Yudkowsky and Ajeya Cotra

The Nonlinear Library

Play Episode Listen Later Mar 24, 2022 26:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is part three of: Christiano, Cotra, and Yudkowsky on AI progress, by Eliezer Yudkowsky and Ajeya Cotra.
9.9. Prediction disagreements and bets
[Christiano][16:19] anyway, I'm pretty unpersuaded by the kind of track record appeal you are making here
[Yudkowsky][16:20] if the future goes the way I predict and yet anybody somehow survives, perhaps somebody will draw a hyperbolic trendline on some particular chart where the trendline is retroactively fitted to events including those that occurred in only the last 3 years, and say with a great sage nod, ah, yes, that was all according to trend, nor did anything depart from trend
trend lines permit anything
[Christiano][16:20] like from my perspective the fundamental question is whether I would do better or worse by following the kind of reasoning you'd advocate, and it just looks to me like I'd do worse, and I'd love to make any predictions about anything to help make that more clear and hindsight-proof in advance
[Yudkowsky][16:20] you just look into the past and find a line you can draw that ended up where reality went
[Christiano][16:21] it feels to me like you really just waffle on almost any prediction about the before-end-of-days
[Yudkowsky][16:21] I don't think I know a lot about the before-end-of-days
[Christiano][16:21] like if you make a prediction I'm happy to trade into it, or you can pick a topic and I can make a prediction and you can trade into mine
[Cotra][16:21] but you know enough to have strong timing predictions, e.g. your bet with caplan
[Yudkowsky][16:21] it's daring enough that I claim to know anything about the Future at all!
[Cotra][16:21] surely with that difference of timelines there should be some pre-2030 difference as well
[Christiano][16:21] but you are the one making the track record argument against my way of reasoning about things! how does that not correspond to believing that your predictions are better! what does that mean?
[Yudkowsky][16:22] yes and if you say something narrow enough or something that my model does at least vaguely push against, we should bet
[Christiano][16:22] my point is that I'm willing to make a prediction about any old thing, you can name your topic
I think the way I'm reasoning about the future is just better in general and I'm going to beat you on whatever thing you want to bet on
[Yudkowsky][16:22] but if you say, "well, Moore's Law on trend, next 3 years", then I'm like, "well, yeah, sure, since I don't feel like I know anything special about that, that would be my prediction too"
[Christiano][16:22] sure you can pick the topic
pick a quantity or a yes/no question or whatever
[Yudkowsky][16:23] you may know better than I would where your Way of Thought makes strong, narrow, or unusual predictions
[Christiano][16:23] I'm going to trend extrapolation everywhere
spoiler
[Yudkowsky][16:23] okay but any superforecaster could do that and I could do the same by asking a superforecaster
[Cotra][16:24] but there must be places where you'd strongly disagree w the superforecaster since you disagree with them eventually, e.g. >2/3 doom by 2030
[Bensinger][18:40] (Nov. 25 follow-up comment) ">2/3 doom by 2030" isn't an actual Eliezer-prediction, and is based on a misunderstanding of something Eliezer said. See Eliezer's comment on LessWrong. 
[Yudkowsky][16:24] in the terminal phase, sure
[Cotra][16:24] right, but there are no disagreements before jan 1 2030? no places where you'd strongly defy the superforecasters/trend extrap?
[Yudkowsky][16:24] superforecasters were claiming that AlphaGo had a 20% chance of beating Lee Se-dol and I didn't disagree with that at the time, though as the final days approached I became nervous and suggested to a friend that they buy out of a bet about that
[Cotra][16:25] what about like whether we get some kind of AI ability (e.g. coding better...
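Christiano's "I'm going to trend extrapolation everywhere" is a concrete, checkable forecasting recipe. As a minimal sketch of what that looks like in practice (my own illustration with made-up numbers, not anything from the transcript): fit a straight line to the logarithm of a roughly exponentially growing quantity, then read the fitted line forward.

```python
import numpy as np

# Minimal trend-extrapolation sketch: the data points are invented for
# illustration, standing in for any roughly exponential series (compute,
# benchmark scores, revenue, ...).
years = np.array([2012, 2014, 2016, 2018, 2020], dtype=float)
metric = np.array([1.0, 3.2, 11.0, 30.0, 95.0])

# Fit a line to log(metric) vs. year, i.e. assume constant proportional growth.
slope, intercept = np.polyfit(years, np.log(metric), 1)

for future_year in (2022.0, 2024.0, 2026.0):
    projected = np.exp(intercept + slope * future_year)
    print(int(future_year), round(float(projected), 1))
```

The disagreement in the log is then about when, if ever, reality should be expected to leave the fitted line, not about the mechanics of drawing it.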

The Nonlinear Library
AF - Shah and Yudkowsky on alignment failures by Rohin Shah

The Nonlinear Library

Play Episode Listen Later Feb 28, 2022 144:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shah and Yudkowsky on alignment failures, published by Rohin Shah on February 28, 2022 on The AI Alignment Forum. This is the final discussion log in the Late 2021 MIRI Conversations sequence, featuring Rohin Shah and Eliezer Yudkowsky, with additional comments from Rob Bensinger, Nate Soares, Richard Ngo, and Jaan Tallinn. The discussion begins with summaries and comments on Richard and Eliezer's debate. Rohin's summary has since been revised and published in the Alignment Newsletter. After this log, we'll be concluding this sequence with an AMA, where we invite you to comment with questions about AI alignment, cognition, forecasting, etc. Eliezer, Richard, Paul Christiano, Nate, and Rohin will all be participating.
Color key: Chat by Rohin and Eliezer; Other chat; Emails; Follow-ups
19. Follow-ups to the Ngo/Yudkowsky conversation
19.1. Quotes from the public discussion
[Bensinger][9:22] (Nov. 25) Interesting extracts from the public discussion of Ngo and Yudkowsky on AI capability gains:
Eliezer: I think some of your confusion may be that you're putting "probability theory" and "Newtonian gravity" into the same bucket. You've been raised to believe that powerful theories ought to meet certain standards, like successful bold advance experimental predictions, such as Newtonian gravity made about the existence of Neptune (quite a while after the theory was first put forth, though). "Probability theory" also sounds like a powerful theory, and the people around you believe it, so you think you ought to be able to produce a powerful advance prediction it made; but it is for some reason hard to come up with an example like the discovery of Neptune, so you cast about a bit and think of the central limit theorem. That theorem is widely used and praised, so it's "powerful", and it wasn't invented before probability theory, so it's "advance", right? So we can go on putting probability theory in the same bucket as Newtonian gravity? They're actually just very different kinds of ideas, ontologically speaking, and the standards to which we hold them are properly different ones. It seems like the sort of thing that would take a subsequence I don't have time to write, expanding beyond the underlying obvious ontological difference between validities and empirical-truths, to cover the way in which "How do we trust this, when" differs between "I have the following new empirical theory about the underlying model of gravity" and "I think that the logical notion of 'arithmetic' is a good tool to use to organize our current understanding of this little-observed phenomenon, and it appears within making the following empirical predictions..." But at least step one could be saying, "Wait, do these two kinds of ideas actually go into the same bucket at all?" In particular it seems to me that you want properly to be asking "How do we know this empirical thing ends up looking like it's close to the abstraction?" and not "Can you show me that this abstraction is a very powerful one?" 
Like, imagine that instead of asking Newton about planetary movements and how we know that the particular bits of calculus he used were empirically true about the planets in particular, you instead started asking Newton for proof that calculus is a very powerful piece of mathematics worthy to predict the planets themselves - but in a way where you wanted to see some highly valuable material object that calculus had produced, like earlier praiseworthy achievements in alchemy. I think this would reflect confusion and a wrongly directed inquiry; you would have lost sight of the particular reasoning steps that made ontological sense, in the course of trying to figure out whether calculus was praiseworthy under the standards of praiseworthiness that you'd been previously raised to believe in as universal standards about a...

The Nonlinear Library
AF - Late 2021 MIRI Conversations: AMA / Discussion by Rob Bensinger

The Nonlinear Library

Play Episode Listen Later Feb 28, 2022 1:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Late 2021 MIRI Conversations: AMA / Discussion, published by Rob Bensinger on February 28, 2022 on The AI Alignment Forum. With the release of Rohin Shah and Eliezer Yudkowsky's conversation, the Late 2021 MIRI Conversations sequence is now complete. This post is intended as a generalized comment section for discussing the whole sequence, now that it's finished. Feel free to:
  • raise any topics that seem relevant
  • signal-boost particular excerpts or comments that deserve more attention
  • direct questions to participants
In particular, Eliezer Yudkowsky, Richard Ngo, Paul Christiano, Nate Soares, and Rohin Shah expressed active interest in receiving follow-up questions here. The Schelling time when they're likeliest to be answering questions is Wednesday March 2, though they may participate on other days too. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Martha Bassett Show
Sam Baker / Sarah Kate Morgan / Ben Singer – Live at the Reeves

The Martha Bassett Show

Play Episode Listen Later Dec 15, 2021 58:48


This show from October 6, 2018 was Sam Baker's first visit to our show and what a treat! All of the folks on this one have become fast friends of TMBS, and we always look forward to their return. Along with Sam's gorgeous songwriting and delightful banter with Martha, Sarah Kate Morgan visits; she is regarded as one of the finest dulcimer players anywhere and is an equally gifted vocalist. Ben Singer also joins and shares some great new songs with us.