Concerning AI | Existential Risk From Artificial Intelligence


Is there an existential risk from Human-level (and beyond) Artificial Intelligence? If so, what can we do about it?

Brandon Sanders & Ted Sarvata


    • Latest episode: Oct 23, 2018
    • New episodes: infrequent
    • 72 episodes



    Latest episodes from Concerning AI | Existential Risk From Artificial Intelligence

    0070: We Don’t Get to Choose

    Oct 23, 2018


    Or do we? http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0070-2018-09-30.mp3

    0069: Will bias get us first?

    Sep 5, 2018


    Ted interviews Jacob Ward, former editor of Popular Science and journalist at many outlets. Jake’s article about the book he’s writing: Black Box. Jake’s website: JacobWard.com. Implicit bias tests at Harvard. We discuss the idea that we’re currently using narrow AIs to inform all kinds of decisions, and that we’re trusting those AIs way more than […]

    0068: Sanityland: More on Assassination Squads

    Jul 23, 2018


    0067: The OpenAI Charter (and Assassination Squads)

    Jul 6, 2018


    We love the OpenAI Charter. This episode is an introduction to the document and gets pretty dark. Lots more to come on this topic!

    0066: The AI we have is not the AI we want

    May 3, 2018


    http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0066-2018-04-01.mp3

    0065: AGI Fire Alarm

    Apr 19, 2018


    There’s No Fire Alarm for Artificial General Intelligence by Eliezer Yudkowsky: http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0065-2018-03-18.mp3

    0064: AI Go Foom

    Apr 5, 2018


    We discuss Intelligence Explosion Microeconomics by Eliezer Yudkowsky: http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0064-2018-03-11.mp3

    0063: Ted’s Talk

    Mar 26, 2018


    Ted gave a live talk a few weeks ago.

    0062: There’s No Room at the Top

    Mar 16, 2018


    http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0062-2018-03-04.mp3

    0061: Collapse Will Save Us

    Mar 2, 2018


    Some believe civilization will collapse before the existential AI risk has a chance to play out. Are they right?

    0060: Peter Scott’s Timeline For Artificial Intelligence Risks

    Feb 13, 2018


    Timeline For Artificial Intelligence Risks. Peter’s Superintelligence Year predictions (5% chance / 50% / 95%): 2032 / 2044 / 2059. You can get in touch with Peter at HumanCusp.com and Peter@HumanCusp.com. For reference (not discussed in this episode): Crisis of Control: How Artificial SuperIntelligences May Destroy Or Save the Human Race by Peter J. Scott. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0060-2018-01-21.mp3

    0059: Unboxing the Spectre of a Meltdown

    Jan 30, 2018


    SpectreAttack.com http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0059-2018-01-14.mp3

    0058: Why Disregard the Risks?

    Jan 16, 2018


    There are understandable reasons why accomplished leaders in AI disregard AI risks. We discuss what they might be. Wikipedia’s list of cognitive biases. Alpha Zero. Virtual reality. Recorded January 7, 2018, originally posted to Concerning.AI. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0058-2018-01-07.mp3

    0057: Waymo is Everybody?

    Jan 2, 2018


    If the Universe Is Teeming With Aliens, Where Is Everybody? http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0057-2017-11-12.mp3

    0056: Julia Hu of Lark, an AI Health Coach

    Dec 19, 2017


    Julia Hu, founder and CEO of Lark, an AI health coach, is our guest this episode. Her tech is really cool and clearly making a positive difference in lots of people's lives right now. Longer term, she doesn't see much to worry about.

    0055: Sean Lane

    Dec 5, 2017


    Ted had a fascinating conversation with Sean Lane, founder and CEO of Crosschx.

    0054: Predictions of When

    Nov 21, 2017


    We often talk about how no one really knows when the singularity might happen (if it does), when human-level AI will exist (if ever), when we might see superintelligence, etc. Back in January, we made up a 3-number system for talking about our own predictions and asked our community on Facebook to play along […]

    0053: Listener Feedback

    Nov 7, 2017


    Great voice memos from listeners led to interesting conversations.

    0052: Paths to AGI #4: Robots Revisited

    Oct 24, 2017


    We continue our miniseries about paths to AGI. Sam Harris’s podcast about the nature of consciousness. Robot or Not podcast. See also: 0050: Paths to AGI #3: Personal Assistants; 0047: Paths to AGI #2: Robots; 0046: Paths to AGI #1: Tools. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0052-2017-10-08.mp3

    0051: Rodney Brooks Says Not To Worry

    Oct 10, 2017


    Rodney Brooks’s article: The Seven Deadly Sins of Predicting the Future of AI

    0050: Paths to AGI #3: Personal Assistants

    Sep 25, 2017


    Third in a series about the future of current narrow AIs.

    0049: After On by Rob Reid

    Sep 11, 2017


    Read After On by Rob Reid before you listen, or because you listen.

    0047: Paths to AGI #2: Robots

    Sep 5, 2017


    This is our 2nd episode thinking about possible paths to superintelligence, focusing on one kind of narrow AI each show. This episode is about embodiment and robots. It's possible we never really agreed about what we were talking about and need to come back to robots. Future ideas for this series include: personal assistants (Siri, Alexa, etc.), non-player characters, search engines (or maybe those just fall under tools), social networks or other big data working on a completely different time/size scale from humans, collective intelligence, simulation, whole-brain emulation, augmentation (computer/brain interface), and self-driving cars. See also: 0046: Paths to AGI #1: Tools. Robots learning to pick stuff up. Roomba mapping. https://youtu.be/iZhEcRrMA-M https://youtu.be/97hOaXJ5nGU https://youtu.be/tynDYRtRrag https://youtu.be/FUbpCuBLvWM

    0048: AI XPrize and Thrival Festival (special mini-episode)

    Aug 29, 2017


    For show notes, please see https://concerning.ai/2017/08/29/0048-ai-xprize-and-thrival-festival-special-mini-episode/

    0046: Paths to AGI #1: Tools

    Aug 22, 2017


    How might we get from today's narrow AIs to AGI? This episode's focus is tools.

    0045: We Enjoy Our Stories

    Aug 8, 2017


    Is all AI-involved science fiction the same?

    0044: Nexus Trilogy

    Jun 21, 2017


    We talked about the Nexus Trilogy of novels as a way to further our thinking about the wizard hat idea Tim Urban wrote about in his article about Elon Musk's Neuralink.

    0043: Not a Propeller Hat Episode

    Jun 5, 2017


    Are we living our lives as if AI were an existential threat?

    0042: Listener Feedback

    May 22, 2017


    Listener Feedback this episode

    0041: Can Neuralink Save Us?

    May 5, 2017


    Tim Urban's article at Wait But Why: Elon Musk's Neuralink and the Brain’s Magical Future

    0040: If it were superintelligent, it would be hard to argue with

    Apr 14, 2017


    Mostly a listener feedback episode. Lots of great stuff here!

    0039: We Need More Sparrow Fables

    Mar 31, 2017


    We need better language to talk about these difficult technical topics. See https://concerning.ai/2017/03/31/0039-we-need-more-sparrow-fables/ for notes.

    0038: We Don’t Want to Die

    Mar 17, 2017


    See https://concerning.ai/2017/03/17/0038-we-dont-want-to-die/

    0037: Listeners Gone Wild

    Mar 4, 2017


    Listener voicemail & comments: Eric’s voicemail; Evan’s comment (our interview with Evan: ep 0011: Evan Prodromou, AI practitioner (part 1), and ep 0012: Evan Prodromou, AI practitioner (part 2)); John’s comment (Ted got the author’s name wrong: Predictably Irrational is by Dan Ariely) (25:12). Moving on from feedback into what’s going to get us from here to there: Instantaneous Judgement (Stimulus-Response), Reinforcement […]

    0036: Baby You Can Drive My Car

    Feb 21, 2017


    Main topic of this show: Unexpected Consequences of Self Driving Cars by Rodney Brooks

    0035: New Water Story

    Feb 15, 2017


    What should our values be? Could "Life is Precious" replace the Consumption Story?

    Concerning AI: Episode XXXIV – A New Hope

    Feb 8, 2017


    Do we need to do philosophy on a deadline? Can AI help make us better humans?

    0033: Mind Game and the Curse of Dimensionality

    Jan 31, 2017


    Wind up your propeller hats! This one is a doozy. Hopefully someone can explain it to me (Ted).

    0032: Westworld

    Jan 25, 2017


    In which we talk about Westworld, among other things.

    0031: Listener Feedback

    Jan 16, 2017


    Too time constrained for show notes this time. If you want to send us notes to be added here, please do it! The best place to reach us is the Concerning AI group on Facebook. All of the listener feedback in this episode comes from that group. Thank you all! Subscribe in Overcast, iTunes, or through […]

    0030: Season 2, Episode 1

    Jan 11, 2017


    It's been a while since we recorded. What have we been up to?

    0029: I Disagree, Therefore I am

    Nov 21, 2016


    We recorded this episode on Nov 6, 2016, two days before the US election. Sorry it’s taken so long to get out. Also, no show notes due to need to simply get it published and avoid further delay. Enjoy! http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0029-2016-11-06.mp3

    0028: Food for Thought

    Oct 18, 2016


    Nick Bostrom’s Superintelligence. Fiction from Liu Cixin: The Three-Body Problem, The Dark Forest, Death’s End. We’re a lot more beautiful if we try. (5:41) The Upward Spiral. (9:45) Are we getting any wiser? (12:43) What are we trying for? To continue an aesthetic lineage. (13:55) Kurzweil. When the machines tell us they are human, […]

    0027: Listener Feedback and the Locality Effect

    Sep 29, 2016


    Korey’s comment: … one question you asked on ‘The Locality Principle’, was what other people are doing to avert a possible AIpocalypse; I’m starting a company! An entertainment venture with one driving purpose: to create a fully realized virtual world, one with as many complex and varied entities as the real world itself. … Links for […]

    0026: The Locality Principle

    Sep 22, 2016


    http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0026-2016-09-18.mp3 These notes won’t be much use without listening to the episode, but if you do listen, one or two of you might find one or two of them helpful. Lyle Cantor‘s comment (excerpt) in the Concerning AI Facebook group: Regarding the OpenAI strategy of making sure we have more than one superhuman AI, this is […]

    0025: The Concerning AI Summer is Over

    Sep 12, 2016


    Some things we talked about: companies developing narrow AI without giving one thought to AI safety, because just getting the thing to work at all is really hard; self-driving cars and how fast they’re progressing; the difference between OpenAI and MIRI in approach; weaponized offense and defense; Eliezer’s thought about that missing the point. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0025-2016-09-04.mp3

    0024: Simulation Revisited

    Jul 21, 2016


    No notes this time, just a speculative conversation about some possible implications of the idea that we could be living in a simulation. Subscribe in Overcast, iTunes, or through our feed. To get in touch with us, visit the Concerning AI Facebook group. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0024-2016-07-17.mp3 Post-Human Series by David Simpson

    0023: That Would Just Be Absurd

    Jun 10, 2016


    Are people better than robots?

    0022: A Few Useful Things to Know about Machine Learning

    May 29, 2016


    This episode, we talk about this paper: A Few Useful Things to Know about Machine Learning

    0021: Some of ’em wear dapper suits

    May 9, 2016


    We want robot surgeons, bus and taxi drivers and investment advisors. Do you?

    0020: AI in Fiction

    Apr 26, 2016


    Fiction is fun. And, we can't rely on it to help us figure out what's going to happen.
