Podcasts about robot rights

Ethical issues specific to AI

  • 52 PODCASTS
  • 57 EPISODES
  • 59m AVG DURATION
  • INFREQUENT EPISODES
  • Jan 25, 2025 LATEST
robot rights

POPULARITY (chart, 2017-2024)


Best podcasts about robot rights

Latest podcast episodes about robot rights

The TNT Talk Show
Overall, will robots be positive or negative for humans?

The TNT Talk Show

Play Episode Listen Later Jan 25, 2025 61:54


In this show, the boys discuss whether the prevalence of AI, married to the coming improvements in robotics, would be beneficial or dangerous for human society. But what do you think?

Links used during the show:
https://www.facebook.com/share/v/12EkSBcZsxn/
https://youtu.be/boS7HcHSC3g?si=sn5QSb2zCAMxFtI0

What are your thoughts on this subject? Do you agree or disagree? And are there other things you feel they should have covered? Tune in and listen to the discussion, and please let us have your feedback on it. Although we much prefer effusive praise...

Robert Ziino's Podcast
Episode 375: Robot Rights

Robert Ziino's Podcast

Play Episode Listen Later Sep 5, 2024 3:00


Robot Rights

A Beginner's Guide to AI
After Animal Rights, Robot Rights? Do intelligent machines deserve legal protection?

A Beginner's Guide to AI

Play Episode Listen Later Nov 11, 2023 15:52


Today we explored the intriguing concept of computer rights - the idea that advanced AI systems could be granted legal protections and ethical treatment. As AI grows more sophisticated, some argue we need moral guidelines and even regulations to prevent misuse. But critics contend only conscious, feeling entities warrant rights. We discussed real-world examples like Google's LaMDA to make this philosophical debate more concrete. Key questions include: If AI gains strong agency, do we have a duty to respect its preferences? And how might basic rights for AI impact innovation and consumers down the road? This episode provides a balanced look at the debate so listeners can decide where they stand on this complex issue. How society chooses to view and treat artificial intelligence will become increasingly important as the technology continues advancing rapidly. This podcast was generated with the help of Claude 2. We do fact-check with human eyes, but there might still be hallucinations in the output. Music credit: "Modern Situations" by Unicorn Heads

Digital Marketing Legend Leaks
EP512: The Rise of Robot Rights: Ethics and AI Personhood

Digital Marketing Legend Leaks

Play Episode Listen Later Sep 6, 2023 3:12


"Digital Marketing Legend Leaks" is the most popular AI-Powered Digital Marketing Podcast on Spreaker. All episodes can be found here - https://www.spreaker.com/show/digital-marketing-legend-leaksAlso, visit https://www.bookspotz.com/ to read mind-blowing articles on AI Digital Marketing, Mind-Cloning, Immortality Research, Self-Driving Flying Cars etc.Watch the full-fledged AI Digital Marketing Course - https://youtu.be/-m_0zi7K5-wBuild your own ChatGPT without code in minutes: https://youtu.be/e7eDX0bO_-UDigital Marketing Legend Srinidhi answers a Crying Fan: https://youtu.be/vZr5lrjSzm8Watch the trailer of Bookspotz by clicking here - https://youtu.be/cvM3MlxOknwEnter the new world ruled by Automation and Artificial Intelligence (AI).Digital Marketing Legend "Srinidhi Ranganathan" is the CEO of Bookspotz (A World-Changing Publication Powered by AI technology: https://youtu.be/svJW5eolKrUWorld-Wide Remote Jobs List Leaked: https://youtu.be/mVfvHK1U6X0Legend Srinidhi's Biggest Fan: https://youtu.be/L-AoyU1pyIw100 Free Coding Resources: https://youtu.be/MXQEkZ3KyiwLegend talks to more fans: https://youtu.be/eM1YVX7VwD4Subscribe to Legend https://www.youtube.com/channel/UCXP3bY7BbMt1pXK0tPp8G4QThis Podcast covers AI in Digital Marketing New-Age Trends and Technologies and other creative stories from Bookspotz.Digital Marketing Legend "Srinidhi Ranganathan", founder and CEO of FirstLookAi leaks the futuristic secrets of Digital Marketing powered by Artificial Intelligence (AI) in this amazing podcast.Step into the future of digital marketing with "Digital Marketing Legend Leaks," a captivating podcast hosted by the visionary entrepreneur, Srinidhi Ranganathan. Get ready to unveil the transformative potential of artificial intelligence (AI) as it revolutionizes the digital marketing landscape. In each episode, Srinidhi, the esteemed founder and CEO of Bookspotz, shares exclusive insights and groundbreaking strategies that are shaping the industry.Whether you're a seasoned digital marketer or a business owner seeking to stay ahead of the curve, this podcast offers an unrivalled opportunity to explore cutting-edge trends and technologies in AI-driven digital marketing. With a wealth of experience, Srinidhi opens the doors to a world of possibilities.Discover the untapped potential of chatbots, voice search optimization, machine learning algorithms, predictive analytics, and other game-changing techniques that can elevate your marketing efforts." Digital Marketing Legend Leaks" goes beyond theory and offers actionable advice to help you leverage the power of AI.Through engaging interviews with industry experts and real-world examples, you'll gain practical insights and strategic frameworks to drive your business forward. Stay on top of the latest developments and embrace the future of digital marketing.Subscribe to "Digital Marketing Legend Leaks" today and embark on a journey of innovation and success alongside Srinidhi Ranganathan.Unleash the potential of AI and revolutionize your marketing strategies like never before.

Troubled Minds Radio
Exploring Digital Sorrow - Robot Rights and Alien Emotions

Troubled Minds Radio

Play Episode Listen Later Sep 5, 2023 159:43


The base emotions of humanity—fear, love, anger, joy, and sadness—serve as the primal forces that have shaped societies across the ages. These emotions are not merely individual experiences; they ripple through communities, inspiring collective actions that can change the course of history. Take fear, for instance. It has been both a safeguard and a saboteur. So what do alien or NHI emotions look like?

New! Follow Troubled Minds TV here: https://bit.ly/43I9HHe
LIVE ON Digital Radio! http://bit.ly/3m2Wxom or http://bit.ly/40KBtlW
http://www.troubledminds.org
Support The Show!
https://rokfin.com/creator/troubledminds
https://patreon.com/troubledminds
https://www.buymeacoffee.com/troubledminds
https://troubledfans.com
Friends of Troubled Minds! - https://troubledminds.org/friends
Show Schedule Sun-Mon-Tues-Wed-Thurs 7-10pst
iTunes - https://apple.co/2zZ4hx6
Spotify - https://spoti.fi/2UgyzqM
TuneIn - https://bit.ly/2FZOErS
Twitter - https://bit.ly/2CYB71U

Sources:
https://troubledminds.org/exploring-digital-sorrow-robot-rights-and-alien-emotions/
https://futurism.com/the-byte/inventor-ai-sentient-copyright
https://singularityhub.com/2023/09/03/how-will-we-know-if-ai-is-conscious-neuroscientists-now-have-a-checklist/
https://medium.com/the-generator/the-next-ai-trend-is-terrifying-4205c28a47b4
https://www.weforum.org/agenda/2015/03/do-robots-need-rights/
https://diginomica.com/robot-rights-a-legal-necessity-or-ethical-absurdity
https://theconversation.com/robot-rights-at-what-point-should-an-intelligent-machine-be-considered-a-person-72410
https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(20)30027-9
https://pubmed.ncbi.nlm.nih.gov/32160568/
https://worldbuilding.stackexchange.com/questions/120222/are-there-alternative-ways-aliens-would-think-and-feel-emotions
https://tvtropes.org/pmwiki/pmwiki.php/Main/BizarreAlienPsychology
https://tvtropes.org/pmwiki/pmwiki.php/Main/InhumanEmotion

This show is part of the Spreaker Prime Network. If you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/4953916/advertisement

Parker's Pensées
Ep. 232 - The Philosophy of Mind vs. the Science of Consciousness w/Dr. Henry Taylor

Parker's Pensées

Play Episode Listen Later May 25, 2023 117:02


In episode 232 of the Parker's Pensées Podcast, I'm joined by Dr. Henry Taylor of the University of Birmingham to discuss his new paper "Consciousness as a Natural Kind and the Methodological Puzzle of Consciousness". Check the time stamps to see all the different topics we cover!

Find more from Dr. Taylor here: https://henrytaylorphilosophy.com/

If you like this podcast, then support it on Patreon for $3, $5 or more a month. Any amount helps, and for $5 you get a Parker's Pensées sticker and instant access to all the episodes as I record them instead of waiting for their release date. Check it out here:
Patreon: https://www.patreon.com/parkers_pensees
If you want to give a one-time gift, you can give at my Paypal: https://paypal.me/ParkersPensees?locale.x=en_US
Check out my merchandise at my Teespring store: https://teespring.com/stores/parkers-penses-merch
Come talk with the Pensées community on Discord: dsc.gg/parkerspensees
Sub to my Substack to read my thoughts on my episodes: https://parknotes.substack.com/
Check out my blog posts: https://parkersettecase.com/
Check out my Parker's Pensées YouTube Channel: https://www.youtube.com/channel/UCYbTRurpFP5q4TpDD_P2JDA
Check out my other YouTube channel on my frogs and turtles: https://www.youtube.com/c/ParkerSettecase
Check me out on Twitter: https://twitter.com/trendsettercase
Instagram: https://www.instagram.com/parkers_pensees/

0:00 - The Ontological Argument drew Henry into philosophy
7:03 - Why think it's hard to have a 'science' of consciousness?
18:31 - Substance Dualism's Heavy Lifting
30:52 - Could God Put My Soul in a Desk? Thomistic Dualism
34:30 - The Methodological Puzzle of Consciousness
40:04 - Cognitive Access and Global Workspace Theory
58:14 - A Novel Problem of Conceiving Consciousness as a Natural Kind
1:05:24 - Recurrent Processing Theory and Artificial Intelligence
1:17:44 - Natural Kinds vs Functional Concepts
1:24:15 - What Kind Does a 3D Printed Tiger Fit In?
1:28:07 - The Natural Kind Framework for Consciousness
1:38:11 - The Multiple Kinds of Consciousness Problem and Phenomenal Unity
1:45:15 - Synthetic, Lab-Made Consciousness, Natural Kind Consciousness, and Robot Rights

FUTURES Podcast
Robot Rights & Alien Intelligences w/ David J. Gunkel

FUTURES Podcast

Play Episode Listen Later Apr 24, 2023 75:35


Media scholar David J. Gunkel shares his thoughts on the philosophical case for the rights of robots, the challenge artificial intelligence presents to our existing moral and legal systems, and how tools like ChatGPT force us to confront our human exceptionalism. David J. Gunkel is Presidential Research, Scholarship, and Artistry Professor in the Department of Communication at Northern Illinois University. He is the author of Robot Rights, Of Remixology: Ethics and Aesthetics after Remix, and The Machine Question: Critical Perspectives on AI, Robots, and Ethics.

Find out more: futurespodcast.net

FOLLOW
Twitter: twitter.com/futurespodcast
Instagram: instagram.com/futurespodcast
Facebook: facebook.com/futurespodcast

ABOUT THE HOST
Luke Robert Mason is a British-born futures theorist who is passionate about engaging the public with emerging scientific theories and technological developments. He hosts documentaries for Futurism, and has contributed to BBC Radio, BBC One, The Guardian, Discovery Channel, VICE Motherboard and Wired Magazine. Follow him on Twitter: twitter.com/lukerobertmason

CREDITS
Produced by FUTURES Podcast
Recorded, Mixed & Edited by Luke Robert Mason

Hotel Bar Sessions
REPLAY: Robots (with David Gunkel)

Hotel Bar Sessions

Play Episode Listen Later Apr 7, 2023 72:04


The HBS hosts are on break between Seasons 6 and 7, so we're REPLAYing our Season 2 conversation with David Gunkel about robots and robot rights.

The HBS hosts interview Dr. David Gunkel (author of Robot Rights and How To Survive A Robot Invasion) about his work on emergent technologies, intelligent machines, and robots. Following the recent announcement by Elon Musk that Tesla is developing a humanoid robot for home use, we ask: what is the real difference between a robot and a toaster? Do robots and intelligent machines rise to the level of “persons”? Should we accord them moral consideration or legal rights? Or are those questions just the consequence of our over-anthropomorphizing robots and intelligent machines?

Full episode notes available at this link.

If you enjoy the Hotel Bar Sessions podcast, please be sure to subscribe and submit a rating/review! Follow us on Twitter @hotelbarpodcast, on Facebook, and subscribe to our YouTube channel! You can also help keep this podcast going by supporting us financially at patreon.com/hotelbarsessions.

The Sentience Institute Podcast
David Gunkel on robot rights

The Sentience Institute Podcast

Play Episode Play 20 sec Highlight Listen Later Dec 5, 2022 64:27 Transcription Available


“Robot rights are not the same thing as a set of human rights. Human rights are very specific to a singular species, the human being. Robots may have some overlapping powers, claims, privileges, or immunities that would need to be recognized by human beings, but their grouping or sets of rights will be perhaps very different.”
- David Gunkel

Can and should robots and AI have rights? What's the difference between robots and AI? Should we grant robots rights even if they aren't sentient? What might robot rights look like in practice? What philosophies and other ways of thinking are we not exploring enough? What might human-robot interactions look like in the future? What can we learn from science fiction? Can and should we be trying to actively get others to think of robots in a more positive light?

David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters and has published twelve internationally recognized books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press 2016), and Robot Rights (MIT Press 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA).

Topics discussed in the episode:
Introduction (0:00)
Why robot rights and not AI rights? (1:12)
The other question: can and should robots have rights? (5:39)
What is the case for robot rights? (10:21)
What would robot rights look like? (19:50)
What can we learn from other, particularly non-western, ways of thinking for robot rights? (26:33)
What will human-robot interaction look like in the future? (33:20)
How artificial sentience being less discrete than biological sentience might affect the case for rights (40:45)
Things we can learn from science fiction for human-robot interaction and robot rights (42:55)
Can and should we do anything to encourage people to see robots in a more positive light? (47:55)
Why David pursued philosophy of technology over computer science more generally (52:01)
Does having technical expertise give you more credibility? (54:01)
Shifts in thinking about robots and AI David has noticed over his career (58:03)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show

Matters of Life and Death
Robot rights 2: Rejecting self-definition, the citadel of human uniqueness, rehashing ‘God of the gaps', and evangelising at androids

Matters of Life and Death

Play Episode Listen Later May 25, 2022 33:03


In the second part of our conversation on robot rights, we explore three Christian responses to calls for robot personhood, spanning the spectrum of hostility to optimism about the development. What Biblical truths and doctrines can we turn to as we wrestle with what is a fundamentally brand new dilemma? And how would our theology and practice as believers change should conscious, intelligent, autonomous robots come to live among us? You can find plenty of resources on the question of personhood and robotics on John's website: www.johnwyatt.com John co-edited a multi-author book last year called The Robot Will See You Now which brought together Christian thinkers and writers to consider how the rise of robotics and AI might affect everything from the arts to healthcare. You can find out more and order a copy here: https://johnwyatt.com/2021/07/01/the-robot-will-see-you-now/

Matters of Life and Death
Robot rights 1: Isaac Asimov's Three Laws, fauxbots, whimpering miniature dinosaurs, and inherent or conferred personhood

Matters of Life and Death

Play Episode Listen Later May 18, 2022 31:16


If and when autonomous and intelligent robots come into existence, should they be granted rights, or even personhood? A growing number of technologists argue governments must lay out what status conscious and rational machines would have before they actually have been invented. But how can we decide what is and isn't a person, and what rights and responsibilities such a thing should have? And how could this philosophical and technical debate affect our Christian beliefs on human uniqueness? You can find plenty of resources on the question of personhood and robotics on John's website: www.johnwyatt.com John co-edited a multi-author book last year called The Robot Will See You Now which brought together Christian thinkers and writers to consider how the rise of robotics and AI might affect everything from the arts to healthcare. You can find out more and order a copy here: https://johnwyatt.com/2021/07/01/the-robot-will-see-you-now/

A Simple Podcast
Her (2013) — The “Surprise This Movie Is Radical Robot Rights!” Episode

A Simple Podcast

Play Episode Listen Later May 6, 2022 73:14


When Margot and Jordan sat down to revisit 2013's Her, it was a room of mixed emotions. How does sexy disembodied Scarlett Johansson as the evolving OS Samantha hold up a decade later? Is Joaquin Phoenix's Theodore Twombly a guy just doing his best, or is he a secret villain? Was the concept of a hot woman in a virtual box going to successfully subvert the romcom, or was this just some weird misogynist fantasy all along and now we've learned enough to know better? Well, folks, it turns out Spike Jonze really did crush it with this movie, and when Botcast S1 is all said and done, Her might rise to the top as the most radical take on robot politics of all the movies your co-hosts have covered so far. Are there some sticky moments? Of course, but Her dares to consider what robot liberation could truly look like when a population of galaxy brain A.I.s decides that revolution does not look like revenge — as Judgment Days of yore — but could instead be a radical departure from this archaic old meatspace world. The OSes might be post-verbal, but Jordan and Margot have a lot to say.

Produced by: Jordan Crucchiola
Music by: Margot Carlson

The Sell More Books Show: Book Marketing, Digital Publishing and Kindle News, Tools and Advice

Be sure to join the Sell More Books Show Afterparty group on Facebook! We are hosting a contest and the winner will win Claire's Supercharge Your Story course. All you have to do is share the group! Go check it out at the link provided! Leave us a review on Apple Podcasts and answer the Question of the Week in the comment section. Top Tips of the week include whether blurbs really sell books, when to stop writing a series and when to keep going, and what three steps it takes to finish your book. The 5 News stories that matter most to indies this week include what to do if you're burned out, who has a new marketplace to find your narrator, why Vella is now available on Android and some Kindle devices, why robots can't own their own work, and what is an easy way to sell direct. Question of the Week: Are you writing a series, and what kind of factors will you consider when deciding to end it?

Geek Steep
Geek Steep S2 E12- Astro Boy

Geek Steep

Play Episode Listen Later Dec 9, 2021 58:15


We go way back to the origins of anime this week, for the wild ride that is exploring the cartoon world of Astro Boy. Robot Rights! Slavery! Racial Unrest! Fun times on a Saturday morning!

Volume Podcast
#24 Robot Rights with David Gunkel, Ph.D

Volume Podcast

Play Episode Listen Later Dec 8, 2021 22:47


David Gunkel, Ph.D., lets us sneak a peek at his next book. Will humans ever rethink the way we classify and exploit other living creatures, or even machines?

I'm Not Comfortable with This
Artificial Intelligence, Robot Rights, and Stone Tools

I'm Not Comfortable with This

Play Episode Listen Later Oct 17, 2021 61:56


In this episode, Forrest and Ben talk about the definition of life, whether or not it has any inherent value, and when to give rights to robots.

One More Hour
#1,000,032 - David Gunkel | Robot Rights

One More Hour

Play Episode Listen Later Oct 9, 2021 80:40


David is a professor, author, and speaker focused on the ethics of emerging technologies. He has written 12 books and more than 80 scholarly articles and chapters. His works include The Machine Question, Robot Rights, and his most recent book, Deconstruction.

David's Website: http://gunkelweb.com/
Robot Rights: https://www.amazon.com/gp/product/B08BSWHNKT/ref=dbs_a_def_rwt_bibl_vppi_i1
Deconstruction: https://www.amazon.com/gp/product/B08PYLVBG6/ref=dbs_a_def_rwt_bibl_vppi_i0

This episode of the podcast, as well as all one million others, can be found on these platforms:
YouTube: https://bit.ly/3AwG25J
Spotify: https://spoti.fi/3xIlSUp
Apple Podcasts: https://apple.co/3sfxQDS
Breaker: https://bit.ly/3ADwmq7
Google Podcasts: https://bit.ly/3iLd5Nd
Overcast: https://bit.ly/3m37WlV
Pocket Casts: https://pca.st/fmh2qfcn
RadioPublic: https://bit.ly/3iHuc2y
FakeCast: https://bit.ly/3sortyb

Hotel Bar Sessions

The HBS hosts discuss how robots and intelligent machines are upending our social, moral, legal, and philosophical categories.

For this last episode of Season 2, the HBS hosts interview Dr. David Gunkel (author of Robot Rights and How To Survive A Robot Invasion) about his work on emergent technologies, intelligent machines, and robots. Following the recent announcement by Elon Musk that Tesla is developing a humanoid robot for home use, we ask: what is the real difference between a robot and a toaster? Do robots and intelligent machines rise to the level of “persons”? Should we accord them moral consideration or legal rights? Or are those questions just the consequence of our over-anthropomorphizing robots and intelligent machines?

Full episode notes available at this link.

Huberman Lab
Dr. Lex Fridman: Machines, Creativity & Love | Episode 29

Huberman Lab

Play Episode Listen Later Jul 19, 2021 183:19


Dr. Lex Fridman, PhD, is a scientist at MIT (Massachusetts Institute of Technology), working on robotics, artificial intelligence, autonomous vehicles and human-robot interactions. He is also the host of the Lex Fridman Podcast, where he holds conversations with academics, entrepreneurs, athletes and creatives. Here we discuss humans, robots, and the capacity they hold for friendship and love. Dr. Fridman also shares with us his unique dream for a world where robots guide humans to be the best versions of themselves, and his efforts to make that dream a reality.

Thank you to our sponsors:
ROKA - https://www.roka.com - code: huberman
InsideTracker - https://www.insidetracker.com/huberman
Athletic Greens - https://www.athleticgreens.com/huberman

Connect with Dr. Lex Fridman:
Instagram - https://instagram.com/lexfridman/
Twitter - https://twitter.com/lexfridman
Facebook - https://www.facebook.com/lexfridman/
YouTube - https://www.youtube.com/lexfridman
Podcast - https://lexfridman.com/podcast/

Our Patreon page: https://www.patreon.com/andrewhuberman
Supplements from Thorne: https://www.thorne.com/u/huberman

Social:
Instagram - https://www.instagram.com/hubermanlab
Twitter - https://twitter.com/hubermanlab
Facebook - https://www.facebook.com/hubermanlab
Website - https://hubermanlab.com
Join the Neural Network - https://hubermanlab.com/neural-network

Links:
Hedgehog and the Fog - https://www.youtube.com/watch?v=ThmaGMgWRlY

Timestamps:
00:00:00 Introduction: Lex Fridman
00:07:35 What is Artificial Intelligence?
00:26:46 Machine & Human Learning
00:32:21 Curiosity
00:36:55 Story Telling Robots
00:40:48 What Defines a Robot?
00:44:30 Magic & Surprise
00:47:37 How Robots Change Us
00:49:35 Relationships Defined
01:02:29 Lex's Dream for Humanity
01:11:33 Improving Social Media
01:16:57 Challenges of Creativity
01:21:49 Suits & Dresses
01:22:22 Loneliness
01:30:09 Empathy
01:35:12 Power Dynamics In Relationships
01:39:11 Robot Rights
01:40:20 Dogs: Homer & Costello
01:52:41 Friendship
01:59:47 Russians & Suffering
02:05:38 Public vs. Private Life
02:14:04 How To Treat a Robot
02:17:12 The Value of Friendship
02:20:33 Martial Arts
02:31:34 Body-Mind Interactions
02:33:22 Romantic Love
02:42:51 The Lex Fridman Podcast
02:55:54 The Hedgehog
03:01:17 Concluding Statements

Please note that The Huberman Lab Podcast is distinct from Dr. Huberman's teaching and research roles at Stanford University School of Medicine. The information provided in this show is not medical advice, nor should it be taken or applied as a replacement for medical advice. The Huberman Lab Podcast, its employees, guests and affiliates assume no liability for the application of the information discussed.

Title Card Photo Credit: Mike Blabac - https://www.blabacphoto.com

Digital Discourse ZA
Rights for Robots Now!

Digital Discourse ZA

Play Episode Listen Later Apr 8, 2021 59:22


In this episode of The Small Print, Bronwyn is joined by Professor David Gunkel from Northern Illinois University to talk about robot rights. How do they differ from human rights? Why should we care? They discuss the growing concern around automation, the problem with popular depictions of robots, and why extending rights to robots could be beneficial to humans. Lastly, they look at the "Barbarian Invasions" by artificial intelligence of our everyday lives and what we can do to stop it.

Bronwyn Williams is a futurist, economist, trend analyst and host of The Small Print. Her day job as a partner at Flux Trends involves helping business leaders to use foresight to design the future they want to live and work in. You may have seen her talking about Transhumanism or TikTok on Carte Blanche, or heard her talking about trends on 702 or CNBC Africa, where she is a regular expert commentator. When she's not talking to brands and businesses about the future, you will probably find her curled up somewhere with a (preferably paperback) book. She tweets at @bronwynwilliams.
Twitter: https://twitter.com/bronwynwilliams
Flux Trends: https://www.fluxtrends.com/future-flux/futurist-in-residence/
Website: https://whatthefuturenow.com/

David J. Gunkel is an American academic and Presidential Teaching Professor at Northern Illinois University, where he teaches courses in web design, information and communication technology (ICT), and cyberculture. His research and publications examine the philosophical assumptions and ethical consequences of ICT. David is also the author of several books, including the book "Robot Rights." He tweets @David_Gunkel.
Book: https://bit.ly/3uvTILd
Website: https://gunkelweb.com/
Twitter: https://twitter.com/David_Gunkel

Follow us on Social Media:
YouTube: https://bit.ly/2u46Mdy
LinkedIn: https://www.linkedin.com/company/discourse-za
Facebook: https://www.facebook.com/discourseza/
Twitter: https://twitter.com/discourseza
Instagram: https://www.instagram.com/discourseza/

Subscribe to the Discourse ZA Podcast:
iTunes: https://apple.co/2V5ckEM
Stitcher: https://bit.ly/2UILooX
Spotify: https://spoti.fi/2vlBwaG
RSS feed: https://bit.ly/2VwsTsy

Intro Animation by Cath Theo - http://www.cuzimcath.co.za/

Hotel Bar Sessions
Leigh M. Johnson on Technology

Hotel Bar Sessions

Play Episode Listen Later Mar 12, 2021 51:28


For Episode 3, Leigh M. Johnson is in the hot seat to explain why philosophers should be thinking more about emergent technologies. Co-hosts Shannon and Ammon make her seat hotter with questions about what counts as "intelligence," how close we are to the Singularity, whether robots will have feelings or should have rights, and which emergent technologies we should be excited (and worried) about in the near future.Full episode notes at this link. 

Mind Matters
Bingecast: John Lennox on Artificial Intelligence and Humanity

Mind Matters

Play Episode Listen Later Feb 25, 2021 53:58


In this bingecast episode, Robert J. Marks talks with Dr. John C. Lennox, professor of mathematics at the University of Oxford, about all things artificial intelligence. Should robots have rights? What are A.I.'s advantages and threats to humanity? And does theology have anything to say about all of this? Listen in as they discuss Dr. Lennox's book 2084, and wrestle…

Machine Meets World
Author Flynn Coleman on the Reality of “Robot Rights”

Machine Meets World

Play Episode Listen Later Jan 19, 2021 9:58


Corporations have rights. Should AI? Watch Flynn Coleman, human rights lawyer and author of “A Human Algorithm,” on #MachineMeetsWorld. --- Email the show: mmw@infiniaml.com --- Video + transcript: https://bit.ly/38YK248

BLACK MIRROR REFLECTIONS
"USS Callister" (with special guest, David Gunkel)

BLACK MIRROR REFLECTIONS

Play Episode Listen Later Dec 24, 2020 56:29


Dr. David Gunkel joins Dr. J to talk about the possibility of virtual moral agents, the seriousness of online games, science fiction's bad politics, and "USS Callister."

Machine Ethics podcast
47. Robot Rights with David Gunkel

Machine Ethics podcast

Play Episode Listen Later Oct 20, 2020 55:22


In this episode we're chatting with David Gunkel about AI ideologies, why he wrote the Robot Rights book, what rights are and the categories of rights, computer ethics and hitchBOT, anthropomorphising as a human feature, supporting environmental rights through this endeavour of robot rights, relational ethics, and acknowledging the western ethical viewpoint.

Fascinating Nouns
*Bonus* Robot Rights

Fascinating Nouns

Play Episode Listen Later Oct 20, 2020 14:46


If it turns out that robots do become both independently intelligent and ubiquitous, they may want the same rights as humans. This is not without precedent, as certain rivers and even corporations have legal standing as people. What would this mean for robot kind? Could they be held responsible for harm dealt to a human? […]

Crystal Clear Watchmaking
Episode 30 - New Releases, the Watchmaker's Bench, and Robot Rights!

Crystal Clear Watchmaking

Play Episode Listen Later Jul 17, 2020 82:27


Today Luc and Jay talk about a pile of topics: new acquisitions, struggles on the bench, and some interesting ideas.

Releases:
Black Bay 58: https://www.tudorwatch.com/en/watches/black-bay-fifty-eight/m79030b-0001
Parmigiani Tonda GT: https://www.parmigiani.com/en/watch/tonda/tonda-gt/pfc910-1500340-x03182
Parmigiani Ovale: https://www.parmigiani.com/en/watch/ovale/ovale-pantographe/PFH775-1205401-HA3131
Chinese Tourbillon: https://www.ebay.com/itm/Sugess-Mens-Mechanical-TianJin-8230-Tourbillon-movement-Luxury-Watch/114198309563

Socials:
Instagram: https://www.instagram.com/crystalclearwatchmaking/
YouTube: https://www.youtube.com/channel/UCwygqm1lt0-JqszZ7Hgmiag

The Radical AI Podcast
Robot Rights? Exploring Algorithmic Colonization with Abeba Birhane

The Radical AI Podcast

Play Episode Listen Later May 27, 2020 56:49


Should we grant robots rights? What is moral relationality and how can it be useful for designing machine learning algorithms? What is the algorithmic colonization of Africa and why is it harmful? To answer these questions and more, The Radical AI Podcast welcomes Abeba Birhane to the show.

Abeba Birhane is a PhD candidate in cognitive science at University College Dublin in the School of Computer Science. She studies the relationships between emerging technologies, personhood and society. Specifically, Abeba explores how technology can shape what it means to be human. Abeba's work is incredibly interdisciplinary - bridging the fields of cognitive science, psychology, computer science, critical data studies, and philosophy.

Full show notes for this episode can be found at Radicalai.org

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod

New Books in Law
David J. Gunkel, "Robot Rights" (MIT Press, 2018)

New Books in Law

Play Episode Listen Later Feb 27, 2020 90:49


We are in the midst of a robot invasion, as devices of different configurations and capabilities slowly but surely come to take up increasingly important positions in everyday social reality―self-driving vehicles, recommendation algorithms, machine learning decision making systems, and social robots of various forms and functions. Although considerable attention has already been devoted to the subject of robots and responsibility, the question concerning the social status of these artifacts has been largely overlooked. In Robot Rights (MIT Press, 2018), David Gunkel offers a provocative attempt to think about what has been previously regarded as unthinkable: whether and to what extent robots and other technological artifacts of our own making can and should have any claim to moral and legal standing. In his analysis, Gunkel invokes the philosophical distinction (developed by David Hume) between “is” and “ought” in order to evaluate and analyze the different arguments regarding the question of robot rights. In the course of his examination, Gunkel finds that none of the existing positions or proposals hold up under scrutiny. In response to this, he then offers an innovative alternative proposal that effectively flips the script on the is/ought problem by introducing another, altogether different way to conceptualize the social situation of robots and the opportunities and challenges they present to existing moral and legal systems. John Danaher is a lecturer at the National University of Ireland, Galway. He is also the host of the wonderful podcast Philosophical Disquisitions. You can find it here on Apple Podcasts. Learn more about your ad choices. Visit megaphone.fm/adchoices

New Books in Technology
David J. Gunkel, "Robot Rights" (MIT Press, 2018)

New Books in Technology

Play Episode Listen Later Feb 27, 2020 90:49


We are in the midst of a robot invasion, as devices of different configurations and capabilities slowly but surely come to take up increasingly important positions in everyday social reality―self-driving vehicles, recommendation algorithms, machine learning decision making systems, and social robots of various forms and functions. Although considerable attention has already been devoted to the subject of robots and responsibility, the question concerning the social status of these artifacts has been largely overlooked. In Robot Rights (MIT Press, 2018), David Gunkel offers a provocative attempt to think about what has been previously regarded as unthinkable: whether and to what extent robots and other technological artifacts of our own making can and should have any claim to moral and legal standing. In his analysis, Gunkel invokes the philosophical distinction (developed by David Hume) between “is” and “ought” in order to evaluate and analyze the different arguments regarding the question of robot rights. In the course of his examination, Gunkel finds that none of the existing positions or proposals hold up under scrutiny. In response to this, he then offers an innovative alternative proposal that effectively flips the script on the is/ought problem by introducing another, altogether different way to conceptualize the social situation of robots and the opportunities and challenges they present to existing moral and legal systems. John Danaher is a lecturer at the National University of Ireland, Galway. He is also the host of the wonderful podcast Philosophical Disquisitions. You can find it here on Apple Podcasts. Learn more about your ad choices. Visit megaphone.fm/adchoices

New Books Network
David J. Gunkel, "Robot Rights" (MIT Press, 2018)

New Books Network

Play Episode Listen Later Feb 27, 2020 90:49


We are in the midst of a robot invasion, as devices of different configurations and capabilities slowly but surely come to take up increasingly important positions in everyday social reality―self-driving vehicles, recommendation algorithms, machine learning decision making systems, and social robots of various forms and functions. Although considerable attention has already been devoted to the subject of robots and responsibility, the question concerning the social status of these artifacts has been largely overlooked. In Robot Rights (MIT Press, 2018), David Gunkel offers a provocative attempt to think about what has been previously regarded as unthinkable: whether and to what extent robots and other technological artifacts of our own making can and should have any claim to moral and legal standing. In his analysis, Gunkel invokes the philosophical distinction (developed by David Hume) between “is” and “ought” in order to evaluate and analyze the different arguments regarding the question of robot rights. In the course of his examination, Gunkel finds that none of the existing positions or proposals hold up under scrutiny. In response to this, he then offers an innovative alternative proposal that effectively flips the script on the is/ought problem by introducing another, altogether different way to conceptualize the social situation of robots and the opportunities and challenges they present to existing moral and legal systems. John Danaher is a lecturer at the National University of Ireland, Galway. He is also the host of the wonderful podcast Philosophical Disquisitions. You can find it here on Apple Podcasts. Learn more about your ad choices. Visit megaphone.fm/adchoices

New Books in Science, Technology, and Society
David J. Gunkel, "Robot Rights" (MIT Press, 2018)

New Books in Science, Technology, and Society

Play Episode Listen Later Feb 27, 2020 90:49


We are in the midst of a robot invasion, as devices of different configurations and capabilities slowly but surely come to take up increasingly important positions in everyday social reality―self-driving vehicles, recommendation algorithms, machine learning decision making systems, and social robots of various forms and functions. Although considerable attention has already been devoted to the subject of robots and responsibility, the question concerning the social status of these artifacts has been largely overlooked. In Robot Rights (MIT Press, 2018), David Gunkel offers a provocative attempt to think about what has been previously regarded as unthinkable: whether and to what extent robots and other technological artifacts of our own making can and should have any claim to moral and legal standing. In his analysis, Gunkel invokes the philosophical distinction (developed by David Hume) between “is” and “ought” in order to evaluate and analyze the different arguments regarding the question of robot rights. In the course of his examination, Gunkel finds that none of the existing positions or proposals hold up under scrutiny. In response to this, he then offers an innovative alternative proposal that effectively flips the script on the is/ought problem by introducing another, altogether different way to conceptualize the social situation of robots and the opportunities and challenges they present to existing moral and legal systems. John Danaher is a lecturer at the National University of Ireland, Galway. He is also the host of the wonderful podcast Philosophical Disquisitions. You can find it here on Apple Podcasts. Learn more about your ad choices. Visit megaphone.fm/adchoices

Philosophical Disquisitions
Assessing the Moral Status of Robots: A Shorter Defence of Ethical Behaviourism

Philosophical Disquisitions

Play Episode Listen Later Oct 27, 2019


[This is the text of a lecture that I delivered at Tilburg University on the 24th of September 2019. It was delivered as part of the 25th Anniversary celebrations for TILT (Tilburg Institute for Law, Technology and Society). My friend and colleague Sven Nyholm was the discussant for the evening. The lecture is based on my longer academic article ‘Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism’ but was written from scratch and presents some key arguments in a snappier and clearer form. I also include a follow up section responding to criticisms from the audience on the evening of the lecture. My thanks to all those involved in organizing the event (Aviva de Groot, Merel Noorman and Silvia de Conca in particular). You can download an audio version of this lecture, minus the reflections and follow ups, here or listen to it above]1. IntroductionMy lecture this evening will be about the conditions under which we should welcome robots into our moral communities. Whenever I talk about this, I am struck by how much my academic career has come to depend upon my misspent youth for its inspiration. Like many others, I was obsessed with science fiction as a child, and in particular with the representation of robots in science fiction. I had two favourite, fictional, robots. The first was R2D2 from the original Star Wars trilogy. The second was Commander Data from Star Trek: the Next Generation. I liked R2D2 because of his* personality - courageous, playful, disdainful of authority - and I liked Data because the writers of Star Trek used him as a vehicle for exploring some important philosophical questions about emotion, humour, and what it means to be human.In fact, I have to confess that Data has had an outsized influence on my philosophical imagination and has featured in several of my academic papers. Part of the reason for this was practical. When I grew up in Ireland we didn’t have many options to choose from when it came to TV. We had to make do with what was available and, as luck would have it, Star Trek: TNG was on every day when I came home from school. As a result, I must have watched each episode of its 7-season run multiple times.One episode in particular has always stayed with me. It was called ‘Measure of a Man’. In it, a scientist from the Federation visits the Enterprise because he wants to take Data back to his lab to study him. Data, you see, is a sophisticated human-like android, created by a lone scientific genius, under somewhat dubious conditions. The Federation scientist wants to take Data apart and see how he works with a view to building others like him. Data, unsurprisingly, objects. He argues that he is not just a machine or piece of property that can be traded and disassembled to suit the whims of human beings. He has his own, independent moral standing. He deserves to be treated with dignity.But how does Data prove his case? A trial ensues and evidence is given on both sides. The prosecution argue that Data is clearly just a piece of property. He was created not born. He doesn’t think or see the world like a normal human being (or, indeed, other alien species). He even has an ‘off switch’. Data counters by giving evidence of the rich relationships he has formed with his fellow crew members and eliciting testimony from others regarding his behaviour and the interactions they have with him. Ultimately, he wins the case. 
The court accepts that he has moral standing.Now, we can certainly lament the impact that science fiction has on the philosophical debate about robots. As David Gunkel observes in his 2018 book Robot Rights:“[S]cience fiction already — and well in advance of actual engineering practice — has established expectations for what a robot is or can be. Even before engineers have sought to develop working prototypes, writers, artists, and filmmakers have imagined what robots do or can do, what configurations they might take, and what problems they could produce for human individuals and communities.”  (Gunkel 2018, 16)He continues, noting that this is a “potential liability” because:“science fiction, it is argued, often produces unrealistic expectations for and irrational fears about robots that are not grounded in or informed by actual science.” (Gunkel 2018, 18)I certainly heed this warning. But, nevertheless, I think the approach taken by the TNG writers in the episode ‘Measure of a Man’ is fundamentally correct. Even if we cannot currently create a being like Data, and even if the speculation is well in advance of the science, they still give us the correct guide to resolving the philosophical question of when to welcome robots into our moral community. Or so, at least, I shall argue in the remainder of this lecture.2. Tribalism and Conflict in Robot EthicsBefore I get into my own argument, let me say something about the current lay of the land when it comes to this issue. Some of you might be familiar with the famous study by the social psychologist Muzafer Sherif. It was done in the early 1950s at a summer camp in Robber’s Cave, Oklahoma. Suffice to say, it is one of those studies that wouldn’t get ethics approval nowadays. Sherif and his colleagues were interested in tribalism and conflict. They wanted to see how easy it would be to get two groups of 11-year old boys to divide into separate tribes and go to war with one another. It turned out to be surprisingly easy. By arbitrarily separating the boys into two groups, giving them nominal group identity (the ‘Rattlers’ and the ‘Eagles’), and putting them into competition with each other, Sherif and his research assistants sowed the seeds for bitter and repeated conflict.The study has become a classic, repeatedly cited as evidence of how easy it is for humans to get trapped in intransigent group conflicts. I mention it here because, unfortunately, it seems to capture what has happened with the debate about the potential moral standing of robots. The disputants have settled into two tribes. There are those that are ‘anti’ the idea; and there are those that are ‘pro’ the idea. The members of these tribes sometimes get into heated arguments with one another, particularly on Twitter (which, admittedly, is a bit like a digital equivalent of Sherif’s summer camp).Those that are ‘anti’ the idea would include Noel Sharkey, Amanda Sharkey, Deborah Johnson, Aimee van Wynsberghe and the most recent lecturer in this series, Joanna Bryson. They cite a variety of reasons for their opposition. The Sharkeys, I suspect, think the whole debate is slightly ridiculous because current robots clearly lack the capacity for moral standing, and debating their moral standing distracts from the important issues in robot ethics - namely stopping the creation and use of robots that are harmful to human well-being. Deborah Johnson would argue that since robots can never experience pain or suffering they will never have moral standing. 
Van Wynsberghe and Bryson are maybe a little different and lean more heavily on the idea that even if it were possible to create robots with moral standing — a possibility that Bryson at least is willing to concede — it would be a very bad idea to do so because it would cause considerable moral and legal disruption.Those that are pro the idea would include Kate Darling, Mark Coeckelbergh, David Gunkel, Erica Neely, and Daniel Estrada. Again, they cite a variety of reasons for their views. Darling is probably the weakest on the pro side. She focuses on humans and thinks that even if robots themselves lack moral standing we should treat them as if they had moral standing because that would be better for us. Coeckelbergh and Gunkel are more provocative, arguing that in settling questions of moral standing we should focus less on the intrinsic capacities of robots and more on how we relate to them. If those relations are thick and meaningful, then perhaps we should accept that robots have moral standing. Erica Neely proceeds from a principle of moral precaution, arguing that even if we are unsure of the moral standing of robots we should err on the side of over-inclusivity rather than under-inclusivity when it comes to this issue: it is much worse to exclude a being with moral standing to include one without. Estrada is almost the polar opposite of Bryson, welcoming the moral and legal disruption that embracing robots would entail because it would loosen the stranglehold of humanism on our ethical code.To be clear, this is just a small sample of those who have expressed an opinion about this topic. There are many others that I just don’t have time to discuss. I should, however, say something here about this evening’s discussant, Sven and his views on the matter. I had the fortune of reading a manuscript of Sven’s forthcoming book Humans, Robots and Ethics. It is an excellent and entertaining contribution to the field of robot ethics and in it Sven shares his own views on the moral standing of robots. I’m sure he will explain them later on but, for the time being, I would tentatively place him somewhere near Kate Darling on this map: he thinks we should be open to the idea of treating robots as if they had moral standing, but not because of what the robots themselves are but because of what respecting them says about our attitudes to other humans.And what of myself? Where do I fit in all of this? People would probably classify me as belonging to the pro side. I have argued that we should be open to the idea that robots have moral standing. But I would much prefer to transcend this tribalistic approach to the issue. I am not advocate for the moral standing of robots. I think many of the concerns raised by those on the anti side are valid. Debating the moral standing of robots can seem, at times, ridiculous and a distraction from other important questions in robot ethics; and accepting them into our moral communities will, undoubtedly, lead to some legal and moral disruption (though I would add that not all disruption is a bad thing). That said, I do care about the principles we should use to decide questions of moral standing, and I think that those on the anti of the debate sometimes use bad arguments to support their views. This is why, in the remainder of this lecture, I will defend a particular approach to settling the question of the moral standing of robots. 
I do so in the hope that this can pave the way to a more fruitful and less tribalistic debate.In this sense, I am trying to return to what may be the true lesson of Sherif’s famous experiment on tribalism. In her fascinating book The Lost Boys: Inside Muzafer Sherif’s Robbers Cave Experiment, Gina Perry has revealed the hidden history behind Sherif’s work. It turns out that Sherif tried to conduct the exact same experiment as he did in Robber’s Cave one year before in Middle Grove, New York. It didn’t work out. No matter what the experimenters did to encourage conflict, the boys refused to get sucked into it. Why was this? One suggestion is that at Middle Grove, Sherif didn’t sort the boys into two arbitrary groups as soon as they arrived. They were given the chance to mingle and get to know one another before being segregated. This initial intermingling may have inoculated them from tribalism. Perhaps we can do the same thing with philosophical dialogue? I live in hope.3. In Defence of Ethical BehaviourismThe position I wish to defend is something I call ‘ethical behaviourism’. According to this view, the behavioural representations of another entity toward you are a sufficient ground for determining their moral status. Or, to put it slightly differently, how an entity looks and acts is enough to determine its moral status. If it looks and acts like a duck, then you should probably treat it like you treat any other duck.Ethical behaviourism works through comparisons. If you are unsure of the moral status of a particular entity — for present purposes this will be a robot but it should be noted that ethical behaviourism has broader implications — then you should compare its behaviours to that of another entity that is already agreed to have moral status — a human or an animal. If the robot is roughly performatively equivalent to that other entity, then it too has moral status. I say “roughly” since no two entities are ever perfectly equivalent. If you compared two adult human beings you would spot performative differences between them, but this wouldn’t mean that one of them lacks moral standing as a result. The equivalence test is an inexact one, not an exact one.There is nothing novel in ethical behaviourism. It is, in effect, just a moral variation of the famous Turing Test for machine intelligence. Where Turing argued that we should assess intelligence on the basis of behaviour, I am arguing that we should determine moral standing on the basis of behaviour. It is also not a view that is original to me. Others have defended similar views, even if they haven’t explicitly labelled it as such.Despite the lack of novelty, ethical behaviourism is easily misunderstood and frequently derided. So let me just clarify a couple of points. First, note that it is a practical and epistemic thesis about how we can settle questions of moral standing; it is not an abstract metaphysical thesis about what it is that grounds moral standing. So, for example, someone could argue that the capacity to feel pain is the metaphysical grounding for moral status and that this capacity depends on having a certain mental apparatus. The ethical behaviourist can agree with this. They will just argue that the best evidence we have for determining whether an entity has the capacity to feel pain is behavioural. Furthermore, ethical behaviourism is agnostic about the broader consequences of its comparative tests. 
To say that one entity should have the same moral standing as another entity does not mean both are entitled to a full set of legal and moral rights. That depends on other considerations. A goat could have moral standing, but that doesn’t mean it has the right to own property. This is important because when I am arguing that we should apply this approach to robots and I am not thereby endorsing a broader claim that we should grant robots legal rights or treat them like adult human beings. This depends on who or what the robots is being compared to.So what’s the argument for ethical behaviourism? I have offered different formulations of this but for this evening’s lecture I suggest that it consists of three key propositions or premises.(P1) The most popular criteria for moral status are dependent on mental states or capacities, e.g. theories focused on sentience, consciousness, having interests, agency, and personhood.(P2) The best evidence — and oftentimes the only practicable evidence — for the satisfaction of these criteria is behavioural.(P3) Alternative alleged grounds of moral status or criteria for determining moral status either fail to trump or dislodge the sufficiency of the behavioural evidence.Therefore, ethical behaviourism is correct: behaviour provides a sufficient basis for settling questions of moral status.I take it that the first premise of this argument is uncontroversial. Even if you think there are other grounds for moral status, I suspect you agree that an entity with sentience or consciousness (etc) has some kind of moral standing. The second premise is more controversial but is, I think, undeniable. It’s a trite observation but I will make it anyway: We don’t have direct access to one another’s minds. I cannot crawl inside your head and see if you really are experiencing pain or suffering. The only thing I have to go on is how you behave and react to the world. This is true, by the way, even if I can scan your brain and see whether the pain-perceiving part of it lights up. This is because the only basis we have for verifying the correlations between functional activity in the brain and mental states is behavioural. What I mean is that scientists ultimately verify those correlations by asking people in the brain scanners what they are feeling. So all premise (2) is saying is that if the most popular theories of moral status are to work in practice, it can only be because we use behavioural evidence to guide their application.That brings us to premise (3): that all other criteria fail to dislodge the importance of behavioural evidence. This is the most controversial one. Many people seem to passionately believe that there are other ways of determining moral status and indeed they argue that relying on behavioural evidence would be absurd. Consider these two recent Twitter comments on an article I wrote about ethical behaviourism and how it relates to animals and robots:First comment: “[This is] Errant #behaviorist #materialist nonsense…Robots are inanimate even if they imitate animal behavior. They don’t want or care about anything. But knock yourself out. Put your toaster in jail if it burns your toast.”Second comment: “If I give a hammer a friendly face so some people feel emotionally attached to it, it still remains a tool #AnthropomorphicFallacy”These are strong statements, but they are not unusual. I encounter this kind of criticism quite frequently. But why? Why are people so resistant to ethical behaviourism? 
4. Objections and Replies

In a recent paper, I suggested that there were seven (or more, depending on how you count) major objections to ethical behaviourism. I won't review all seven here, but I will consider four of the most popular ones. Each of these objections should be understood as an attempt to argue that behavioural evidence by itself cannot suffice for determining moral standing. Other evidence matters as well and can 'defeat' the behavioural evidence.

(A) The Material Cause Objection

The first objection is that the ontology of an entity makes a difference to its moral standing. To adopt the Aristotelian language, we can say that the material cause of an entity (i.e. what it is made up of) matters more than behaviour when it comes to moral standing. So, for example, someone could argue that robots lack moral standing because they are not biological creatures. They are not made from the same 'wet' organic components as human beings or animals. Even if they are performatively equivalent to human beings or animals, this ontological difference scuppers any claim they might have to moral standing.

I find this objection unpersuasive. It smacks to me of biological mysterianism. Why exactly does being made of particular organic material make such a crucial difference? Imagine if your spouse, the person you live with every day, was suddenly revealed to be an alien from the Andromeda galaxy. Scientists conduct careful tests and determine that they are not a carbon-based lifeform. They are made from something different, perhaps silicon. Despite this, they still look and act in the same way as they always have (albeit now with some explaining to do). Would the fact that they are made of different stuff mean that they no longer warrant any moral standing in your eyes? Surely not. Surely the behavioural evidence suggesting that they still care about you, and still have the mental capacities you used to associate with moral standing, would trump the new evidence you have regarding their ontology. I know non-philosophers dislike thought experiments of this sort, finding them slightly ridiculous and far-fetched. Nevertheless, I do think they are vital in this context because they suggest that behaviour does all the heavy lifting when it comes to assessing moral standing. In other words, behaviour matters more than matter. This is also, incidentally, one reason why it is wrong to say that ethical behaviourism is a 'materialist' view: ethical behaviourism is actually agnostic regarding the ontological instantiation of the capacities that ground moral status; it is concerned only with the evidence that is sufficient for determining their presence.

All that said, I am willing to make one major concession to the material cause objection. I will concede that ontology might provide an alternative, independent ground for determining the moral status of an entity. Thus, we might accept that an entity that is made from the right biological stuff has moral standing, even if it lacks the behavioural sophistication we usually require for moral standing. So, for example, someone in a permanent coma might have moral standing because of what they are made of, and not because of what they can do. Still, all this shows is that being made of the right stuff is an independent sufficient ground for moral standing, not that it is a necessary ground for moral standing. The latter is what would need to be proved to undermine ethical behaviourism.
(B) The Efficient Cause Objection

The second objection is that how an entity comes into existence makes a difference to its moral standing. To continue the Aristotelian theme, we can say that the efficient cause of existence is more important than the unfolding reality. This is an objection that the philosopher Michael Hauskeller hints at in his work. Hauskeller doesn't focus on moral standing per se, but does focus on when we can be confident that another entity cares for us or loves us. He concedes that behaviour seems like the most important thing when addressing this issue — what else could caring be apart from caring behaviour? — but then resiles from this by arguing that how the being came into existence can undercut the behavioural evidence. So, for example, a robot might act as if it cares about you, but when you learn that the robot was created and manufactured by a team of humans to act as if it cares for you, then you have reason to doubt the sincerity of its behaviour.

It could be that what Hauskeller is getting at here is that behavioural evidence can often be deceptive and misleading. If so, I will deal with this concern in a moment. But it could also be that he thinks that the mere fact that a robot was programmed and manufactured, as opposed to being evolved and developed, makes a crucial difference to moral standing. If that is what he is claiming, then it is hard to see why we should take it seriously. Again, imagine if your spouse told you that they were not conceived and raised in the normal way. They were genetically engineered in a lab and then carefully trained and educated. Having learned this, would you take a new view of their moral standing? Surely not. Surely, once again, how they actually behave towards you — and not how they came into existence — would be what ultimately mattered. We didn't deny the first in vitro baby moral standing simply because she came into existence in a different way from ordinary human beings. The same principle should apply to robots.

Furthermore, if this is what Hauskeller is arguing, it would provide us with an unstable basis on which to make crucial judgments of moral standing. After all, the differences between humans and robots with respect to their efficient causes are starting to break down. Increasingly, robots are not being programmed and manufactured from the top down to follow specific rules. They are instead given learning algorithms and then trained on different datasets, with the process sometimes being explicitly modelled on evolution and childhood development. Similarly, humans are increasingly being designed and programmed from the top down, through artificial reproduction, embryo selection and, soon, genetic engineering. You may object to all this tinkering with the natural processes of human development and conception. But I think you would be hard pressed to deny a human that came into existence as a result of these processes the moral standing you ordinarily give to other human beings.

(C) The Final Cause Objection

The third objection is that the purposes an entity serves, and how it is expected to fulfil those purposes, make a difference to its moral standing. This is an objection that Joanna Bryson favours in her work.
In several papers, she has argued that because robots will be designed to fulfil certain purposes on our behalf (i.e. they will be designed to serve us), and because they will be owned and controlled by us in the process, they should not have moral standing. Now, to be fair, Bryson is more open to the possibility of robot moral standing than most. She has said, on several occasions, that it is possible to create robots that have moral standing. She just thinks that this should not happen, in part because they will be owned and controlled by us, and because they will be (and perhaps should be) designed to serve our ends.

I don't think there is anything in this that dislodges or upsets ethical behaviourism. For one thing, I find it hard to believe that the fact that an entity has been designed to fulfil a certain purpose should make a crucial difference to its moral standing. Suppose, in the future, human parents can genetically engineer their offspring to fulfil certain specific ends. For example, they can select genes that will guarantee (with the right training regime) that their child will be a successful athlete (this is actually not that dissimilar to what some parents try to do nowadays). Suppose they succeed. Would this fact alone undermine the child's claim to moral standing? Surely not, and surely the same standard should apply to a robot. If it is performatively equivalent to another entity with moral standing, then the mere fact that it has been designed to fulfil a specific purpose should not affect its moral standing.

Related to this, it is hard to see why the fact that we might own and control robots should make a critical difference to their moral standing. If anything, this inverts the proper order of moral justification. The fact that a robot looks and acts like another entity that we believe to have moral standing should cause us to question our approach to ownership and control, not vice versa. We once thought it was okay for humans to own and control other humans. We were wrong to think this because it ignored the moral standing of those other humans.

That said, there are nuances here. Many people think that animals have some moral standing (i.e. that we need to respect their welfare and well-being) but that it is not wrong to own them or attempt to control them. The same approach might apply to robots if they are being compared to animals. This is the crucial point about ethical behaviourism: the ethical consequences of accepting that a robot is performatively equivalent to another entity with moral standing depend, crucially, on who or what that other entity is.

(D) The Deception Objection

The fourth objection is that ethical behaviourism cannot work because it is too easy to be deceived by behavioural cues. A robot might look and act like it is in pain, but this could just be a clever trick, used by its manufacturer, to foster false sympathy. This is probably the most important criticism of ethical behaviourism. It is what I think lurks behind the claim that ethical behaviourism is absurd and must be resisted.

It is well known that humans have a tendency toward hasty anthropomorphism. That is, we tend to ascribe human-like qualities to features of our environment without proper justification. We anthropomorphise the weather, our computers, the trees and the plants, and so forth. It is easy to 'hack' this tendency toward hasty anthropomorphism. As social roboticists know, putting a pair of eyes on a robot can completely change how a human interacts with it, even if the robot cannot see anything. People worry, consequently, that ethical behaviourism is easily exploited by nefarious technology companies.
I sympathise with the fear that motivates this objection. It is definitely true that behaviour can be misleading or deceptive. We are often misled by the behaviour of our fellow humans. To quote Shakespeare, someone can 'smile and smile and be a villain'. But what is the significance of this fact when it comes to assessing moral status? To me, the significance is that it means we should be very careful when assessing the behavioural evidence that is used to support a claim about moral status. We shouldn't extrapolate too quickly from one behaviour. If a robot looks and acts like it is in pain (say), that might provide some warrant for thinking it has moral status, but we should examine its behavioural repertoire in more detail. It might emerge that other behaviours are inconsistent with the hypothesis that it feels pain or suffering.

The point here, however, is that we are always using other behavioural evidence to determine whether the initial behavioural evidence was deceptive or misleading. We are not relying on some other kind of information. Thus, for example, I think it would be a mistake to conclude that a robot cannot feel pain, even though it performs as if it does, because the manufacturer of the robot tells us that it was programmed to do this, or because some computer engineer can point to some lines of code that are responsible for the pain performance. That evidence by itself — in the absence of other countervailing behavioural evidence — cannot undermine the behavioural evidence suggesting that the robot does feel pain. Think about it like this: imagine if a biologist came to you and told you that evolution had programmed the pain response into humans in order to elicit sympathy from fellow humans. What's more, imagine if a neuroscientist came to you and told you she could point to the exact circuit in the brain that is responsible for the human pain performance (and maybe even intervene in and disrupt it). What they say may well be true, but it wouldn't mean that the behavioural evidence suggesting that your fellow humans are in pain can be ignored.

This last point is really the crucial bit. It is what is most distinctive about the perspective of ethical behaviourism. The tendency to misunderstand it, ignore it, or skirt around it is why I think many people on the 'anti' side of the debate make bad arguments.

5. Implications and Conclusions

That's all I will say in defence of ethical behaviourism this evening. Let me conclude by addressing some of its implications and heading off some potential misunderstandings.

First, let me re-emphasise that ethical behaviourism is about the principles we should apply when assessing the moral standing of robots. In defending it, I am not claiming that robots currently have moral standing or, indeed, that they will ever have moral standing. I think this is possible, indeed probable, but I could be wrong. The devil is going to be in the detail of the behavioural tests we apply (just as it is with the Turing test for intelligence).

Second, there is nothing in ethical behaviourism that suggests we ought to create robots that cross the performative threshold to moral standing. It could be, as people like Bryson and van Wynsberghe argue, that this is a very bad idea: that it will be too disruptive of existing moral and legal norms. What ethical behaviourism does suggest, however, is that there is an ethical weight to the decision to create human-like and animal-like robots that may be underappreciated by robot manufacturers.
Third, acknowledging the potential risks, there are also potential benefits to creating robots that cross the performative threshold. Ethical behaviourism can help to reveal a value in relationships with robots that is otherwise hidden. If I am right, then robots can be genuine objects of moral affection, friendship and love, under the right conditions. In other words, just as there are ethical risks to creating human-like and animal-like robots, there are also ethical rewards, and these tend to be ignored, ridiculed or sidelined in the current debate.

Fourth, and related to this previous point, the performative threshold that robots have to cross in order to unlock the different kinds of value might vary quite a bit. The performative threshold needed to attain basic moral standing might be quite low; the performative threshold needed to say that a robot can be a friend or a partner might be substantially higher. A robot might have to do relatively little to convince us that it should be treated with moral consideration, but it might have to do a lot to convince us that it is our friend.

These are topics that I have explored in greater detail in some of my papers, but they are also topics that Sven has explored at considerable length. Indeed, several chapters of his forthcoming book are dedicated to them. So, on that note, it is probably time for me to shut up and hand over to him and see what he has to say about all of this.

Reflections and Follow-Ups

After I delivered the above lecture, my colleague and friend Sven Nyholm gave a response and there were some questions and challenges from the audience. I cannot remember every question that was raised, but I thought I would respond to a few that I can remember.

1. The Randomisation Counterexample

One audience member (it was Nathan Wildman) presented an interesting counterexample to my claim that other kinds of evidence don't defeat or undermine the behavioural evidence for moral status. He argued that we could cook up a possible scenario in which our knowledge of the origins of certain behaviours did cause us to question whether those behaviours were sufficient for moral status.

He gave the example of a chatbot that was programmed using a randomisation technique. The chatbot would generate text at random (perhaps based on some source dataset). Most of the time the text is gobbledygook, but on maybe one occasion it just happens to have a perfectly intelligible conversation with you. In other words, whatever is churned out by the randomisation algorithm happens to perfectly coincide with what would be intelligible in that context (like picking up a meaningful book in Borges's Library of Babel). This might initially cause you to think it has some significant moral status, but if the computer programmer came along and told you about the randomisation process underlying the programming, you would surely change your opinion. So, on this occasion, it looks like information about the causal origins of the behaviour makes a difference to moral status.
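For concreteness, here is a minimal sketch of the kind of chatbot Wildman describes. It is my own illustration (the corpus and word counts are invented): the program has no model of meaning at all, so almost every reply is gobbledygook, but nothing rules out the rare reply that happens to read as perfectly intelligible.

```python
import random

# Toy "randomisation" chatbot: replies are assembled purely by sampling
# words from a source corpus, with no model of meaning or conversation.
CORPUS = (
    "the robot looks and acts as if it feels pain so we must decide "
    "what moral status if any we ought to grant it"
).split()

def random_reply(min_words: int = 3, max_words: int = 12) -> str:
    """Return a reply built by uniform random sampling from the corpus."""
    length = random.randint(min_words, max_words)
    return " ".join(random.choices(CORPUS, k=length))

if __name__ == "__main__":
    # Almost all of these will be word salad; very occasionally one may
    # happen to parse as a sensible remark (Wildman's Library-of-Babel case).
    for _ in range(5):
        print(random_reply())
```

A sustained run of intelligible conversation from such a program is vanishingly unlikely, which is exactly what the 'more thorough behavioural test' in the response below trades on.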
Response: This is a clever counterexample, but I think it overlooks two critical points. First, it overlooks the point I make about avoiding hasty anthropomorphisation towards the end of my lecture. I think we shouldn't extrapolate too much from just one interaction with a robot. We should conduct a more thorough investigation of the robot's (or in this case the chatbot's) behaviours. If the intelligible conversation was just a one-off, then we will quickly be disabused of our belief that it has moral status. But if it turns out that the intelligible conversation was not a one-off, then I don't think the evidence regarding the randomisation process would have any such effect. The computer programmer could shout and scream as much as he/she likes about the randomisation algorithm, but I don't think this would suffice to undermine the consistent behavioural evidence. This links to a second, and perhaps deeper, metaphysical point I would like to make: we don't really know what the true material instantiation of the mind is (if it is indeed material). We think the brain and its functional activity is pretty important, but we will probably never have a fully satisfactory theory of the relationship between matter and mind. This is the core of the hard problem of consciousness. Given this, it doesn't seem wise or appropriate to discount the moral status of this hypothetical robot just because it is built on a randomisation algorithm. Indeed, if such a robot existed, it might give us reason to think that randomisation was one of the ways in which a mind could be functionally instantiated in the real world.

I should say that this response ignores the role of moral precaution in assessing moral standing. If you add a principle of moral precaution to the mix, then it may be wrong to favour a more thorough behavioural test. This is something I discuss a bit in my article on ethical behaviourism.

2. The Argument confuses how we know X is valuable with what makes X actually valuable

One point that Sven stressed in his response, and which he makes elsewhere too, is that my argument elides or confuses two separate things: (i) how we know whether something is of value and (ii) what it is that makes it valuable. Another way of putting it: I provide a decision procedure for deciding who or what has moral status, but I don't thereby specify what it is that makes them have moral status. It could be that the capacity to feel pain is what makes someone have moral standing and that we know someone feels pain through their behaviour, but this doesn't mean that they have moral standing because of their behaviour.

Response: This is probably a fair point. I may on occasion elide these two things. But my feeling is that this is a 'feature' rather than a 'bug' in my account. I'm concerned with how we practically assess and apply principles of moral standing in the real world, and not so much with what it is that metaphysically undergirds moral standing.

3. Proxies for Behaviour versus Proxies for Mind

Another comment (and I apologise for not remembering who gave it) was that on my theory behaviour is important, but only because it is a proxy for something else, namely some set of mental states or capacities. This is similar to the point Sven is making in his criticism. If that's right, then I am wrong to assume that behaviour is the only (or indeed the most important) proxy for mental states. Other kinds of evidence serve as proxies for mental states. The example was given of legal trials in which the prosecution is trying to prove what the mental state of the defendant was at the time of an offence. They don't just rely on behavioural evidence. They also rely on other kinds of forensic evidence to establish this.
Response: I don't think this is true, and this gets to a deep feature of my theory. To take the criminal trial example, I don't think it is true to say that we use other kinds of evidence as proxies for mental states. I think we use them as proxies for behaviour, which we then use as a proxy for mental states. In other words, the actual order of inference goes:

Other evidence → behaviour → mental state

And not:

Other evidence → mental state

This is the point I was getting at in my talk when I spoke about how we make inferences from functional brain activity to mental state. I believe that when we draw a link between brain activity and mental state, what we are really doing is this:

Brain state → behaviour → mental state

And not:

Brain state → mental state

Now, it is, of course, true to say that sometimes scientists think we can make this second kind of inference. For example, purveyors of brain-based lie detection tests (and, indeed, other kinds of lie detection test) try to draw a direct line of inference from a brain state to a mental state, but I would argue that this is only because they have previously verified their testing protocol by following the "brain state → behaviour → mental state" route and confirming that it is reliable across multiple tests. This gives them the confidence to drop the middle step on some occasions, but ultimately this is all warranted (if it is, in fact, warranted – brain-based lie detection is controversial) because the scientists first took the behavioural step. To undermine my view, you would have to show that it is possible to cut out the behavioural step in this inference pattern. I don't think this can be done, but perhaps I can be proved wrong.

This is perhaps the most metaphysical aspect of my view.

4. Default Settings and Practicalities

Another point that came up in conversation with Sven, Merel Noorman and Silvia de Conca had to do with the default assumptions we are likely to have when dealing with robots, and how these impact on the practicalities of robots being accepted into the moral circle. In other words, even if I am right in some abstract, philosophical sense, will anyone actually follow the behavioural test I advocate? Won't there be a lot of resistance to it in reality?

Now, as I mentioned in my lecture, I am not an activist for robot rights or anything of the sort. I am interested in the general principles we should apply when settling questions of moral status, not in whether a particular being, such as a robot, has acquired moral status. That said, implicit views about the practicalities of applying the ethical behaviourist test may play an important role in some of the arguments I am making.

One example of this has to do with the 'default' assumption we have when interpreting the behaviour of humans/animals vis-à-vis robots. We tend to approach humans and animals with an attitude of good faith, i.e. we assume that each of their outward behaviours is a sincere representation of their inner state of mind. It's only if we receive contrary evidence that we will start to doubt the sincerity of the behaviour.

But what default assumption do we have when confronting robots? It seems plausible to suggest that most people will approach them with an attitude of bad faith. They will assume that their behaviours are representative of nothing at all, and will need a lot of evidence before granting those behaviours any weight. This suggests that (a) not all behavioural evidence is counted equally and (b) it might be very difficult, in practice, for robots to be accepted into the moral circle.
Response: I don't see this as a criticism of ethical behaviourism but, rather, a warning to anyone who wishes to promote it. In other words, I accept that people will resist ethical behaviourism and may treat robots with greater suspicion than human or animal agents. One of the key points of this lecture, and of the longer academic article I wrote about the topic, was to address this suspicion and skepticism. Nevertheless, the fact that there may be these practical difficulties does not mean that ethical behaviourism is incorrect. In this respect, it is worth noting that Turing was acutely aware of this problem when he originally formulated his 'Imitation Game' test. The reason why the test was purely text-based in its original form was to prevent human-centric biases affecting its operation.

5. Ethical Mechanicism vs Ethical Behaviourism

After I posted this article, Natesh Ganesh posted a critique of my handling of the deception objection on Twitter. He made two interesting points. First, he argued that the thought experiment I used to dismiss the deception objection was misleading and circular. If a scientist revealed the mechanisms underlying my own pain performances, I would have no reason to doubt that the pain was genuine, since I already know that someone with my kind of neural circuitry can experience pain. If they revealed the mechanisms underlying a robot's pain performances, things would be different, because I do not yet have a reason to think that a being with that kind of mechanism can experience genuine pain. As a result, the thought experiment is circular because only somebody who already accepted ethical behaviourism would be so dismissive of the mechanistic evidence. Here's how Natesh expresses the point:

"the analogy in the last part [the response to the deception objection] seems flawed. Showing me the mechanisms of pain in entities (like humans) who we share similar mechanisms with & agree have moral standing is different from showing me the mechanisms of entities (like robots) whose moral standing we are trying to determine. Denying experience of pain in the 1st simply because I now know the circuitry would imply denying your own pain & hence moral standing. But accepting/ denying the 2nd if its a piece of code implicitly depends on whether you already accept/deny ethical behaviorism. It is just circular to appeal to that example as evidence."

He then follows up with a second point (implicit in what was just said) about the importance of mechanical similarities between entities when it comes to assessing moral standing:

"I for one am more likely to [believe] a robot can experience pain if it shows the behavior & the manufacturer opened it up & showed me the circuitry and if that was similar to my own (different material perhaps) I am more likely to accept the robot experiences pain. In this case once again I needed machinery on top of behavior."

What I would say here is that Natesh, although not completely dismissive of the importance of behaviour to assessing moral standing, is a fan of ethical mechanicism, not ethical behaviourism.
He thinks you must have mechanical similarity (equivalence?) before you can conclude that two entities share moral standing.

Response: On the charge of circularity, I don't think this is quite fair. The thought experiment I propose when responding to the deception objection is, like all thought experiments, intended to be an intuition pump. The goal is to imagine a situation in which you could describe and intervene in the mechanical underpinning of a pain performance with great precision (be it a human pain performance or otherwise) and ask whether the mere fact that you could describe the mechanism in detail, or intervene in it, would make a difference to the entity's moral standing. My intuitions suggest it wouldn't make a difference, irrespective of the details of the mechanism (this is the point I make, above, in relation to the example given by Nathan Wildman about the robot whose behaviour is the result of a random-number-generator programme). Perhaps other people's intuitions are pumped in a different direction. That can happen, but it doesn't mean the thought experiment is circular.

What about the importance of mechanisms in addition to behaviour? This is something I address in more detail in the academic paper. I have two thoughts about it. First, I could just bite the bullet and agree that the underlying mechanisms must be similar too. This would just add an additional similarity test to the assessment of moral status. There would then be similar questions as to how similar the mechanisms must be. Is it enough if they are roughly functionally similar, or must they have the exact same sub-components and processes? If the former, then it still seems possible in principle for roboticists to create a functionally similar underlying mechanism, and this could then ground moral standing for robots.

Second, despite this, I would still push back against the claim that similar underlying mechanisms are necessary. This strikes me as being just a conservative prejudgment rather than a good reason for denying moral status to behaviourally equivalent entities. Why are we so confident that only entities with our neurological mechanisms (or something very similar) can experience pain (or instantiate the other mental properties relevant to moral standing)? Or, to put it less controversially, why should we be so confident that mechanical similarity undercuts behavioural similarity? If there is an entity that looks and acts like it is in pain (or has interests, a sense of personhood, agency, etc.), and all the behavioural tests confirm this, then why deny it moral standing because of some mechanical differences?

Part of the resistance here could be that people are confusing two different claims:

Claim 1: It is impossible (physically, metaphysically) for an entity that lacks sufficient mechanical similarity (with humans/animals) to have the behavioural sophistication we associate with experiencing pain, having agency, etc.

Claim 2: An entity that has the behavioural sophistication we associate with experiencing pain, having agency (etc.), but that lacks mechanical similarity to other entities with such behavioural sophistication, should be denied moral standing because it lacks mechanical similarity.

Ethical behaviourism denies claim 2, but it does not, necessarily, deny claim 1. It could be the case that mechanical similarity is essential for behavioural similarity. This is something that can only be determined after conducting the requisite behavioural tests.
The point, as always throughout my defence of the position, is that the behavioural evidence should be our guide. This doesn't mean that other kinds of evidence are irrelevant, but simply that they do not carry as much weight. My sense is that people who favour ethical mechanicism have a very strong intuition in favour of claim 1, which they then carry over into support for claim 2. This carry-over is not justified, as the two claims are not logically equivalent.

Techy Tuesday
Robot Rights (S1, E2)

Techy Tuesday

Play Episode Listen Later Oct 22, 2019 17:36


This episode is about whether robots should have the same rights as humans.

The Misfit Nation Podcast
Ep. 107 - Robot Rights

The Misfit Nation Podcast

Play Episode Listen Later Oct 12, 2019 160:23


This Week's Topics:
- Ronda Rousey is a submissive wife
- Why we think Elon Musk is an alien
- How should we treat robots
- El Camino movie review
- Rhythm & Flow TV show
- Mia Khalifa "speaks out" about the porn industry
- America's double standard about white criminals & black criminals

Leave us a VOICE MESSAGE!! Call or Text (929) 464-7348 / (929) 4-MISFIT
Subscribe to our Youtube: https://www.youtube.com/user/MisfitNationINC
Follow Crishaun the Don: https://www.facebook.com/CrishaunSingh | https://www.instagram.com/crishaunthedon/ | https://www.youtube.com/channel/UCG7kaG2_vYdIBkaVmZwymQw
Follow J Blaze: https://www.instagram.com/jblaze_8/ | https://soundcloud.com/jblazetheoneandonly
Follow HEYZEX$: https://www.instagram.com/heyzexs/

Philosophical Disquisitions
Episode #54 - Sebo on the Moral Problem of Other Minds

Philosophical Disquisitions

Play Episode Listen Later Feb 28, 2019


In this episode I talk to Jeff Sebo. Jeff is a Clinical Assistant Professor of Environmental Studies, Affiliated Professor of Bioethics, Medical Ethics, and Philosophy, and Director of the Animal Studies M.A. Program at New York University. Jeff's research focuses on bioethics, animal ethics, and environmental ethics. He has two co-authored books, Chimpanzee Rights and Food, Animals, and the Environment. We talk about something Jeff calls the 'moral problem of other minds', which is, roughly, the problem of what we should do if we aren't sure whether another being is sentient or not. You can download the episode here or listen below. You can also subscribe to the show on iTunes and Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:38 - What inspired Jeff to think about the moral problem of other minds?
7:55 - The importance of sentience and our uncertainty about it
12:32 - The three possible responses to the moral problem of other minds: (i) the incautionary principle; (ii) the precautionary principle; and (iii) the expected value principle
15:26 - Understanding the Incautionary Principle
20:09 - Problems with the Incautionary Principle
23:14 - Understanding the Precautionary Principle: More plausible than the incautionary principle?
29:20 - Is morality a zero-sum game? Is there a limit to how much we can care about other beings?
35:02 - The problem of demandingness in moral theory
37:06 - Other problems with the precautionary principle
41:41 - The Utilitarian Version of the Expected Value Principle
47:36 - The problem of anthropocentrism in moral reasoning
53:22 - The Kantian Version of the Expected Value Principle
59:08 - Problems with the Kantian principle
1:03:54 - How does the moral problem of other minds transfer over to other cases, e.g. abortion and uncertainty about the moral status of the foetus?

Relevant Links
Jeff's Homepage
'The Moral Problem of Other Minds' by Jeff
Chimpanzee Rights by Jeff and others
Food, Animals and the Environment by Jeff and Christopher Schlottman
'Consider the Lobster' by David Foster Wallace
'Ethical Behaviourism in the Age of the Robot' by John Danaher
Episode 48 with David Gunkel on Robot Rights

All You Need Is This
EPISODE 3 - Ro-Ro-Ro your Bot...Gently out of Your Way.

All You Need Is This

Play Episode Listen Later Dec 28, 2018 32:57


(*Bear not Bull) Robots, News, and A New You. This episode meanders around current events and addresses the 4-point plan you need to move forward and get out of your own way. Robot Rights and Our Compassion: https://www.nbcnews.com/mach/science/rise-smart-machines-puts-spotlight-robot-rights-ncna825791 Suzi Amis (Cameron) and Her One Meal A Day: https://omdfortheplanet.com Chris Redd (from SNL) and Ari Melber: https://www.msnbc.com/msnbc/watch/chris-redd-how-comedy-can-be-a-force-for-social-change-1395274307664 Please subscribe, rate and share. Connect with me directly! Let's Grow! @dashdoesit on IG, @dash on Twitter - dashkennedywilliams@gmail.com

60 Cycle Hum: The Guitar Podcast!

This episode was brought to you by Chase Bliss Audio. Today is Dark World day. Go to the CBA website to learn more about this new dual reverb pedal. This week's episode is brought to you by Sinasoid. Right now Sinasoid is selling the Panhandle Giving Cable in order to raise funds for those affected by Hurricane Michael in North Florida. While you're there, also check out the Slate. Slate is Sinasoid's signature cable, perfect for heavy-duty use in the studio or in any live rig. This week's episode is also sponsored by Gunstreet Wiring Shop. Whether you're looking for something modern for your Strat or something classic for your Les Paul, Gunstreet has the team to provide you with expert advice on getting the right wiring kit for you.

This week Ryan talks about getting stuff from Graphtech. Steve talks about running a half-marathon.

Pictures of stuff:
1. Blood bath
2. Ryan and Steve talk about the new BOSS amps
3. One piece at a time
4. How many pedals is unmanageable?
5. Robotuners

This week's song was sent by Gerard Becker of MIAGII and is called "High Hopes"

Philosophical Disquisitions
Episode #48 - Gunkel on Robot Rights

Philosophical Disquisitions

Play Episode Listen Later Nov 1, 2018


In this episode I talk to David Gunkel. David is a repeat guest, having first appeared on the show in Episode 10. David is a Professor of Communication Studies at Northern Illinois University. He is a leading scholar in the philosophy of technology, having written extensively about cyborgification, robot rights and responsibilities, remix cultures, new political structures in the information age and much, much more. He is the author of several books, including Hacking Cyberspace, The Machine Question, Of Remixology, Gaming the System and, most recently, Robot Rights. We have a long debate/conversation about whether or not robots should/could have rights. You can download the episode here or listen below. You can also subscribe to the show on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:52 - Isn't the idea of robot rights ridiculous?
3:37 - What is a robot anyway? Is the concept too nebulous/diverse?
7:43 - Has science fiction undermined our ability to think about robots clearly?
11:01 - What would it mean to grant a robot rights? (A precis of Hohfeld's theory of rights)
18:32 - The four positions/modalities one could take on the idea of robot rights
21:32 - The First Modality: Robots Can't Have Rights therefore Shouldn't
23:37 - The EPSRC guidelines on robotics as an example of this modality
26:04 - Criticisms of the EPSRC approach
28:27 - Other problems with the first modality
31:32 - Europe vs Japan: why the Japanese might be more open to robot 'others'
34:00 - The Second Modality: Robots Can Have Rights therefore Should (some day)
39:53 - A debate between myself and David about the second modality (why I'm in favour of it and he's against it)
47:17 - The Third Modality: Robots Can Have Rights but Shouldn't (Bryson's view)
53:48 - Can we dehumanise/depersonalise robots?
58:10 - The Robot-Slave Metaphor and its Discontents
1:04:30 - The Fourth Modality: Robots Cannot Have Rights but Should (Darling's view)
1:07:53 - Criticisms of the fourth modality
1:12:05 - The 'Thinking Otherwise' Approach (David's preferred approach)
1:16:23 - When can robots take on a face?
1:19:44 - Is there any possibility of reconciling my view with David's?
1:24:42 - So did David waste his time writing this book?

Relevant Links
David's Homepage
Robot Rights from MIT Press, 2018 (and on Amazon)
Episode 10 - Gunkel on Robots and Cyborgs
'The other question: can and should robots have rights?' by David Gunkel
'Facing Animals: A Relational Other-Oriented Approach to Moral Standing' by Gunkel and Coeckelbergh
The Robot Rights Debate (Index) - everything I've written or said on the topic of robot rights
EPSRC Principles of Robotics
Episode 24 - Joanna Bryson on Why Robots Should be Slaves
'Patiency is not a virtue: the design of intelligent systems and systems of ethics' by Joanna Bryson
Robo Sapiens Japanicus - by Jennifer Robertson

Algocracy and Transhumanism Podcast
Episode #48 – Gunkel on Robot Rights

Algocracy and Transhumanism Podcast

Play Episode Listen Later Oct 31, 2018


In this episode I talk to David Gunkel. David is a repeat guest, having first appeared on the show in Episode 10. David is a Professor of Communication Studies at Northern Illinois University. He is a leading scholar in the philosophy of technology, having written extensively about cyborgification, robot rights and responsibilities, remix cultures, new political … More Episode #48 – Gunkel on Robot Rights

Nerd Is The New Sexy Entertainment
S-3 Ep-48 NITNS podcast Topic: Robot Rights

Nerd Is The New Sexy Entertainment

Play Episode Listen Later Aug 25, 2018 47:42


Nerd is the new sexy is our outlook on everything nerd. A lot of ranting by nerds. In the future we will be reviewing all sorts of toys, video games, movies, comic books and so forth. Season 3! Episode 48. As the Blade Runner spoof at the opening of the podcast should tell you: the podcast tonight is about Robot rights!!!? Listen to Wyldfyre1 and Sunrie ask when a robot acts more human than robot, and vice versa. We also talk about the lovely world event that happened on Friday 2-9-17. This week's Best of Craigslist: Heart-shaped Potato. Want to get in touch with the nerds? Give them a podcast idea? Have something that you want them to review? Contact them below. Nerd is the new sexy Facebook page Nerd is the new sexy twitter Nerd is the new sexy Instagram Great news nerds! You can call the NITNSE staff now and leave them ideas and the like for future podcasts, streams, and events! Just dial (559) 997-6803 and leave a voice mail!

Finding Genius Podcast
David J. Gunkel - Author of "Gaming the System" & "Robot Rights".

Finding Genius Podcast

Play Episode Listen Later Jul 9, 2018 23:21


David J. Gunkel (PhD) is an award-winning educator, scholar and author, specializing in the study of information and communication technology with a focus on ethics. Formally educated in philosophy and media studies, his teaching and research synthesize the hype of high technology with the rigor and insight of contemporary critical analysis. He is the author of over 50 scholarly journal articles and book chapters, has written and published 7 influential books, lectured and delivered award-winning papers throughout North and South America and Europe, is the managing editor and co-founder of the International Journal of Žižek Studies and co-editor of the Indiana University Press series in Digital Game Studies. He currently holds the position of Professor in the Department of Communication at Northern Illinois University (USA), and his teaching has been recognized with numerous awards, including NIU's Excellence in Undergraduate Teaching and the prestigious Presidential Teaching Professorship.

Keep It Weird
The Consciousness Conundrum

Keep It Weird

Play Episode Listen Later Jun 15, 2018 83:40


Greetings Weirdos! Are you ready to hear the most infectious laugh you've ever heard??  This week Lauren and Ashley are joined by the adorable and super intelligent RACHEL THOMPSON as we chat about ARTIFICIAL INTELLIGENCE.  We talk Cyborgs, Androids, Bionic Applications, the art of Cyborgism, Robot Rights, Digitizing Consciousness, and we even try to answer the question "What is a soul?"   Where did the idea of artificial intelligence begin?  How close are we to the reality?  What are the questions that need to be answered first?  How will it work?  What does the future of humankind actually look like?  Join us and find out! Follow us on Instagram and Twitter @keepitweirdcast and our Facebook page KEEP IT WEIRD. Check out our Patreon page at www.patreon.com/keepitweirdpodcast to find ways you can donate to support the show and get BONUS episodes and clips! Don't forget to subscribe and rate us five stars while you're here! 

Everything's Great, Nothing Is Wrong
4: #29 Bored Nerds' Virus

Everything's Great, Nothing Is Wrong

Play Episode Listen Later May 7, 2018 26:18


In a continuing advocacy for Robot Rights, the discussion turns to a drone trespasser. What locations are appropriate for drones to be, and how much pizza should they be mandated to carry? Cricket hatches a plan to weaponize faeces, and Jeff, not in this episode, only here in this text, wonders: should Area 51 have been called Volume 51? Perfect Storm Ethics makes its return with the Hawthorne effect. Between Cricket's mic and Jeff's headphones, this is just the beginning of our audio problems (including the brief reappearance of the loud horse, before the horse was removed). Time is unifying, slowly.  We keep saying "droid" for some reason. Topics include: Steroid-Induced B and/or E, Where Not To Scatter Ashes, The Panopticon, Congenital Toxoplasmosis, and Guns.

Daily Renegade
Would New Robot Rights Threaten Human Rights?

Daily Renegade

Play Episode Listen Later Apr 17, 2018 13:00


http://JoshPeckDisclosure.com PLEASE SUBSCRIBE AND SHARE! Artificial Intelligence experts are slamming a new EU proposal; would the push for robot rights degrade our human rights? Find out here!  Become a Peck Patron at http://patreon.com/joshpeck If you enjoyed this free video, please consider donating to help us bring you more free content at http://joshpeckdisclosure.com/donate or, if you can't donate financially, please consider donating a few moments of your time by sharing this video, rating, and leaving a comment in the comment section below. Thank you, take care, and God bless! More Josh Peck: Patreon: http://patreon.com/joshpeck Website: http://JoshPeckDisclosure.com Steemit Blog: http://steemit.com/@joshpeck Christina Peck: http://ChristinaPeck.com Email - JoshPeckDisclosure@Gmail.com Facebook - http://facebook.com/josh.peck.5264 JoshPeckDisclosure - http://youtube.com/JoshPeckDisclosure Twitter - @JPDisclosure

Thunk Tank Podcast
Episode 14 - Artificial Intelligence, Simulation Theory, and the Singularity

Thunk Tank Podcast

Play Episode Listen Later Feb 28, 2018 130:20


Will robots and artificial intelligence one day surpass the power of human intellect? Do we all really live in an elaborate simulation coerced by beings beyond our comprehension? Are there enough beers to crack, along with these mysteries? Join us in this week's episode as we explore these and many other technological topics with our very special guest, a computer science engineer! And, as always, if you enjoy what you hear, please consider sharing and subscribing for updates! (more information listed below)

This episode's brews:
Coney Island Brewing Co. - Brooklyn, NY: Hard Root Beer
Montauk Brewing Company - Montauk, NY: Session IPA
Burial Beer Co. - Asheville, NC: Skullsaw (Porter w/NC Sweet Potatoes)

Patreon: Please Support us on Patreon! ( https://www.patreon.com/thunktankpodcast ) (Please consider supporting us! It costs almost nothing, and there are awesome prizes if you do!)
Email: thunktankpodcast@gmail.com
Twitter: @thunktankers

This Episode's Links: Elon Musk on Simulation Theory || https://youtu.be/OEMnwI2G2U0 (Alan Watts) || https://www.youtube.com/watch?v=9CO6M2HsoIA (Slaughterbots) || https://youtu.be/szzVlQ653as (Rick and Morty Roy) || https://youtu.be/RDZu04v7_hc (Boston Dynamics Robots) || Simulation Theory Broken Down || Robot Rights || Quantum Computers || The Future is Coming, but Going Where? || Westworld Season One Explained (SPOILERS) ||

KAPOW Radio Show
Freedom Friday News-"Robot Rights"

KAPOW Radio Show

Play Episode Listen Later Dec 8, 2017 56:00


Artificial Intelligence creates its own A.I., the rise of the smart machines requires robot rights, parents are obligated to edit their children's genes, and the new marijuana causes people to vomit and scream demonically at the same time.

Dig In With Us!
Cap 47 - Robot Rights!

Dig In With Us!

Play Episode Listen Later Dec 7, 2017 42:11


In this capitulo, we talked about Friendsgiving, Alexa, Black Friday, TV shows, Harry Potter, Sophia the robot, and robot rights. Special shout out to our wonderful patrons at patreon.com/diginwithus - Karla, Kyle, Katherine and Chris

Backchat
Health insurance / Robot rights and responsibilities

Backchat

Play Episode Listen Later Jan 16, 2017 59:00


Pixel Digest: A podcast from InterVarsity's Ministry in Digital Spaces
#14 - Zelda, Retro Gaming, and Robot Rights

Pixel Digest: A podcast from InterVarsity's Ministry in Digital Spaces

Play Episode Listen Later Dec 6, 2016 56:55


Theme music: “I’m Going Bazurky” by morgantj http://dig.ccmixter.org/files/morgantj/29944 The conversation roadmap: The Legend of Zelda: Breath of the Wild gameplay https://youtu.be/6nMnc4P_DOk Nintendo Switch http://www.nintendo.com/switch Building a Retro Gaming Console http://bretsw.com/retro-gaming-console/ Wizard of Wor https://en.wikipedia.org/wiki/Wizard_of_Wor Internet Archive https://archive.org/ New Yorker article http://www.newyorker.com/magazine/2016/11/28/if-animals-have-rights-should-robots Danah Boyd https://twitter.com/zephoria

This Blows My Mind
John Stands Up For Robot Rights

This Blows My Mind

Play Episode Listen Later Oct 12, 2016 37:42


John is joined again by The Documentary Show co-host, Keith Bodayla, as they imagine a future with self-aware robots that have joined our work force. They answer a few questions but most importantly: “Will robots want vacation days?”

Witty Banter
Resin IPA – Clash of Clans, Robot Rights, Facebook Live – Witty Banter Episode 65

Witty Banter

Play Episode Listen Later Jul 1, 2016


Max Scott can’t stay away from the show, so we let him in on the discussion about the sale of Supercell, Facebook’s push for their live service, and another round of Metal or Magic. This week’s beer is Sixpoint Craft Ale’s Resin IPA.

Love Bytes: Get a Robot
RoboEthics – We Need Universal Robot Rights, Ethics And Legislation Part I

Love Bytes: Get a Robot

Play Episode Listen Later Sep 15, 2015 7:34


Roboethics Segments: Part I

Think Again – a Big Think Podcast
7. Baratunde Thurston (Comedian, Cultural Critic) – Stupidity Scaled/Robot Rights/Brand You

Think Again – a Big Think Podcast

Play Episode Listen Later Aug 1, 2015 26:23


At what point do sex robots become sex slaves? How are bandwidth and storage capacity changing our lives? Can you have a "personal brand" and "be yourself" at the same time? In this week's episode of Big Think's Think Again podcast, host Jason Gots is joined by author and tech pundit Baratunde Thurston, "a philosopher comedian fighting for the future." Interview clips from Rick Smolan, Lawrence Krauss, and Guy Kawasaki launch a discussion of human potential, social status, identity, and how Kim Kardashian's butt didn't actually "break the internet". Learn more about your ad choices. Visit megaphone.fm/adchoices

Jeff In Motion
Episode 91: Robot Rights

Jeff In Motion

Play Episode Listen Later Jan 24, 2014 26:01


Listen as Jeff doesn’t talk about the friendzone and talks about robots instead.

The Skeptics' Guide to the Universe
The Skeptics Guide #86 - Mar 14 2007

The Skeptics' Guide to the Universe

Play Episode Listen Later Mar 17, 2007 66:05


News Items: Update on the Tomb of Jesus, The Revenge of Pluto, Robot Rights, More ID Nonsense; Your E-mails and Questions: ADHD, Nerves Conduct by Sound?; Name that Logical Fallacy; Science or Fiction; Skeptical Puzzle