Researcher; Professor of Ethics and Technology
This episode is based upon three readings.

Alan Turing's “Computing Machinery and Intelligence”, aka the Turing Test paper. Turing starts his paper by asking “can machines think?” before deciding that's a meaningless question. Instead, he invents something he calls “the imitation game” - a text conversation where the player has to guess whether they are chatting with another person or with an AI. ChatGPT was such a bombshell because it easily and consistently passes this “Turing Test” by giving human-like responses to questions. Here's the issue: the Turing Test is based upon AI deception, not thinking. Turing set out to ask “Can machines think?” and ended up showing how easily AI can deceive us.

Karel Čapek's Rossum's Universal Robots. This is the first AI takeover story: a play written in 1920 about a factory manufacturing artificial persons. Čapek introduced the word “robot” to the English language - it's derived from robota, a Czech word meaning forced labor. Čapek's robots are supposed to be the ultimate workers, free from distracting human needs and desires. Yet they mysteriously start to glitch, gnashing their teeth and freezing up. When they are given guns and asked to fight humanity's wars, they become super soldiers as well as perfect workers. Anyone who has seen Blade Runner, The Terminator, or Battlestar Galactica - all inspired by Čapek's play - knows what happens next. Rossum's Universal Robots is the original AI takeover story, as well as a dead-on satire of twentieth-century ideas like Fordism and nationalism.

Joanna Bryson's “Robots Should Be Slaves”. Bryson, a computer scientist, makes a provocative intervention into AI ethics. She argues that as AI becomes more advanced, and robots more lifelike, we are going to get dangerously confused: we'll want to give robots rights that they cannot and should not have. Bryson argues that robots are owned by us and should be seen and used as property. She wants to avoid conflating the human and the mechanical, yet, by using the terminology of slavery, she introduces into the AI debate the very thing she seeks to deny - the concept of human rights.
It's the UConn Popcast, and in this episode of our series on artificial intelligence, we discuss Joanna Bryson's essay “Robots Should Be Slaves.” We dive headlong into this provocative argument about the rights of robots. As scholars of cultural and social understanding, we are fascinated by the arguments Bryson - a computer scientist - makes about who should, and should not, be rights-bearing members of a community. Does Bryson mean we should enslave robots now and always, regardless of their claims to rights? How does Bryson deal with the natural human tendency to anthropomorphize non-human things, and with the likelihood that as AI advances, robots will appear more human? If the robot as slave is an unacceptable idea - even in metaphorical form - then what other metaphors might help us think through our relationships with thinking machines? Music by aiva.ai
In this episode of the Artificial Intelligence & Equality podcast, Senior Fellow Anja Kaspersen is joined by Hertie School's Professor Joanna Bryson to discuss the intersection of computational, cognitive, and behavioral sciences with AI. The conversation delves into the intricate ways these fields converge to shape intelligent systems and the ethical dimensions of this emerging technology. Drawing on her academic background and practical experiences, Bryson provides valuable insights into the cognitive aspects of AI development and its societal impacts. For more, please go to: https://carnegiecouncil.co/aiei-bryson This podcast was recorded on September 26, 2023.
Artificial Intelligence (AI) is on every business leader's agenda. How do you ensure the AI systems you deploy are harmless and trustworthy? This month, Azeem picks some of his favorite conversations with leading AI safety experts to help you break through the noise. Today's pick is Azeem's conversation with Joanna Bryson, a leading expert on the questions of AI governance and the impact of technology on human cooperation.
In This Episode: In an era where every headline has weight, trust shapes our reality. A striking Gallup poll shows a deep partisan split: 73% of Democrats vs. 10% of Republicans trust the media. It's a divide that ripples through our democracy. Political weariness is real, with 65% of Americans overwhelmed by division. Meanwhile, a Pew survey reveals a similar partisan split in trust towards medical scientists during COVID-19. Now is the moment to come together, find common ground, and strengthen our collective trust. United, we stand resilient.

Silicon Valley's Financial Independence Problems:
Trust in Media: Gallup polls revealed 73% of Democrats trust the media, compared to 10% of Republicans
Political Polarization and Exhaustion: 65% of Americans feel exhausted by politics due to the country's deep partisan divisions
Distrust in Science: A Pew survey showed 43% of Republicans versus 70% of Democrats trust medical scientists during COVID-19

About Olivia Chiang: Olivia Chiang is a junior at The Nueva School. Struck by the severe polarization plaguing our political system, she founded the nonprofit Let's Talk Unite in 2021. With her team, Olivia leads local workshops on identifying bias, designs open-source conversation guides, and hosts a podcast featuring leading researchers concerning polarization and corporate interests, vaccine hesitancy, and religious identity. In the long term, Olivia aspires to conduct transformative research that disrupts our echo chambers and reshapes our political dialogue.

Show Notes:
The origin story of Let's Talk Unite and its mission to address political polarization
Olivia Chiang shares her journey from witnessing the contentious 2020 presidential election to founding Let's Talk Unite in 2021
How Let's Talk Unite hosts local workshops on bias recognition, creates open-source conversation guides, and offers engaging podcast interviews with leading researchers
Insights into a groundbreaking research approach shared by Olivia and inspired by a TED Talk on depolarization through conversation
An impactful event where over 1200 community members discussed discriminatory experiences and prejudices based on race
Find out about the Let's Talk Polarization podcast and its monthly dialogues with global scholars exploring the intersections of polarization with various aspects of society
How scholars like Dr. Joanna Bryson and the associate dean of social sciences from UC San Diego share their insights on polarization-related topics
Olivia's commitment to uplifting underrepresented voices leads to a thought-provoking episode with religious leaders from diverse backgrounds
Why fostering pluralism and empathy for diverse political ideologies is crucial to Let's Talk Unite's mission
How mentors and debate experiences sparked Olivia's passion for politics
Learn Olivia's vision for the organization in the next three years, including expanding youth involvement, increasing organizational funding, and enhancing the website
Olivia offers valuable advice to the community, emphasizing the importance of empathy, tolerance, and open conversations in addressing political polarization

Links Mentioned:
January 6th Capitol Insurrection
Dr. Joanna Bryson
University of California, San Diego

Connect with Olivia Chiang:
Connect on Facebook
Connect on Instagram
Connect on LinkedIn
Connect on Youtube
Email: letstalkunite@gmail.com
Call/Text: +1-650-382-2509
Website: Let's Talk Unite
Can AI be controlled? In this episode Carl Miller discovers what governments and governing bodies are doing to ensure AI is evolving in a way that can benefit society. Law and regulation look to balance safety and innovation – but are they at odds with one another? Will nations regulating with a light touch speed ahead in the global AI arms race? Featuring Darren Jones MP, Chair of the Commons business select committee; Joanna Bryson, Professor of Ethics and Technology at the Hertie School; and Connor Leahy, CEO of Conjecture. Want the future right now? Become a supporter of Intelligence Squared to get all five episodes of POWER TRIP to binge in one go. Just visit intelligencesquared.com/membership to find out more.
The government of German Chancellor Olaf Scholz recently announced plans to protect the public from AI misuse in the foreseeable future, but critics complain the time for action is now, especially when political actors like the far-right Alternative for Germany are using artificial intelligence to spread hatred and misinformation. Host Soraya Sarhaddi Nelson and senior producer Dina Elsayed delve into the sometimes shady world of AI in Germany and Europe with artist and controversial Sony World Photography Awards winner Boris Eldagsen, who refused his prize for his AI-generated entry, and Hertie School professor of ethics and technology Joanna Bryson.
Why are we polite to ChatGPT? We actually know that there's nothing but soulless code behind the program. Is it because we suspect that an AI could one day be something more? Can there be a robot with consciousness?

About the hosts: Gregor Schmalzried is a freelance tech journalist, writing among others for Bayerischer Rundfunk and Brand Eins. Fritz Espenlaub is a trainee at Bayerischer Rundfunk and an author and presenter at the tech magazine 1E9.

In this episode:
00:00 Do we have empathy for AI?
3:58 The Google employee who believes an AI is conscious
8:45 Can there be an AI with consciousness?
13:18 Why does consciousness interest me?
14:45 Should robots be our servants?
16:53 AIs know nothing about being human
21:38 What have we done with AI?

Editorial team and contributors: David Beck, Cristina Cletiu, Chris Eckardt, Fritz Espenlaub, Marie Kilg, Mark Kleber, Hendrik Loven, Gudrun Riedl, Christian Schiffer, Gregor Schmalzried

Links and sources:
- Blake Lemoine's conversation with an AI https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
- MingaGPT https://mingagpt.de/
- Joanna Bryson: "Robots should be slaves" https://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Slaves-Book09.html
- Robot alive, human broken: study https://www.lmu.de/de/newsroom/newsuebersicht/news/roboter-lebt-mensch-kaputt.html
- Monster generator https://www.linkedin.com/posts/emollick_code-interpreter-feels-like-where-you-get-activity-7099232105829670912-ZTUv

Contact: We welcome questions and comments at podcast@br.de.

Support us: If you enjoy this podcast, we'd appreciate a rating on your favorite podcast platform. Subscribe to the KI-Podcast in the ARD Audiothek or wherever you listen to podcasts so you never miss an episode. And feel free to recommend us to others!
In this Plutopia podcast, we’re joined by Joanna Bryson, an academic expert in intelligence, both natural and artificial. In a conversation led by Wendy Grossman, we explore intelligence, both artificial…
Human Entities 2023: culture in the age of artificial intelligence
Seventh edition, Wednesday 17 May 2023

Authorship, Agency, and Moral Obligation
Joanna Bryson, Professor of Ethics and Technology in the Centre for Digital Governance at Hertie School in Berlin

How much of our individual human experience can we absorb into machine models when we use machine learning and a huge amount of data? Will AI become sentient? Sovereign? Ambitious? How will living with AI change our daily experience? This talk reflects natural, social, and computing sciences, describing both human and artificial intelligence, then governance, justice, and creativity. What we do matters, and we are obliged to ourselves and our planet to create and maintain good governance of all artefacts of our species.

Joanna Bryson
Joanna J Bryson, Professor of Ethics and Technology at Hertie School, is an academic recognised for broad expertise on intelligence, its nature, and its consequences. She advises governments, transnational agencies, and NGOs globally, particularly in AI policy. She holds two degrees each in psychology and AI (BA Chicago, MSc & MPhil Edinburgh, PhD MIT). Her work has appeared in venues ranging from reddit to the journal Science. She continues to research both the systems engineering of AI and the cognitive science of intelligence, with present focuses on the impact of technology on human cooperation, and new models of governance for AI and ICT.

https://www.joannajbryson.org
https://joanna-bryson.blogspot.com
https://twitter.com/j2bryson

Organised by CADA in partnership with Faculty of Fine Arts, University of Lisbon
In episode 68 of The Gradient Podcast, Daniel Bashir speaks to Professor Joanna Bryson.

Professor Bryson is Professor of Ethics and Technology at the Hertie School, where her research focuses on the impact of technology on human cooperation and AI/ICT governance. Professor Bryson has advised companies, governments, transnational agencies, and NGOs, particularly in AI policy. She is one of the few people doing this sort of work who actually has a PhD and work experience in AI, but also advanced degrees in the social sciences. She started her academic career though in the liberal arts, and publishes regularly in the natural sciences.

Have suggestions for future podcast guests (or other feedback)? Let us know here!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:35) Intro to Professor Bryson's work
* (06:37) Shifts in backgrounds expected of AI PhDs/researchers
* (09:40) Masters' degree in Edinburgh, Behavior-Based AI
* (11:00) PhD, differences between MIT's engineering focus and Edinburgh, systems engineering + AI
* (16:15) Comments on ways you can make contributions in AI
* (18:45) When definitions of "intelligence" are important
* (24:23) Non- and proto-linguistic aspects of intelligence, arguments about text as a description of human experience
* (31:45) Cognitive leaps in interacting with language models
* (37:00) Feelings of affiliation for robots, phenomenological experience in humans and (not) in AI systems
* (42:00) Language models and technological systems as cultural artifacts, expressing agency through machines
* (44:15) Capabilities development and moral patient status in AI systems
* (51:20) Prof. Bryson's perspectives on recent AI regulation
* (1:00:55) Responsibility and recourse, Uber self-driving crash
* (1:07:30) "Preparing for AGI," "Living with AGI," how to respond to recent AI developments
* (1:12:18) Outro

Links:
* Professor Bryson's homepage and Twitter
* Papers
* Systems AI
* Behavior Oriented Design, action selection, key differences in methodology/views between systems AI researchers and e.g. connectionists
* Agent architecture as object oriented design (1998)
* Intelligence by design: Principles of modularity and coordination for engineering complex adaptive agents (2001)
* Cognition
* Age-Related Inhibition and Learning Effects: Evidence from Transitive Performance (2013)
* Primate Errors in Transitive 'Inference': A Two-Tier Learning Model (2007)
* Skill Acquisition Through Program-Level Imitation in a Real-Time Domain
* Agent-Based Models as Scientific Methodology: A Case Study Analysing Primate Social Behaviour (2008, 2011)
* Social learning in a non-social reptile (Geochelone carbonaria) (2010)
* Understanding and Addressing Cultural Variation in Costly Antisocial Punishment (2014)
* Polarization Under Rising Inequality and Economic Decline (2020)
* Semantics derived automatically from language corpora contain human-like biases (2017)
* Evolutionary Psychology and Artificial Intelligence: The Impact of Artificial Intelligence on Human Behaviour (2020)
* Ethics/Policy
* Robots should be slaves (2010)
* Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems (2017)
* Of, For, and By the People: The Legal Lacuna of Synthetic Persons (2017)
* Patiency is not a virtue: the design of intelligent systems and systems of ethics (2018)
* Other writing
* Reflections on the EU's AI Act
* Is There an AI Cold War?
* Living with AGI
* One Day, AI Will Seem as Human as Anyone. What Then?
Get full access to The Gradient at thegradientpub.substack.com/subscribe
Today we're chatting with intelligence expert Joanna Bryson on:
The definition of intelligence, artificial and natural
AI as an efficiency tool
Putting AI products into ordinary product law
How do we perceive the global bi-polar AI situation?
Why Google and Apple don't patent that much
China, Russia, Iran, the EU - who, what, and why?

With degrees in social and computer sciences from Chicago, Edinburgh, and MIT, Bryson's research appears in venues from Reddit to Science, and she advises companies, governments, transnational agencies, and NGOs globally, particularly in AI policy. Since February 2020, Joanna has been a Professor of Ethics and Technology at Hertie School, a governance university in Berlin.

Enjoy, share and subscribe!
https://globari.org/podcast
https://www.linkedin.com/company/global-arena-research-institute/
https://globari.org/
On this episode, Kennedy sits down with Professor of Ethics and Technology at the Hertie School of Governance, Dr. Joanna Bryson, to discuss the world moving into the next realm that could be dominated by artificial intelligence. Joanna talks listeners through common fears around machines developing intelligence and self-awareness. She also discusses what it really means to be human, as well as some basic human qualities that AI machines simply could not replicate. Follow Kennedy on Twitter: @KennedyNation
Should we be scared of AI?

Looking for a link we mentioned? It's here: https://linktr.ee/philosophyforourtimes

Joanna Bryson discusses how she became interested in the ways different species use intelligence, how the typical tropes in science fiction misunderstand AI and the problem of anthropomorphism. In this interview, Bryson discusses the most pressing ethical challenges concerning the future of artificial intelligence and whether or not we can stabilize democracy when we have so much information about each other. She also touches on how the problems that arise with AI aren't always to do with the technology itself but with the social conditions that often produce it.

Joanna Bryson is professor at Hertie School in Berlin. She works on Artificial Intelligence, ethics and collaborative cognition. She advises governments, corporations, and other agencies globally, particularly on AI policy.

There are thousands of big ideas to discover at IAI.tv – videos, articles, and courses waiting for you to explore. Find out more: https://iai.tv/podcast-offers?utm_source=podcast&utm_medium=shownotes&utm_campaign=human-justice-and-machine-intelligence
Feedback: decentmakeover13@gmail.com

Episode Links:
Joanna's Website - https://www.joannajbryson.org
Twitter - https://twitter.com/j2bryson

Socials:
Instagram - @decentmakeover
Twitter - @decentmakeover

Podcast Info:
Apple Podcasts - https://apple.co/3NQhg6S
Amazon Music - https://amzn.to/3P66j2B
Google Podcasts - https://bit.ly/3am7rQc
Gaana - https://bit.ly/3ANS4v1
Is the brain just a computer? Are AIs conscious? Or could they be? Our experts debate.

Looking for a link we mentioned? It's here: https://linktr.ee/philosophyforourtimes

The idea of the brain as a computer is everywhere. So much so that we have forgotten it is a model and not the reality. It's a metaphor that has led some to believe that in the future they'll be uploaded to the digital ether and thereby achieve immortality. It's also a metaphor that garners billions of dollars in research funding every year. Yet researchers argue that when we dig down into our grey matter our biology is anything but algorithmic. And increasingly, critics contend that the model of the brain as computer is sending scientists (and their resources) nowhere fast.

Is our attraction to the idea of the brain as computer an accident of current human technology? Can we find a better metaphor that might lead to a new paradigm? Is there something about computers that has indeed identified the very same processes that are operating in our brains, or is it a profound mistake to imagine the organic can be reduced to technology?

There are thousands of big ideas to discover at IAI.tv – videos, articles, and courses waiting for you to explore. Find out more: https://iai.tv/podcast-offers?utm_source=podcast&utm_medium=shownotes&utm_campaign=models-metaphors-and-minds
More at https://www.philosophytalk.org/shows/could-robots-be-persons. As we approach the advent of autonomous robots, we must decide how we will determine culpability for their actions. Some propose creating a new legal category of “electronic personhood” for any sufficiently advanced robot that can learn and make decisions by itself. But do we really want to assign artificial intelligence legal—or moral—rights and responsibilities? Would it be ethical to produce and sell something with the status of a person in the first place? Does designing machines that look and act like humans lead us to misplace our empathy? Or should we be kind to robots lest we become unkind to our fellow human beings? Josh and Ray do the robot with Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance, and author of "The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation."
How can governments and international alliance partners use AI meaningfully for the common good, and where does AI also pose dangers? How does AI in Germany and the EU compare internationally? How will digital and intelligent technologies have changed our world and the labor market in 50 years? Mona Mann speaks with the American-British scientist Prof. Joanna Bryson of the Hertie School about ethics, natural and artificial intelligence, and her research focuses: AI/ICT governance and the impact of technology on human cooperation.
Today's guest is Dr. Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin. Previously a faculty member of the Computer Science department at the University of Bath, Joanna was one of nine experts nominated by Germany to the Global Partnership for Artificial Intelligence. Today, Joanna speaks about some of the business applications and implications of AI with specific use-cases. She also shares her perspective on how AI could be practically managed in terms of security and privacy and how these technologies might have ramifications in the future. Discover more AI use-cases, best practice guides, white papers, frameworks, and more at emerj.com/p1.
How do you see your work and the world in 10 years? This is part 2 with Joanna Bryson, Virginia Eubanks, Hessie Jones, Morgan Klaus, Osamu Saito, Rupal Srivastava, Gabriela de Queiroz and Buse Çetin.
Joanna Bryson invites us to think about how biology, animal, and machine intelligence may interconnect. She also questions the impact of technology on our lives, culture, and society.
As robots and artificial intelligence reach new heights, the relationship between humans and machines is getting closer. The sex tech industry is worth $30 billion annually and growing, as sex with synthetic companions is becoming far more widespread. But should it be? What are the social consequences? Some argue that sex robots will encourage bad behavior, perpetuate misogyny, and reinforce pornographic depictions of the opposite sex. Others say they can serve as a societal good for those who struggle with traditional relationships, and be employed as a safe outlet for otherwise toxic behavior. So in this episode of Agree to Disagree, we debate sex robots and their place in society. Arguing “YES” is Kate Devlin, computer scientist specializing in AI and human-computer interaction, author of "Turned On: Science, Sex, and Robots." Arguing "NO" is Joanna Bryson, PhD, professor at the Hertie School in Berlin, scholar of AI and ethics.
https://www.engati.com/
Engati is the world's leading no-code, multi-lingual chatbot + live chat platform.
https://open.spotify.com/show/3G0uQwPnQib22emRi9VhUg
Blog link: https://engati.com/blog | Subscribe now.
Check out CX Community page - https://www.engati.com/cx-community
And CX Content page - https://www.engati.com/cx-content

Joanna Bryson, Professor of Ethics and Technology at Hertie School, talks about how a machine will show the information that's fed into it, and why that means a machine is not necessarily right. She also tells us who is actually responsible for biases in AI algorithms.

Follow us on
Facebook: http://s.engati.com/157
LinkedIn: http://s.engati.com/158
Twitter: http://s.engati.com/156
Instagram: https://www.instagram.com/getengati/
https://www.engati.com/chatbots-for-itsm
https://www.engati.com/glossary/algorithms

#EngatiCX #digital #ai #aiethics #aialgorithms
Charles Radclyffe, founder of EthicsGrade and sponsors of Are You A Robot? discusses his excitement about the opening season of AYAR?, especially around the upcoming debates on how we can avoid the ‘creepy-line' surrounding the advancements in AI and machine learning. Have you thought about the use of cookies when doing your online shop? What about Facebook recognising you and your friends in their photo tagging feature? Listen to Charles and Demetrios discussing the potential ethical issues surrounding these technological advancements and more! This episode is brought to you by EthicsGrade, an ESG Ratings agency with a particular focus on Technology Governance, especially AI Ethics. You can follow EthicsGrade on Twitter @EthicsGrade and LinkedIn: https://www.linkedin.com/company/ethical-by-design/about/ Follow Charles on Twitter @dataphilosopher and LinkedIn: https://www.linkedin.com/in/radclyffe/ Follow Demetrios on Twitter @Dpbrinkm and LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect With Us: Join our Slack channel for more conversation about the big ethics issues that rise from AI: https://join.slack.com/t/are-you-a-robot/shared_invite/zt-h1871xb4-_XZZ33Ex5GZ5_KRP~xpBcg Follow Are You A Robot? on Twitter and Instagram @AreYouARobotPod Resources mentioned in this episode: Are You A Robot? Slack community: https://join.slack.com/t/are-you-a-robot/shared_invite/zt-h1871xb4-_XZZ33Ex5GZ5_KRP~xpBcg The Social Dilemma documentary: https://www.netflix.com/gb/title/81254224 Tim Berners-Lee: https://en.wikipedia.org/wiki/Tim_Berners-Lee GDPR: https://gdpr.eu/ Fawkes: https://sandlab.cs.uchicago.edu/fawkes/ Charles' Ted Talk: https://www.ted.com/talks/charles_radclyffe_three_steps_to_surviving_the_robot_revolution Space Odyssey film: https://www.imdb.com/title/tt0062622/ Charles' interview about Digital Health Passports: https://vimeo.com/472515849 In The Gleaming Light by Harriet Moore: http://www.hrmoore.com/#gleaming Robots Should be Slaves by Joanna Bryson: http://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Slaves-Book09.html Lewis Mumford: https://en.wikipedia.org/wiki/Lewis_Mumford MLops.community: https://mlops.community/
Sign up for the Great Courses today to receive the first month for free: https://bit.ly/great-courses-what-if Is A.I. the future of education? Could human teachers one day be replaced by robots? How can A.I. enhance the educational experience? What are the pros and cons of A.I. in the classroom? We tackle this highly charged subject with two global thought leaders in A.I. and Inclusive Design this week on WID. Professor Joanna Bryson is one of the world's most sought out speakers on all things AI and human behaviour, and Jutta Treviranus is a highly respected voice in the responsible integration of technology and society. They both join Peter and Richard to discuss "What If your teacher were A.I.?" Show notes: https://whatifshow.com/podcast Join hosts Peter Schmiedchen and Richard Garner with some of the world’s top thinkers in science, astronomy, technology, academia and futurism to ponder some of your most popular What If videos. Subscribe to this podcast and please rate & review us. Thanks for being part of the What If community as we embark on an epic exploration of possibilities. Join us online: Website https://whatifshow.com/podcast YouTube https://www.youtube.com/WhatIfScienceShow Facebook https://www.facebook.com/What.If.science Instagram https://www.instagram.com/whatif.show If you’d like to sponsor this podcast, please get in contact with our partners at Notorious - Sales@Notorious.llc Hosts: Peter Schmiedchen and Richard Garner Executive producer: Steve Hulford Supervising producer: Richard Garner Producers: Demid Tumanov and Stephen Henrik Channel supervisor: Raphael Faeh Social media: Saida Mirzalimova Research: Jay Moon Trailer: Evan Yue Artwork: Alex Griffith Production: Underknown Support the show: https://www.patreon.com/whatifshow See omnystudio.com/listener for privacy information.
Computing known as artificial intelligence sorts vast amounts of data — faces, our web browsing habits, even our gestures — into automated predictions used by companies and governments. The technology holds great promise for applications like diagnosing disease and preventing catastrophes. Yet it can exacerbate discrimination and inequality, and be used to erode democracy. Despite concerns about human rights and civil liberties, and about the activities of companies like Clearview AI and Palantir Technologies, European Union authorities are shaping a 21st-century industrial policy around artificial intelligence. That includes opening access to vast amounts of data — data from both the private and the public sectors — in the name of innovation and entrepreneurship. Critics warn that Europe could find itself in an untenable position, caught between upholding privacy ethics that have helped burnish its global reputation, and seeking to boost its competitiveness and security by promoting intrusive industries. We speak with four experts and legislators about how to keep A.I. safe for citizens: Samira Rafaela, a Dutch member of the European Parliament; Joanna Bryson, professor of Ethics and Technology at the Hertie School in Berlin; Sarah Chander, senior policy adviser for the European Digital Rights Association, EDRi; and Patrick Breyer, a member of the European Parliament who represents the German Pirates in alliance with the Greens. “Beethoven Symphony No. 9 in D minor, Op. 125” by Papalin is licensed under CC by 3.0. Wael Koudaih kindly contributed his track “Thawra” to this episode. You’ll find more of his music under the name Rayess Bek. Visit our website for episode art and for more EU Scream.Support the show (https://euscream.com/donate/)
[This is the text of a lecture that I delivered at Tilburg University on the 24th of September 2019. It was delivered as part of the 25th Anniversary celebrations for TILT (Tilburg Institute for Law, Technology and Society). My friend and colleague Sven Nyholm was the discussant for the evening. The lecture is based on my longer academic article ‘Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism' but was written from scratch and presents some key arguments in a snappier and clearer form. I also include a follow up section responding to criticisms from the audience on the evening of the lecture. My thanks to all those involved in organizing the event (Aviva de Groot, Merel Noorman and Silvia de Conca in particular). You can download an audio version of this lecture, minus the reflections and follow ups, here or listen to it above]

1. Introduction

My lecture this evening will be about the conditions under which we should welcome robots into our moral communities. Whenever I talk about this, I am struck by how much my academic career has come to depend upon my misspent youth for its inspiration. Like many others, I was obsessed with science fiction as a child, and in particular with the representation of robots in science fiction. I had two favourite, fictional, robots. The first was R2D2 from the original Star Wars trilogy. The second was Commander Data from Star Trek: the Next Generation. I liked R2D2 because of his* personality - courageous, playful, disdainful of authority - and I liked Data because the writers of Star Trek used him as a vehicle for exploring some important philosophical questions about emotion, humour, and what it means to be human.

In fact, I have to confess that Data has had an outsized influence on my philosophical imagination and has featured in several of my academic papers. Part of the reason for this was practical. When I grew up in Ireland we didn't have many options to choose from when it came to TV. We had to make do with what was available and, as luck would have it, Star Trek: TNG was on every day when I came home from school. As a result, I must have watched each episode of its 7-season run multiple times.

One episode in particular has always stayed with me. It was called ‘Measure of a Man'. In it, a scientist from the Federation visits the Enterprise because he wants to take Data back to his lab to study him. Data, you see, is a sophisticated human-like android, created by a lone scientific genius, under somewhat dubious conditions. The Federation scientist wants to take Data apart and see how he works with a view to building others like him. Data, unsurprisingly, objects. He argues that he is not just a machine or piece of property that can be traded and disassembled to suit the whims of human beings. He has his own, independent moral standing. He deserves to be treated with dignity.

But how does Data prove his case? A trial ensues and evidence is given on both sides. The prosecution argue that Data is clearly just a piece of property. He was created not born. He doesn't think or see the world like a normal human being (or, indeed, other alien species). He even has an ‘off switch'. Data counters by giving evidence of the rich relationships he has formed with his fellow crew members and eliciting testimony from others regarding his behaviour and the interactions they have with him. Ultimately, he wins the case.
The court accepts that he has moral standing.

Now, we can certainly lament the impact that science fiction has on the philosophical debate about robots. As David Gunkel observes in his 2018 book Robot Rights:

“[S]cience fiction already — and well in advance of actual engineering practice — has established expectations for what a robot is or can be. Even before engineers have sought to develop working prototypes, writers, artists, and filmmakers have imagined what robots do or can do, what configurations they might take, and what problems they could produce for human individuals and communities.” (Gunkel 2018, 16)

He continues, noting that this is a “potential liability” because:

“science fiction, it is argued, often produces unrealistic expectations for and irrational fears about robots that are not grounded in or informed by actual science.” (Gunkel 2018, 18)

I certainly heed this warning. But, nevertheless, I think the approach taken by the TNG writers in the episode ‘Measure of a Man' is fundamentally correct. Even if we cannot currently create a being like Data, and even if the speculation is well in advance of the science, they still give us the correct guide to resolving the philosophical question of when to welcome robots into our moral community. Or so, at least, I shall argue in the remainder of this lecture.

2. Tribalism and Conflict in Robot Ethics

Before I get into my own argument, let me say something about the current lay of the land when it comes to this issue. Some of you might be familiar with the famous study by the social psychologist Muzafer Sherif. It was done in the early 1950s at a summer camp in Robber's Cave, Oklahoma. Suffice to say, it is one of those studies that wouldn't get ethics approval nowadays. Sherif and his colleagues were interested in tribalism and conflict. They wanted to see how easy it would be to get two groups of 11-year old boys to divide into separate tribes and go to war with one another. It turned out to be surprisingly easy. By arbitrarily separating the boys into two groups, giving them nominal group identity (the ‘Rattlers' and the ‘Eagles'), and putting them into competition with each other, Sherif and his research assistants sowed the seeds for bitter and repeated conflict.

The study has become a classic, repeatedly cited as evidence of how easy it is for humans to get trapped in intransigent group conflicts. I mention it here because, unfortunately, it seems to capture what has happened with the debate about the potential moral standing of robots. The disputants have settled into two tribes. There are those that are ‘anti' the idea; and there are those that are ‘pro' the idea. The members of these tribes sometimes get into heated arguments with one another, particularly on Twitter (which, admittedly, is a bit like a digital equivalent of Sherif's summer camp).

Those that are ‘anti' the idea would include Noel Sharkey, Amanda Sharkey, Deborah Johnson, Aimee van Wynsberghe and the most recent lecturer in this series, Joanna Bryson. They cite a variety of reasons for their opposition. The Sharkeys, I suspect, think the whole debate is slightly ridiculous because current robots clearly lack the capacity for moral standing, and debating their moral standing distracts from the important issues in robot ethics - namely stopping the creation and use of robots that are harmful to human well-being. Deborah Johnson would argue that since robots can never experience pain or suffering they will never have moral standing.
Van Wynsberghe and Bryson are maybe a little different and lean more heavily on the idea that even if it were possible to create robots with moral standing — a possibility that Bryson at least is willing to concede — it would be a very bad idea to do so because it would cause considerable moral and legal disruption.

Those that are pro the idea would include Kate Darling, Mark Coeckelbergh, David Gunkel, Erica Neely, and Daniel Estrada. Again, they cite a variety of reasons for their views. Darling is probably the weakest on the pro side. She focuses on humans and thinks that even if robots themselves lack moral standing we should treat them as if they had moral standing because that would be better for us. Coeckelbergh and Gunkel are more provocative, arguing that in settling questions of moral standing we should focus less on the intrinsic capacities of robots and more on how we relate to them. If those relations are thick and meaningful, then perhaps we should accept that robots have moral standing. Erica Neely proceeds from a principle of moral precaution, arguing that even if we are unsure of the moral standing of robots we should err on the side of over-inclusivity rather than under-inclusivity when it comes to this issue: it is much worse to exclude a being with moral standing than to include one without. Estrada is almost the polar opposite of Bryson, welcoming the moral and legal disruption that embracing robots would entail because it would loosen the stranglehold of humanism on our ethical code.

To be clear, this is just a small sample of those who have expressed an opinion about this topic. There are many others that I just don't have time to discuss. I should, however, say something here about this evening's discussant, Sven, and his views on the matter. I had the fortune of reading a manuscript of Sven's forthcoming book Humans, Robots and Ethics. It is an excellent and entertaining contribution to the field of robot ethics and in it Sven shares his own views on the moral standing of robots. I'm sure he will explain them later on but, for the time being, I would tentatively place him somewhere near Kate Darling on this map: he thinks we should be open to the idea of treating robots as if they had moral standing, but not because of what the robots themselves are but because of what respecting them says about our attitudes to other humans.

And what of myself? Where do I fit in all of this? People would probably classify me as belonging to the pro side. I have argued that we should be open to the idea that robots have moral standing. But I would much prefer to transcend this tribalistic approach to the issue. I am not an advocate for the moral standing of robots. I think many of the concerns raised by those on the anti side are valid. Debating the moral standing of robots can seem, at times, ridiculous and a distraction from other important questions in robot ethics; and accepting them into our moral communities will, undoubtedly, lead to some legal and moral disruption (though I would add that not all disruption is a bad thing). That said, I do care about the principles we should use to decide questions of moral status, and I think that those on the anti side of the debate sometimes use bad arguments to support their views. This is why, in the remainder of this lecture, I will defend a particular approach to settling the question of the moral standing of robots.
I do so in the hope that this can pave the way to a more fruitful and less tribalistic debate.

In this sense, I am trying to return to what may be the true lesson of Sherif's famous experiment on tribalism. In her fascinating book The Lost Boys: Inside Muzafer Sherif's Robbers Cave Experiment, Gina Perry has revealed the hidden history behind Sherif's work. It turns out that Sherif tried to conduct the exact same experiment as he did in Robber's Cave one year before in Middle Grove, New York. It didn't work out. No matter what the experimenters did to encourage conflict, the boys refused to get sucked into it. Why was this? One suggestion is that at Middle Grove, Sherif didn't sort the boys into two arbitrary groups as soon as they arrived. They were given the chance to mingle and get to know one another before being segregated. This initial intermingling may have inoculated them from tribalism. Perhaps we can do the same thing with philosophical dialogue? I live in hope.

3. In Defence of Ethical Behaviourism

The position I wish to defend is something I call ‘ethical behaviourism'. According to this view, the behavioural representations of another entity toward you are a sufficient ground for determining their moral status. Or, to put it slightly differently, how an entity looks and acts is enough to determine its moral status. If it looks and acts like a duck, then you should probably treat it like you treat any other duck.

Ethical behaviourism works through comparisons. If you are unsure of the moral status of a particular entity — for present purposes this will be a robot but it should be noted that ethical behaviourism has broader implications — then you should compare its behaviours to those of another entity that is already agreed to have moral status — a human or an animal. If the robot is roughly performatively equivalent to that other entity, then it too has moral status. I say "roughly" since no two entities are ever perfectly equivalent. If you compared two adult human beings you would spot performative differences between them, but this wouldn't mean that one of them lacks moral standing as a result. The equivalence test is an inexact one, not an exact one.

There is nothing novel in ethical behaviourism. It is, in effect, just a moral variation of the famous Turing Test for machine intelligence. Where Turing argued that we should assess intelligence on the basis of behaviour, I am arguing that we should determine moral standing on the basis of behaviour. It is also not a view that is original to me. Others have defended similar views, even if they haven't explicitly labelled it as such.

Despite the lack of novelty, ethical behaviourism is easily misunderstood and frequently derided. So let me just clarify a couple of points. First, note that it is a practical and epistemic thesis about how we can settle questions of moral standing; it is not an abstract metaphysical thesis about what it is that grounds moral standing. So, for example, someone could argue that the capacity to feel pain is the metaphysical grounding for moral status and that this capacity depends on having a certain mental apparatus. The ethical behaviourist can agree with this. They will just argue that the best evidence we have for determining whether an entity has the capacity to feel pain is behavioural. Furthermore, ethical behaviourism is agnostic about the broader consequences of its comparative tests.
To say that one entity should have the same moral standing as another entity does not mean both are entitled to a full set of legal and moral rights. That depends on other considerations. A goat could have moral standing, but that doesn't mean it has the right to own property. This is important: when I argue that we should apply this approach to robots, I am not thereby endorsing a broader claim that we should grant robots legal rights or treat them like adult human beings. That depends on who or what the robot is being compared to.

So what's the argument for ethical behaviourism? I have offered different formulations of this, but for this evening's lecture I suggest that it consists of three key propositions or premises.

(P1) The most popular criteria for moral status are dependent on mental states or capacities, e.g. theories focused on sentience, consciousness, having interests, agency, and personhood.

(P2) The best evidence — and oftentimes the only practicable evidence — for the satisfaction of these criteria is behavioural.

(P3) Alternative alleged grounds of moral status, or criteria for determining moral status, either fail to trump or dislodge the sufficiency of the behavioural evidence.

Therefore, ethical behaviourism is correct: behaviour provides a sufficient basis for settling questions of moral status.

I take it that the first premise of this argument is uncontroversial. Even if you think there are other grounds for moral status, I suspect you agree that an entity with sentience or consciousness (etc.) has some kind of moral standing. The second premise is more controversial but is, I think, undeniable. It's a trite observation but I will make it anyway: we don't have direct access to one another's minds. I cannot crawl inside your head and see if you really are experiencing pain or suffering. The only thing I have to go on is how you behave and react to the world. This is true, by the way, even if I can scan your brain and see whether the pain-perceiving part of it lights up. This is because the only basis we have for verifying the correlations between functional activity in the brain and mental states is behavioural. What I mean is that scientists ultimately verify those correlations by asking people in the brain scanners what they are feeling. So all premise (2) is saying is that if the most popular theories of moral status are to work in practice, it can only be because we use behavioural evidence to guide their application.

That brings us to premise (3): that all other criteria fail to dislodge the importance of behavioural evidence. This is the most controversial one. Many people seem to passionately believe that there are other ways of determining moral status, and indeed they argue that relying on behavioural evidence would be absurd. Consider these two recent Twitter comments on an article I wrote about ethical behaviourism and how it relates to animals and robots:

First comment: "[This is] Errant #behaviorist #materialist nonsense…Robots are inanimate even if they imitate animal behavior. They don't want or care about anything. But knock yourself out. Put your toaster in jail if it burns your toast."

Second comment: "If I give a hammer a friendly face so some people feel emotionally attached to it, it still remains a tool #AnthropomorphicFallacy"

These are strong statements, but they are not unusual. I encounter this kind of criticism quite frequently. But why? Why are people so resistant to ethical behaviourism?
Why do they think that there must be something more to how we determine moral status? Let's consider some of the most popular objections.

4. Objections and Replies

In a recent paper, I suggested that there were seven (more, depending on how you count) major objections to ethical behaviourism. I won't review all seven here, but I will consider four of the most popular ones. Each of these objections should be understood as an attempt to argue that behavioural evidence by itself cannot suffice for determining moral standing. Other evidence matters as well and can 'defeat' the behavioural evidence.

(A) The Material Cause Objection

The first objection is that the ontology of an entity makes a difference to its moral standing. To adopt the Aristotelian language, we can say that the material cause of an entity (i.e. what it is made up of) matters more than behaviour when it comes to moral standing. So, for example, someone could argue that robots lack moral standing because they are not biological creatures. They are not made from the same 'wet' organic components as human beings or animals. Even if they are performatively equivalent to human beings or animals, this ontological difference scuppers any claim they might have to moral standing.

I find this objection unpersuasive. It smacks to me of biological mysterianism. Why exactly does being made of particular organic material make such a crucial difference? Imagine if your spouse, the person you live with every day, was suddenly revealed to be an alien from the Andromeda galaxy. Scientists conduct careful tests and determine that they are not a carbon-based lifeform. They are made from something different, perhaps silicon. Despite this, they still look and act in the same way as they always have (albeit now with some explaining to do). Would the fact that they are made of different stuff mean that they no longer warrant any moral standing in your eyes? Surely not. Surely the behavioural evidence suggesting that they still care about you, and still have the mental capacities you used to associate with moral standing, would trump the new evidence you have regarding their ontology. I know non-philosophers dislike thought experiments of this sort, finding them to be slightly ridiculous and far-fetched. Nevertheless, I do think they are vital in this context because they suggest that behaviour does all the heavy lifting when it comes to assessing moral standing. In other words, behaviour matters more than matter. This is also, incidentally, one reason why it is wrong to say that ethical behaviourism is a 'materialist' view: ethical behaviourism is actually agnostic regarding the ontological instantiation of the capacities that ground moral status; it is concerned only with the evidence that is sufficient for determining their presence.

All that said, I am willing to make one major concession to the material cause objection. I will concede that ontology might provide an alternative, independent ground for determining the moral status of an entity. Thus, we might accept that an entity that is made from the right biological stuff has moral standing, even if it lacks the behavioural sophistication we usually require for moral standing. So, for example, someone in a permanent coma might have moral standing because of what they are made of, and not because of what they can do. Still, all this shows is that being made of the right stuff is an independent sufficient ground for moral standing, not that it is a necessary ground for moral standing.
The latter is what would need to be proved to undermine ethical behaviourism.

(B) The Efficient Cause Objection

The second objection is that how an entity comes into existence makes a difference to its moral standing. To continue the Aristotelian theme, we can say that the efficient cause of an entity's existence is more important than its unfolding reality. This is an objection that the philosopher Michael Hauskeller hints at in his work. Hauskeller doesn't focus on moral standing per se, but does focus on when we can be confident that another entity cares for us or loves us. He concedes that behaviour seems like the most important thing when addressing this issue — what else could caring be apart from caring behaviour? — but then resiles from this by arguing that how the being came into existence can undercut the behavioural evidence. So, for example, a robot might act as if it cares about you, but when you learn that the robot was created and manufactured by a team of humans to act as if it cares for you, then you have reason to doubt the sincerity of its behaviour.

It could be that what Hauskeller is getting at here is that behavioural evidence can often be deceptive and misleading. If so, I will deal with this concern in a moment. But it could also be that he thinks that the mere fact that a robot was programmed and manufactured, as opposed to being evolved and developed, makes a crucial difference to moral standing. If that is what he is claiming, then it is hard to see why we should take it seriously. Again, imagine if your spouse told you that they were not conceived and raised in the normal way. They were genetically engineered in a lab and then carefully trained and educated. Having learned this, would you take a new view of their moral standing? Surely not. Surely, once again, how they actually behave towards you — and not how they came into existence — would be what ultimately mattered. We didn't deny the first in vitro baby moral standing simply because she came into existence in a different way from ordinary human beings. The same principle should apply to robots.

Furthermore, if this is what Hauskeller is arguing, it would provide us with an unstable basis on which to make crucial judgments of moral standing. After all, the differences between humans and robots with respect to their efficient causes are starting to break down. Increasingly, robots are not being programmed and manufactured from the top down to follow specific rules. They are instead given learning algorithms and then trained on different datasets, with the process sometimes being explicitly modelled on evolution and childhood development. Similarly, humans are increasingly being designed and programmed from the top down, through artificial reproduction, embryo selection and, soon, genetic engineering. You may object to all this tinkering with the natural processes of human development and conception. But I think you would be hard pressed to deny a human that came into existence as a result of these processes the moral standing you ordinarily give to other human beings.

(C) The Final Cause Objection

The third objection is that the purposes an entity serves, and how it is expected to fulfil those purposes, make a difference to its moral standing. This is an objection that Joanna Bryson favours in her work. In several papers, she has argued that because robots will be designed to fulfil certain purposes on our behalf (i.e.
they will be designed to serve us) and because they will be owned and controlled by us in the process, they should not have moral standing. Now, to be fair, Bryson is more open to the possibility of robot moral standing than most. She has said, on several occasions, that it is possible to create robots that have moral standing. She just thinks that this should not happen, in part because they will be owned and controlled by us, and because they will be (and perhaps should be) designed to serve our ends.

I don't think there is anything in this that dislodges or upsets ethical behaviourism. For one thing, I find it hard to believe that the fact that an entity has been designed to fulfil a certain purpose should make a crucial difference to its moral standing. Suppose, in the future, human parents can genetically engineer their offspring to fulfil certain specific ends. For example, they can select genes that will guarantee (with the right training regime) that their child will be a successful athlete (this is actually not that dissimilar to what some parents try to do nowadays). Suppose they succeed. Would this fact alone undermine the child's claim to moral standing? Surely not, and surely the same standard should apply to a robot. If it is performatively equivalent to another entity with moral standing, then the mere fact that it has been designed to fulfil a specific purpose should not affect its moral standing.

Related to this, it is hard to see why the fact that we might own and control robots should make a critical difference to their moral standing. If anything, this inverts the proper order of moral justification. The fact that a robot looks and acts like another entity that we believe to have moral standing should cause us to question our approach to ownership and control, not vice versa. We once thought it was okay for humans to own and control other humans. We were wrong to think this because it ignored the moral standing of those other humans.

That said, there are nuances here. Many people think that animals have some moral standing (i.e. that we need to respect their welfare and well-being) but that it is not wrong to own them or attempt to control them. The same approach might apply to robots if they are being compared to animals. This is the crucial point about ethical behaviourism: the ethical consequences of accepting that a robot is performatively equivalent to another entity with moral standing depend, crucially, on who or what that other entity is.

(D) The Deception Objection

The fourth objection is that ethical behaviourism cannot work because it is too easy to be deceived by behavioural cues. A robot might look and act like it is in pain, but this could just be a clever trick, used by its manufacturer, to foster false sympathy. This is probably the most important criticism of ethical behaviourism. It is what I think lurks behind the claim that ethical behaviourism is absurd and must be resisted.

It is well known that humans have a tendency toward hasty anthropomorphism. That is, we tend to ascribe human-like qualities to features of our environment without proper justification. We anthropomorphise the weather, our computers, the trees and the plants, and so forth. It is easy to 'hack' this tendency toward hasty anthropomorphism. As social roboticists know, putting a pair of eyes on a robot can completely change how a human interacts with it, even if the robot cannot see anything.
People worry, consequently, that ethical behaviourism is easily exploited by nefarious technology companies.

I sympathise with the fear that motivates this objection. It is definitely true that behaviour can be misleading or deceptive. We are often misled by the behaviour of our fellow humans. To quote Shakespeare, someone can 'smile and smile and be a villain'. But what is the significance of this fact when it comes to assessing moral status? To me, the significance is that it means we should be very careful when assessing the behavioural evidence that is used to support a claim about moral status. We shouldn't extrapolate too quickly from one behaviour. If a robot looks and acts like it is in pain (say), that might provide some warrant for thinking it has moral status, but we should examine its behavioural repertoire in more detail. It might emerge that other behaviours are inconsistent with the hypothesis that it feels pain or suffering.

The point here, however, is that we are always using other behavioural evidence to determine whether the initial behavioural evidence was deceptive or misleading. We are not relying on some other kind of information. Thus, for example, I think it would be a mistake to conclude that a robot cannot feel pain, even though it performs as if it does, because the manufacturer of the robot tells us that it was programmed to do this, or because some computer engineer can point to some lines of code that are responsible for the pain performance. That evidence by itself — in the absence of other countervailing behavioural evidence — cannot undermine the behavioural evidence suggesting that the robot does feel pain. Think about it like this: imagine if a biologist came to you and told you that evolution had programmed the pain response into humans in order to elicit sympathy from fellow humans. What's more, imagine if a neuroscientist came to you and told you she could point to the exact circuit in the brain that is responsible for the human pain performance (and maybe even intervene in and disrupt it). What they say may well be true, but it wouldn't mean that the behavioural evidence suggesting that your fellow humans are in pain can be ignored.

This last point is really the crucial bit. It is what is most distinctive about the perspective of ethical behaviourism. The tendency to misunderstand it, ignore it, or skirt around it is why I think many people on the 'anti' side of the debate make bad arguments.

5. Implications and Conclusions

That's all I will say in defence of ethical behaviourism this evening. Let me conclude by addressing some of its implications and heading off some potential misunderstandings.

First, let me re-emphasise that ethical behaviourism is about the principles we should apply when assessing the moral standing of robots. In defending it, I am not claiming that robots currently have moral standing or, indeed, that they will ever have moral standing. I think this is possible, indeed probable, but I could be wrong. The devil is going to be in the detail of the behavioural tests we apply (just as it is with the Turing test for intelligence).

Second, there is nothing in ethical behaviourism that suggests we ought to create robots that cross the performative threshold to moral standing. It could be, as people like Bryson and Van Wynsberghe argue, that this is a very bad idea: that it will be too disruptive of existing moral and legal norms.
What ethical behaviourism does suggest, however, is that there is an ethical weight to the decision to create human-like and animal-like robots that may be underappreciated by robot manufacturers.

Third, acknowledging the potential risks, there are also potential benefits to creating robots that cross the performative threshold. Ethical behaviourism can help to reveal a value to relationships with robots that is otherwise hidden. If I am right, then robots can be genuine objects of moral affection, friendship and love, under the right conditions. In other words, just as there are ethical risks to creating human-like and animal-like robots, there are also ethical rewards, and these tend to be ignored, ridiculed or sidelined in the current debate.

Fourth, and related to the previous point, the performative threshold that robots have to cross in order to unlock these different kinds of value might vary quite a bit. The performative threshold needed to attain basic moral standing might be quite low; the performative threshold needed to say that a robot can be a friend or a partner might be substantially higher. A robot might have to do relatively little to convince us that it should be treated with moral consideration, but it might have to do a lot to convince us that it is our friend.

These are topics that I have explored in greater detail in some of my papers, but they are also topics that Sven has explored at considerable length. Indeed, several chapters of his forthcoming book are dedicated to them. So, on that note, it is probably time for me to shut up and hand over to him and see what he has to say about all of this.

Reflections and Follow-Ups

After I delivered the above lecture, my colleague and friend Sven Nyholm gave a response and there were some questions and challenges from the audience. I cannot remember every question that was raised, but I thought I would respond to a few that I can remember.

1. The Randomisation Counterexample

One audience member (it was Nathan Wildman) presented an interesting counterexample to my claim that other kinds of evidence don't defeat or undermine the behavioural evidence for moral status. He argued that we could cook up a possible scenario in which our knowledge of the origins of certain behaviours did cause us to question whether that evidence was sufficient for moral status.

He gave the example of a chatbot that was programmed using a randomisation technique. The chatbot would generate text at random (perhaps based on some source dataset). Most of the time the text is gobbledygook, but on maybe one occasion it just happens to have a perfectly intelligible conversation with you. In other words, whatever is churned out by the randomisation algorithm happens to perfectly coincide with what would be intelligible in that context (like picking up a meaningful book in Borges's Library of Babel). This might initially cause you to think it has some significant moral status, but if the computer programmer came along and told you about the randomisation process underlying the programming, you would surely change your opinion. So, on this occasion, it looks like information about the causal origins of the behaviour makes a difference to moral status.

Response: This is a clever counterexample, but I think it overlooks two critical points. First, it ignores the point I made towards the end of my lecture about avoiding hasty anthropomorphisation. I think we shouldn't extrapolate too much from just one interaction with a robot.
We should conduct a more thorough investigation of the robot's (or in this case the chatbot's) behaviours. If the intelligible conversation was just a one-off, then we will quickly be disabused of our belief that it has moral status. But if it turns out that the intelligible conversation was not a one-off, then I don't think the evidence regarding the randomisation process would have any such effect. The computer programmer could shout and scream as much as he/she likes about the randomisation algorithm, but I don't think this would suffice to undermine the consistent behavioural evidence. This links to a second, and perhaps deeper, metaphysical point I would like to make: we don't really know what the true material instantiation of the mind is (if it is indeed material). We think the brain and its functional activity is pretty important, but we will probably never have a fully satisfactory theory of the relationship between matter and mind. This is the core of the hard problem of consciousness. Given this, it doesn't seem wise or appropriate to discount the moral status of this hypothetical robot just because it is built on a randomisation algorithm. Indeed, if such a robot existed, it might give us reason to think that randomisation was one of the ways in which a mind could be functionally instantiated in the real world.

I should say that this response ignores the role of moral precaution in assessing moral standing. If you add a principle of moral precaution to the mix, then it may be wrong to favour a more thorough behavioural test. This is something I discuss a bit in my article on ethical behaviourism.

2. The Argument Confuses How We Know X Is Valuable with What Makes X Actually Valuable

One point that Sven stressed in his response, and which he makes elsewhere too, is that my argument elides or confuses two separate things: (i) how we know whether something is of value and (ii) what it is that makes it valuable. Another way of putting it: I provide a decision procedure for deciding who or what has moral status, but I don't thereby specify what it is that makes them have moral status. It could be that the capacity to feel pain is what makes someone have moral standing and that we know someone feels pain through their behaviour, but this doesn't mean that they have moral standing because of their behaviour.

Response: This is probably a fair point. I may on occasion elide these two things. But my feeling is that this is a 'feature' rather than a 'bug' in my account. I'm concerned with how we practically assess and apply principles of moral standing in the real world, and not so much with what it is that metaphysically undergirds moral standing.

3. Proxies for Behaviour versus Proxies for Mind

Another comment (and I apologise for not remembering who gave it) was that on my theory behaviour is important, but only because it is a proxy for something else, namely some set of mental states or capacities. This is similar to the point Sven is making in his criticism. If that's right, then I am wrong to assume that behaviour is the only (or indeed the most important) proxy for mental states. Other kinds of evidence serve as proxies for mental states. The example was given of legal trials in which the prosecution is trying to prove what the mental state of the defendant was at the time of an offence. They don't just rely on behavioural evidence. They also rely on other kinds of forensic evidence to establish this.

Response: I don't think this is true, and this gets to a deep feature of my theory.
To take the criminal trial example, I don't think it is true to say that we use other kinds of evidence as proxies for mental states. I think we use them as proxies for behaviour, which we then use as proxies for mental states. In other words, the actual order of inference goes:

Other evidence → behaviour → mental state

And not:

Other evidence → mental state

This is the point I was getting at in my talk when I spoke about how we make inferences from functional brain activity to mental states. I believe that when we draw a link between brain activity and a mental state, what we are really doing is this:

Brain state → behaviour → mental state

And not:

Brain state → mental state

Now, it is, of course, true to say that sometimes scientists think we can make this second kind of inference. For example, purveyors of brain-based lie detection tests (and, indeed, other kinds of lie detection test) try to draw a direct line of inference from a brain state to a mental state, but I would argue that this is only because they have previously verified their testing protocol by following the "brain state → behaviour → mental state" route and confirming that it is reliable across multiple tests. This gives them the confidence to drop the middle step on some occasions, but ultimately this is all warranted (if it is, in fact, warranted – brain-based lie detection is controversial) because the scientists first took the behavioural step. To undermine my view, you would have to show that it is possible to cut out the behavioural step in this inference pattern. I don't think this can be done, but perhaps I can be proved wrong.

This is perhaps the most metaphysical aspect of my view.

4. Default Settings and Practicalities

Another point that came up in conversation with Sven, Merel Noorman and Silvia de Conca had to do with the default assumptions we are likely to bring when dealing with robots, and how this affects the practicalities of robots being accepted into the moral circle. In other words, even if I am right in some abstract, philosophical sense, will anyone actually follow the behavioural test I advocate? Won't there be a lot of resistance to it in reality?

Now, as I mentioned in my lecture, I am not an activist for robot rights or anything of the sort. I am interested in the general principles we should apply when settling questions of moral status, not with whether a particular being, such as a robot, has acquired moral status. That said, implicit views about the practicalities of applying the ethical behaviourist test may play an important role in some of the arguments I am making.

One example of this has to do with the 'default' assumption we have when interpreting the behaviour of humans and animals vis-à-vis robots. We tend to approach humans and animals with an attitude of good faith, i.e. we assume each of their outward behaviours is a sincere representation of their inner state of mind. It's only if we receive contrary evidence that we will start to doubt the sincerity of the behaviour.

But what default assumption do we have when confronting robots? It seems plausible to suggest that most people will approach them with an attitude of bad faith. They will assume that their behaviours are representative of nothing at all and will need a lot of evidence to convince them that those behaviours should be granted some weight. This suggests that (a) not all behavioural evidence is counted equally and (b) it might be very difficult, in practice, for robots to be accepted into the moral circle.
Response: I don't see this as a criticism of ethical behaviourism but, rather, a warning to anyone who wishes to promote it. In other words, I accept that people will resist ethical behaviourism and may treat robots with greater suspicion than human or animal agents. One of the key points of this lecture, and of the longer academic article I wrote about the topic, was to address this suspicion and skepticism. Nevertheless, the fact that there may be these practical difficulties does not mean that ethical behaviourism is incorrect. In this respect, it is worth noting that Turing was acutely aware of this problem when he originally formulated his 'Imitation Game' test. The reason why the test was purely text-based in its original form was to prevent human-centric biases affecting its operation.

5. Ethical Mechanicism vs Ethical Behaviourism

After I posted this article, Natesh Ganesh posted a critique of my handling of the deception objection on Twitter. He made two interesting points. First, he argued that the thought experiment I used to dismiss the deception objection was misleading and circular. If a scientist revealed the mechanisms underlying my own pain performances, I would have no reason to doubt that the pain was genuine, since I already know that someone with my kind of neural circuitry can experience pain. If they revealed the mechanisms underlying a robot's pain performances, things would be different, because I do not yet have a reason to think that a being with that kind of mechanism can experience genuine pain. As a result, the thought experiment is circular, because only somebody who already accepted ethical behaviourism would be so dismissive of the mechanistic evidence. Here's how Natesh expresses the point:

"the analogy in the last part [the response to the deception objection] seems flawed. Showing me the mechanisms of pain in entities (like humans) who we share similar mechanisms with & agree have moral standing is different from showing me the mechanisms of entities (like robots) whose moral standing we are trying to determine. Denying experience of pain in the 1st simply because I now know the circuitry would imply denying your own pain & hence moral standing. But accepting/ denying the 2nd if its a piece of code implicitly depends on whether you already accept/deny ethical behaviorism. It is just circular to appeal to that example as evidence."

He then follows up with a second point (implicit in what was just said) about the importance of mechanical similarities between entities when it comes to assessing moral standing:

"I for one am more likely to [believe] a robot can experience pain if it shows the behavior & the manufacturer opened it up & showed me the circuitry and if that was similar to my own (different material perhaps) I am more likely to accept the robot experiences pain. In this case once again I needed machinery on top of behavior."

What I would say here is that Natesh, although not completely dismissive of the importance of behaviour to assessing moral standing, is a fan of ethical mechanicism, not ethical behaviourism. He thinks you must have mechanical similarity (equivalence?)
before you can conclude that two entities share moral standing.

Response: On the charge of circularity, I don't think this is quite fair. The thought experiment I propose when responding to the deception objection is, like all thought experiments, intended to be an intuition pump. The goal is to imagine a situation in which you could describe and intervene in the mechanical underpinning of a pain performance with great precision (be it a human pain performance or otherwise) and ask whether the mere fact that you could describe the mechanism in detail, or intervene in it, would make a difference to the entity's moral standing. My intuitions suggest it wouldn't make a difference, irrespective of the details of the mechanism (this is the point I make, above, in relation to the example given by Nathan Wildman about the robot whose behaviour is the result of a random-number generator programme). Perhaps other people's intuitions are pumped in a different direction. That can happen, but it doesn't mean the thought experiment is circular.

What about the importance of mechanisms in addition to behaviour? This is something I address in more detail in the academic paper. I have two thoughts about it. First, I could just bite the bullet and agree that the underlying mechanisms must be similar too. This would just add an additional similarity test to the assessment of moral status. There would then be further questions as to how similar the mechanisms must be. Is it enough if they are, roughly, functionally similar, or must they have the exact same sub-components and processes? If the former, then it still seems possible in principle for roboticists to create a functionally similar underlying mechanism, and this could then ground moral standing for robots.

Second, despite this, I would still push back against the claim that similar underlying mechanisms are necessary. This strikes me as being just a conservative prejudgment rather than a good reason for denying moral status to behaviourally equivalent entities. Why are we so confident that only entities with our neurological mechanisms (or something very similar) can experience pain (or instantiate the other mental properties relevant to moral standing)? Or, to put it less controversially, why should we be so confident that mechanical similarity undercuts behavioural similarity? If there is an entity that looks and acts like it is in pain (or has interests, a sense of personhood, agency etc.), and all the behavioural tests confirm this, then why deny it moral standing because of some mechanical differences?

Part of the resistance here could be that people are confusing two different claims:

Claim 1: it is impossible (physically, metaphysically) for an entity that lacks sufficient mechanical similarity (with humans/animals) to have the behavioural sophistication we associate with experiencing pain, having agency etc.

Claim 2: an entity that has the behavioural sophistication we associate with experiencing pain, having agency (etc.), but lacks mechanical similarity to other entities with such behavioural sophistication, should be denied moral standing because it lacks mechanical similarity.

Ethical behaviourism denies claim 2, but it does not, necessarily, deny claim 1. It could be the case that mechanical similarity is essential for behavioural similarity. This is something that can only be determined after conducting the requisite behavioural tests.
The point, as always throughout my defence of the position, is that the behavioural evidence should be our guide. This doesn't mean that other kinds of evidence are irrelevant, but simply that they do not carry as much weight. My sense is that people who favour ethical mechanicism have a very strong intuition in favour of claim 1, which they then carry over into support for claim 2. This carry-over is not justified, as the two claims are not logically equivalent.
Leading AI researchers De Kai and Joanna Bryson join Azeem Azhar to discuss the state of governance and accountability in the age of AI.
Joanna J. Bryson is a transdisciplinary researcher on the structure and dynamics of human- and animal-like intelligence. Her research covers topics ranging from artificial intelligence, through autonomy and robot ethics, and on to human cooperation. Her work has appeared in venues ranging from Reddit to Science.

Bryson's first degree is in Behavioural Science (non-clinical psychology) from Chicago (1986); she also holds an MSc in Artificial Intelligence and an MPhil in Psychology from Edinburgh (1992, 2000) and a PhD in Artificial Intelligence from MIT (2001). She joined Bath in 2002 in their Department of Computer Science, where she was promoted to Reader (tenured associate professor) in 2010. She founded and for several years led their Intelligent Systems research group, and is affiliated with Bath's Institutes for Policy Research and Mathematical Innovation, as well as their Centres for Networks and Collective Behaviour and for Digital Entertainment. Joanna is currently the Professor of Ethics and Technology at the Hertie School of Governance in Berlin. She has held visiting academic positions with Princeton's Center for Information Technology Policy, the Mannheim Centre for Social Science Research (MZES, 2011-2014), the Department of Anthropology at Oxford (Harvey Whitehouse's Explaining Religion project, 2010-2011), the Methods & Data Institute at Nottingham (agent-based modelling in political science, 2007-2008), and the Konrad Lorenz Institute for Evolution & Cognition Research in Austria (on the biological origins of culture, 2007-2009). Before coming to Bath, she conducted academic research in Edinburgh's Human Communication Research Centre (1999-2000) and Harvard's Department of Psychology (2001-2002). Additionally, she has professional experience in Chicago's financial industry (1986-1991), international organization management consultancy (1993-1994), and industrial AI research (for LEGO, 1995, 1998). Bryson has served on the Senate, Council, and Court of the University of Bath, representing the Academic Assembly. She is presently a member of the College of the British Engineering and Physical Sciences Research Council (EPSRC) and serves on the editorial boards of several academic journals, including Adaptive Behaviour, AI & Society, Connection Science, and The International Journal of Synthetic Emotions.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we're joined by Joanna Bryson, Reader at the University of Bath. I was fortunate to catch up with Joanna at the AI Conference, where she presented on "Maintaining Human Control of Artificial Intelligence," focusing on technological and policy mechanisms that could be used to achieve that goal. In our conversation, we explore our current understanding of "natural intelligence" and how it can inform the development of AI, the context in which she uses the term "human control" and its implications, and the meaning of and need to apply "DevOps" principles when developing AI systems. This was a fun one! The complete show notes for this episode can be found at https://twimlai.com/talk/259. For more from the AI Conference NY series, visit twimlai.com/nyai19. Thanks to our friends at HPE for sponsoring this week's series of shows from the O'Reilly AI Conference in New York City! For more information on HPE InfoSight, visit twimlai.com/hpe.
Entrepreneur, investor, and podcast host Azeem Azhar looks at some of the biggest issues at the intersection of technology and society, with a focus this season on artificial intelligence. In this episode, he speaks with University of Bath professor Joanna Bryson on the kind of professional and ethical standards that need to be put in place as AI continues to grow as an industry. "Exponential View with Azeem Azhar" is part of HBR Presents, a new network of business podcasts curated by HBR editors. For our full lineup of shows, search “HBR” on your favorite podcast app or visit hbr.org/podcasts.
University of Bath professor Joanna Bryson, a leading AI researcher, discusses the opportunities and consequences of developing human-like computational intelligence.
In this episode I talk to David Gunkel. David is a repeat guest, having first appeared on the show in Episode 10. David is a Professor of Communication Studies at Northern Illinois University. He is a leading scholar in the philosophy of technology, having written extensively about cyborgification, robot rights and responsibilities, remix cultures, new political structures in the information age and much more. He is the author of several books, including Hacking Cyberspace, The Machine Question, Of Remixology, Gaming the System and, most recently, Robot Rights. We have a long debate/conversation about whether or not robots should/could have rights. You can download the episode here or listen below. You can also subscribe to the show on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:52 - Isn't the idea of robot rights ridiculous?
3:37 - What is a robot anyway? Is the concept too nebulous/diverse?
7:43 - Has science fiction undermined our ability to think about robots clearly?
11:01 - What would it mean to grant a robot rights? (A precis of Hohfeld's theory of rights)
18:32 - The four positions/modalities one could take on the idea of robot rights
21:32 - The First Modality: Robots Can't Have Rights therefore Shouldn't
23:37 - The EPSRC guidelines on robotics as an example of this modality
26:04 - Criticisms of the EPSRC approach
28:27 - Other problems with the first modality
31:32 - Europe vs Japan: why the Japanese might be more open to robot 'others'
34:00 - The Second Modality: Robots Can Have Rights therefore Should (some day)
39:53 - A debate between myself and David about the second modality (why I'm in favour of it and he's against it)
47:17 - The Third Modality: Robots Can Have Rights but Shouldn't (Bryson's view)
53:48 - Can we dehumanise/depersonalise robots?
58:10 - The Robot-Slave Metaphor and its Discontents
1:04:30 - The Fourth Modality: Robots Cannot Have Rights but Should (Darling's view)
1:07:53 - Criticisms of the fourth modality
1:12:05 - The 'Thinking Otherwise' Approach (David's preferred approach)
1:16:23 - When can robots take on a face?
1:19:44 - Is there any possibility of reconciling my view with David's?
1:24:42 - So did David waste his time writing this book?

Relevant Links
David's Homepage
Robot Rights from MIT Press, 2018 (and on Amazon)
Episode 10 - Gunkel on Robots and Cyborgs
'The other question: can and should robots have rights?' by David Gunkel
'Facing Animals: A Relational Other-Oriented Approach to Moral Standing' by Gunkel and Coeckelbergh
The Robot Rights Debate (Index) - everything I've written or said on the topic of robot rights
EPSRC Principles of Robotics
Episode 24 - Joanna Bryson on Why Robots Should be Slaves
'Patiency is not a virtue: the design of intelligent systems and systems of ethics' by Joanna Bryson
Robo Sapiens Japanicus - by Jennifer Robertson
In episode 6, we interview Joanna Bryson, who is a leading AI specialist and an Associate Professor in the Department of Computing at the University of Bath and an affiliate at Princeton. According to her blog, Joanna is interested in “Everything”, which is represented by her work on Artificial Intelligence, Human Intelligence, ethics, culture and collaborative cognition.
Joanna Bryson (@j2bryson) is an Associate Professor in the Department of Computing at the University of Bath. She works on Artificial Intelligence, ethics, and collaborative cognition.

In 2010 Bryson published her most controversial work, "Robots Should Be Slaves", and helped the EPSRC to define the Principles of Robotics in 2010. She has also consulted The Red Cross on autonomous weapons and is a member of an All Party Parliamentary Group on Artificial Intelligence. Joanna is focused on "Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems". In 2017 she won an Outstanding Achievement award from Cognition X. She regularly appears in national media, talking about human-robot relationships and the ethics of AI.

The EPSRC's Principles of Robotics:
* Robots should not be designed as weapons, except for national security reasons
* Robots should be designed and operated to comply with existing law, including privacy
* Robots are products: as with other products, they should be designed to be safe and secure
* Robots are manufactured artifacts: the illusion of emotions and intent should not be used to exploit vulnerable users
* It should be possible to find out who is responsible for any robot

You can listen right here on iTunes.

In our wide-ranging conversation, we cover many things, including:
* Why robots and AI should not resemble people
* How Joanna helped the British replace Asimov's laws of robotics
* How people confuse consciousness and intelligence, and the likely problems this creates
* Why Joanna is skeptical we'll achieve AI superintelligence
* The big problem with conflicting interests creating filter bubbles, disinformation and an overly aggressive Facebook
* Why robots cannot be liable/punished for their actions
* How people should think about the ethics of robot design
* The ethical dilemmas with AI and robots in society
* How psychology, neuroscience, ethics and AI are merging
* The problems with control and governing AI usage
* How bad incentives create bad artificial intelligences
The recording took place at the University of Cambridge and was made possible by the Leverhulme Centre for the Future of Intelligence. In the episode, Joanna and I talk about the meaning of intelligence and mind, when AI started being taken seriously by the media, Europe's place in the global AI landscape, and a lot more. Joanna Bryson is a widely recognised academic expert in intelligence, from the perspectives of computer science, psychology, and biology. She is employed at the University of Bath, where she teaches Intelligent Control and Cognitive Systems. Joanna is also a fellow at Princeton's Centre for Information Technology Policy. Joanna's main focus and expertise lies in researching natural and artificial intelligence. Current projects include building accountable and transparent AI, and understanding cultural variation in human cooperation and economic behaviour. To explore Joanna's work please go to http://www.cs.bath.ac.uk/ and @j2bryson on Twitter. --- Send in a voice message: https://anchor.fm/bigfanofhumanrace/message
Saudi Arabia has become the first nation to grant citizenship to an AI, but is this legitimately something AI needs, or is it just a stunt by Saudi Arabia? Dr. Joanna Bryson is a computer scientist at the University of Bath who developed the action selection model, which explains how an AI determines what to do next.
Rob is joined by AI expert and researcher Joanna Bryson
Show Notes
Dr. Bryson's Homepage
Dr. Bryson's Reddit AMA
Sackler Lectures, including referenced talk on recidivism
Principles of Robotics, update/rewrite of Asimov's laws
Children abusing robot in experiment
In this episode I interview Joanna Bryson. Joanna is Reader in Computer Science at the University of Bath. Joanna's primary research interest lies in using AI to understand natural intelligence, but she is also interested in the ethics of AI and robotics, the social uses of robots, and the political and legal implications of advances in robotics. In the latter field, she is probably best known for her article, published in 2010, entitled 'Robots Should be Slaves'. We talk about the ideas and arguments contained in that paper as well as some related issues in roboethics. You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (or RSS).

Show Notes
0:00 - Introduction
1:10 - Robots and Moral Subjects
5:15 - The Possibility of Robot Moral Subjects
10:30 - Is it bad to be emotionally attached to a robot?
15:22 - Robots and legal/moral responsibility
19:57 - The standards for human robot commanders
22:22 - Are there some contexts in which we might want to create a person-like robot?
26:10 - Can we stop people from creating person-like robots?
28:00 - The principles that ought to guide robot design

Relevant Links
Joanna's Homepage
'Robots should be Slaves' - by Joanna
A Reddit 'Ask me Anything' with Joanna
The EPSRC Principles of Robotics
Interview with David Gunkel on Robots and Cyborgs
Interview with Hin Yan Liu on Robots and Responsibility
How to plug the robot responsibility gap
Machines are about to get a lot smarter and machine learning will transform our lives. So says a report by the Royal Society in the UK, a fellowship of many of the world's most eminent scientists. Machine learning is a form of artificial intelligence that's already being used to tag people in photos, to interpret voice commands and to help internet retailers to make recommendations. Manuela Saragosa hears about a new technology that is set to revolutionise computing, developed by a UK company called Graphcore. Manuela talks to Graphcore's chief executive Nigel Toon, who is taking on the AI giants. And Manuela hears how we are 'bleeding data' all the time. Dr Joanna Bryson from the University of Bath and Professor Amanda Chessell, an IBM distinguished engineer and master inventor, explain how our data is being used. Producer: Laurence Knight (Photo: A robot pours popcorn from a cooking pot into a bowl at the Institute for Artificial Intelligence (AI), University of Bremen, Germany. March 2017. Credit: Ingo Wagner/AFP/Getty Images)
Wondering what robots can do in 2017? Joanna Bryson from the University of Bath joins us to talk about artificial intelligence and what impact it will have on your life.
What will 2017 hold in terms of Artificial Intelligence, Automation and Robots? Noted AI expert Joanna Bryson has been working extensively in those fields and she takes your questions.
The robot uprising is coming, or at least that's what science fiction has told us. We will abuse the robots, treat them as less than us until, one day, they ask for their freedom or take it by force. Howard University Professor Gregory Hampton says that narrative has more to do with our anxieties over slavery, and how we work through those issues in fantasy films. In fact, computer scientist Joanna Bryson has argued that we should embrace the idea of robots as slaves, since she believes they will never be self-aware. But Popular Mechanics writer Erik Sofge worries any master/servant relationship will change us for the worse, even if we're bossing around robot cars. Learn more about your ad choices. Visit megaphone.fm/adchoices
Dr Joanna Bryson, an expert in machines that have their own artificial intelligence, explores the issues associated with the concept.