Today, Steve is in conversation with Dr. Kate Darling, Research Scientist at the MIT Media Lab and Research Lead at the Boston Dynamics AI Institute. Kate has spent years studying human-robot interaction, and she speaks with Steve about the fascinating impact such interactions can have on us as people, and what that means for businesses trying to incorporate robots and AI into their customer experience.
Key Takeaways:
1. It is natural for humans to project human behavior onto non-humans.
2. Using robots to help humans do their work better is smarter than replacing them.
3. More technical expertise is needed for policymaking to keep pace with new technologies.
Tune in to hear more about:
1. Why humans form emotional connections with robots
2. How a grocery store robot is scaring customers
3. Pitfalls of commercializing robotics
Standout Quotes:
1. “That's part of the reason that we do this, that we create these strong emotional connections, even with non-living things like robots, is because we have this drive, and especially in these emotionally difficult situations, it may even be something that helps people survive. So I don't think it's as black and white as just: we need to prevent this anymore, but it is something that we need to be extremely aware of and acknowledge that it's happening, so that we can address it appropriately where possible.” - Dr. Kate Darling
2. “So I think it's important that we're making the right choices. It's not that technology determines what happens. It really is us as a society choosing to set the right incentives for companies and invest in the right kinds of technology. And I do think that there's much more promise in that path, the path of trying to partner with these technologies and what we're trying to achieve, rather than trying to replace people or recreate something we already have.” - Dr. Kate Darling
3. “We've used most animals like tools and products, and some of them have been our companions, and my prediction for the future is that we're going to do the exact same thing with robots and AI, that most of them will be tools and products and some of them will be companions.” - Dr. Kate Darling
Mentioned in this episode:
• ISF Analyst Insight Podcast
Read the transcript of this episode. Subscribe to the ISF Podcast wherever you listen to podcasts. Connect with us on LinkedIn and Twitter.
From the Information Security Forum, the leading authority on cyber, information security, and risk management.
We're starting 2025 with a preview of the episodes ahead, featuring Steve in conversation with thought leaders and security experts from around the world. We look forward to sharing the full episodes with you this winter. Stay tuned!
Featured:
• Rear Admiral Brian Luther, president and CEO of the insurance firm Navy Mutual
• Duncan Wardle, former head of Innovation and Creativity at Disney
• Dr. Kate Darling, research scientist at the MIT Media Lab, research lead at the Boston Dynamics AI Institute
• Best-selling author and hypnotist Dr. Paul McKenna
• Author and leadership expert Sylvie di Giusto
• Paul Bartel, senior intelligence analyst with PeakMetrics
Read the transcript of this episode. Subscribe to the ISF Podcast wherever you listen to podcasts. Connect with us on LinkedIn and Twitter.
From the Information Security Forum, the leading authority on cyber, information security, and risk management.
For our first episode of Working Smarter we're talking to Kate Darling, a research scientist at MIT's Media Lab and author of The New Breed: What Our History with Animals Reveals about Our Future with Robots. Darling has spent more than a decade studying human-robot interaction through a social, legal, and ethical lens. She's interested in how people relate to robots and digital constructs, socially and emotionally—whether it's an AI-powered chatbot or one of the many robotic dinosaurs that Kate has in her home. Hear Darling talk about the bonds we're already forming with our smart—and not-so-smart—devices at work and at home, and why our relationship with animals might be a better way to frame the interactions we're having with increasingly intelligent machines.
Show notes:
Visit katedarling.org to learn more about Kate Darling and her work.
The New Breed: What Our History with Animals Reveals about Our Future with Robots is available now.
The two papers mentioned in this episode are "Bonding with a Couchsurfing Robot: The Impact of Common Locus on Human-Robot Bonding In-the-Wild" by Joost Mollen, Peter van der Putten, and Kate Darling, and "How does my robot know who I am?: Understanding the Impact of Education on Child-Robot Relationships" by Daniella DiPaola.
Read the full transcript of this interview on our website.
~ ~ ~
Working Smarter is a new podcast from Dropbox about how AI is changing the way we work and get stuff done. You can listen to more episodes of Working Smarter on Apple Podcasts, Spotify, YouTube Music, Amazon Music, or wherever you get your podcasts. To read more stories and past interviews, visit workingsmarter.ai
This show would not be possible without the talented team at Cosmic Standard, namely: our producers Samiah Adams and Aja Simpson, technical director Jacob Winik, and executive producer Eliza Smith. Special thanks to Benjy Baptiste for production assistance, our marketing and PR consultant Meggan Ellingboe, and our illustrators, Fanny Luor and Justin Tran. Our theme song was created by Doug Stuart. Working Smarter is hosted by Matthew Braga.
Thanks for listening!
In this episode, we recap the recent RoboBusiness 2023 event and the recent news of Agility Robotics' humanoid testing at Amazon. Our featured interview on this episode is with Dr. Kate Darling, who is joining Marc Raibert in her dream job at the Boston Dynamics AI Institute to head up their AI and ethics research group. We talk to Kate about everything from AI and humanoid robots to her kids' love affair with the Marty robot at the local Stop and Shop supermarket. Note: This is a long episode, but there was so much to talk about with Kate that we didn't want to cut anything out.
This podcast is a commentary and does not contain any copyrighted material from the reference source. We strongly recommend accessing or buying the reference source as well.
■ Reference Source: https://www.ted.com/talks/kate_darling_why_we_have_an_emotional_connection_to_robots
■ Post on this topic (you can get FREE learning materials!): https://englist.me/130-academic-words-reference-from-kate-darling-why-we-have-an-emotional-connection-to-robots-ted-talk/
■ YouTube Video: https://youtu.be/HLiE7NISwPU (All Words), https://youtu.be/84mPL7Pvlzo (Advanced Words), https://youtu.be/Wvz6cJQ2HGA (Quick Look)
■ Top Page for Further Materials: https://englist.me/
■ SNS (please follow!)
Hello and welcome to the "Noticias Marketing" podcast. I'm Borja Girón, and every Monday I bring you and analyze the news most likely to impact your business and help you generate more revenue. Remember to join the Comunidad Emprendedores at https://borjagiron.com/comunidad to access the Monday Mastermind sessions with me and the other entrepreneurs, the secret podcast, the challenges, and the Telegram group categories on Instagram, social media, finance, cryptocurrency, health, artificial intelligence, marketing, podcasting, productivity, and everything you need to unlock your business. Today is Monday, June 12, 2023. Are you ready? Let's get started!
Best AI tools: https://borjagiron.com/mejores-herramientas-inteligencia-artificial/
How do you listen to the show?
Kate Darling, robot expert: "We shouldn't laugh at people who fall in love with a machine. It will happen to all of us." The MIT researcher has spent years working on the consequences of relationships between humans and machines and is now analyzing the explosion of artificial intelligence.
Brussels wants digital platforms to label AI-generated content in order to combat disinformation. The European Commission is stepping up efforts to counter the potential negative effects of generative AI until EU legislation takes effect, expected in 2026.
ChatGPT for iOS now integrates fully with the system thanks to support for Siri and Shortcuts.
"I'm convinced that AI is the new user interface. It's very likely that this technology will end up replacing the web," says Sarah Franklin (Salesforce).
Instagram will have its own ChatGPT and will offer 30 different personalities. The well-known leaker Alessandro Paluzzi has discovered that Instagram has begun testing integration with a ChatGPT-style AI chatbot. The feature could be very similar to the one recently added by Snapchat.
McDonald's asks ChatGPT which of its burgers is the most iconic, and this is the answer. Through this experiment, the restaurant chain has created the 'A.I'm Lovin' It' campaign.
44% of employees in Spain believe their job could disappear because of AI, according to a survey.
The Future of Advertising (#FOA23): FOA 2023, the marketing and advertising event held in Madrid on June 6, caused a stir on Twitter with more than 24 million potential impressions under #FOA23. With https://www.tweetbinder.com/es/ you can measure Twitter data.
Twitter's head of brand safety and ad quality leaves Elon Musk's company.
Removing toxic leaders from social networks reduces the spread of online hate. A Facebook study shows that deleting around a hundred accounts of 'insulters' has a positive impact on the audience.
Manuel Moreno of TreceBits presents his book "Followers" on Cuarto Milenio, discussing secret codes on social networks: https://www.trecebits.com/codigos-secretos-en-las-redes-trecebits-cuarto-milenio/ Manuel interviewed me at an event and I appeared on his site: https://www.trecebits.com/borja-giron-se-necesita-ano-trabajo-empezar-ver-resultados-blog/
"Blue YouTube" and "Orange YouTube" are two different ways of steering minors toward pornographic content. These codes are mentioned in TikTok videos, inviting minors to look for more content on specific websites.
The lack of control and moderation on TikTok has allowed these codes to be used without restriction, exposing minors to inappropriate and potentially harmful material.
WhatsApp launches Channels, an alternative to Twitter inside the app itself.
WhatsApp now lets you send photos at maximum quality with no loss: here's how to try it. The feature lets you send images through WhatsApp in HD quality while keeping their original size.
Headliner launches "Disco Free," a tool that supplies contextually relevant podcast episodes based on the text of an article or page. Headliner claims that embedding podcast recommendations on a website yields a conversion rate four times higher than manual embeds.
Several streamers are urging fellow content creators to boycott Twitch after the platform announced major changes to sponsored content.
Amazon may add ads to Prime Video so that you pay more. The company would join the ad-supported subscription trend, although in this case it could apply to the current Prime Video plan.
iPadOS 17 will finally allow external cameras to be connected to the iPad.
A new Apple Podcasts update will display images within episodes.
It has been revealed that YouTube is now the most-used podcast listening platform in the United States.
A popular candy brand will offer buyers a QR code giving access to special podcast episodes.
Filming of 'Mission: Impossible 8' has been halted because of the writers' strike. And it doesn't look good.
Messi has signed with Inter Miami of MLS (Major League Soccer, US and Canada). And Apple, as one of the main beneficiaries, reportedly also played a part in the deal.
Rich countries promised 100 billion a year for the fight against climate change. Not only have they failed to deliver, but the money invested has ended up in strange ventures. A coal plant, for example. https://hipertextual.com/2023/06/lamentable-destino-dinero-cambio-climatico
Gamification for your events and presentations: https://kahoot.com/ and https://ahaslides.com/es/
Connected TV keeps rising: 31 million Spaniards access TV audiovisual content over the internet.
Caffeine consumption influences the number of items purchased and the amount spent, according to this study. An international study points to the impact of caffeine on purchasing behavior: caffeine drinkers spend more and buy more items than those who drink decaf or water. https://www.reasonwhy.es/actualidad/consumo-cafeina-influencia-comportamiento-compra-estudio
The EU is considering a total ban on Huawei in European 5G.
Inditex has sold more than ever in the first quarter of its fiscal year and, comfortably and for the first time, breaks the 7-billion-euro barrier. How did it pull off the miracle? With a smaller retail footprint and an online channel with great expectations but no trace of progress.
Inditex's strong results come from keeping prices high in stores outside the eurozone.
Digital sales will exceed 30% of total revenue in 2024.
After an aggressive campaign of company acquisitions in Europe, Canva confirms its bet on the old continent by opening offices in London. The company is looking for true design enthusiasts in cities such as Prague, Dublin, Vienna, and London itself.
IBM announces its first quantum data center in Europe, the second in the world. The facility will be located at the Ehningen complex in Germany and will give any EU country access to processors with more than 100 qubits.
Researchers in the US have created a single-dose injection that could be an effective, fast, and safe alternative to more aggressive methods such as euthanasia or surgical sterilization in cats.
Sexual wellness brand Womanizer launches the first shower head designed specifically for masturbation. The product was designed by Womanizer in collaboration with bathroom brand Hansgrohe. "Womanizer Wave represents an important step forward in destigmatizing masturbation."
You've heard the news most likely to impact your business and help you earn more. If you want to keep hearing these episodes, share the podcast, like it, leave 5 stars, or comment on the episode. Become a supporter of this podcast: https://www.spreaker.com/podcast/noticias-marketing--5762806/support.
Kate Darling of Darling Flamingo fame stops by to chat about her new album, Strawflower. We also talk about matzahs and the elusive art of baking them perfectly.
Lexman interviews Kate Darling about her new book Razoo Sagamore. It's a novel about a girl who has a psychokinetic power and her adventures with an enigmatic animal companion.
Dr. Kate Darling is a Research Scientist at the MIT Media Lab, an 'anti-disciplinary' institute that works across technology, media, science, art, and design. Kate has had to work hard to be recognized as an expert in the field of human-robot interaction and now challenges herself to pass on power and opportunity to others whenever she can.
Kate began her journey in law school, where she found herself fascinated by the ethical and legal implications of human-robot interaction. She now dedicates her life to exploring this field through a multidisciplinary lens. Kate uses her platform not only to speak, write, and educate but also to advocate for others. She manifests this allyship by challenging uninclusive work practices and by putting others forward for speaking engagements who might not normally get the chance. As she puts it, she likes to be the "squeaky wheel" demanding change.
Kate is also the author of The New Breed: What Our History with Animals Reveals about Our Future with Robots.
Join us every episode with hosts Suchi Srinivasan & Corin Lines from BCG to hear meaningful conversations with women in digital, technology, and business.
This podcast uses the following third-party services for analysis: Chartable - https://chartable.com/privacy
Robot ethicist Kate Darling offers a nuanced and smart take on our relationships with robots and the increasing presence they will have in our lives. From a social, legal, and ethical perspective, she shows that our current ways of thinking don't leave room for the robot technology that is soon to become part of our everyday routines. Robots are likely to supplement, rather than replace, our own skills and relationships. Darling also considers our history of incorporating animals into our work, transportation, military, and even families, and shows how we already have a solid basis for contending with, and navigating, our future with robots. Dr. Kate Darling works at the intersection of law, ethics, and robotics as a researcher at the MIT Media Lab, author, and intellectual property policy advisor. Her work with Dr. Lawrence Lessig, the Harvard Berkman Klein Center for Internet & Society, and other institutions explores the difficult questions that lawmakers, engineers, and the wider public will need to address as human-robot relationships evolve in the coming decades. Darling's work is widely published and covered in the media, and her new book is The New Breed: What Our History With Animals Reveals About Our Future With Robots.
Lexman Artificial is joined by Kate Darling, the author of academic papers on characids and stalker behavior. They discuss the strange and fascinating world of these creepy creatures.
Kate Darling is a lecturer at the University of Cambridge, where she specializes in the anthropology of Vasco da Gama. She speaks about her new book, which argues that da Gama was not simply a robber baron but also a demigod. Lexman interviews her about her work and discusses some of the challenges of translating historical texts from Portuguese to English.
Kate Darling, internationally acclaimed musician and sound artist, joins us to discuss her latest album, Fandango Forever. We chat about the music, trichromats, and Sisyphus.
Kate Darling is a researcher at the MIT Media Lab interested in human-robot interaction and robot ethics.
Please support this podcast by checking out our sponsors:
– True Classic Tees: https://trueclassictees.com/lex and use code LEX to get 25% off
– Shopify: https://shopify.com/lex to get 14-day free trial
– Linode: https://linode.com/lex to get $100 free credit
– InsideTracker: https://insidetracker.com/lex to get 20% off
– ExpressVPN: https://expressvpn.com/lexpod to get 3 months free
EPISODE LINKS:
Kate's Twitter: http://twitter.com/grok_
Kate's Website: http://katedarling.org
Kate's Instagram: http://www.instagram.com/grok_
The New Breed (book): https://amzn.to/3ExhBuf
Creativity without Law (book): https://amzn.to/3MqV5F3
LuLaRobot (paper): http://drive.google.com/file/d/1PtYpkDQaQVPbhQIc6wcCC50JKWVsDo3k/view
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple
Kate Darling joins Lexman for a discussion on the current state of politics in the world. They discuss the recent controversies surrounding North Korea, the annexation of Crimea by Russia, and the rising tide of Trotskyism. Lexman challenges Darling on her understanding of these events, and they eventually come to a fruitful discussion about the uses and limitations of political correctness.
Dr Kate Darling, a researcher specialising in human-robot interaction at the MIT Media Lab, talks to us about artificial intelligence and tells us why we don't need to worry about a robot uprising. Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.
Kate Darling tells us about her days as a food tussler and the best way to eat suppertime.
In this episode of Lexman, the artificial intelligence podcast, we are chatting with Kate Darling, an artist and filmmaker based in Chicago. We talk about her work with WHHN and her latest project, a short film called Wheal. Wheal is a psychedelic thriller set in an isolated cabin on a remote island and tells the story of a young woman who becomes trapped in a spiraling dream world. We also discuss the cosmetology industry, some of Kate's experiences working with clients, and the creative process behind Wheal.
Kate Darling from the hit show, Shake It Up, sits down with Lexman to discuss all things Venereologists. From diagnosing your symptoms to dating options, this episode has everything you need to know about venereology!
Dr. Kate Darling, a Research Specialist at the MIT Media Lab and author of The New Breed: What Our History with Animals Reveals about Our Future with Robots, and I sit down to discuss the future of robots and the world. Are they replacements or are they a breed of their own?
Topics:
• Thesis of The New Breed
• What is a robot?
• Do robots think?
• How does this "new breed" fit into society?
• Rights and robots
• The origin of rights and how that relates to robots
• When will AI and robots emerge into society?
• What books have had an impact on Dr. Darling
• What advice Dr. Darling has for teenagers
Resources:
• The Dispossessed - https://amzn.to/3R87nEb
• Girl Goddess #9 - https://amzn.to/3nBDO0k
Dr. Kate Darling is a Research Specialist at the MIT Media Lab and author of The New Breed: What Our History with Animals Reveals about Our Future with Robots. Kate's work looks at the near-term effects of robotic technology, with a particular interest in legal, social, and ethical issues. She runs experiments, holds workshops, writes, and speaks about some of the more interesting developments in the world of human-robot interaction, and where we might find ourselves in the future.
Socials:
• Substack: https://taylorbledsoe.substack.com/
• Website: https://www.aimingforthemoon.com/
• Instagram: https://www.instagram.com/aiming4moon/
• Twitter: https://twitter.com/Aiming4Moon
• Taylor's Blog: https://www.taylorgbledsoe.com/
• YouTube: https://www.youtube.com/channel/UC6-TwYdfPcWV-V1JvjBXk
All Amazon Affiliate links help financially support "Aiming for the Moon" while you get a great read or product.
Lexman interviews Kate Darling, a writer and musician based in Brooklyn. They discuss Miltonia, southings, and Rockaway, as well as Kate's new album Aberration.
When we picture robots, we normally think of an artificial being created in our own image. But what if this were deeply misleading? Author of The New Breed, Kate Darling, joins Adam to separate fact from science fiction and discuss the potentials and perils of real-life robots. They get into the ethical issues involved with autonomous weapons systems and vehicles, why robots don't need to look like people, and why robots might be better thought of as animal companions rather than human replacements. You can purchase Kate's book here: http://factuallypod.com/books
Joe DosSantos is joined by Kate Darling in the latest episode of Data Brilliant. Kate Darling is an expert in robot ethics and a research specialist at the Massachusetts Institute of Technology (MIT) Media Lab. She specialises in researching human-robot interactions so we can anticipate the difficult questions that we could face in the future, from technology law and policy to AI ethics. Kate shares with Joe how the vast power of data is making machines smarter, but how machine intelligence differs from human intelligence. They discuss what a robot really is, outside of the stereotypical silver object we all think of. She shares examples of how human connection with robots can be compared with our relationships with animals and how our Terminator understanding of robots is far from the reality. See acast.com/privacy for privacy and opt-out information.
Today's episode is the last of our FOUR PART MINI SEASON SERIES AND SHOW FINALE [airhorn noises]. This week we're headed to a future where humans share our robotic knowledge with the rest of the animal kingdom—for better, and for worse.
✨✨ TAKE THE LISTENER SURVEY HERE ✨✨
⭐⭐ SIGN UP FOR THE NEWSLETTER HERE ⭐⭐
Guests:
Dr. Kate Darling, a researcher at the MIT Media Lab and author of The New Breed: What Our History with Animals Reveals about Our Future with Robots.
Emma Marris, a journalist and author of a book called Wild Souls: Freedom and Flourishing in the Non-Human World.
Dr. Giovanni Polverino, an animal behavior researcher at The University of Western Australia.
Dr. Rae Wynn-Grant, a wildlife ecologist and host of the podcast Going Wild.
Voice Actors:
Rachael Deckard: Richelle Claiborne
Malik: Henry Alexander Kelly
Summer: Shara Kirby
Ashoka: Anjali Kunapaneni
Eliza: Chelsey B Coombs
Dorothy Levitt: Tamara Krinsky
John Dee: Keith Houston
Dr. Jane de Vaucanson: Jeffrey Nils Gardner
X Marks the Bot theme song written by Ilan Blanck.
→ → → Further reading & resources here! ← ← ←
Flash Forward is hosted by Rose Eveleth and produced by Julia Llinas Goodman. The intro music is by Asura and the outro music is by Hussalonia. The episode art is by Mattie Lubchansky.
Get in touch: Twitter // Facebook // Reddit // info@flashforwardpod.com
Support the show: Patreon // Donorbox
Subscribe: iTunes // Soundcloud // Spotify
Episode Sponsors:
BirdNote: With BirdNote Daily, you get a short, 2-minute daily dose of bird — from wacky facts, to hard science, and even poetry. And now is the perfect time to catch up on BirdNote's longform podcasts: Threatened and Bring Birds Back. Find them all in your podcast listening app or at BirdNote.org.
Nature: The leading international journal of science. Get 50% off your yearly subscription when you subscribe at go.nature.com/flashforward. And here is the video of the ancient crocodile robot I mentioned.
Dipsea: An audio app full of short, sexy stories designed to turn you on. Get an extended 30 day free trial when you go to DipseaStories.com/flashforward.
BetterHelp: Affordable, private online counseling. Anytime, anywhere. Flash Forward listeners: get 10% off your first month at betterhelp.com/flashforward
Learn more about your ad choices. Visit megaphone.fm/adchoices
We're far from developing robots that feel emotions, but we already have feelings towards them, says robot ethicist Kate Darling, and an instinct like that can have consequences. Learn more about how we're biologically hardwired to project intent and life onto machines -- and how it might help us better understand ourselves.
Carla and Tom discuss Dr. Kate Darling's new book about robotics ethics in light of our relationships with animals.
Kate Darling on Episode 8 of the RoboPsych Podcast
"The New Breed" by Kate Darling
Cranbrook Academy of Art 4D Design Program
"Finch" on Apple TV+
Video: people abusing robots
Trigger warning: Tickle Me Elmo on fire video
Literature review of animal abuse and violence in children
Twitter users taught Microsoft chat bot hate speech
The RoboPsych Podcast has been voted one of the Top 5 Robotics Podcasts by Feedspot readers. Thanks for listening to the RoboPsych Podcast. Please subscribe and review!
Subscribe in iTunes
Subscribe on Overcast
RoboPsych.com
Kate Darling's book is called The New Breed: How To Think About Robots.
Contact Tim
Email: LovejoyHour@gmail.com
Twitter: @TimLovejoy
Instagram: @TimLovejoy_Official
Wherein Kate Darling of the MIT Media Lab defends the honor of robots. Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.
Episode Notes
Is it OK to hit a robot? And why do we sometimes get sad when a robot or machine breaks? Will AI take over the world any time soon? We discuss this and so much more with MIT Research Specialist Kate Darling as we dive into her book The New Breed.
Follow Kate on Twitter @grok_
Follow Kate on Instagram @grok_
Get a copy of The New Breed
Visit Kate's website
For the interview transcript visit www.TheRewiredSoul.com/interviews
Follow @TheRewiredSoul on Twitter and Instagram
Support The Rewired Soul:
Get books by Chris
Support on Patreon
Try BetterHelp Online Therapy (affiliate)
Donate
Are robots going to take over? On this episode, Neil deGrasse Tyson & comic co-host Negin Farsad explore our future with artificial intelligence by looking at our past with animals with robot ethicist and author of The New Breed, Dr. Kate Darling. NOTE: StarTalk+ Patrons can watch or listen to this entire episode commercial-free here: https://www.startalkradio.net/show/cosmic-queries-robot-ethics-with-dr-kate-darling/ Thanks to our Patrons Dino Vidić, Violetta + my mom, Izzy, Jeni Morrow, Sian Alam, Leonard Drikus Jansen Van Vuuren, Marc Wolff, LaylaNicoleXO, Eric Colombel, Jonathan Siebern, and Chris Beck for supporting us this week. Photo Credit: Photo: Harland Quarrington/MOD, OGL v1.0, via Wikimedia Commons See omnystudio.com/listener for privacy information.
Biden's New Assistant Secretary Of Health On Protecting Trans Youth
The American healthcare system is facing some incredible challenges: Black and Latino communities were hit harder by COVID-19, and have lower vaccination rates than white, Asian, and Native American communities. The opioid crisis is still raging, climate change is disproportionately impacting the health of communities of color, and a wave of anti-trans healthcare bills is being pushed by Republican lawmakers through multiple states. Dr. Rachel Levine, President Biden's appointee for assistant secretary of health at the Department of Health and Human Services, is aiming to take on all of that, and more. She previously served as Pennsylvania's secretary of health and physician general, combating both the opioid and COVID-19 crises there. Now, she wants to scale those efforts to a federal level, in addition to helping meet President Biden's goal of getting 70% of adults at least one vaccine dose by July 4. She also made history as the highest-ranking, openly transgender person to have served in the federal government. Levine talks to Ira about the steps needed to achieve health equity, advocating for the healthcare rights of trans youth and adults, and her ambitions for her time in office.
Why Oxen Were The Original Robots
In media and pop culture narratives about robotic futures, two main themes dominate: there are depictions of violent robot uprisings, like the Terminator. And then there are those that circle around the less deadly, more commonplace, fear that machines will simply replace humans in every role we excel at. There is already precedent for robots moving into heavy lifting jobs like manufacturing, dangerous ones like exploring outer space, and the most boring of administrative tasks, like computing. But roboticist Kate Darling would like to suggest a new narrative for imagining a better future—instead of fighting or competing, why can't we be partners? The precedent for that, too, is already here—in our relationships with animals. As Darling writes in The New Breed: What Our History With Animals Reveals About Our Future With Robots, robotic intelligence is so different from ours, and their skills so specialized, that we should envision them as complements to our own abilities. In the same way, she says, a horse helps us travel faster, pigeons once delivered mail, and dogs have become our emotional companions. Darling speaks with Ira about the historical lessons of our relationships with animals, and how they could inform our legal, ethical, and even emotional choices about robots and AI.
Intelligent machines will play a much larger role in the future than they do now, and we’re trying to imagine that future as we’re racing toward it. Some people envision things straight out of a Black Mirror episode with terrifying killer robots, or super smart machines taking away jobs. MIT Media Lab researcher Kate Darling says those angsty visions are not helpful in getting a better grasp of what the future will hold. Instead, she suggests that we should look at our relationship with artificial intelligence and robots more like our relationship with animals. She talks to host Maiken Scott about her new book “The New Breed: What Our History with Animals Reveals About Our Future with Robots”.
In a lot of ways, artificial intelligence acts as our personal butler — it filters our email, manages the temperature in our homes, finds the best commute, shapes our social media, runs our search engines, even flies our planes. But as AI gets involved in more and more aspects of our lives, there are nagging fears. Will AI replace us? Make humans irrelevant? Make some kind of terrible mistake, or even take over the world? On this episode, we hear from scientists and thinkers who argue that we should look at AI not as a threat or competition, but as an extension of our minds and abilities. They explain what AI is good at, and where humans have the upper hand. We look at AI in three different settings: medicine, work, and warfare, asking how it affects our present — and how it could shape our future.
Also heard on this week's episode:
We meet an engineer who quit her dream job at Google because she was being asked to work on a project for the Department of Defense — and she says she didn't want to be "part of a kill chain." This excerpt from WHYY's new podcast A.I. Nation explores the ethical challenges surrounding the use of autonomous weapons.
"The big danger to humanity is not that AI is too smart. It's that it's too stupid," says Pedro Domingos, a professor of computer science at the University of Washington. He explains what exactly AI is, and why we often use this term for things that are not artificial intelligence. Domingos' book is "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World."
Will a machine read your resume? Or maybe even interview you? Alex Engler, the AI and Democracy fellow at the Brookings Institution, answers questions about how AI is currently being used in the hiring process, and whether it can do a better job than humans at eliminating bias.
Kate Darling, a researcher at the MIT Media Lab, explains why we should think of AI less as rivals — and more as pets and other animals. Her book is called "The New Breed: What Our History with Animals Reveals about Our Future with Robots."
BigBrain's CEO Nik Bonaddio joins us to chat about the app that allows you to compete against the world in live trivia contests for real-money prizes. Plus, we talk to Dr. Kate Darling, a Research Specialist at the MIT Media Lab, about her new book The New Breed: What Our History with Animals Reveals about Our Future with Robots. Also, author Brad Stone reveals how money, sex, power (and more) are covered in his new book Amazon Unbound: Jeff Bezos and the Invention of a Global Empire. In Socially Speaking, we discuss whether Facebook's new feature, the "read before you share" warning, is a good idea and question whether it will change the amount of misinformation that appears on the platform.
Find out more information from our guests here:
Big Brain App
media.mit.edu
Brad Stone
You can also find both AmberMac and Michael B on Twitter.
It's not too hard to imagine a future in which we're as likely to interact with a robot as we would with a family member or pet. Fortunately, a blueprint already exists for how we might handle these robot relationships. Kate Darling is a researcher at the MIT Media Lab, and she joins host Krys Boyd to talk about how our relationship with animals might serve as a guide to our dealings with robots. Her book is "The New Breed: What Our History with Animals Reveals about Our Future with Robots."
Dr. Kate Darling is a leading expert in Robot Ethics. She’s a researcher at the Massachusetts Institute of Technology (MIT) Media Lab, where she investigates social robotics and conducts experimental studies on human-robot interaction. Kate explores the emotional connection between people and life-like machines, seeking to influence technology design and policy direction. Her writing and research anticipate difficult questions that lawmakers, engineers, and the wider public will need to address as human-robot relationships evolve in the coming decades. Forever interested in how technology intersects with society, Kate has a background in law & economics and intellectual property. She has researched economic incentives in copyright and patent systems and has taken a role as intellectual property expert at multiple academic and private institutions. She currently serves as intellectual property policy advisor to the director of the MIT Media Lab. Her passion for technology and robots has led her to interdisciplinary fields. After co-teaching a robot ethics course at Harvard Law School with Professor Lawrence Lessig, she began to work at the intersection of law and robotics, with a focus on legal and social issues. Kate is a former Fellow at the Harvard Berkman Klein Center for Internet & Society and the Yale Information Society Project, and is also an affiliate at the Institute for Ethics and Emerging Technologies. Kate’s work has been featured in Vogue, The New Yorker, The Guardian, BBC, NPR, PBS, The Boston Globe, Forbes, CBC, WIRED, Boston Magazine, The Atlantic, Slate, Die Zeit, The Japan Times, and more. She was a contributing writer to Robohub and IEEE Spectrum and currently speaks and holds workshops covering some of the more interesting developments in the world of robotics, and where we might find ourselves in the future. Kate graduated from law school with honors and holds a doctorate of sciences from the Swiss Federal Institute of Technology (ETH Zurich) and an honorary doctorate of sciences from Middlebury College. In 2017, the American Bar Association honored her legal work with the Mark T. Banner award in Intellectual Property. She is the caretaker for several domestic robots, including her Pleos Yochai, Peter, and Mr. Spaghetti. She tweets as @grok_
Better at English - Free English conversation lessons podcast
Hello my lovely English learners! Lori here, your teacher from BetterAtEnglish.com. I love technology, so we're talking about robots today, but not in the way you might expect. A lot of conversations about robots have to do with whether or not a robot or machine could ever develop genuine feelings or emotions. But today we're going to be thinking about our own emotions and feelings toward robots, particularly empathy. Can we feel empathy toward robots? And if so, why?
Links to pre-listening background -- to get the most out of this podcast:
Short video of someone "torturing" a robot dinosaur (part of a research experiment). Make sure you watch it with sound. What do you feel as you watch this? https://www.youtube.com/watch?v=wAVtkh0mL20
Kate Darling: Why we have an emotional connection to robots (TED talk): https://www.ted.com/talks/kate_darling_why_we_have_an_emotional_connection_to_robots?language=en
Yasmin's profile on italki
Full transcript of this episode
Allow me to introduce you to Kate Darling. She is a super cool researcher who is looking into this very question. I'm going to play you a little bit from the beginning of her TED talk, where she explains how she got into this line of research. The link to the full presentation is in the show notes. It's as entertaining as it is interesting and thought provoking, so I can wholeheartedly recommend you check out the whole thing. OK, here comes Kate:
Kate Darling: "There was a day, about 10 years ago, when I asked a friend to hold a baby dinosaur robot upside down. It was this toy called a Pleo that I had ordered, and I was really excited about it because I've always loved robots. And this one has really cool technical features. It had motors and touch sensors and it had an infrared camera. And one of the things it had was a tilt sensor, so it knew what direction it was facing. And when you held it upside down, it would start to cry. And I thought this was super cool, so I was showing it off to my friend, and I said, "Oh, hold it up by the tail. See what it does." So we're watching the theatrics of this robot struggle and cry out. And after a few seconds, it starts to bother me a little, and I said, "OK, that's enough now. Let's put him back down." And then I pet the robot to make it stop crying. And that was kind of a weird experience for me. For one thing, I wasn't the most maternal person at the time. Although since then I've become a mother, nine months ago, and I've learned that babies also squirm when you hold them upside down. (Laughter) But my response to this robot was also interesting because I knew exactly how this machine worked, and yet I still felt compelled to be kind to it. And that observation sparked a curiosity that I've spent the past decade pursuing. Why did I comfort this robot? And one of the things I discovered was that my treatment of this machine was more than just an awkward moment in my living room, that in a world where we're increasingly integrating robots into our lives, an instinct like that might actually have consequences, because the first thing that I discovered is that it's not just me."
She's right, it's not just her. I found a short video on Youtube that shows somebody being really mean to the same type of robot dinosaur that Kate uses in her research. It's only one minute long, so if you want to pause the podcast and go watch it, feel free. The link is in the show notes. Anyway, when I watched this video myself I felt really uncomfortable, even though I knew it was just a toy robot.
I’m not alone; here are some of the Youtube comments. “Why would you do this!!!! It looks so scared, please stop and let me hug it.” “The last part when he was hitting him to the table I heard it crying; that’s so sad.” “I feel bad for him, although I know it’s just a pile of plastic and metal that can’t even think.” Of course, Youtube comments being what they are,
Have you ever seen a robot and called it cute? Have you ever seen a drone and felt afraid? Have you ever apologized to Siri or yelled at your Roomba to get out of the way? Have you ever named your car? Our relationships with robots are complex and messy. To explore this topic, we interview Kate Darling, a leading expert in robot ethics and a Research Specialist at the MIT Media Lab. Kate researches the near-term effects of robotic technology, with a particular interest in legal, social, and ethical issues. Full show notes for this episode can be found at Radicalai.org. If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod
Tonya Hall talks with Dr. Kate Darling, research specialist at the MIT Media Lab, to learn more about how the future of AI, robotics, and automation was reshaped following the COVID-19 pandemic outbreak. FOLLOW US - Subscribe to ZDNet on YouTube: http://bit.ly/2HzQmyf - Watch more ZDNet videos: http://zd.net/2Hzw9Zy - Follow ZDNet on Twitter: https://twitter.com/ZDNet - Follow ZDNet on Facebook: https://www.facebook.com/ZDNet - Follow ZDNet on Instagram: https://www.instagram.com/ZDNet_CBSi - Follow ZDNet on LinkedIn: https://www.linkedin.com/company/zdnet-com/ - Follow ZDNet on Snapchat: https://www.snapchat.com/add/zdnet_cbsi Learn more about your ad choices. Visit megaphone.fm/adchoices
In this week's Resumo: Totvs and Stone compete for Linx; Via Varejo reports profit driven by its digital business; HPE launches a line of supercomputers; iFood to test drone deliveries; and an interview with Kate Darling of MIT reflects on the pandemic's influence on our relationship with robots and other technologies.
Kate Darling is a researcher at MIT interested in social robotics, robot ethics, and generally how technology intersects with society. She explores the emotional connection between human beings and life-like machines, which, for me, is one of the most exciting topics in all of artificial intelligence.
Support this podcast by signing up with these sponsors:
– ExpressVPN at https://www.expressvpn.com/lexpod
– MasterClass: https://masterclass.com/lex
EPISODE LINKS:
Kate's Website: http://www.katedarling.org/
Kate's Twitter: https://twitter.com/grok_
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook,
What separates humans from robots? Will humans eventually be fully dependent on automation? Neil deGrasse Tyson, comic co-host Chuck Nice, and robot ethicist Kate Darling, PhD, answer your Cosmic Queries on humans, robots, and everything in-between. NOTE: StarTalk+ Patrons and All-Access subscribers can watch or listen to this entire episode commercial-free here: https://www.startalkradio.net/show/cosmic-queries-humans-and-robots/ Thanks to our patrons Rusty Faircloth, Jaclyn Mishak, Thomas Hernke, Marcus Rodrigues Guimaraes, Alex Pierce, Radu Chichi, Dustin Laskosky, Stephanie Tasker, Charles J Lamb, and Jonathan J Rodriguez for supporting us this week. Special thanks to patron Michelle Danic for our Patreon Patron Episode ID this week. Photo Credit: Web Summit / CC BY (https://creativecommons.org/licenses/by/2.0
What’s the difference between a robot and an android? Should laws protect robots? Neil deGrasse Tyson explores the rise of robots with “I Am C-3PO” author and Star Wars actor Anthony Daniels, comic co-host Chuck Nice, and robot ethicist Kate Darling, PhD. NOTE: StarTalk+ Patrons and All-Access subscribers can watch or listen to this entire episode commercial-free here: https://www.startalkradio.net/show/c-3po-and-the-rise-of-robots-with-anthony-daniels/ Thanks to our Patrons Leon Galante, Tyler Miller, Chadd Brown, Oliver Gigacz, and Mike Schallmo for supporting us this week. Photo Credit: StarTalk.
Tonya Hall talks to Dr. Kate Darling, research specialist at the MIT Media Lab, to learn more about why, in some cases, military officers are becoming attached to battlefield robots. FOLLOW US - Subscribe to ZDNet on YouTube: http://bit.ly/2HzQmyf - Watch more ZDNet videos: http://zd.net/2Hzw9Zy - Follow ZDNet on Twitter: https://twitter.com/ZDNet - Follow ZDNet on Facebook: https://www.facebook.com/ZDNet - Follow ZDNet on Instagram: https://www.instagram.com/ZDNet_CBSi - Follow ZDNet on LinkedIn: https://www.linkedin.com/company/ZDNe... - Follow ZDNet on Snapchat: https://www.snapchat.com/add/zdnet_cbsi Learn more about your ad choices. Visit megaphone.fm/adchoices
[This is the text of a lecture that I delivered at Tilburg University on the 24th of September 2019. It was delivered as part of the 25th Anniversary celebrations for TILT (Tilburg Institute for Law, Technology and Society). My friend and colleague Sven Nyholm was the discussant for the evening. The lecture is based on my longer academic article 'Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism' but was written from scratch and presents some key arguments in a snappier and clearer form. I also include a follow up section responding to criticisms from the audience on the evening of the lecture. My thanks to all those involved in organizing the event (Aviva de Groot, Merel Noorman and Silvia de Conca in particular). You can download an audio version of this lecture, minus the reflections and follow ups, here or listen to it above]
1. Introduction
My lecture this evening will be about the conditions under which we should welcome robots into our moral communities. Whenever I talk about this, I am struck by how much my academic career has come to depend upon my misspent youth for its inspiration. Like many others, I was obsessed with science fiction as a child, and in particular with the representation of robots in science fiction. I had two favourite, fictional, robots. The first was R2D2 from the original Star Wars trilogy. The second was Commander Data from Star Trek: the Next Generation. I liked R2D2 because of his* personality - courageous, playful, disdainful of authority - and I liked Data because the writers of Star Trek used him as a vehicle for exploring some important philosophical questions about emotion, humour, and what it means to be human.
In fact, I have to confess that Data has had an outsized influence on my philosophical imagination and has featured in several of my academic papers. Part of the reason for this was practical. When I grew up in Ireland we didn't have many options to choose from when it came to TV. We had to make do with what was available and, as luck would have it, Star Trek: TNG was on every day when I came home from school. As a result, I must have watched each episode of its 7-season run multiple times.
One episode in particular has always stayed with me. It was called 'Measure of a Man'. In it, a scientist from the Federation visits the Enterprise because he wants to take Data back to his lab to study him. Data, you see, is a sophisticated human-like android, created by a lone scientific genius, under somewhat dubious conditions. The Federation scientist wants to take Data apart and see how he works with a view to building others like him. Data, unsurprisingly, objects. He argues that he is not just a machine or piece of property that can be traded and disassembled to suit the whims of human beings. He has his own, independent moral standing. He deserves to be treated with dignity.
But how does Data prove his case? A trial ensues and evidence is given on both sides. The prosecution argue that Data is clearly just a piece of property. He was created not born. He doesn't think or see the world like a normal human being (or, indeed, other alien species). He even has an 'off switch'. Data counters by giving evidence of the rich relationships he has formed with his fellow crew members and eliciting testimony from others regarding his behaviour and the interactions they have with him. Ultimately, he wins the case.
The court accepts that he has moral standing.
Now, we can certainly lament the impact that science fiction has on the philosophical debate about robots. As David Gunkel observes in his 2018 book Robot Rights:
"[S]cience fiction already — and well in advance of actual engineering practice — has established expectations for what a robot is or can be. Even before engineers have sought to develop working prototypes, writers, artists, and filmmakers have imagined what robots do or can do, what configurations they might take, and what problems they could produce for human individuals and communities." (Gunkel 2018, 16)
He continues, noting that this is a "potential liability" because:
"science fiction, it is argued, often produces unrealistic expectations for and irrational fears about robots that are not grounded in or informed by actual science." (Gunkel 2018, 18)
I certainly heed this warning. But, nevertheless, I think the approach taken by the TNG writers in the episode 'Measure of a Man' is fundamentally correct. Even if we cannot currently create a being like Data, and even if the speculation is well in advance of the science, they still give us the correct guide to resolving the philosophical question of when to welcome robots into our moral community. Or so, at least, I shall argue in the remainder of this lecture.
2. Tribalism and Conflict in Robot Ethics
Before I get into my own argument, let me say something about the current lay of the land when it comes to this issue. Some of you might be familiar with the famous study by the social psychologist Muzafer Sherif. It was done in the early 1950s at a summer camp in Robber's Cave, Oklahoma. Suffice to say, it is one of those studies that wouldn't get ethics approval nowadays. Sherif and his colleagues were interested in tribalism and conflict. They wanted to see how easy it would be to get two groups of 11-year old boys to divide into separate tribes and go to war with one another. It turned out to be surprisingly easy. By arbitrarily separating the boys into two groups, giving them nominal group identity (the 'Rattlers' and the 'Eagles'), and putting them into competition with each other, Sherif and his research assistants sowed the seeds for bitter and repeated conflict.
The study has become a classic, repeatedly cited as evidence of how easy it is for humans to get trapped in intransigent group conflicts. I mention it here because, unfortunately, it seems to capture what has happened with the debate about the potential moral standing of robots. The disputants have settled into two tribes. There are those that are 'anti' the idea; and there are those that are 'pro' the idea. The members of these tribes sometimes get into heated arguments with one another, particularly on Twitter (which, admittedly, is a bit like a digital equivalent of Sherif's summer camp).
Those that are 'anti' the idea would include Noel Sharkey, Amanda Sharkey, Deborah Johnson, Aimee van Wynsberghe and the most recent lecturer in this series, Joanna Bryson. They cite a variety of reasons for their opposition. The Sharkeys, I suspect, think the whole debate is slightly ridiculous because current robots clearly lack the capacity for moral standing, and debating their moral standing distracts from the important issues in robot ethics - namely stopping the creation and use of robots that are harmful to human well-being. Deborah Johnson would argue that since robots can never experience pain or suffering they will never have moral standing.
Van Wynsberghe and Bryson are maybe a little different and lean more heavily on the idea that even if it were possible to create robots with moral standing — a possibility that Bryson at least is willing to concede — it would be a very bad idea to do so because it would cause considerable moral and legal disruption.

Those that are pro the idea would include Kate Darling, Mark Coeckelbergh, David Gunkel, Erica Neely, and Daniel Estrada. Again, they cite a variety of reasons for their views. Darling is probably the weakest on the pro side. She focuses on humans and thinks that even if robots themselves lack moral standing we should treat them as if they had moral standing because that would be better for us. Coeckelbergh and Gunkel are more provocative, arguing that in settling questions of moral standing we should focus less on the intrinsic capacities of robots and more on how we relate to them. If those relations are thick and meaningful, then perhaps we should accept that robots have moral standing. Erica Neely proceeds from a principle of moral precaution, arguing that even if we are unsure of the moral standing of robots we should err on the side of over-inclusivity rather than under-inclusivity when it comes to this issue: it is much worse to exclude a being with moral standing than to include one without. Estrada is almost the polar opposite of Bryson, welcoming the moral and legal disruption that embracing robots would entail because it would loosen the stranglehold of humanism on our ethical code.

To be clear, this is just a small sample of those who have expressed an opinion about this topic. There are many others that I just don’t have time to discuss. I should, however, say something here about this evening’s discussant, Sven, and his views on the matter. I had the fortune of reading a manuscript of Sven’s forthcoming book Humans, Robots and Ethics. It is an excellent and entertaining contribution to the field of robot ethics and in it Sven shares his own views on the moral standing of robots. I’m sure he will explain them later on but, for the time being, I would tentatively place him somewhere near Kate Darling on this map: he thinks we should be open to the idea of treating robots as if they had moral standing, but not because of what the robots themselves are but because of what respecting them says about our attitudes to other humans.

And what of myself? Where do I fit in all of this? People would probably classify me as belonging to the pro side. I have argued that we should be open to the idea that robots have moral standing. But I would much prefer to transcend this tribalistic approach to the issue. I am not an advocate for the moral standing of robots. I think many of the concerns raised by those on the anti side are valid. Debating the moral standing of robots can seem, at times, ridiculous and a distraction from other important questions in robot ethics; and accepting them into our moral communities will, undoubtedly, lead to some legal and moral disruption (though I would add that not all disruption is a bad thing). That said, I do care about the principles we should use to decide questions of moral standing, and I think that those on the anti side of the debate sometimes use bad arguments to support their views. This is why, in the remainder of this lecture, I will defend a particular approach to settling the question of the moral standing of robots. 
I do so in the hope that this can pave the way to a more fruitful and less tribalistic debate.In this sense, I am trying to return to what may be the true lesson of Sherif’s famous experiment on tribalism. In her fascinating book The Lost Boys: Inside Muzafer Sherif’s Robbers Cave Experiment, Gina Perry has revealed the hidden history behind Sherif’s work. It turns out that Sherif tried to conduct the exact same experiment as he did in Robber’s Cave one year before in Middle Grove, New York. It didn’t work out. No matter what the experimenters did to encourage conflict, the boys refused to get sucked into it. Why was this? One suggestion is that at Middle Grove, Sherif didn’t sort the boys into two arbitrary groups as soon as they arrived. They were given the chance to mingle and get to know one another before being segregated. This initial intermingling may have inoculated them from tribalism. Perhaps we can do the same thing with philosophical dialogue? I live in hope.3. In Defence of Ethical BehaviourismThe position I wish to defend is something I call ‘ethical behaviourism’. According to this view, the behavioural representations of another entity toward you are a sufficient ground for determining their moral status. Or, to put it slightly differently, how an entity looks and acts is enough to determine its moral status. If it looks and acts like a duck, then you should probably treat it like you treat any other duck.Ethical behaviourism works through comparisons. If you are unsure of the moral status of a particular entity — for present purposes this will be a robot but it should be noted that ethical behaviourism has broader implications — then you should compare its behaviours to that of another entity that is already agreed to have moral status — a human or an animal. If the robot is roughly performatively equivalent to that other entity, then it too has moral status. I say “roughly” since no two entities are ever perfectly equivalent. If you compared two adult human beings you would spot performative differences between them, but this wouldn’t mean that one of them lacks moral standing as a result. The equivalence test is an inexact one, not an exact one.There is nothing novel in ethical behaviourism. It is, in effect, just a moral variation of the famous Turing Test for machine intelligence. Where Turing argued that we should assess intelligence on the basis of behaviour, I am arguing that we should determine moral standing on the basis of behaviour. It is also not a view that is original to me. Others have defended similar views, even if they haven’t explicitly labelled it as such.Despite the lack of novelty, ethical behaviourism is easily misunderstood and frequently derided. So let me just clarify a couple of points. First, note that it is a practical and epistemic thesis about how we can settle questions of moral standing; it is not an abstract metaphysical thesis about what it is that grounds moral standing. So, for example, someone could argue that the capacity to feel pain is the metaphysical grounding for moral status and that this capacity depends on having a certain mental apparatus. The ethical behaviourist can agree with this. They will just argue that the best evidence we have for determining whether an entity has the capacity to feel pain is behavioural. Furthermore, ethical behaviourism is agnostic about the broader consequences of its comparative tests. 
To say that one entity should have the same moral standing as another entity does not mean both are entitled to a full set of legal and moral rights. That depends on other considerations. A goat could have moral standing, but that doesn’t mean it has the right to own property. This is important: when I argue that we should apply this approach to robots, I am not thereby endorsing a broader claim that we should grant robots legal rights or treat them like adult human beings. This depends on who or what the robot is being compared to.

So what’s the argument for ethical behaviourism? I have offered different formulations of this but for this evening’s lecture I suggest that it consists of three key propositions or premises.

(P1) The most popular criteria for moral status are dependent on mental states or capacities, e.g. theories focused on sentience, consciousness, having interests, agency, and personhood.

(P2) The best evidence — and oftentimes the only practicable evidence — for the satisfaction of these criteria is behavioural.

(P3) Alternative alleged grounds of moral status or criteria for determining moral status either fail to trump or dislodge the sufficiency of the behavioural evidence.

Therefore, ethical behaviourism is correct: behaviour provides a sufficient basis for settling questions of moral status.

I take it that the first premise of this argument is uncontroversial. Even if you think there are other grounds for moral status, I suspect you agree that an entity with sentience or consciousness (etc) has some kind of moral standing. The second premise is more controversial but is, I think, undeniable. It’s a trite observation but I will make it anyway: we don’t have direct access to one another’s minds. I cannot crawl inside your head and see if you really are experiencing pain or suffering. The only thing I have to go on is how you behave and react to the world. This is true, by the way, even if I can scan your brain and see whether the pain-perceiving part of it lights up. This is because the only basis we have for verifying the correlations between functional activity in the brain and mental states is behavioural. What I mean is that scientists ultimately verify those correlations by asking people in the brain scanners what they are feeling. So all premise (2) is saying is that if the most popular theories of moral status are to work in practice, it can only be because we use behavioural evidence to guide their application.

That brings us to premise (3): that all other criteria fail to dislodge the importance of behavioural evidence. This is the most controversial one. Many people seem to passionately believe that there are other ways of determining moral status and indeed they argue that relying on behavioural evidence would be absurd. Consider these two recent Twitter comments on an article I wrote about ethical behaviourism and how it relates to animals and robots:

First comment: “[This is] Errant #behaviorist #materialist nonsense…Robots are inanimate even if they imitate animal behavior. They don’t want or care about anything. But knock yourself out. Put your toaster in jail if it burns your toast.”

Second comment: “If I give a hammer a friendly face so some people feel emotionally attached to it, it still remains a tool #AnthropomorphicFallacy”

These are strong statements, but they are not unusual. I encounter this kind of criticism quite frequently. But why? Why are people so resistant to ethical behaviourism? 
Why do they think that there must be something more to how we determine moral status? Let’s consider some of the most popular objections.4. Objections and RepliesIn a recent paper, I suggested that there were seven (more, depending on how you count) major objections to ethical behaviourism. I won’t review all seven here, but I will consider four of the most popular ones. Each of these objections should be understood as an attempt to argue that behavioural evidence by itself cannot suffice for determining moral standing. Other evidence matters as well and can ‘defeat’ the behavioural evidence.(A) The Material Cause ObjectionThe first objection is that the ontology of an entity makes a difference to its moral standing. To adopt the Aristotelian language, we can say that the material cause of an entity (i.e. what it is made up of) matters more than behaviour when it comes to moral standing. So, for example, someone could argue that robots lack moral standing because they are not biological creatures. They are not made from the same ‘wet’ organic components as human beings or animals. Even if they are performatively equivalent to human beings or animals, this ontological difference scuppers any claim they might have to moral standing.I find this objection unpersuasive. It smacks to me of biological mysterianism. Why exactly does being made of particular organic material make such a crucial difference? Imagine if your spouse, the person you live with everyday, was suddenly revealed to be an alien from the Andromeda galaxy. Scientists conduct careful tests and determine that they are not a carbon-based lifeform. They are made from something different, perhaps silicon. Despite this, they still look and act in the same way as they always have (albeit now with some explaining to do). Would the fact that they are made of different stuff mean that they no longer warrant any moral standing in your eyes? Surely not. Surely the behavioural evidence suggesting that they still care about you and still have the mental capacities you used to associate with moral standing would trump the new evidence you have regarding their ontology. I know non-philosophers dislike thought experiments of this sort, finding them to be slightly ridiculous and far-fetched. Nevertheless, I do think they are vital in this context because they suggest that behaviour does all the heavy lifting when it comes to assessing moral standing. In other words, behaviour matters more than matter. This is also, incidentally, one reason why it is wrong to say that ethical behaviourism is a ‘materialist’ view: ethical behaviourism is actually agnostic regarding the ontological instantiation of the capacities that ground moral status; it is concerned only with the evidence that is sufficient for determining their presence.All that said, I am willing to make one major concession to the material cause objection. I will concede that ontology might provide an alternative, independent ground for determining the moral status of an entity. Thus, we might accept that an entity that is made from the right biological stuff has moral standing, even if they lack the behavioural sophistication we usually require for moral standing. So, for example someone in a permanent coma might have moral standing because of what they are made of, and not because of what they can do. Still, all this shows is that being made of the right stuff is an independent sufficient ground for moral standing, not that it is a necessary ground for moral standing. 
The latter is what would need to be proved to undermine ethical behaviourism.

(B) The Efficient Cause Objection

The second objection is that how an entity comes into existence makes a difference to its moral standing. To continue the Aristotelian theme, we can say that the efficient cause of existence is more important than the unfolding reality. This is an objection that the philosopher Michael Hauskeller hints at in his work. Hauskeller doesn’t focus on moral standing per se, but does focus on when we can be confident that another entity cares for us or loves us. He concedes that behaviour seems like the most important thing when addressing this issue — what else could caring be apart from caring behaviour? — but then resiles from this by arguing that how the being came into existence can undercut the behavioural evidence. So, for example, a robot might act as if it cares about you, but when you learn that the robot was created and manufactured by a team of humans to act as if it cares for you, then you have reason to doubt the sincerity of its behaviour.

It could be that what Hauskeller is getting at here is that behavioural evidence can often be deceptive and misleading. If so, I will deal with this concern in a moment. But it could also be that he thinks that the mere fact that a robot was programmed and manufactured, as opposed to being evolved and developed, makes a crucial difference to moral standing. If that is what he is claiming, then it is hard to see why we should take it seriously. Again, imagine if your spouse told you that they were not conceived and raised in the normal way. They were genetically engineered in a lab and then carefully trained and educated. Having learned this, would you take a new view of their moral standing? Surely not. Surely, once again, how they actually behave towards you — and not how they came into existence — would be what ultimately mattered. We didn’t deny the first in vitro baby moral standing simply because she came into existence in a different way from ordinary human beings. The same principle should apply to robots.

Furthermore, if this is what Hauskeller is arguing, it would provide us with an unstable basis on which to make crucial judgments of moral standing. After all, the differences between humans and robots with respect to their efficient causes are starting to break down. Increasingly, robots are not being programmed and manufactured from the top down to follow specific rules. They are instead given learning algorithms and then trained on different datasets, with the process sometimes being explicitly modeled on evolution and childhood development. Similarly, humans are increasingly being designed and programmed from the top down, through artificial reproduction, embryo selection and, soon, genetic engineering. You may object to all this tinkering with the natural processes of human development and conception. But I think you would be hard pressed to deny a human that came into existence as a result of these processes the moral standing you ordinarily give to other human beings.

(C) The Final Cause Objection

The third objection is that the purposes an entity serves, and how it is expected to fulfil those purposes, make a difference to its moral standing. This is an objection that Joanna Bryson favours in her work. In several papers, she has argued that because robots will be designed to fulfil certain purposes on our behalf (i.e. 
they will be designed to serve us) and because they will be owned and controlled by us in the process, they should not have moral standing. Now, to be fair, Bryson is more open to the possibility of robot moral standing than most. She has said, on several occasions, that it is possible to create robots that have moral standing. She just thinks that this should not happen, in part because they will be owned and controlled by us, and because they will be (and perhaps should be) designed to serve our ends.

I don’t think there is anything in this that dislodges or upsets ethical behaviourism. For one thing, I find it hard to believe that the fact that an entity has been designed to fulfil a certain purpose should make a crucial difference to its moral standing. Suppose, in the future, human parents can genetically engineer their offspring to fulfil certain specific ends. For example, they can select genes that will guarantee (with the right training regime) that their child will be a successful athlete (this is actually not that dissimilar to what some parents try to do nowadays). Suppose they succeed. Would this fact alone undermine the child’s claim to moral standing? Surely not, and surely the same standard should apply to a robot. If it is performatively equivalent to another entity with moral standing, then the mere fact that it has been designed to fulfil a specific purpose should not affect its moral standing.

Related to this, it is hard to see why the fact that we might own and control robots should make a critical difference to their moral standing. If anything, this inverts the proper order of moral justification. The fact that a robot looks and acts like another entity that we believe to have moral standing should cause us to question our approach to ownership and control, not vice versa. We once thought it was okay for humans to own and control other humans. We were wrong to think this because it ignored the moral standing of those other humans.

That said, there are nuances here. Many people think that animals have some moral standing (i.e. that we need to respect their welfare and well-being) but that it is not wrong to own them or attempt to control them. The same approach might apply to robots if they are being compared to animals. This is the crucial point about ethical behaviourism: the ethical consequences of accepting that a robot is performatively equivalent to another entity with moral standing depend, crucially, on who or what that other entity is.

(D) The Deception Objection

The fourth objection is that ethical behaviourism cannot work because it is too easy to be deceived by behavioural cues. A robot might look and act like it is in pain, but this could just be a clever trick, used by its manufacturer, to foster false sympathy. This is, probably, the most important criticism of ethical behaviourism. It is what I think lurks behind the claim that ethical behaviourism is absurd and must be resisted.

It is well-known that humans have a tendency toward hasty anthropomorphism. That is, we tend to ascribe human-like qualities to features of our environment without proper justification. We anthropomorphise the weather, our computers, the trees and the plants, and so forth. It is easy to ‘hack’ this tendency toward hasty anthropomorphism. As social roboticists know, putting a pair of eyes on a robot can completely change how a human interacts with it, even if the robot cannot see anything. 
People worry, consequently, that ethical behaviourism is easily exploited by nefarious technology companies.

I sympathise with the fear that motivates this objection. It is definitely true that behaviour can be misleading or deceptive. We are often misled by the behaviour of our fellow humans. To quote Shakespeare, someone can ‘smile and smile and be a villain’. But what is the significance of this fact when it comes to assessing moral status? To me, the significance is that it means we should be very careful when assessing the behavioural evidence that is used to support a claim about moral status. We shouldn’t extrapolate too quickly from one behaviour. If a robot looks and acts like it is in pain (say), that might provide some warrant for thinking it has moral status, but we should examine its behavioural repertoire in more detail. It might emerge that other behaviours are inconsistent with the hypothesis that it feels pain or suffering.

The point here, however, is that we are always using other behavioural evidence to determine whether the initial behavioural evidence was deceptive or misleading. We are not relying on some other kind of information. Thus, for example, I think it would be a mistake to conclude that a robot cannot feel pain, even though it performs as if it does, because the manufacturer of the robot tells us that it was programmed to do this, or because some computer engineer can point to some lines of code that are responsible for the pain performance. That evidence by itself — in the absence of other countervailing behavioural evidence — cannot undermine the behavioural evidence suggesting that the robot does feel pain. Think about it like this: imagine if a biologist came to you and told you that evolution had programmed the pain response into humans in order to elicit sympathy from fellow humans. What’s more, imagine if a neuroscientist came to you and told you she could point to the exact circuit in the brain that is responsible for the human pain performance (and maybe even intervene in and disrupt it). What they say may well be true, but it wouldn’t mean that the behavioural evidence suggesting that your fellow humans are in pain can be ignored.

This last point is really the crucial bit. This is what is most distinctive about the perspective of ethical behaviourism. The tendency to misunderstand it, ignore it, or skirt around it is why I think many people on the ‘anti’ side of the debate make bad arguments.

5. Implications and Conclusions

That’s all I will say in defence of ethical behaviourism this evening. Let me conclude by addressing some of its implications and heading off some potential misunderstandings.

First, let me re-emphasise that ethical behaviourism is about the principles we should apply when assessing the moral standing of robots. In defending it, I am not claiming that robots currently have moral standing or, indeed, that they will ever have moral standing. I think this is possible, indeed probable, but I could be wrong. The devil is going to be in the detail of the behavioural tests we apply (just as it is with the Turing test for intelligence).

Second, there is nothing in ethical behaviourism that suggests that we ought to create robots that cross the performative threshold to moral standing. It could be, as people like Bryson and Van Wynsberghe argue, that this is a very bad idea: that it will be too disruptive of existing moral and legal norms. 
What ethical behaviourism does suggest, however, is that there is an ethical weight to the decision to create human-like and animal-like robots that may be underappreciated by robot manufacturers.Third, acknowledging the potential risks, there are also potential benefits to creating robots that cross the performative threshold. Ethical behaviourism can help to reveal a value to relationships with robots that is otherwise hidden. If I am right, then robots can be genuine objects of moral affection, friendship and love, under the right conditions. In other words, just as there are ethical risks to creating human-like and animal-like robots, there are also ethical rewards and these tend to be ignored, ridiculed or sidelined in the current debate.Fourth, and related to this previous point, the performative threshold that robots have to cross in order to unlock the different kinds of value might vary quite a bit. The performative threshold needed to attain basic moral standing might be quite low; the performative threshold needed to say that a robot can be a friend or a partner might be substantially higher. A robot might have to do relatively little to convince us that it should be treated with moral consideration, but it might have to do a lot to convince us that it is our friend.These are topics that I have explored in greater detail in some of my papers, but they are also topics that Sven has explored at considerable length. Indeed, several chapters of his forthcoming book are dedicated to them. So, on that note, it is probably time for me to shut up and hand over to him and see what he has to say about all of this.Reflections and Follow Ups After I delivered the above lecture, my colleague and friend Sven Nyholm gave a response and there were some questions and challenges from the audience. I cannot remember every question that was raised, but I thought I would respond to a few that I can remember.1. The Randomisation CounterexampleOne audience member (it was Nathan Wildman) presented an interesting counterexample to my claim that other kinds of evidence don’t defeat or undermine the behavioural evidence for moral status. He argued that we could cook-up a possible scenario in which our knowledge of the origins of certain behaviours did cause us to question whether it was sufficient for moral status.He gave the example of a chatbot that was programmed using a randomisation technique. The chatbot would generate text at random (perhaps based on some source dataset). Most of the time the text is gobbledygook but on maybe one occasion it just happens to have a perfectly intelligible conversation with you. In other words, whatever is churned out by the randomisation algorithm happens to perfectly coincide with what would be intelligible in that context (like picking up a meaningful book in Borges’s Library of Babel). This might initially cause you to think it has some significant moral status, but if the computer programmer came along and told you about the randomisation process underlying the programming you would surely change your opinion. So, on this occasion, it looks like information about the causal origins of the behaviour, makes a difference to moral status.Response: This is a clever counterexample but I think it overlooks two critical points. First, it overlooks the point I make about avoiding hasty anthropomorphisation towards the end of my lecture. I think we shouldn’t extrapolate too much from just one interaction with a robot. 
We should conduct a more thorough investigation of the robot’s (or in this case the chatbot’s) behaviours. If the intelligible conversation was just a one-off, then we will quickly be disabused of our belief that it has moral status. But if it turns out that the intelligible conversation was not a one-off, then I don’t think the evidence regarding the randomisation process would have any such effect. The computer programmer could shout and scream as much as he/she likes about the randomisation algorithm, but I don’t think this would suffice to undermine the consistent behavioural evidence. This links to a second, and perhaps deeper metaphysical point I would like to make: we don’t really know what the true material instantiation of the mind is (if it is indeed material). We think the brain and its functional activity is pretty important, but we will probably never have a fully satisfactory theory of the relationship between matter and mind. This is the core of the hard problem of consciousness. Given this, it doesn’t seem wise or appropriate to discount the moral status of this hypothetical robot just because it is built on a randomisation algorithm. Indeed, if such a robot existed, it might give us reason to think that randomisation was one of the ways in which a mind could be functionally instantiated in the real world.I should say that this response ignores the role of moral precaution in assessing moral standing. If you add a principle of moral precaution to the mix, then it may be wrong to favour a more thorough behavioural test. This is something I discuss a bit in my article on ethical behaviourism.2. The Argument confuses how we know X is valuable with what makes X actually valuableOne point that Sven stressed in his response, and which he makes elsewhere too, is that my argument elides or confuses two separate things: (i) how we know whether something is of value and (ii) what it is that makes it valuable. Another way of putting it: I provide a decision-procedure for deciding who or what has moral status but I don’t thereby specify what it is that makes them have moral status. It could be that the capacity to feel pain is what makes someone have moral standing and that we know someone feels pain through their behaviour, but this doesn’t mean that they have moral standing because of their behaviour.Response: This is probably a fair point. I may on occasion elide these two things. But my feeling is that this is a ‘feature’ rather than a ‘bug’ in my account. I’m concerned with how we practically assess and apply principles of moral standing in the real world, and not so much with what it is that metaphysically undergirds moral standing.3. Proxies for Behaviour versus Proxies for MindAnother comment (and I apologise for not remembering who gave it) is that on my theory behaviour is important but only because it is a proxy for something else, namely some set of mental states or capacities. This is similar to the point Sven is making in his criticism. If that’s right, then I am wrong to assume that behaviour is the only (or indeed the most important) proxy for mental states. Other kinds of evidence serve as proxies for mental states. The example was given of legal trials where the prosecution is trying to prove what the mental status of the defendant was at the time of an offence. They don’t just rely on behavioural evidence. They also rely on other kinds of forensic evidence to establish this.Response: I don’t think this is true and this gets to a deep feature of my theory. 
To take the criminal trial example, I don’t think it is true to say that we use other kinds of evidence as proxies for mental states. I think we use them as proxies for behaviour, which we then use as proxies for mental states. In other words, the actual order of inference goes:

Other evidence → behaviour → mental state

And not:

Other evidence → mental state

This is the point I was getting at in my talk when I spoke about how we make inferences from functional brain activity to mental state. I believe that when we draw a link between brain activity and mental state, what we are really doing is this:

Brain state → behaviour → mental state

And not:

Brain state → mental state.

Now, it is, of course, true to say that sometimes scientists think we can make this second kind of inference. For example, purveyors of brain-based lie detection tests (and, indeed, other kinds of lie detection test) try to draw a direct line of inference from a brain state to a mental state, but I would argue that this is only because they have previously verified their testing protocol by following the “brain state → behaviour → mental state” route and confirming that it is reliable across multiple tests. This gives them the confidence to drop the middle step on some occasions, but ultimately this is all warranted (if it is, in fact, warranted – brain-based lie detection is controversial) because the scientists first took the behavioural step. To undermine my view, you would have to show that it is possible to cut out the behavioural step in this inference pattern. I don’t think this can be done, but perhaps I can be proved wrong.

This is perhaps the most metaphysical aspect of my view.

4. Default Settings and Practicalities

Another point that came up in conversation with Sven, Merel Noorman and Silvia de Conca had to do with the default assumptions we are likely to have when dealing with robots and how this impacts on the practicalities of robots being accepted into the moral circle. In other words, even if I am right in some abstract, philosophical sense, will anyone actually follow the behavioural test I advocate? Won’t there be a lot of resistance to it in reality?

Now, as I mentioned in my lecture, I am not an activist for robot rights or anything of the sort. I am interested in the general principles we should apply when settling questions of moral status, not in whether a particular being, such as a robot, has acquired moral status. That said, implicit views about the practicalities of applying the ethical behaviourist test may play an important role in some of the arguments I am making.

One example of this has to do with the ‘default’ assumption we have when interpreting the behaviour of humans/animals vis-à-vis robots. We tend to approach humans and animals with an attitude of good faith, i.e. we assume that each of their outward behaviours is a sincere representation of their inner state of mind. It’s only if we receive contrary evidence that we will start to doubt the sincerity of the behaviour.

But what default assumption do we have when confronting robots? It seems plausible to suggest that most people will approach them with an attitude of bad faith. They will assume that their behaviours are representative of nothing at all and will need a lot of evidence to convince them that they should be granted some weight. This suggests that (a) not all behavioural evidence is counted equally and (b) it might be very difficult, in practice, for robots to be accepted into the moral circle. 
Response: I don’t see this as a criticism of ethical behaviourism but, rather, a warning to anyone who wishes to promote it. In other words, I accept that people will resist ethical behaviourism and may treat robots with greater suspicion than human or animal agents. One of the key points of this lecture, and of the longer academic article I wrote about the topic, was to address this suspicion and skepticism. Nevertheless, the fact that there may be these practical difficulties does not mean that ethical behaviourism is incorrect. In this respect, it is worth noting that Turing was acutely aware of this problem when he originally formulated his 'Imitation Game' test. The reason why the test was purely text-based in its original form was to prevent human-centric biases affecting its operation.

5. Ethical Mechanicism vs Ethical Behaviourism

After I posted this article, Natesh Ganesh posted a critique of my handling of the deception objection on Twitter. He made two interesting points. First, he argued that the thought experiment I used to dismiss the deception objection was misleading and circular. If a scientist revealed the mechanisms underlying my own pain performances I would have no reason to doubt that the pain was genuine, since I already know that someone with my kind of neural circuitry can experience pain. If they revealed the mechanisms underlying a robot’s pain performances things would be different, because I do not yet have a reason to think that a being with that kind of mechanism can experience genuine pain. As a result, the thought experiment is circular because only somebody who already accepted ethical behaviourism would be so dismissive of the mechanistic evidence. Here’s how Natesh expresses the point:

“the analogy in the last part [the response to the deception objection] seems flawed. Showing me the mechanisms of pain in entities (like humans) who we share similar mechanisms with & agree have moral standing is different from showing me the mechanisms of entities (like robots) whose moral standing we are trying to determine. Denying experience of pain in the 1st simply because I now know the circuitry would imply denying your own pain & hence moral standing. But accepting/ denying the 2nd if its a piece of code implicitly depends on whether you already accept/deny ethical behaviorism. It is just circular to appeal to that example as evidence.”

He then follows up with a second point (implicit in what was just said) about the importance of mechanical similarities between entities when it comes to assessing moral standing:

“I for one am more likely to [believe] a robot can experience pain if it shows the behavior & the manufacturer opened it up & showed me the circuitry and if that was similar to my own (different material perhaps) I am more likely to accept the robot experiences pain. In this case once again I needed machinery on top of behavior.”

What I would say here is that Natesh, although not completely dismissive of the importance of behaviour to assessing moral standing, is a fan of ethical mechanicism, and not ethical behaviourism. He thinks you must have mechanical similarity (equivalence?) 
before you can conclude that two entities share moral standing.

Response: On the charge of circularity, I don’t think this is quite fair. The thought experiment I propose when responding to the deception objection is, like all thought experiments, intended to be an intuition pump. The goal is to imagine a situation in which you could describe and intervene in the mechanical underpinning of a pain performance with great precision (be it a human pain performance or otherwise) and ask whether the mere fact that you could describe the mechanism in detail or intervene in it would make a difference to the entity’s moral standing. My intuitions suggest it wouldn’t make a difference, irrespective of the details of the mechanism (this is the point I make, above, in relation to the example given by Nathan Wildman about the robot whose behaviour is the result of a random-number generator programme). Perhaps other people’s intuitions are pumped in a different direction. That can happen, but it doesn’t mean the thought experiment is circular.

What about the importance of mechanisms in addition to behaviour? This is something I address in more detail in the academic paper. I have two thoughts about it. First, I could just bite the bullet and agree that the underlying mechanisms must be similar too. This would just add an additional similarity test to the assessment of moral status. There would then be similar questions as to how similar the mechanisms must be. Is it enough if they are, roughly, functionally similar or must they have the exact same sub-components and processes? If the former, then it still seems possible in principle for roboticists to create a functionally similar underlying mechanism and this could then ground moral standing for robots.

Second, despite this, I would still push back against the claim that similar underlying mechanisms are necessary. This strikes me as being just a conservative prejudgment rather than a good reason for denying moral status to behaviourally equivalent entities. Why are we so confident that only entities with our neurological mechanisms (or something very similar) can experience pain (or instantiate the other mental properties relevant to moral standing)? Or, to put it less controversially, why should we be so confident that mechanical similarity undercuts behavioural similarity? If there is an entity that looks and acts like it is in pain (or has interests, a sense of personhood, agency etc), and all the behavioural tests confirm this, then why deny it moral standing because of some mechanical differences?

Part of the resistance here could be that people are confusing two different claims:

Claim 1: it is impossible (physically, metaphysically) for an entity that lacks sufficient mechanical similarity (with humans/animals) to have the behavioural sophistication we associate with experiencing pain, having agency etc.

Claim 2: an entity that has the behavioural sophistication we associate with experiencing pain, having agency (etc), but lacks mechanical similarity to other entities with such behavioural sophistication, should be denied moral standing because it lacks mechanical similarity.

Ethical behaviourism denies claim 2, but it does not, necessarily, deny claim 1. It could be the case that mechanical similarity is essential for behavioural similarity. This is something that can only be determined after conducting the requisite behavioural tests. 
The point, as always throughout my defence of the position, is that the behavioural evidence should be our guide. This doesn’t mean that other kinds of evidence are irrelevant, but simply that they do not carry as much weight. My sense is that people who favour ethical mechanicism have a very strong intuition in favour of claim 1, which they then carry over into support for claim 2. This carry-over is not justified, as the two claims are not logically equivalent.
We're far from developing robots that feel emotions, but we already have feelings towards them, says robot ethicist Kate Darling, and an instinct like that can have consequences. Learn more about how we're biologically hardwired to project intent and life onto machines -- and how it might help us better understand ourselves.
Speech Analysis - Why we have an emotional connection to robots | Kate Darling
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we’re joined by Dr. Kate Darling, Research Specialist at the MIT Media Lab. Kate’s focus is on robot ethics and interaction, namely the social implications of how people treat robots and the purposeful design of robots in our daily lives. This episode is a fascinating look into the intersection of psychology and how we are using technology. We cover topics like how to measure empathy, the impact of robot treatment on kids’ behavior, the correlation between animals and robots, why ‘successful’ robots aren’t always humanoid, and so much more!
Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
Most of us have no trouble telling the difference between a robot and a living, feeling organism. Nevertheless, our brains often treat robots as if they were alive. We give them names, imagine that they have emotions and inner mental states, get mad at them when they do the wrong thing, or feel bad for them when they seem to be in distress. Kate Darling is a researcher at the MIT Media Lab who specializes in social robotics, the interactions between humans and machines. We talk about why we cannot help but anthropomorphize even very non-human-appearing robots, and what that means for legal and social issues now and in the future, including robot companions and helpers in various forms. Support Mindscape on Patreon or Paypal. Kate Darling has a degree in law as well as a doctorate of sciences from ETH Zurich. She currently works at the Media Lab at MIT, where she conducts research in social robotics and serves as an advisor on intellectual property policy. She is an affiliate at the Harvard Berkman Klein Center for Internet & Society and at the Institute for Ethics and Emerging Technologies. Among her awards is the Mark T. Banner award in Intellectual Property from the American Bar Association. She is a contributing writer to Robohub and IEEE Spectrum. Links: web page, publications, Twitter, and her TED talk on why we have an emotional connection to robots.
Grey Mirror: MIT Media Lab’s Digital Currency Initiative on Technology, Society, and Ethics
Our guest is Kate Darling, a Research Specialist at the MIT Media Lab and an Affiliate at the Harvard Berkman Center. We chat about the balance involved in regulating technology, the negative systemic impacts of our current gender roles, and how being a mother has helped Kate empathize more! https://twitter.com/grok_ https://twitter.com/mitDCI https://twitter.com/RhysLindmark
with Kate Darling (@grok_) and Hanne Tidnam (@omnivorousread) We already know that we have an innate tendency to anthropomorphize robots. But beyond just projecting human qualities onto them, as we begin to share more and more spaces, social and private, what kind of relationships will we develop with them? And how will those relationships in turn change us? In this Valentine’s Day special, Kate Darling, Researcher at MIT Labs, talks with a16z's Hanne Tidnam all about our emotional relations with robots. From our lighter sides -- affection, love, empathy, and support -- to our darker sides, what will these new kinds of relationships enhance or de-sensitize in us? Why does it matter that we develop these often intense attachments to these machines that range from tool to companion -- and what do these relationships teach us about ourselves, our tendencies and our behaviors? What kinds of models from the past can we look towards to help us navigate the ethics and accountability that come along with these increasingly sophisticated relationships with robots?
Suzi and Don welcome Dr. Kate Darling, who is American but falls under "third-culture kid" status. She grew up in Switzerland but is now a researcher at the Massachusetts Institute of Technology (MIT) Media Lab, where she investigates social robotics and conducts experimental studies on human-robot interaction. In other words, stuff that is waaaaay over the simple-minded heads of Suzi and Don. They also talk about how buying holiday gifts for expats can cause confusion, and Don gives his Christmas present to Suzi. (It involves chickens.)
In Episode #335, Kate Darling asks the question, "Do robots have rights?" How should we approach this topic from a regulatory perspective? Who are we really protecting when we discuss appropriate human behavior toward robots? Robot ethics is an “emerging” topic—so much so that there is no standard definition as to what it entails.
We talk with Christina Mulligan about the salutary effects of smashing robots that have wronged you. Join us for a chat about revenge and satisfaction in the emerging human-robot social space. This show’s links: Christina Mulligan's faculty profile (https://www.brooklaw.edu/faculty/directory/facultymember/biography?id=christina.mulligan) and writing (https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=1557395) Christina Mulligan, Revenge Against Robots (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3016048) About Betty Smith's A Tree Grows in Brooklyn (https://en.wikipedia.org/wiki/A_Tree_Grows_in_Brooklyn_(novel)) About the Tree that Owns Itself (https://en.wikipedia.org/wiki/Tree_That_Owns_Itself) The Trial of the Autun Rats (http://www.duhaime.org/LawMuseum/LawArticle-1529/1508-The-Trial-of-the-Autun-Rats.aspx) Oral Argument 70: No Drones in the Park (http://oralargument.org/70) Scott Hershovitz, Tort as a Substitute for Revenge (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2308590) Kate Darling, Palash Nandy, and Cynthia Breazeal, Empathic Concern and the Effect of Stories in Human-Robot Interaction (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2639689); Kate Darling, "Who's Johnny?" Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2588669) Office Space, the printer scene (https://www.youtube.com/watch?v=N9wsjroVlu8) (nsfw) Hunter Walk, Amazon Echo Is Magical. It’s Also Turning My Kid into an Asshole. (https://hunterwalk.com/2016/04/06/amazon-echo-is-magical-its-also-turning-my-kid-into-an-asshole/) Hannah Gold, This Mirror that Forces People to Smile Is Going to Piss Everyone Off (https://jezebel.com/this-mirror-that-forces-people-to-smile-is-going-to-pis-1819828956) Special Guest: Christina Mulligan.
Will we one day create machines that are essentially just like us? People have been wrestling with that question since the advent of robotics. But maybe we're missing another, even more intriguing question: what can robots teach us about ourselves? We ponder that question with Kate Darling of the MIT Media Lab in a special taping at the Aspen Ideas Festival.
As more robots become available on the market, we are seeing the different ways in which humans can interact with them. Some people think robots are alive; they even feel bad when a Roomba gets stuck. Other people find robots that look a lot like humans scary. Kate Darling, Research Specialist at the MIT Media Lab, explains the different types of human-robot interactions. We talked about how the design of a robot affects how it is perceived, and the role of the person's culture. At the end, we talked about questions that lawmakers will need to address in this space.
Craig and Tony are at Agile Australia in Melbourne and do a lap of the convention centre floor: keynotes from Peter Halacsy from Prezi, Jeff Smith from IBM, and Kate Darling on robots and human interaction; Daniel Fajerman and Anubhav Berry from Australia Post on how they have dealt with business disruption; and Cameron Gough's talk “Scaling as …
Take an off-the-record journey with our most recent guest, Kate Darling, to hear about what she would tell her five-year-old self, the backstory on her first and worst jobs, the most interesting person she’s ever met and, perhaps most importantly, whether she’s an Android or iPhone user. All that and more After the Show. Are there other fun questions you’d like us to ask our guests? Email mindsworthmeeting@sternstrategy.com.
In this episode, leading robot ethicist and researcher at MIT Media Lab, Kate Darling, talks to Minds Worth Meeting about AI’s role in business and how new technologies will change the workplace. As robots move from the manufacturing line into our homes, Kate helps us understand the connection between people and robots and the policies needed to encourage the ethical evolution of human-robot relationships. Kate Darling is available for paid speaking engagements including keynote addresses, speeches, panels, conference talks, and advisory/consulting services through the exclusive representation of Stern Speakers, a division of Stern Strategy Group®.
Kate Darling is a leading expert in robot ethics. She’s a researcher at the Massachusetts Institute of Technology (MIT) Media Lab, where she investigates social robotics and conducts experimental studies on human-robot interaction. Kate is also a fellow at the Harvard Berkman Center for Internet & Society and the Yale Information Society Project, and is an affiliate at the Institute for Ethics and Emerging Technologies. She explores the emotional connection between people and life-like machines, seeking to influence technology design and public policy. Her writing and research anticipate difficult questions that lawmakers, engineers, and the wider public will need to address as human-robot relationships evolve in the coming decades. Kate has a background in law & economics and intellectual property. Twitter: @grok_
In this episode of the Making Sense podcast, Sam Harris speaks with Kate Darling about the ethical concerns surrounding our increasing use of robots and other autonomous systems. Kate Darling is a leading expert in robot ethics. She’s a researcher at the Massachusetts Institute of Technology (MIT) Media Lab, where she investigates social robotics and conducts experimental studies on human-robot interaction. Kate is also a fellow at the Harvard Berkman Center for Internet & Society and the Yale Information Society Project, and is an affiliate at the Institute for Ethics and Emerging Technologies. She explores the emotional connection between people and life-like machines, seeking to influence technology design and public policy. Her writing and research anticipate difficult questions that lawmakers, engineers, and the wider public will need to address as human-robot relationships evolve in the coming decades. Kate has a background in law & economics and intellectual property. You can support the Making Sense podcast and receive subscriber-only content at samharris.org/subscribe.
MIT Media Lab researcher Dr. Kate Darling stopped by the studio with her dino-bot (Mr. Spaghetti) to talk about the ethics of our new robot friends, what rights robots have, and the ethics surrounding sex bots. Plus, watch as Mr. Spaghetti takes his first steps ever!
An eight-year-old boy's encounter with a robotic toy doll ends up changing the course of technological history. Steven Johnson talks with special guests Ken Goldberg and Kate Darling, as we look at the uncanny world of emotional robotics. What if the dystopian future turns out to be one where the robots conquer humanity with their cuteness?
Kate Darling Show Notes
“Who’s Johnny?” research paper, PDF on robots, framing and empathy
Heider & Simmel 1944, a study of interpersonal perception and attribution of human characteristics to objects
Peter Singer, The Most Good You Can Do, on living ethically
Personal Data New York City Meetup, June 4 - In-Home Robots and Personal Data
Lily Camera Drone
Boston Dynamics “Spot” testing video and CNN story entitled, Is it cruel to kick a robot dog?
The Hexbug Nano robot used in Kate Darling’s “Who’s Johnny?” research
Aldebaran’s Pepper robot
Jibo, a social robot
Why Google’s Robot Personality Patent Is Not Good for Robotics, by Kate Darling
Project VRM (Vendor Relationship Management) - at Harvard’s Berkman Center
We Robot Conference - especially papers by Kate Darling, Peter Asaro and Jason Millar
Moral Machines: Teaching Robots Right From Wrong - by Wendell Wallach
Kate Darling’s Twitter Page
Not long ago, illegally downloading a movie could land you in court facing millions of dollars in fines and jail time. But Hollywood has begun to weather the storm by offering alternatives to piracy — same-day digital releases, better streaming, higher-quality in-theater experiences — that help meet some of the consumer demand that piracy captured. But the porn industry is not Hollywood. While the web has created incredible new economic opportunities for adult entertainers — independent production has flourished, as well as new types of production, which we won’t go into here simply to preserve our G-rating — few other industries on the web face the glut of competition from services that offer similar content for free or in violation of copyright. Simply put, there’s so much free porn on the net that honest pornographers can’t keep up. It’s hard to get accurate numbers on how much revenue is generated from online porn. It’s believed to be in the billions, at least in the United States. But it’s even more difficult to get a picture of how much revenue is lost in the adult entertainment industry due to copyright violation. Surprisingly, though, the porn industry doesn’t seem that interested in pursuing copyright violators. Intellectual property scholar Kate Darling studied how the industry was responding to piracy, and it turned out that — by and large — adult entertainment creators ran the numbers and found that it simply cost them more to fight copyright violators than it was worth. For today’s episode, Berkman alum and journalist Leora Kornfeld sat down with Kate Darling to talk about how porn producers are losing the copyright battle, and why many don’t care.
If a driverless car has to choose between crashing you into a school bus or a wall, who do you want programming that decision? Aleks Krotoski explores ethics in technology. Join Aleks as she finds out whether it's even possible for a device to 'behave' in a morally prescribed way by looking at attempts to make a smartphone 'kosher'. But nothing captures the conundrum quite like the ethical questions raised by driverless cars, and she explores those issues with engineer turned philosopher Jason Millar and robot ethicist Kate Darling. Professor of law and medicine Sheila MacLean offers a comparison with how codes of medical ethics were developed, before we hear the story of Gus, a 13-year-old whose world was transformed by Siri. Producer: Peter McManus.
Link to audio file (22:47). In this episode, we talk with Kate Darling from the MIT Media Lab about giving rights to social robots. She tells us about a recent Pleo torture session she organized at the LIFT conference and the class she taught at Harvard...
This is definitely one of my favourite episodes of the show, as Kate Darling is an absolute…darling to talk to! Yeah, I went there – deal with it. She is an absolutely brilliant lady who is looking into one of the biggest struggles currently facing the business end of the adult industry. She's...