This and all episodes at: https://aiandyou.net/ . In this special episode we focus on the military use of AI, and making it even more special, we have not one guest but nine: Peter Asaro, co-founder and co-chair of the International Committee for Robot Arms Control; Stuart Russell, Computer Science professor at UC Berkeley, renowned co-author of the leading text on AI, and influential AI safety expert; Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and member of the International Committee for Robot Arms Control; Tony Gillespie, author of Systems Engineering for Ethical Autonomous Systems and a fellow in avionics and mission systems at the UK's Defence Science and Technology Laboratory; Rajiv Malhotra, author of “Artificial Intelligence and the Future of Power: 5 Battlegrounds” and Chairman of the Board of Governors of the Center for Indic Studies at the University of Massachusetts; David Brin, scientist and science fiction author famous for the Uplift series and Earth; Roman Yampolskiy, Associate Professor of Computer Science at the University of Louisville in Kentucky and author of AI: Unexplainable, Unpredictable, Uncontrollable; Jaan Tallinn, founder of Skype and billionaire funder of the Centre for the Study of Existential Risk and the Future of Life Institute; and Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI. I've collected portions of their appearances on earlier episodes of this show into one interwoven narrative about the military use of AI. We talk about autonomy, killer drones, the ethics of hands-off decision making, treaties, the perspectives of people and countries outside the major powers, risks of losing control, data center monitoring, and more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ . Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. In part two we talk about psychology of combat decisions, AI and strategic defense, and nuclear conflict destabilization. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ . Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. In this first part we talk about the ethics of autonomy in weapons systems and compare human to machine decision making in combat. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
In a world driven by remarkable minds and groundbreaking achievements, Professor Denise Garcia emerges as a shining star. With an illustrious career as a full professor at one of the world's leading universities and a founding faculty member of an experiential robotics institute, her contributions to the fields of technology, peace, and sustainable development are simply awe-inspiring. As we delve into an intimate conversation with Professor Garcia, we uncover the magical moments and inflection points that propelled her onto this extraordinary path. 00:16- About Denise Garcia Denise Garcia is a Professor at Northeastern University and a founding faculty member of its Experiential Robotics Institute. She was a member of the International Panel for the Regulation of Autonomous Weapons (2017-2022) and currently serves on the Research Board of the Toda Peace Institute (Tokyo) and the Institute for Economics and Peace (Sydney), as Vice-chair of the International Committee for Robot Arms Control, and as a member of the Institute of Electrical and Electronics Engineers Global Initiative on Ethics of Autonomous and Intelligent Systems. She was the Nobel Peace Institute Fellow in Oslo in 2017. A multiple teaching award-winner, her recent publications have appeared in Nature, Foreign Affairs, International Relations, and other top journals. --- Support this podcast: https://podcasters.spotify.com/pod/show/tbcy/support
This and all episodes at: https://aiandyou.net/ . Lethal autonomous weapons are here, and we're going to see much more of them. What concerns do these raise, and what should we do about them? Those may seem like intractable problems, but Peter Asaro tackles them. He is a professor at the New School in New York, and is a philosopher of science, technology and media. His work examines artificial intelligence and robotics as a form of digital media, the ethical dimensions of algorithms and data, and the ways in which technology mediates social relations and shapes our experience of the world. In the world of autonomous weapons, he works as the co-founder and co-chair of the International Committee for Robot Arms Control. In part 2 of our interview we talk about that committee and related organizations, what they do to elevate our thinking and governance of autonomous weapons and how they do it, and we discuss the famous Slaughterbots video, plus Peter's documentary, Love Machine. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ . Lethal autonomous weapons are here, and we're going to see much more of them. What concerns do these raise, and what should we do about them? Those may seem like intractable problems, but Peter Asaro tackles them. He is a professor at the New School in New York, and is a philosopher of science, technology and media. His work examines artificial intelligence and robotics as a form of digital media, the ethical dimensions of algorithms and data, and the ways in which technology mediates social relations and shapes our experience of the world. In the world of autonomous weapons, he works as the co-founder and co-chair of the International Committee for Robot Arms Control. We talk about just what distinctions are useful when thinking about the regulation of autonomous weapons, seen through the lens of his precise and highly informed thinking. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Dr. Frank Sauer is a Senior Researcher at the German Armed Forces University in Munich. His work focuses on international security, nuclear weapons, terrorism, cyber security, and the military application of artificial intelligence (AI) and robotics. He is a member of the International Committee for Robot Arms Control and serves on the International Panel on the Regulation of Autonomous Weapons (iPRAW). Frank also co-hosts the German-language podcast "Sicherheitshalber" ("for the sake of security").
Peter Asaro, co-founder of the International Committee for Robot Arms Control, has a simple solution for stopping the future proliferation of killer robots, or lethal autonomous weapons: "Ban them." What are the ethical and logistical risks of this technology? How would it change the nature of warfare? And with the U.S. and other nations currently developing killer robots, what is the state of governance?
In this episode, we speak with Peter Asaro. He’s a philosopher and the vice-chair and co-founder of the International Committee for Robot Arms Control. There, he is lobbying for an international ban on lethal autonomous weapon systems, or killer robots. Are they science fiction? What are some of the technical, legal and moral issues they raise? Listen as our guest helps us navigate these complex questions. Thanks to the Future of Life Institute for giving us permission to use extracts from their Slaughterbots video. You can watch the video, and more, at the following link: https://autonomousweapons.org
Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it’s complicated. In this month’s podcast, Ariel spoke with experts from a variety of perspectives on the current status of LAWS, where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre, artificial intelligence professor Toby Walsh, Article 36 founder Richard Moyes, Campaign to Stop Killer Robots founders Mary Wareham and Bonnie Docherty, and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro. If you don't have time to listen to the podcast in full, or if you want to skip around through the interviews, each interview starts at the timestamp below: Paul Scharre: 3:40 Toby Walsh: 40:50 Richard Moyes: 53:30 Mary Wareham & Bonnie Docherty: 1:03:35 Peter Asaro: 1:32:40
Tech's Message: News & Analysis With Nate Lanxon (Bloomberg, Wired, CNET)
Please support us on Patreon at www.patreon.com/uktech for access to our exclusive extended version of the show, weekly columns from Nate, and much more. This week on the regular version of TECH'S MESSAGE Nate and Ian discuss: - Apple Watch in Surprisingly Strong Demand at U.K. Carrier EE - Vodafone's paid zero-rating Passes are now available - Google's Pixel Failure Sees Customers Getting A Totally Unusable Phone - Sainsbury’s bets on the vinyl revival with its own record label. SPECIAL FEATURE: Interview with Noel Sharkey, Emeritus Professor of Artificial Intelligence and Robotics at the University of Sheffield. Noel, who is co-director of the Foundation for Responsible Robotics and chair of the International Committee for Robot Arms Control, joins the show to discuss his work campaigning for awareness of the problems posed by lethal autonomous weapon systems. A fascinating discussion from the man who also appears as a judge on the BBC's Robot Wars. An interview not to miss! Follow Noel on Twitter. Patreon supporters have access to our longer version of the show, which includes the above as well as additional discussions about: - EXTRA STORY: Apple And Google Give People A Fright By Sorting Their Naughty Photos - Lengthy discussion about the ups and downs of image recognition systems in phones - Long talk about Google, Apple and Opera software fanboys - and our history with them - £9,000 HDMI cables? Not on our watch. We explain why this is an example of silly tech - Outtakes and more! Access our exclusive content and support Nate and Ian's podcasting by becoming a Patron at www.patreon.com/uktech. See acast.com/privacy for privacy and opt-out information.
We’ve talked about the importance of designing a robot to make responsible decisions, but what about making responsible decisions when designing robots? In this video, Professor Noel Sharkey talks about some important ethical considerations for developing autonomous robots. Noel is Emeritus Professor of Robotics and Artificial Intelligence at The University of Sheffield, co-founder of the International Committee for Robot Arms Control and co-founder of the Foundation for Responsible Robotics.
In October 2016, acclaimed Professor Stephen Hawking warned against the rapid development of artificial intelligence, saying that “the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity," and predicting that robots could develop “powerful autonomous weapons” or new methods to “oppress the many.” The threat of lethal autonomous robots might sound like something out of a sci-fi movie, but the reality is that people all around the world already use robotic technology, including bomb disposal robots and attack drones in the US military, which is currently considering plans to employ thousands of robots by 2025. But while the US military is at the forefront of designing artificial intelligence software, soon we may not even need to leave our front door to see robots in action, with robot butlers and home AI systems already being rolled out as consumer goods in countries like Japan and the US. Not just for the home, these robots hold down jobs in hotels and aged care facilities. In 2015 toy company Hasbro invented a robotic cat, called Joy for All Companion Pets, to act as an alternative to therapy animals in nursing homes and retirement facilities. Although reviews of robotic therapy pets, such as Paro the Robo-Seal, have been somewhat positive (care homes with Paro don't need to worry about allergies, scratches, or feeding), this hasn't stopped the device from causing an ethical dilemma. Questions have been raised over how humane it is to entrust a person's emotional support to a robot. For Professor Rob Sparrow in Monash's School of Philosophical, Historical and International Studies, this is just one of many examples where philosophical arguments can have real-world implications. His research tackles the ethics of new science and technology, including the use of domestic robots and the future of autonomous robots in the military.
Professor Sparrow also wrote one of the first papers on autonomous weapon systems and co-founded the International Committee for Robot Arms Control, which brought about an international campaign to stop killer robots. He is also a Chief Investigator in the Australian Research Council Centre of Excellence for Electromaterials Science, looking at the ethical and policy issues arising from the creation of structured nanomaterials, like artificial organs. He says that through our discussion about robots, we're really talking about what it means to be human. Read more at http://artsonline.monash.edu.au/news-events/killer-robots-professor-sparrow/ For more information on doing a higher degree by research, visit https://arts.monash.edu/graduate-research
In July, a sniper, later identified as Micah Xavier Johnson, opened fire at a march against fatal police shootings, held in downtown Dallas, Texas, killing five police officers and wounding many others. After a 45-minute gun battle and hours of negotiation with the sniper, who was holed up in a parking garage, Dallas Police Chief David Brown gave an order to his SWAT team to come up with a plan to end the mayhem before more police officers were killed. This led to the use of a robot, the Remotec Andros Mark V A-1, manufactured by Northrop Grumman, carrying a pound of C-4 explosive, which was sent in and eventually killed the sniper. Today on Lawyer 2 Lawyer, hosts J. Craig Williams and Bob Ambrogi join attorney Edward Obayashi, deputy sheriff and legal advisor for the Plumas County Sheriff's Office, and Dr. Peter Asaro, assistant professor and director of graduate programs for the School of Media Studies at the New School for Public Engagement, as they take a look at the recent tragedy in Dallas, the use of robots by law enforcement, and criticism, ethics, policy, and regulation when it comes to the use of robots. Attorney Edward Obayashi is deputy sheriff and legal advisor for the Plumas County Sheriff's Office and a licensed attorney in the State of California. Ed’s law office specializes in providing law enforcement legal services to California law enforcement agencies, and he also serves as the legal advisor and a legal consultant for numerous law enforcement agencies in California. His duties include patrol, investigations, administration, training, and providing legal advice to department management and personnel. Dr. Peter Asaro is a philosopher of science, technology, and media. Dr. Asaro is assistant professor and director of graduate programs for the School of Media Studies at the New School for Public Engagement in New York City.
He is the co-founder of the International Committee for Robot Arms Control and has written on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles.
Over the last decade, many first-world militaries have developed, and in some cases deployed, autonomous “killer” robots. Some proponents believe that such robots will save human lives, while others believe that the resulting arms race would yield long-term harms that outweigh any good. The University of Sheffield's Dr. Noel Sharkey stands by the latter argument. As co-founder of the International Committee for Robot Arms Control, he has spent a good part of the last decade trying to create an international ban on such robots. In this episode, he speaks about developments in the domain of autonomous killer robots, as well as how groups of global leaders might come together to convince nations and other global policy platforms to adhere to such an agreement for the benefit of all humankind.
Can replacing human soldiers with robot warriors save lives and make war more humane? We try to find out in this episode. But as we learn, the laws of war are not written in computer code. Modern warfare is not ready for killer robots that "decide" without human input. "When a robot gets blown up, that's another life saved." - Mark Belanger, iRobot. In this episode, we hear from the people making the robots as they show off their lethal products. We meet a former fighter pilot who touts the values of automation and likes lawyers sitting side by side with soldiers. Several experts tell us about the terrifying moral risks of letting machines think too far ahead of people in battle. We learn there could be lives to be saved, and war could be made less atrocious if -- and it is a huge if -- the technology can advance side by side with the antiquated laws. In the end, we hear from the activists who want autonomous lethal weapons banned before they march on the enemy. A U.N. body has just begun to consider it. A version of this story won the German Prize for Innovation Journalism. It aired on Deutschlandfunk, reported by Thomas Reintjes with help from Philip Banse. Quotes heard in this episode: "Maybe we can make war -- as horrible as it sounds -- less devastating to the non-combatants than it currently is." -Ronald Arkin, director of the Mobile Robot Lab at Georgia Tech When to unleash the machines: "They must do better than human beings before they should be deployed in the battlefield." -Ronald Arkin On why Las Vegas could be considered a target: "With Napoleonic-era combat, you knew where the battlefield was, right? With modern warfare, modern conflict, you really don't know where the battlefield is." -Brad Allenby, Arizona State University "Robotics has been trying to do visual recognition for... a bit more than 50 years and we can just about tell the difference between a lion and a car. So the idea of putting one of these things onto a battlefield and thinking it should discriminate between [innocent people] and insurgents is just insane." -Noel Sharkey, Professor of Artificial Intelligence and Robotics at the University of Sheffield in the U.K. "In today's warfare, a drone pilot is looking on a screen, talking to potentially five to ten other people looking at that same screen, one of which is a lawyer." -Missy Cummings, Duke professor and former fighter pilot About autonomous lethal weapons: "These machines for the foreseeable future would fail to meet the requirements of international law." -Peter Asaro, International Committee for Robot Arms Control "The preemptive ban is the only thing that makes sense." -Stephen Goose, of Human Rights Watch If you like this episode why not share it with that friend of yours who always posts about military issues? To get future audio downloads of our program direct to your phone or computer, subscribe to the New Tech City podcast on iTunes, Stitcher or via RSS. It just takes a second. Thanks.