We're experimenting and would love to hear from you! In this episode of 'Discover Daily', we explore Anduril Industries' groundbreaking Arsenal-1 project, a $1 billion autonomous weapons facility in Ohio that promises to create over 4,000 high-paying jobs and revolutionize military defense manufacturing. The 5-million-square-foot facility, set to begin production in July 2026, will produce advanced autonomous systems including Fury drones, Roadrunner drones, and Barracuda missiles, while generating billions in economic output.

We also delve into OpenAI's development of revolutionary AI 'super agents' with PhD-level reasoning capabilities, as CEO Sam Altman prepares to brief U.S. government officials. These advanced AI systems represent a significant leap forward in autonomous task execution and problem-solving, positioning the United States at the forefront of AI innovation and economic growth in the global technology race.

The episode concludes with a fascinating exploration of ELIZA's resurrection, as the world's first chatbot returns on GitHub. Originally created in the 1960s by MIT professor Joseph Weizenbaum, ELIZA's restoration involved decoding 2,600 lines of historic code, now running on an emulated IBM 7094 computer. This preservation of AI history offers valuable insights into the evolution of conversational AI and its impact on modern technology.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/anduril-s-1b-autonomous-weapon-fTo5xssgQYeFhKcv0df8Yw
https://www.perplexity.ai/page/altman-to-brief-d-c-on-phd-lev-q1qYjPhrQhuyb3cwG8H2RA
https://www.perplexity.ai/page/world-s-first-chatbot-resurrec-tJfKapPMSWmDvjC334mfkQ

Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere.
On COI #728, Kyle Anzalone updates Washington and Tel Aviv's Middle East wars.
AI is revolutionizing the military with autonomous drones, surveillance tech, and decision-making systems. But could these innovations spark the next global conflict? In this episode of Data Science at Home, we expose the cutting-edge tech reshaping defense—and the chilling ethical questions that follow. Don't miss this deep dive into the AI arms race!
In this episode of AI, Government, and the Future, host Marc Leh engages in an insightful conversation with Dr. Eva-Marie Muller-Stuler, Partner at EY and leader of the Data & AI practice for the Middle East and North Africa. Dr. Muller-Stuler brings her extensive experience in AI governance and data science to discuss the critical intersection of AI, ethics, and democracy.
Autonomous lethal weapons, often sensationalized as “killer robots,” are no longer confined to science fiction—they are a rapidly advancing reality in modern warfare. In this conversation, Georgetown Law Professor Mitt Regan, an expert on the laws of war and international law, delves into the profound ethical and legal implications of AI-enabled weapon systems for both current conflicts and the future of warfare. Central to the conversation are the legal frameworks governing AI-enabled weapons under international humanitarian law. Professor Regan examines principles such as distinction, proportionality, and precaution, showing how these are tested by systems that use AI to identify and engage targets. Significant gaps in legal frameworks persist, including the absence of a unified international agreement specifically addressing autonomous weaponry.

Mitt Regan is a professor of law at Georgetown Law and an expert on both national security and international humanitarian law. (Credits: General 1hr | MCLE available to TalksOnLaw "Premium" or "Podcast" members. Visit www.talksonlaw.com to learn more.)
In this special extended episode of Humanities Matter, we take a closer look at autonomous weapons systems and the legal frameworks surrounding their use. What exactly defines an autonomous weapon? Can an individual be held legally responsible for the consequences of deploying such a weapon? And can international law keep pace with the rapid rise of AI-driven weaponry?

We answer all these questions and more in the first half of this episode with Dr. Barry de Vries, author of Individual Criminal Responsibility for Autonomous Weapons Systems in International Criminal Law, volume 65 in the “International Humanitarian Law Series” published by De Gruyter Brill.

Liked this podcast? Have thoughts on the topic? Want us to address a specific theme in the future? Write to us at podcast@brill.com.

Host: Ramzi Nasir
Guest: Dr. Barry de Vries
Last month the Australian Defence Force unveiled a suite of weapons at the land autonomous systems and teaming demonstrations at the Puckapunyal Army Base in northern Victoria.
Why is this even a debate?
A Note from James: Is our military way behind other countries in terms of using the latest technology with AI, with drones, with biotech, with cybersecurity? I think for many years we've known we're behind on supersonic weapons. Are we behind on AI? How did Hamas send a thousand or so paragliders into Israel undetected? Are we behind on the AI that's in sensors? What is going on?

So, with the help of Chris Kirchhoff, who wrote the book "Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War," we answer these questions and more.

Episode Description: In this episode, James Altucher hosts Christopher Kirchhoff to explore the critical question: Is the US military lagging behind in technology? They discuss the current technological shortcomings of the military, historical contexts, and how metrics of military power are evolving. Kirchhoff provides an insightful analysis of the Hamas attack as a case study to highlight technological vulnerabilities and failures. The conversation expands to cover the rise of drones, the innovative Replicator Initiative, and the crucial role of AI and machine learning in military operations.
Kirchhoff shares his experiences bridging the gap between Silicon Valley and the Pentagon, offering a rare glimpse into the challenges and successes of modern military technology integration.

What You'll Learn:
Technological Shortcomings: Understand the areas where the US military is currently falling behind other nations in technology.
Impact of Drones: Learn about the transformative role drones play in modern warfare and their potential to change military strategies.
Replicator Initiative: Discover the Pentagon's innovative approach to building low-cost autonomous weapon systems.
AI in Military Operations: Gain insights into how AI and machine learning are being integrated into military strategies and operations.
Bridging Technology Gaps: Explore the challenges and successes of connecting Silicon Valley's rapid innovation with the Pentagon's strategic needs.

Chapters:
01:30 Introduction: Is the US Military Lagging in Technology?
02:15 Current Technological Shortcomings
03:20 Historical Context of Military Superiority
03:59 Changing Metrics of Military Power
06:42 Hamas Attack: A Case Study
08:15 Technological Vulnerabilities and Failures
10:22 US Military's Technological Lag
11:42 The Rise of Drones in Modern Warfare
14:52 The Replicator Initiative
17:54 Bridging the Gap Between Silicon Valley and the Pentagon
24:39 Challenges in Government Contracting
28:35 Innovative Contracting Solutions
31:17 Discovering Joby Aviation: The Future of Flying Cars
32:24 Military Applications and Collaboration with Joby
34:53 The Rise of Drones in Modern Warfare
37:12 Rogue Squadron: The Military's First Commercial Drone Unit
39:32 Anduril and the Future of Combat Collaborative Aircraft
45:14 AI and Machine Learning in Military Operations
51:31 Ethical Issues in Military Technology
01:04:02 Strategic Stability and the Future of Warfare
01:09:35 Conclusion: Bridging Silicon Valley and the Military

Additional Resources:
Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War
Joby Aviation
Anduril Industries
Defense Innovation Unit (DIU)
DARPA

------------

What do YOU think of the show? Head to JamesAltucherShow.com/listeners and fill out a short survey that will help us better tailor the podcast to our audience!

Are you interested in getting direct answers from James about your question on a podcast? Go to JamesAltucherShow.com/AskAltucher and send in your questions to be answered on the air!

------------

Visit Notepd.com to read our idea lists & sign up to create your own!

My new book, Skip the Line, is out! Make sure you get a copy wherever books are sold!

Join the You Should Run for President 2.0 Facebook Group, where we discuss why you should run for President.

I write about all my podcasts! Check out the full post and learn what I learned at jamesaltuchershow.com

------------

Thank you so much for listening! If you like this episode, please rate, review, and subscribe to “The James Altucher Show” wherever you get your podcasts: Apple Podcasts, iHeart Radio, Spotify

Follow me on social media: YouTube, Twitter, Facebook, LinkedIn
In this episode, we discuss the state of autonomous weapons systems adoption in Ukraine (00:55), our takeaways from the Supreme Court's decision to overturn the Chevron Doctrine and the implications for AI regulation (17:35), the delayed deployment of Apple Intelligence in the EU (30:55), and a breakdown of Nvidia's deal to sell its technology to data centers in the Middle East (41:30).
In this podcast, AI&F Advisor Rabbi Mois Navon draws on his 35-year career as a pioneering AI computer design engineer to discuss whether battlefield decisions by lethal autonomous weapons that seek to apply ostensible “moral algorithms” are consistent with principles of human dignity. Rabbi Mois draws on his essay "Autonomous Weapons Systems and Battlefield Dignity - A Jewish Perspective" in the July 2023 book "Alexa, How Do You Feel about Religion? Technology, Digitization and Artificial Intelligence in the Focus of AI," co-edited by our AI&F Advisor Anna Puzo. Our host Pablo Ruz Salmones also explores with Rabbi Mois his front-row seat on AI research in Israel since the 1990s, and the AI ethics conversation there.

Production: Ed Melick, Pablo A. Ruz Salmones and Penny Yuen
Host: Pablo A. Ruz Salmones
Editing: Isabelle Braconnot
Music from #Uppbeat. License code: 1ZHLF7FMCNHU39
The AI Breakdown: Daily Artificial Intelligence News and Discussions
A reading and discussion inspired by: https://www.washingtonpost.com/opinions/2024/06/25/ai-weapon-us-tech-companies/ Learn how to use AI with the world's biggest library of fun and useful tutorials: https://besuper.ai/ Use code 'youtube' for 50% off your first month. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Subscribe to the newsletter: https://aidailybrief.beehiiv.com/ Join our Discord: https://bit.ly/aibreakdown
Paul Scharre, Executive Vice President and Director of Studies at Center for a New American Security, joins ACME General Corp to talk about his recent trip to Ukraine and his observations of and predictions for autonomous weapons and artificial intelligence. In addition to his work at CNAS, Paul is the award-winning author of Army of None: Autonomous Weapons and the Future of War and Four Battlegrounds: Power in the Age of Artificial Intelligence.
Armin Krishnan, PhD, is an Associate Professor for Security Studies at East Carolina University. Dr. Krishnan is the author of the following books:
(2024) Fifth Generation Warfare: Dominating the Human Domain, London: Routledge
(2018) Why Paramilitary Operations Fail, New York: Palgrave Macmillan
(2016) Military Neuroscience and the Coming Age of Neurowarfare, London: Routledge
(2012) Gezielte Tötung: Die Zukunft des Krieges [Targeted Killing: The Future of War], Berlin: Matthes & Seitz Berlin Verlag
(2009) Killer Robots: The Legality and Ethicality of Autonomous Weapons, Farnham, Surrey: Ashgate
(2008) War as Business: Technological Change and Military Services Contracting, Farnham, Surrey: Ashgate

For two extra episodes each month and exclusive content, please visit: patreon.com/thenickbryantpodcast
nickbryantnyc.com
EpsteinJustice.com
Top AI engineers at defence technology companies defended the need for autonomous weapons on Thursday, amid a push for a ban on so-called "killer robots". More than 115 countries and 250 non-governmental organisations are calling for an international treaty to ban weapons that use artificial intelligence to identify and engage human targets, technology which United Nations Secretary-General Antonio Guterres has called "morally repugnant."
Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path?

In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write a lot of its policy on the use of AI in weaponry.

RECOMMENDED MEDIA
Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul's book on the future of AI in war, which came out in 2023.
Army of None: Autonomous Weapons and the Future of War: Paul's 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.
The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul's article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.
The night the world almost ended: A BBC documentary about Stanislav Petrov's decision not to start nuclear war.
AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

RECOMMENDED YUA EPISODES
The AI ‘Race': China vs. the US with Jeffrey Ding and Karen Hao
Can We Govern AI? with Marietje Schaake
Big Food, Big Tech and Big AI with Michael Moss
The Invisible Cyber-War with Nicole Perlroth

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
We speak with Father Afonso about the use of weapon systems in warfare. The discussion explores a thought experiment involving two states at war and the deployment of autonomous drones capable of targeting terrorists without human intervention. Key topics include the legality and morality of such systems, the potential for reducing civilian casualties, the challenge of machine errors leading to war crimes, and the shifting responsibility in decision-making from humans to machines. The conversation also addresses the technological advances in AI, the issues of accountability and transparency with autonomous systems, and the philosophical implications of removing human judgment from warfare. We examine potential future scenarios where autonomous weapons could be misused by non-state actors or influenced by automation bias, raising critical questions about the control, ethics, and implications of advanced AI in military and civilian contexts.

[00:00] Introduction to the Episode and Guest Co-Host Jimmy Mullen
[00:12] Exploring Autonomous Weapon Systems with Father Afonso
[00:55] The Ethical and Legal Quandaries of Autonomous Weapons
[03:01] The Evolution of Warfare and Technology's Role
[06:15] The Human Element in Modern Warfare and AI
[08:19] The Philosophical and Legal Challenges of Autonomous Weapons
[21:08] Addressing Accountability and the Future of AI in Warfare
[30:17] Defining AI and Its Legal Challenges
[31:05] The Future of AI and Moral Agency
[31:51] AI's Emotional Intelligence and Superintelligence
[32:50] Military Strategy and AI: A New Conscience?
[34:21] The Limitations of AI in Understanding Common Sense
[36:13] The Human Element in AI Development
[37:03] Autonomous Weapons Systems and the Essence of Humanity
[39:11] AI on the Battlefield: Efficiency vs. Ethics
[43:37] The Unpredictability of AI and Warfare
[50:45] AI's Role Beyond the Battlefield
[56:27] The Collective Mind and Autonomous Systems

Send in a voice message: https://podcasters.spotify.com/pod/show/braininavat/message
Bloomberg Daybreak Weekend with Tom Busby takes a look at some of the stories we'll be tracking in the coming week. In the US – a look at next week's Fed meeting and a preview of Apple earnings. In the UK – a look at the Vienna Conference on Autonomous Weapons Systems. In Asia - a preview of Samsung earnings. See omnystudio.com/listener for privacy information.
Sierra Leone's President Julius Maada Bio has urged regional leaders to forge a common understanding and develop strategies to address the challenges posed by autonomous weapons systems. What are autonomous weapons, and what sort of threat do they pose to West African countries?

Also, how the remains of thousands of enslaved Africans on St Helena are reconnecting the island to the rest of the world.

And the Nigerian artist causing outrage with his depictions of black people. What's behind his depiction and why is he doing it?

Presenter: Audrey Brown
Producers: Nyasha Michelle, Victor Sylver and Patricia Whitehorne
Technical Producer: Jonny Hall
Senior Producer: Karnie Sharp
Editors: Andre Lombard and Alice Muthengi
Part of the ongoing debate about the lawfulness of autonomy in military systems is the manner in which the technology integrates with and interacts with its human masters. The term Meaningful Human Control (MHC) has garnered particular relevance in this debate. Today we speak with Dr Lena Trabucco about her upcoming Opinio Juris symposium on Military AI and the Law of Armed Conflict, co-edited with Dr Magdalena Pacholska, on this issue, as well as her work on legal challenges associated with emerging technology more broadly.

Lena is a research fellow and visiting scholar at the Stockton Center for International Law at the US Naval War College, and a research fellow at the Technology, Law and Security Program at American University Washington College of Law and at the University of Copenhagen. Her research focuses on the intersection of international law and emerging military technology, particularly autonomous weapon systems. She has multiple projects examining human control throughout an autonomous weapon system's life cycle. Previously, she was a post-doctoral researcher at the Centre for Military Studies at the University of Copenhagen. Lena received a PhD in law from the University of Copenhagen and a PhD in international relations from Northwestern University.

Additional resources:
Opinio Juris Symposium on Military AI and the Law of Armed Conflict
Kevin Jon Heller, 'The Concept of "The Human" in the Critique of Autonomous Weapons', 14 Harvard National Security Journal (2023)
Magdalena Pacholska, 'Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective', Israel Law Review (2022), 1–21
Rebecca Crootof, 'A Meaningful Floor for "Meaningful Human Control"', Temple International & Comparative Law Journal, Vol. 30, 2016
Kenneth Payne's Substack
Killer Robot Cocktail Party (Lena and Brad Boyd's Substack)
Autonomous weapons are no longer science fiction - and they're becoming a top priority for major military powers. Anna Hehir of the Future of Life Institute says we need an international treaty to ban some of the most dangerous autonomous weapons, and that we have a unique window now to do just that. Plus: Axios co-founder Mike Allen on how Washington is thinking about AI and weapons of war, behind the scenes. Guests: Anna Hehir, autonomous weapons lead at the Future of Life Institute; Axios co-founder Mike Allen, author of Axios AM and Axios PM Credits: 1 big thing is produced by Niala Boodhoo, Alexandra Botti, and Jay Cowit. Music is composed by Alex Sugiura and Jay Cowit. You can reach us at podcasts@axios.com. You can send questions, comments and story ideas as a text or voice memo to Niala at 202-918-4893. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode, Lauren Sanders speaks with fellow LFW researcher Renato Wolf about the issue of determining where legal obligations lie in the conduct of attacks carried out by AWS. In particular, they delve into his research on the Article 57 term ‘those who plan or decide attacks' to see how that maps onto AWS, how it features in the AWS debate, and what needs to be thought about in operationalising this legal obligation.

Renato Wolf is an international lawyer who served as a legal adviser to the Swiss Armed Forces for nearly ten years. He was deployed to Kosovo and Bosnia-Herzegovina, and then spent two years working as a legal advisor to the Disarmament, Demobilization and Reintegration Unit at the United Nations Headquarters in New York. He holds a Bachelor of Law and a Master of Law with a specialization in international law from the University of Bern, Switzerland, as well as a Master of Arts in War in the Modern World from King's College London. He is currently a research fellow at the University of Queensland and is finalizing his PhD on the legal review of autonomous weapons.
Artificial Intelligence is changing warfare. Fully autonomous weapons that can make their own decisions about what to target or whom to kill are already in limited use today. But the pace of development of AI suggests that these so-called "Killer Robots" may become more and more common in future armed conflicts. Meanwhile, the speed at which new AI technology is coming online far exceeds the ability of the international community to come up with guardrails that might place controls on fully autonomous weapon systems. Guest Paul Scharre is the Executive Vice President and Director of Studies at the Center for a New American Security. He is the award-winning author of Four Battlegrounds: Power in the Age of Artificial Intelligence. We kick off by discussing what we mean by autonomous weapons before having a longer conversation about what the future holds for AI in warfare, nuclear security, and how to secure sensible regulations on autonomous weapons and military uses of AI. Paul Scharre in Foreign Affairs
The recent war in Ukraine and the Israel-Hamas conflict in the Middle East have provided valuable lessons when it comes to the nature of future military modernisation and defence innovation. While traditional warfare remains relevant, we are seeing a shift towards multi-domain operations that are more kinetic and increasingly utilize autonomous weapons. As India seeks to boost its defence exports, its domestic industry will have a role to play. Several factors have played a role in the recent spike in defence exports—regulatory initiatives, enhanced capability of the private sector, and the desire among other countries to diversify away from their traditional defence partners and look at India instead. Additionally, the INDUS-X initiative between India and the U.S. has been noted as a welcome mechanism to encourage further defence cooperation in other emerging technologies such as quantum computing, space, cybersecurity, and deep-sea technology.

What are the potential opportunities that international collaborations present? Will India's domestic industry be able to meet the armed forces' requirements? In this episode of Interpreting India, Ajay Kumar and Konark Bhandari talk about the current status of defence innovation in India and the emerging trends in this space.

Episode Contributors
Ajay Kumar is a nonresident senior fellow at Carnegie India. He served as the defense secretary of India between August 2019 and October 2022 and is the longest-serving secretary in the Ministry of Defence, where he also served as secretary in the Department of Defence Production. As defense secretary, Ajay Kumar was the key architect of several major transformations, including the restructuring of higher defense management in the country, the formation of the chief of defense staff in the Indian Armed Forces, and the launch of the Agniveer scheme—a historic reform in recruitment, training, and retention of personnel in the Armed Forces.
Konark Bhandari is a fellow with Carnegie India. He is a lawyer who has researched certain areas of the digital economy, focusing primarily on approaches to antitrust regulation of companies in the digital realm. He earlier worked at India's antitrust regulator, the Competition Commission of India (CCI), where he worked closely with senior officials on a variety of matters. He is also an avid follower of the regulation of the space technology ecosystem and is keen to contribute to that discipline.

Additional Readings
INDUS-X: Charting the Way Ahead for India-U.S. Defense Industrial Cooperation, by Konark Bhandari
Can a Defense Innovation Bridge Elevate India-U.S. Defense Cooperation?, by Konark Bhandari
Sameer Lalwani on India-U.S. Defense-Industrial Cooperation
U.S.-India Defense Innovation Collaboration: Building on a Promising Start, by Konark Bhandari

Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future. We delve into how technology, the economy, and foreign policy intertwine to influence India's relationship with the global stage. Hosted by Carnegie scholars, Interpreting India, a Carnegie India production, provides insightful perspectives and cutting-edge analysis, tackling the defining questions that chart India's course through the next decade.

Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world. Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.
Following Hamas's attacks on Oct. 7, the Israeli military retaliated with a relentless and devastating air war. By mid-December, Israeli forces had struck more than 22,000 targets in Gaza, and the Israeli military said it had used artificial intelligence to select many of them. The targeting system, called “The Gospel” by the IDF, was not the first time a military used AI on the battlefield, but the high number of civilian casualties raised red flags for many.

Lawfare Managing Editor Tyler McBrien sat down with Lauren Kahn, a Senior Research Analyst at Georgetown's Center for Security and Emerging Technology (CSET) who focuses on the national security applications of artificial intelligence. They discussed how autonomous and AI-enabled weapons are being used and will be used in war, “the current ground rules for the age of AI in warfare,” and why Lauren favors confidence-building measures and other incremental steps, rather than an all-out ban. And despite running through a few nightmare scenarios, we learned why Lauren remains hopeful for the responsible and ethical use of AI for defense.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
This and all episodes at: https://aiandyou.net/ . Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. In part two we talk about psychology of combat decisions, AI and strategic defense, and nuclear conflict destabilization. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ . Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. In this first part we talk about the ethics of autonomy in weapons systems and compare human to machine decision making in combat. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
From November 6, 2013: On October 25, 2013, the Hoover Institution held a day-long media colloquium at Stanford University for a first-rate group of journalists focused on national security legal issues and the work of Hoover's Jean Perkins Task Force on National Security and Law. The first session of the day, Ben's talk on Speaking the Law, ran as an episode of the podcast on Sunday. The second session, the subject of this episode, was a talk by Matt and Ken about autonomous weapons and the calls for a preemptive ban in international law on so-called "killer robots." The session has been edited both for length and because not all of the journalists consented to having their remarks made public.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
In Episode 11 of the "Relentless Podcast with Kyle Becker," we delve into the rapidly evolving landscape of artificial intelligence and its profound impact on society. From the eerie parallels drawn with iconic sci-fi films like "Terminator 2" and episodes of "Black Mirror," to the real-world advancements and ethical dilemmas posed by AI and AGI (Artificial General Intelligence), this podcast is a deep dive into the intersection of technology, policy, and human values. We explore how the explosion of the information age and the advent of AI are reshaping our world, touching on topics such as the role of AI in law enforcement, the potential deployment of AI-controlled drones in warfare, and the growing concerns over digital dehumanization. This episode also sheds light on the political and social implications of AI in the context of current global events and the debate surrounding the use of autonomous weapons. Join us as we navigate through the complex maze of technological advancements, ethical considerations, and the future of human-AI interaction. Whether you're a tech enthusiast, a policy maker, or simply curious about the future of AI, this episode offers valuable insights and stimulates a crucial conversation about the path we are paving for tomorrow's world. Find Kyle on Twitter at @KyleNABecker for breaking news, analysis, and more. Visit BeckerNews.com, your destination for stories that cut through the noise. Join the community at TheKyleBecker.Substack.com for exclusive content and engaging discussions. Brought to you by Becker News LLC, this podcast is a must-listen for anyone seeking a fresh, independent voice in the media landscape.
Plus: AI to Be Embedded in iOS 18; Biden and Xi to Join Forces, Pledging to Ban AI in Autonomous Weapons and Nuclear Warhead Control; iOS 18 to Introduce Gen AI and More; Unmasking Executives: Investors Turn to AI for the Real Story; Powerful Chip Inspired by Human Brain Promises Efficient AI Performance; DC Faces the Highest Job Losses to AI in the US, Study Finds --- Send in a voice message: https://podcasters.spotify.com/pod/show/aidaily/message
On This Week in Google, Leo Laporte, Jeff Jarvis, and Paris Martineau examine President Biden's new executive order focused on developing AI in a safe, secure and responsible way, and Leo's epiphany about the real danger being humans weaponizing AI. For the full episode, visit twit.tv/twig/740 Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau You can find more about TWiT and subscribe to our podcasts at https://podcasts.twit.tv/ Sponsor: GO.ACILEARNING.COM/TWIT
Artificial intelligence and the impact it is having on modern combat is the subject of a fascinating new book by Paul Scharre titled "Four Battlegrounds: Power in the Age of Artificial Intelligence." In his book, Scharre argues that four key elements will define this struggle: data, computing power, talent, and institutions. Scharre is a former Army Ranger who also worked for the Office of the Secretary of Defense, where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. He currently serves as Vice President and Director of Studies at the Center for a New American Security. His award-winning book "Army of None: Autonomous Weapons and the Future of War" was selected by Bill Gates as one of the top five books of 2018. Much of Paul's book deals with the potential dark sides of artificial intelligence. One example is the growing sophistication of voice-replication technologies being used to conduct scams. Another is China's use of facial recognition technology to subjugate its people. Like any tool, AI can be used for good purposes or evil. China's embrace of artificial intelligence research and deployment is the subject of much of our discussion. China produces more AI scientists than any country in the world, but many come to the US. If China were to move on Taiwan, it would significantly disrupt the world economy. Personally, I always want to have a human being in the command chain somewhere. However, Paul says we are rapidly moving toward completely autonomous weapons systems. AI tools can be powerful, but they are also still very brittle. Paul cites a somewhat humorous example of how several Marines defeated a "battlebot." He also warns about a "race to the bottom" with regard to the safety of AI technology. TAKEAWAY: One of our greatest disadvantages in this technology race is government bureaucracy. Paul calls it "lethal to innovation."
AI, machines, and killer robots, oh my! Elon Musk and 116 people sent a letter to the UN asking that Autonomous Weapons be banned. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/vault-swn-1
We continue our conversation with philosopher Don Howard, who has very practical ideas about, and projects engaged in, advancing the principles of ethics in robotics and artificial intelligence. For dystopians, Prof. Howard provides hope that Silicon Valley's rising innovators are embracing ethics.
One of Hollywood's favorite sci-fi themes, robo-wars, is, in the real world, a serious topic with features and implications different from fevered movie scripts and clickbait. In this first of a two-part episode, we are joined by Don Howard, the brilliant Notre Dame professor of philosophy and ethics of technologies. References: "In Defense of (Virtuous) Autonomous Weapons," Don Howard, Notre Dame Journal on Emerging Technologies, November 2022; "In Defense of (Virtuous) Autonomous Systems," Don A. Howard, Dakota Digital Review, February 21, 2023; "Real Robots in Our Near Future: The Rise of Capable Industrial Automatons," Mark P. Mills, Dakota Digital Review, March 9, 2023. This show is part of the Spreaker Prime Network; if you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/5816500/advertisement
With new and emerging technologies, we hear a lot about killer drones, driverless tanks, and autonomous airplanes on the modern battlefield. One issue of particular concern is the use of what are officially known as lethal autonomous weapons systems (LAWS), which can select and engage targets with force, without human involvement - raising a raft of security, ethical, and legal concerns. What are countries doing to regulate LAWS? And how can international law and the UN respond to this challenge? To explore these and other questions, the Group of Governmental Experts on LAWS began meeting at the UN in 2017, as Mélanie Régimbal, Chief of the UN Office for Disarmament Affairs in Geneva, explains to UN News's Nancy Sarkis.
Drones that can find their own targets already exist, making machine-versus-machine conflict just a software update away.