Step inside Darnley's Cyber Café for a thought-provoking journey from The Terminator's Skynet to the eerie futures of Black Mirror. We explore how pop culture's AI nightmares echo today's breakthroughs and what they might mean for tomorrow. Grab a coffee, press play, and question how close fiction is to becoming your reality. Subscribe now to Darnley's Cyber Cafe and stay informed on the latest developments in the ever-evolving digital landscape.
Living brain cells playing Pong. Skincare that harvests DNA. Holograms for psychological warfare. All funded by a CIA venture firm spending your tax dollars.
The episode opens with Bhatt framing the global stakes: from drones on the battlefield to AI-powered early warning systems, militaries worldwide are racing to integrate AI, often citing strategic necessity in volatile security environments. Mohan underscores that AI in conflict cannot be characterized in a single way; applications range from decision-support systems and logistics to disinformation campaigns and border security.

The conversation explores two categories of AI-related risks:
Inherent risks: design flaws, bias in datasets, adversarial attacks, and human–machine trust calibration.
Applied risks: escalation through miscalculation, misuse in targeting, and AI's role as a force multiplier for nuclear and cyber threats.

On governance, Mohan explains the fragmentation of current disarmament processes: AI intersects with multiple regimes (nuclear, cyber, conventional arms) yet lacks a unified framework. She highlights ongoing debates at the UN's Group of Governmental Experts (GGE) on LAWS, where consensus has stalled over definitions, human-machine interaction, and whether regulation should be voluntary or treaty-based. International humanitarian law (IHL) remains central, with discussions focusing on how principles like distinction, proportionality, and precaution can apply to autonomous systems. Mohan also emphasizes a "life-cycle approach" to weapon assessment, extending legal and ethical oversight from design to deployment and decommissioning.

A significant portion of the conversation turns to gender and bias, an area Mohan has advanced through her research at UNIDIR. She draws attention to how gendered and racial biases encoded in AI systems can manifest in conflict, stressing the importance of diversifying participation in both technology design and disarmament diplomacy.

Looking forward, Mohan cites UN Secretary-General António Guterres's call for a legally binding instrument on autonomous weapons by 2026.
She argues that progress will depend on multi-stakeholder engagement, national strategies on AI, and confidence-building measures between states. The episode closes with a reflection on the future of warfare as inseparable from governance innovation: shifting from arms reduction to resilience, capacity-building, and responsible innovation.

Episode Contributors
Shimona Mohan is an associate researcher on Gender & Disarmament and Security & Technology at UNIDIR in Geneva, Switzerland. She was named among Women in AI Ethics' "100 Brilliant Women in AI Ethics for 2024." Her areas of focus include the multifarious intersections of security, emerging technologies (in particular AI and cybersecurity), gender, and disarmament.
Charukeshi Bhatt is a research analyst at Carnegie India, where her work focuses on the intersection of emerging technologies and international security. Her current research explores how advancements in technologies such as AI are shaping global disarmament frameworks and security norms.

Readings
Gender and Lethal Autonomous Weapons Systems, UNIDIR Factsheet
Political Declaration on Responsible Military Use of AI and Autonomy, US Department of State
AI in the Military Domain: A Briefing Note for States by Giacomo Persi Paoli and Yasmin Afina
Understanding the Global Debate on Lethal Autonomous Weapons Systems: An Indian Perspective by Charukeshi Bhatt and Tejas Bharadwaj

Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future.
We delve into how technology, the economy, and foreign policy intertwine to influence India's relationship with the global stage. Hosted by Carnegie scholars, Interpreting India, a Carnegie India production, provides insightful perspectives and cutting-edge analysis of the defining questions that chart India's course through the next decade. Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world. Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.
Autonomous weapons exist in a strange territory between Pentagon procurement contracts and Hollywood blockbusters, between actual military systems and speculative futures. For this week's Liminal Library, I spoke with Jascha Bareis, co-editor of The Realities of Autonomous Weapons (Bristol UP, 2025), about how these dual existences shape international relations and cultural imagination. The collection examines autonomous weapons not just as military hardware but as psychological tools that reshape power dynamics through their mere possibility. These systems epitomize what the editors call "the fluidity of violence"—warfare that dissolves traditional boundaries between human decision and machine action, between targeted strikes and algorithmic inevitability. Bareis and his contributors trace fascinating connections between fictional representations and military doctrine—how Terminator narratives influence Pentagon planning while actual weapons development feeds back into artistic imagination. The book wrestles with maintaining "meaningful human control" over systems designed to operate faster than human thought, a challenge that grows more urgent as militaries worldwide race toward greater autonomy. Each chapter reveals how thoroughly we need to rethink human-machine relationships in warfare, from the gendered coding of robot soldiers in film to the way AI imaginaries differ between Silicon Valley and New Delhi. Autonomous weapons force us to confront uncomfortable realities about agency, violence, and the increasingly blurred line between human judgment and algorithmic certainty.

Links:
A Clean Kill? the role of Patriot in the Gulf War
Statement delivered by Germany on Working Definition of LAWS / "Definition of Systems under Consideration"
The Silicon Valley venture capitalists who want to 'move fast and break things' in the defence industry
Hype Studies
'The Gatekeepers' documentary

Learn more about your ad choices.
Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
In this episode, Jack Altman, CEO of Lattice and host of Uncapped, interviews Marc Andreessen on how venture capital is evolving — from small seed funds to billion-dollar barbell strategies — and why today's most important tech companies don't just build tools, they replace entire industries.

They cover:
The end of "picks and shovels" investing
Why missing a great company matters more than backing a bad one
The power law math behind fund size and asymmetric returns
AI as the next computing platform — and a test for Western civilization
Preference falsification, media power, and what founders can't say out loud

This is a conversation about ambition at scale, the structure of modern venture, and the deep forces reshaping startups, innovation, and power.

Resources:
Listen to more from Uncapped: https://linktr.ee/uncappedpod
Find Jack on X: https://x.com/jaltma
Find Marc on X: https://x.com/pmarca
Find Uncapped on X: https://x.com/uncapped_pod

Timecodes:
00:00 What You Can't Say
01:20 Founders, Funders, and the Future
02:00 Fund Size and Power Law Math
06:45 From Tools to Full Stack Startups
10:00 Market Sizing and Asymmetric Bets
13:00 Public Markets Mirror Venture Dynamics
17:00 The Barbell Strategy in Venture
20:00 The Conflict Dilemma in Venture
25:00 Staying in Early-Stage Venture
29:30 The Death of the Middle
32:00 Why It's So Rare to Build a New Top VC Firm
35:00 The Case for Power in Venture
37:45 Limiting Factors for Big Companies
41:00 AI as the Next Computing Platform
45:30 Betting on Startups, Not Incumbents
48:00 How a16z Thinks About Risk
51:00 Building a Top-Tier GP Team
55:00 Taste, Timing, and Getting Into the Scene
57:00 Raising Capital Is the Easy Part
1:00:30 AI's Existential Stakes
1:05:00 Autonomous Weapons, Ethics, and War
1:11:00 Tech, Government, and Power
1:13:00 Media, Mistrust, and Narrative Collapse
1:24:00 Preference Falsification and Cultural Cascades
1:32:00 The Thought Experiment
1:33:00 Career Advice for Young Builders
1:35:00 Marc vs. the Huberman Protocol
1:39:30 What Would Prove You Right?

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Professor Guy Sinclair explains how the UN has, over time, diverged from its original Charter, adapting in response to global shifts—sometimes enhancing its relevance, but at other times straining its legitimacy. Sinclair outlines four key factors driving institutional change: external shocks, internal dynamics, legal mechanisms, and shared narratives. As new technologies like AI and autonomous weapons reshape global threats, he anticipates further evolution in how international organizations operate. We also discuss the strain on the UN's development agenda—especially the Sustainable Development Goals—as geopolitical shocks and donor fatigue undercut progress. Sinclair addresses concerns about the UN's diminished authority in peace and security, citing the growing irrelevance of Security Council authorization. He warns that the increasing role of private and state donors may skew UN priorities, challenging its legitimacy. Finally, we explore whether collective security remains a viable goal and whether reform should be gradual or transformative. Through it all, Sinclair urges a critical but hopeful approach, recognizing the UN's enduring potential amid deep systemic challenges. Learn more on GlobalGovernanceForum.org
In this episode of 'Discover Daily', we explore Anduril Industries' groundbreaking Arsenal-1 project, a $1 billion autonomous weapons facility in Ohio that promises to create over 4,000 high-paying jobs and revolutionize military defense manufacturing. The 5-million-square-foot facility, set to begin production in July 2026, will produce advanced autonomous systems including Fury drones, Roadrunner drones, and Barracuda missiles, while generating billions in economic output.

We also delve into OpenAI's development of revolutionary AI 'super agents' with PhD-level reasoning capabilities, as CEO Sam Altman prepares to brief U.S. government officials. These advanced AI systems represent a significant leap forward in autonomous task execution and problem-solving, positioning the United States at the forefront of AI innovation and economic growth in the global technology race.

The episode concludes with a fascinating exploration of ELIZA's resurrection, as the world's first chatbot returns on GitHub. Originally created in the 1960s by MIT professor Joseph Weizenbaum, ELIZA's restoration involved decoding 2,600 lines of historic code, now running on an emulated IBM 7094 computer. This preservation of AI history offers valuable insights into the evolution of conversational AI and its impact on modern technology.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/anduril-s-1b-autonomous-weapon-fTo5xssgQYeFhKcv0df8Yw
https://www.perplexity.ai/page/altman-to-brief-d-c-on-phd-lev-q1qYjPhrQhuyb3cwG8H2RA
https://www.perplexity.ai/page/world-s-first-chatbot-resurrec-tJfKapPMSWmDvjC334mfkQ

Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere.
Available on iOS and Android. Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn
On COI #728, Kyle Anzalone updates Washington and Tel Aviv's Middle East wars.
AI is revolutionizing the military with autonomous drones, surveillance tech, and decision-making systems. But could these innovations spark the next global conflict? In this episode of Data Science at Home, we expose the cutting-edge tech reshaping defense—and the chilling ethical questions that follow. Don't miss this deep dive into the AI arms race!
In this episode of AI, Government, and the Future, host Marc Leh engages in an insightful conversation with Dr. Eva-Marie Muller-Stuler, Partner at EY and leader of the Data & AI practice for the Middle East and North Africa. Dr. Muller-Stuler brings her extensive experience in AI governance and data science to discuss the critical intersection of AI, ethics, and democracy.
Autonomous lethal weapons, often sensationalized as "killer robots," are no longer confined to science fiction—they are a rapidly advancing reality in modern warfare. In this conversation, Georgetown Law Professor Mitt Regan, an expert on the laws of war and international law, delves into the profound ethical and legal implications of AI-enabled weapon systems for both current conflicts and the future of warfare. Central to the conversation are the legal frameworks governing AI-enabled weapons under international humanitarian law. Professor Regan examines principles such as distinction, proportionality, and precaution, showing how these are tested by systems that use AI to identify and engage targets. Significant gaps in legal frameworks persist, including the absence of a unified international agreement specifically addressing autonomous weaponry.

Mitt Regan is a professor of law at Georgetown Law and an expert on both national security and international humanitarian law. (Credits: General 1hr | MCLE available to TalksOnLaw "Premium" or "Podcast" members. Visit www.talksonlaw.com to learn more.)
Last month the Australian Defence Force unveiled a suite of weapons at the land autonomous systems and teaming demonstrations at the Puckapunyal Army Base in northern Victoria.
Why is this even a debate?
A Note from James:
Is our military way behind other countries in terms of using the latest technology with AI, with drones, with biotech, with cybersecurity? I think for many years we've known we're behind on supersonic weapons. Are we behind on AI? How did Hamas send a thousand or so paragliders into Israel undetected? Are we behind on the AI that's in sensors? What is going on? So, with the help of Chris Kirchhoff, who wrote the book "Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War," we answer these questions and more.

Episode Description:
In this episode, James Altucher hosts Christopher Kirchhoff to explore the critical question: Is the US military lagging behind in technology? They discuss the current technological shortcomings of the military, historical contexts, and how metrics of military power are evolving. Kirchhoff provides an insightful analysis of the Hamas attack as a case study to highlight technological vulnerabilities and failures. The conversation expands to cover the rise of drones, the innovative Replicator Initiative, and the crucial role of AI and machine learning in military operations.
Kirchhoff shares his experiences bridging the gap between Silicon Valley and the Pentagon, offering a rare glimpse into the challenges and successes of modern military technology integration.

What You'll Learn:
Technological Shortcomings: Understand the areas where the US military is currently falling behind other nations in technology.
Impact of Drones: Learn about the transformative role drones play in modern warfare and their potential to change military strategies.
Replicator Initiative: Discover the Pentagon's innovative approach to building low-cost autonomous weapon systems.
AI in Military Operations: Gain insights into how AI and machine learning are being integrated into military strategies and operations.
Bridging Technology Gaps: Explore the challenges and successes of connecting Silicon Valley's rapid innovation with the Pentagon's strategic needs.

Chapters:
01:30 Introduction: Is the US Military Lagging in Technology?
02:15 Current Technological Shortcomings
03:20 Historical Context of Military Superiority
03:59 Changing Metrics of Military Power
06:42 Hamas Attack: A Case Study
08:15 Technological Vulnerabilities and Failures
10:22 US Military's Technological Lag
11:42 The Rise of Drones in Modern Warfare
14:52 The Replicator Initiative
17:54 Bridging the Gap Between Silicon Valley and the Pentagon
24:39 Challenges in Government Contracting
28:35 Innovative Contracting Solutions
31:17 Discovering Joby Aviation: The Future of Flying Cars
32:24 Military Applications and Collaboration with Joby
34:53 The Rise of Drones in Modern Warfare
37:12 Rogue Squadron: The Military's First Commercial Drone Unit
39:32 Anduril and the Future of Combat Collaborative Aircraft
45:14 AI and Machine Learning in Military Operations
51:31 Ethical Issues in Military Technology
01:04:02 Strategic Stability and the Future of Warfare
01:09:35 Conclusion: Bridging Silicon Valley and the Military

Additional Resources:
Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War
Joby Aviation
Anduril Industries
Defense Innovation Unit (DIU)
DARPA

------------
What do YOU think of the show? Head to JamesAltucherShow.com/listeners and fill out a short survey that will help us better tailor the podcast to our audience!
Are you interested in getting direct answers from James about your question on a podcast? Go to JamesAltucherShow.com/AskAltucher and send in your questions to be answered on the air!
------------
Visit Notepd.com to read our idea lists & sign up to create your own!
My new book, Skip the Line, is out! Make sure you get a copy wherever books are sold!
Join the You Should Run for President 2.0 Facebook Group, where we discuss why you should run for President.
I write about all my podcasts! Check out the full post and learn what I learned at jamesaltuchershow.com
------------
Thank you so much for listening! If you like this episode, please rate, review, and subscribe to "The James Altucher Show" wherever you get your podcasts:
Apple Podcasts
iHeart Radio
Spotify

Follow me on social media:
YouTube
Twitter
Facebook
LinkedIn
In this episode, we discuss the state of autonomous weapons systems adoption in Ukraine (00:55), our takeaways from the Supreme Court's decision to overturn the Chevron Doctrine and the implications for AI regulation (17:35), the delayed deployment of Apple Intelligence in the EU (30:55), and a breakdown of Nvidia's deal to sell its technology to data centers in the Middle East (41:30).
In this podcast, AI&F Advisor Rabbi Mois Navon draws on his 35-year career as a pioneering AI computer design engineer to discuss whether battlefield decisions by lethal autonomous weapons that seek to apply ostensible "moral algorithms" are consistent with principles of human dignity. Rabbi Mois draws on his essay "Autonomous Weapons Systems and Battlefield Dignity - A Jewish Perspective" in the July 2023 book "Alexa, How Do You Feel About Religion?" Technology, Digitization and Artificial Intelligence in the Focus of AI, co-edited by our AI&F Advisor Anna Puzo. Our host Pablo Ruz Salmones also explores with Rabbi Mois his front-row seat on AI research in Israel since the 1990s, and the AI ethics conversation there. Production: Ed Melick, Pablo A. Ruz Salmones and Penny Yuen. Host: Pablo A. Ruz Salmones. Editing: Isabelle Braconnot. Music from #Uppbeat. License code: 1ZHLF7FMCNHU39
The AI Breakdown: Daily Artificial Intelligence News and Discussions
A reading and discussion inspired by: https://www.washingtonpost.com/opinions/2024/06/25/ai-weapon-us-tech-companies/ Learn how to use AI with the world's biggest library of fun and useful tutorials: https://besuper.ai/ Use code 'youtube' for 50% off your first month. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Subscribe to the newsletter: https://aidailybrief.beehiiv.com/ Join our Discord: https://bit.ly/aibreakdown
Paul Scharre, Executive Vice President and Director of Studies at the Center for a New American Security, joins ACME General Corp to talk about his recent trip to Ukraine and his observations of and predictions for autonomous weapons and artificial intelligence. In addition to his work at CNAS, Paul is the award-winning author of Army of None: Autonomous Weapons and the Future of War and Four Battlegrounds: Power in the Age of Artificial Intelligence.
Armin Krishnan, PhD, is an Associate Professor of Security Studies at East Carolina University. Dr. Krishnan is the author of the following books:
(2024) Fifth Generation Warfare: Dominating the Human Domain, London: Routledge
(2018) Why Paramilitary Operations Fail, New York: Palgrave Macmillan
(2016) Military Neuroscience and the Coming Age of Neurowarfare, London: Routledge
(2012) Gezielte Tötung: Die Zukunft des Krieges [Targeted Killing: The Future of War], Berlin: Matthes & Seitz Berlin Verlag
(2009) Killer Robots: The Legality and Ethicality of Autonomous Weapons, Farnham, Surrey: Ashgate
(2008) War as Business: Technological Change and Military Services Contracting, Farnham, Surrey: Ashgate
For two extra episodes each month and exclusive content, please visit: patreon.com/thenickbryantpodcast nickbryantnyc.com EpsteinJustice.com
Top AI engineers at defence technology companies defended the need for autonomous weapons on Thursday, amid a push for a ban on so-called "killer robots". More than 115 countries and 250 non-governmental organisations are calling for an international treaty to ban weapons that use artificial intelligence to identify and engage human targets, technology which United Nations Secretary-General Antonio Guterres has called "morally repugnant."
Right now, militaries around the globe are investing heavily in AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path?

In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

RECOMMENDED MEDIA
Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul's book on the future of AI in war, which came out in 2023.
Army of None: Autonomous Weapons and the Future of War: Paul's 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.
The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul's article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.
The night the world almost ended: A BBC documentary about Stanislav Petrov's decision not to start nuclear war.
AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and a human pilot. The AI pilot swept, 5-0.

RECOMMENDED YUA EPISODES
The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao
Can We Govern AI? with Marietje Schaake
Big Food, Big Tech and Big AI with Michael Moss
The Invisible Cyber-War with Nicole Perlroth

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
We speak with Father Afonso about the use of weapon systems in warfare. The discussion explores a thought experiment involving two states at war and the deployment of autonomous drones capable of targeting terrorists without human intervention. Key topics include the legality and morality of such systems, the potential for reducing civilian casualties, the challenge of machine errors leading to war crimes, and the shifting responsibility in decision-making from humans to machines. The conversation also addresses the technological advances in AI, the issues of accountability and transparency with autonomous systems, and the philosophical implications of removing human judgment from warfare. We examine potential future scenarios where autonomous weapons could be misused by non-state actors or influenced by automation bias, raising critical questions about the control, ethics, and implications of advanced AI in military and civilian contexts.

[00:00] Introduction to the Episode and Guest Co-Host Jimmy Mullen
[00:12] Exploring Autonomous Weapon Systems with Father Afonso
[00:55] The Ethical and Legal Quandaries of Autonomous Weapons
[03:01] The Evolution of Warfare and Technology's Role
[06:15] The Human Element in Modern Warfare and AI
[08:19] The Philosophical and Legal Challenges of Autonomous Weapons
[21:08] Addressing Accountability and the Future of AI in Warfare
[30:17] Defining AI and Its Legal Challenges
[31:05] The Future of AI and Moral Agency
[31:51] AI's Emotional Intelligence and Superintelligence
[32:50] Military Strategy and AI: A New Conscience?
[34:21] The Limitations of AI in Understanding Common Sense
[36:13] The Human Element in AI Development
[37:03] Autonomous Weapons Systems and the Essence of Humanity
[39:11] AI on the Battlefield: Efficiency vs. Ethics
[43:37] The Unpredictability of AI and Warfare
[50:45] AI's Role Beyond the Battlefield
[56:27] The Collective Mind and Autonomous Systems

--- Send in a voice message: https://podcasters.spotify.com/pod/show/braininavat/message
Bloomberg Daybreak Weekend with Tom Busby takes a look at some of the stories we'll be tracking in the coming week. In the US – a look at next week's Fed meeting and a preview of Apple earnings. In the UK – a look at the Vienna Conference on Autonomous Weapons Systems. In Asia - a preview of Samsung earnings. See omnystudio.com/listener for privacy information.
Sierra Leone's President Julius Maada Bio has urged regional leaders to forge a common understanding and develop strategies to address the challenges posed by autonomous weapons systems. What are autonomous weapons, and what sort of threat do they pose to West African countries? Also, how the remains of thousands of enslaved Africans on St Helena are reconnecting the island to the rest of the world. And the Nigerian artist causing outrage with his depictions of black people. What's behind his depictions, and why is he doing it?

Presenter: Audrey Brown. Producers: Nyasha Michelle, Victor Sylver and Patricia Whitehorne. Technical Producer: Jonny Hall. Senior Producer: Karnie Sharp. Editors: Andre Lombard and Alice Muthengi
Autonomous weapons are no longer science fiction - and they're becoming a top priority for major military powers. Anna Hehir of the Future of Life Institute says we need an international treaty to ban some of the most dangerous autonomous weapons, and that we have a unique window now to do just that. Plus: Axios co-founder Mike Allen on how Washington is thinking about AI and weapons of war, behind the scenes. Guests: Anna Hehir, autonomous weapons lead at the Future of Life Institute; Axios co-founder Mike Allen, author of Axios AM and Axios PM Credits: 1 big thing is produced by Niala Boodhoo, Alexandra Botti, and Jay Cowit. Music is composed by Alex Sugiura and Jay Cowit. You can reach us at podcasts@axios.com. You can send questions, comments and story ideas as a text or voice memo to Niala at 202-918-4893. Learn more about your ad choices. Visit megaphone.fm/adchoices
Artificial Intelligence is changing warfare. Fully autonomous weapons that can make their own decisions about what to target or whom to kill are already in limited use today. But the pace of development of AI suggests that these so-called "Killer Robots" may become more and more common in future armed conflicts. Meanwhile, the speed at which new AI technology is coming online far exceeds the ability of the international community to come up with guardrails that might place controls on fully autonomous weapon systems. Guest Paul Scharre is the Executive Vice President and Director of Studies at the Center for a New American Security. He is the award-winning author of Four Battlegrounds: Power in the Age of Artificial Intelligence. We kick off by discussing what we mean by autonomous weapons before having a longer conversation about what the future holds for AI in warfare, nuclear security, and how to secure sensible regulations on autonomous weapons and military uses of AI. Paul Scharre in Foreign Affairs
The recent war in Ukraine and the Israel-Hamas conflict in the Middle East have provided valuable lessons about the nature of future military modernisation and defence innovation. While traditional warfare remains relevant, we are seeing a shift towards multi-domain operations that are more kinetic and increasingly utilise autonomous weapons. As India seeks to boost its defence exports, its domestic industry will have a role to play. Several factors have driven the recent spike in defence exports: regulatory initiatives, the enhanced capability of the private sector, and the desire among other countries to diversify away from their traditional defence partners and look to India instead. Additionally, the INDUS-X initiative between India and the U.S. has been noted as a welcome mechanism to encourage further defence cooperation in other emerging technologies such as quantum computing, space, cyber-security, and deep-sea technology. What are the potential opportunities that international collaborations present? Will India's domestic industry be able to meet the armed forces' requirements? In this episode of Interpreting India, Ajay Kumar and Konark Bhandari discuss the current status of defence innovation in India and the emerging trends in this space.

Episode Contributors
Ajay Kumar is a nonresident senior fellow at Carnegie India. He served as the defence secretary of India between August 2019 and October 2022 and is the longest-serving secretary in the Ministry of Defence, where he also served as secretary in the Department of Defence Production. As defence secretary, Ajay Kumar was the key architect of several major transformations, including the restructuring of higher defence management in the country and the formation of the chief of defence staff in the Indian Armed Forces, and the launch of the Agniveer scheme, a historic reform in the recruitment, training, and retention of personnel in the Armed Forces.
Konark Bhandari is a fellow with Carnegie India. He is a lawyer who has researched certain areas of the digital economy, focusing primarily on approaches to antitrust regulation of companies in the digital realm. He earlier worked at India's antitrust regulator, the Competition Commission of India (CCI), where he worked closely with senior officials on a variety of matters. He is also an avid follower of the regulation of the space technology ecosystem and is keen to contribute to that discipline.

Additional Readings
INDUS-X: Charting the Way Ahead for India-U.S. Defense Industrial Cooperation, by Konark Bhandari.
Can a Defense Innovation Bridge Elevate India-U.S. Defense Cooperation? by Konark Bhandari.
Sameer Lalwani on India-U.S. Defense-Industrial Cooperation.
U.S.-India Defense Innovation Collaboration: Building on a Promising Start, by Konark Bhandari.

Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future. We delve into how technology, the economy, and foreign policy intertwine to influence India's relationship with the global stage. A Carnegie India production hosted by Carnegie scholars, Interpreting India provides insightful perspectives and cutting-edge analysis, tackling the defining questions that will chart India's course through the next decade. Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world. Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.
Following Hamas's attacks on Oct. 7, the Israeli military retaliated with a relentless and devastating air war. By mid-December, Israeli forces had struck more than 22,000 targets in Gaza, and the Israeli military said it had used artificial intelligence to select many of them. The targeting system, called “The Gospel” by the IDF, was not the first time a military used AI on the battlefield, but the high number of civilian casualties raised red flags for many.Lawfare Managing Editor Tyler McBrien sat down with Lauren Kahn, a Senior Research Analyst at Georgetown's Center for Security and Emerging Technology (CSET) who focuses on the national security applications of artificial intelligence. They discussed how autonomous and AI-enabled weapons are being used and will be used in war, “the current ground rules for the age of AI in warfare,” and why Lauren favors confidence-building measures and other incremental steps, rather than an all-out ban. And despite running through a few nightmare scenarios, we learned why Lauren remains hopeful for the responsible and ethical use of AI for defense. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
This and all episodes at: https://aiandyou.net/ . Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. In part two we talk about psychology of combat decisions, AI and strategic defense, and nuclear conflict destabilization. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
This and all episodes at: https://aiandyou.net/ . Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. In this first part we talk about the ethics of autonomy in weapons systems and compare human to machine decision making in combat. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
From November 6, 2013: On October 25, 2013, the Hoover Institution held a day-long media colloquium at Stanford University for a first-rate group of journalists focused on national security legal issues and the work of Hoover's Jean Perkins Task Force on National Security and Law. The first session of the day, Ben's talk on Speaking the Law, ran as an episode of the podcast on Sunday. The second session, the subject of this episode, was a talk by Matt and Ken about autonomous weapons and the calls for a preemptive ban in international law on so-called "killer robots." The session has been edited both for length and because not all of the journalists consented to having their remarks made public. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
In Episode 11 of the "Relentless Podcast with Kyle Becker," we delve into the rapidly evolving landscape of artificial intelligence and its profound impact on society. From the eerie parallels drawn with iconic sci-fi films like "Terminator 2" and episodes of "Black Mirror," to the real-world advancements and ethical dilemmas posed by AI and AGI (Artificial General Intelligence), this podcast is a deep dive into the intersection of technology, policy, and human values.We explore how the explosion of the information age and the advent of AI are reshaping our world, touching on topics such as the role of AI in law enforcement, the potential deployment of AI-controlled drones in warfare, and the growing concerns over digital dehumanization.This episode also sheds light on the political and social implications of AI in the context of current global events and the debate surrounding the use of autonomous weapons. Join us as we navigate through the complex maze of technological advancements, ethical considerations, and the future of human-AI interaction. Whether you're a tech enthusiast, a policy maker, or simply curious about the future of AI, this episode offers valuable insights and stimulates a crucial conversation about the path we are paving for tomorrow's world.Find Kyle on Twitter at @KyleNABecker for breaking news, analysis, and more.Visit BeckerNews.com, your destination for stories that cut through the noise.Join the community at TheKyleBecker.Substack.com for exclusive content and engaging discussions.Brought to you by Becker News LLC, this podcast is a must-listen for anyone seeking a fresh, independent voice in the media landscape.
On This Week in Google, Leo Laporte, Jeff Jarvis, and Paris Martineau examine President Biden's new executive order focused on developing AI in a safe, secure and responsible way, and Leo's epiphany about the real danger being humans weaponizing AI. For the full episode, visit twit.tv/twig/740 Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau You can find more about TWiT and subscribe to our podcasts at https://podcasts.twit.tv/ Sponsor: GO.ACILEARNING.COM/TWIT
AI, machines, and killer robots, oh my! Elon Musk and 116 people sent a letter to the UN asking that Autonomous Weapons be banned. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/vault-swn-1
We continue our conversation with philosopher Don Howard, who has very practical ideas about, and projects engaged in, advancing the principles of ethics in robotics and artificial intelligence. For dystopians, Prof. Howard provides hope that Silicon Valley's rising innovators are embracing ethics.
One of Hollywood's favorite sci-fi themes, robo-wars, is in the real world a serious topic with features and implications different from fevered movie scripts and clickbait. In this first of a two-part episode, we are joined by Don Howard, the brilliant Notre Dame professor of philosophy and ethics of technologies.

"In Defense of (Virtuous) Autonomous Weapons," Don Howard, Notre Dame Journal on Emerging Technologies, November 2022.
"In Defense of (Virtuous) Autonomous Systems," Don A. Howard, Dakota Digital Review, February 21, 2023.
"Real Robots in Our Near Future: The Rise of Capable Industrial Automatons," Mark P. Mills, Dakota Digital Review, March 9, 2023.

This show is part of the Spreaker Prime Network. If you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/5816500/advertisement