Over the past decade, discussions surrounding artificial intelligence (AI) in the military domain have largely focused on autonomous weapon systems. This is partially due to the ongoing debates of the Group of Governmental Experts on Lethal Autonomous Weapons Systems of the Convention on Certain Conventional Weapons. While autonomous weapon systems are indeed a pressing concern, the critical reality is that AI is already being hastily deployed to gather intelligence and, even more worrisome, to support militaries in selecting and engaging targets. As AI-based decision support systems (AI DSS) are increasingly used on contemporary battlefields, Jimena Sofía Viveros Álvarez, member of the United Nations Secretary-General's High-Level Advisory Body on AI, REAIM Commissioner and OECD.AI Expert, argues against relying on these technologies to support the target identification, selection and engagement cycle: their risks and inefficacies cannot be ignored, and they threaten to exacerbate civilian suffering.
With new and emerging technologies, we hear a lot about killer drones, driverless tanks and autonomous airplanes on the modern battlefield. One issue of particular concern is the use of what are officially known as lethal autonomous weapons systems (LAWS), which can select and engage targets with force, without human involvement, raising a raft of security, ethical and legal concerns. What are countries doing to regulate LAWS? And how can international law and the UN respond to this challenge? To explore these and other questions, the Group of Governmental Experts on LAWS began meeting at the UN in 2017, as Mélanie Régimbal, Chief of the UN Office for Disarmament Affairs in Geneva, explains to UN News's Nancy Sarkis.
Andy and Dave discuss the latest in AI news and research, starting with the US Department of Defense creating a new position of the Chief Digital and AI Officer, subsuming the Joint AI Center, the Defense Digital Service, and the office of the Chief Data Officer [0:32]. Member states of UNESCO adopt the first-ever global agreement on the ethics of AI, which includes recommendations on protecting data, banning social scoring and mass surveillance, monitoring and evaluation, and protecting the environment [3:26]. European Digital Rights and 119 civil society organizations launch a collective call for an AI Act to articulate fundamental rights (for humans) regarding AI technology and research [6:02]. The Future of Life Institute releases Slaughterbots 2.0: “if human: kill()” ahead of the 3rd session in Geneva of the Group of Governmental Experts discussing lethal autonomous weapons systems [7:15]. In research, Xenobots 3.0, the living robots made from frog cells, demonstrate the ability to replicate themselves kinematically, at least for a couple of generations (extended to four generations by using an evolutionary algorithm to model ideal structures for replication) [12:23]. And researchers from DeepMind, Oxford, and Sydney demonstrate the ability to collaborate with machine learning algorithms to discover new results in mathematics (in knot theory and representation theory), though another researcher pushes back on the utility of the claims [17:57]. And finally, Dr. Mike Stumborg joins Dave and Andy to discuss research in Human-Machine Teaming, why it's important, and where the research will be going [21:44].
Andy and Dave discuss the latest in AI news, including an overview of Tesla's “AI Day,” which, among other things, introduced the Dojo supercomputers specialized for ML, the HydraNet single deep-learning model architecture, and a “humanoid robot,” the Tesla Bot. Researchers at Brown University introduce neurograins, grain-of-salt-sized wireless neural sensors, using nearly 50 of them to record neural activity in a rodent. The Associated Press reports on the flaws in ShotSpotter's AI gunfire detection system, and on one case in which such evidence sent a man to jail for almost a year before a judge dismissed the case. The Department of the Navy releases its Science and Technology Strategy for Intelligent Autonomous Systems (publicly available), including an Execution Plan (available only through government channels). The National AI Research Resource Task Force extends its deadline for public comment in order to elicit more responses. The Group of Governmental Experts on Certain Conventional Weapons holds its first 2021 session for the discussion of lethal autonomous weapons systems; its agenda has moved on to promoting a common understanding and definition of LAWS. And Stanford's Center for Research on Foundation Models publishes a manifesto, On the Opportunities and Risks of Foundation Models, seeking to establish high-level principles for massive models (such as GPT-3) upon which many other AI capabilities build. In research, the Georgia Institute of Technology, Cornell University, and IBM Research AI examine how the “who” in Explainable AI (e.g., people with or without a background in AI) shapes the perception of AI explanations. And Alvy Ray Smith pens the book of the week, with A Biography of the Pixel, examining the pixel as the “organizing principle of all pictures, from cave paintings to Toy Story.” Follow the link below to visit our website and explore the links mentioned in the episode. https://www.cna.org/CAAI/audio-video
Discussion with MEP Marina Kaljurand, former Estonian Foreign Minister, with interests in cybersecurity, transatlantic cooperation and digitalisation at all levels.
What you'll hear:
Estonia as an e-nation: “nobody has so far proved that online voting services are less secure than offline.” Estonia is the only country in the world that gives its citizens the right to vote online.
The digital revolution is here to stay; the smart ones are taking advantage and facing its challenges. The majority of international law was written before the IT revolution, but there are still principles that should also apply to the digital, cyber and online world.
For centuries the state was the only actor dealing with security (nuclear, chemical and conventional weapons), but cyber is different. The private sector owns critical infrastructure, provides online services, and has the brightest IT geeks. We are at a stage where the government has to work with others.
The topic of cybersecurity has come out of the basement, and there are more and more politicians who have overcome the stereotypes and are ready to accept that this is our new reality. We can't escape it. We don't write “internet” with a capital “I” anymore; that's a sign!
Europe's investment into R&D: “No regulation is better than bad regulation, and less regulation is better than more regulation.”
How will transatlantic cooperation change with Biden? “We are natural partners. It's in the European DNA and in the US DNA to cooperate. Yes, we are competitors, like all good neighbours, but it's a positive competition, so I think that together we can be much stronger facing the new challenges, for example coming from China.”
UN Digital Roadmap: “I hope the roadmap will be a push, and I hope that at some point we will come to a better organisation for digital cooperation globally. And it has to be on all levels: global, regional, state-to-state. Last year we celebrated 75 years of the UN and, for the first time, the declaration included digital cooperation and cybersecurity. The UN Security Council is still stuck in 1945; it does not reflect today's international affairs, but that's the only thing we have today.”
Will COVID help build trust between sectors? “I would argue that the government has all the tools, all the opportunities, to prove that they do care about people and that they bring digital into people's lives with the best intentions, doing it responsibly and respecting all their fundamental and human rights. The chance is there, whether governments will take advantage of it or not.”
Marina Kaljurand is a member of the European Parliament and served as Estonian Foreign Minister and as Ambassador to several countries, including to Russia during the 2007 cyber attack on Estonia and to the United States during the 2013 Snowden leak of highly classified information from the NSA. She played an important role as expert and negotiator in Estonia's accession negotiations to the European Union and to the OECD. Marina is a member of the UN Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, and a member of the UN High-Level Panel on Digital Cooperation.
GARI is a research institute that uses advanced technology, such as AI with Big Data, to visualise, understand and create the ability to manage globalisation.
In this episode, Dr Simon McKenzie talks with Dr Natalia Jevglevskaja about the obligation to review new weapons found in Article 36 of Additional Protocol I to the Geneva Conventions. They discuss what the weapons review obligation requires, the kinds of technologies it applies to, and the different approaches states take to fulfilling the obligation. They also discuss some of its limitations and the challenges posed by recent developments in machine processing and artificial intelligence.
Dr Natalia Jevglevskaja is a Research Fellow at the University of New South Wales at the Australian Defence Force Academy in Canberra. Natalia's research interests include the law of armed conflict, human rights law and comparative law. Natalia has a PhD from the University of Melbourne, holds an LL.M in Public International Law from the University of Utrecht (2013), awarded cum laude to mark outstanding achievement, and completed her undergraduate studies in law at the University of Heidelberg (2011).
Suggested further reading:
International Committee of the Red Cross, ‘A Guide to the Legal Review of New Weapons, Means and Methods of Warfare: Measures to Implement Article 36 of Additional Protocol I of 1977' (2006) 88 International Review of the Red Cross 931
Boulanin, Vincent and Maaike Verbruggen, SIPRI Compendium on Article 36 Reviews (December 2017) SIPRI
The Australian Article 36 Review Process, Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, UN Doc CCW/GGE.2/2018/WP.6
A new Chatham House mini-series on the Undercurrents podcast feed explores the ongoing international debate on how to govern cyberspace. In the first episode of Who Rules Cyberspace?, Ben Horton and Joyce Hakmeh, from the International Security Programme at Chatham House, trace the recent history of cyberspace and outline the major dividing lines that characterise debates at the United Nations and beyond. For the rest of this week Ben and Joyce will be interviewing a wide range of individuals from governments, the private sector and civil society to find out the latest thinking on how cyberspace can be made a safe and prosperous tool for societies throughout the world. At the time of publishing this episode, two resolutions have been circulated in the United Nations General Assembly which will soon be voted on by member states. One, proposed by the United States, calls on states to wait for the completion of the current Open-Ended Working Group (OEWG) and Group of Governmental Experts mandates. Another, backed by Russia, calls for the establishment of a new OEWG and casts further uncertainty on the future of these negotiations at the UN. Read the Journal of Cyber Policy article: The vital role of international law in the framework for responsible state behaviour in cyberspace. Watch the video: How Can We Work Towards a Stable #Cyberspace4all?
In Episode 21, our podcast guest brings you ideas from the perspective of a diplomat. The UN Geneva Library & Archives Director, Francesco Pisano, sits down with Jivan Gjorgjinski, a diplomat who served for three years, from June 2016 to July 2019, as Head (chargé d'affaires) of the Permanent Mission of the Republic of North Macedonia to the UN in Geneva. In this discussion, he shares what it was like working in multilateral diplomacy in Geneva, and what this means in action, giving particular highlights from two key experiences: chairing the 2018 Meeting of States Parties to the Biological Weapons Convention (BWC), and the 2019 CCW GGE on LAWS, or the Convention on Certain Conventional Weapons Group of Governmental Experts on lethal autonomous weapons systems. He explains more about these legal instruments and why they are key examples of multilateralism in action. He also looks at some critical questions: the role of small-state diplomats in the UN, the role of and opportunity for small states in multilateralism, and how diversity, creativity, and finding common ground come into play in multilateralism. You'll even hear a bit about why we should be more like a sci-fi series you might know well! To follow Jivan Gjorgjinski on Twitter, head here: https://twitter.com/jivan_gj You can also find out more about the Biological Weapons Convention: https://bit.ly/2VPkiRf and the Convention on Certain Conventional Weapons: https://bit.ly/2VPkiRf at the UN Geneva website. We also have Library Research Guides on Biological Weapons & Chemical Weapons, check them out here as part of the Disarmament series: https://libraryresources.unog.ch/?b=s. Content: Speakers: Jivan Gjorgjinski & Francesco Pisano. Host: Natalie Alexander. Editor and Sound Editor: Natalie Alexander. Image: Jivan Gjorgjinski. Recorded & produced at the UN Geneva Library & Archives.
"Sicherheitshalber" is the podcast on the security policy situation in Germany, Europe and the world. In episode 15, Thomas Wiegold, Ulrike Franke, Frank Sauer and Carlo Masala talk with Sarah Kirchberger, head of the Asia-Pacific Strategic Development department at the Institute for Security Policy at Kiel University. The discussion revolves around China's military projects and capabilities as well as the future of Taiwan and the South China Sea. As always, the episode closes with the "Sicherheitshinweis," a brief pointer to current topics and developments relevant to security policy; this time on French military ambitions in space, the state of the discussion on the Strait of Hormuz, the withdrawal of British troops from Germany, and two entries for the arms control calendar in August. China (with Sarah Kirchberger) 00:01:35; Sicherheitshinweise 00:56:35. Our shop is online: https://shop.spreadshirt.de/sicherheitshalbershop/
Mentioned and further reading:
Sicherheitshalber Aktuell: a brief assessment of Minister von der Leyen's term in office: https://augengeradeaus.net/2019/07/sicherheitshalber-der-podcast-aktuell-eine-kurze-bilanz-der-amtszeit-von-der-leyens/
China's military capabilities and political ambitions:
Our guest: Dr. Sarah Kirchberger: https://www.ispk.uni-kiel.de/en/center-for-asia-pacific-strategy-and-security/team/dr-sarah-kirchberger
Sarah Kirchberger (2015), Assessing China's Naval Power: Technological Innovation, Economic Constraints, and Strategic Implications. Berlin: Springer. ISBN 9783662471265
Text of the new Chinese defence white paper: http://www.xinhuanet.com/english/2019-07/24/c_138253389.htm
Tai Ming Cheung (2009), Fortifying China: The Struggle to Build a Modern Defense Economy, Ithaca & London: Cornell University Press. ISBN 9780801446924
Kai-Fu Lee, AI Superpowers: China, Silicon Valley and the New World Order, https://aisuperpowers.com
Augengeradeaus, "Wenn die Chinesen nach Feldkirchen kommen," 10.7.2019, https://augengeradeaus.net/2019/07/wenn-die-chinesen-nach-feldkirchen-kommen/
"Huawei row: China formally arrests Canada detainees," BBC, 16 May 2019, https://www.bbc.co.uk/news/world-us-canada-48302455
Sarah Kirchberger (2018), "Militär und Sicherheitspolitik der Volksrepublik China," in: Kerwer, Jürgen and Röming, Angelika (eds.), Die Volksrepublik China - Partner und Rivale, Wiesbaden: Hessische Landeszentrale für politische Bildung 2018, pp. 149-195, http://www.hlz.hessen.de/uploads/tx_userhlzpub/X396-China.pdf
Sarah Kirchberger and Patrick O'Keeffe (2019), "Chinas schleichende Annexion im Südchinesischen Meer - die strategischen Hintergründe," in: SIRIUS 2019, Vol. 3, Issue 1, pp. 3-20, https://doi.org/10.1515/sirius-2019-1002
Sicherheitshinweise:
Thomas: Weapons in space and the French "Space Force" - Augengeradeaus, "Merkposten: Frankreich will 'aktive Verteidigung' im Weltraum," 25.07.2019, https://augengeradeaus.net/2019/07/merkposten-frankreich-will-aktive-verteidigung-im-weltraum/
Frank: Arms control dates in August 2019 - Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), https://www.unog.ch/80256EE600585943/(httpPages)/5535B644C2AE8F28C1258433002BBF14?OpenDocument - Dr. Ulrich Kühn, briefly on Twitter, on the "post-INF world": https://mobile.twitter.com/DrUlrichKuehn/status/1152905045005996032
Rike: Withdrawal of the British military from Germany - "Abschied mit besonderen Ehren. Britisches Instandsetzungsbataillon aus Paderborn erhält Fahnenband," Westfalen Blatt, 9 July 2019, https://www.westfalen-blatt.de/OWL/Kreis-Paderborn/Paderborn/3870901-Britisches-Instandsetzungsbataillon-aus-Paderborn-erhaelt-Fahnenband-Abschied-mit-besonderen-Ehren
Carlo: Strait of Hormuz - interview with RBB, "Politexperte: Die Seewege müssen offen gehalten werden," 29.07.2019, https://www.inforadio.de/programm/schema/sendungen/int/201907/29/358864.html
The Institute of Electrical and Electronics Engineers (IEEE) has released its first edition of Ethically Aligned Design (EAD1e), a nearly 300-page report involving thousands of global experts; the report covers 8 major principles including transparency, accountability, and awareness of misuse. DARPA announces the Artificial Social Intelligence for Successful Teams program, which will attempt to help AI build shared mental models and understand the intentions, expectations, and emotions of its human counterparts. DARPA also announced a program to design chips for Real Time Machine Learning (RTML), which will generate optimized hardware design configurations and standard code, based on the objectives of the specific ML algorithms and systems. The U.S. Army awarded a $152M contract to QinetiQ North America for producing “backpack-sized” robots; the Common Robotic System-Individual (CRS(I)) is a remotely operated, unmanned ground vehicle. The White House has launched a site to highlight AI initiatives. Anduril Industries gets a Project MAVEN contract to support the Joint AI Center. And the 2018 Turing Award goes to neural network pioneers Hinton, LeCun, and Bengio. Researchers at Johns Hopkins demonstrate that humans can decipher adversarial images; that is, they can “think like machines” and anticipate how image classifiers will incorrectly identify unrecognizable images. A group of researchers at MIT, Columbia, Cornell, and Harvard demonstrate “particle robots” inspired by biological cells; these robots can’t move individually, but can pulsate from a size of 6in to about 9in, and as a collective they can demonstrate movement and other collective behavior (even with a 20% failure of the components). Researchers at the Harbin Institute of Technology and Machine State University control a swarm of “microbots” (here, single grains of hematite) through application of different magnetic fields. And researchers use honey bees (in Austria) and zebrafish (in Switzerland) to influence each other’s collective behavior through robotic mediation. The Interregional Crime and Justice Research Institute releases a report on AI in law enforcement, from a recent meeting organized by INTERPOL. DefenseOne publishes a report from Tucker, Glass, and Bendett on how the U.S. military services are using AI. An e-book from Frontiers in Robotics and AI collects 13 papers on the topic of “Consciousness in Humanoid Robots.” Andy highlights a book from 2007, “Artificial General Intelligence,” which claims to be the first to codify the use of AGI as a term of art. MIT Tech Review’s EmTech Digital 2019 has released the videos from its 25-26 March event. And DARPA has released more videos from its AI Colloquium. The U.N. Group of Governmental Experts is meeting in Geneva to discuss lethal autonomous weapons systems (LAWS). A short story from Husain and Cole describes a hypothetical future war in Europe between Russian and NATO forces. And Ian McDonald pens “Sanjeev and Robotwallah,” a story that captures the life of military drone pilots.
Andy and Dave briefly discuss the results from the Group of Governmental Experts meetings on Lethal Autonomous Weapons Systems in Geneva; the Pentagon releases its Unmanned Systems Integrated Roadmap 2017-2042; Google announces Dataset Search, a curated pool of datasets available on the internet; California endorses a set of 23 AI Principles in conjunction with the Future of Life Institute; and registration for the Neural Information Processing Systems (NIPS) 2018 conference sells out in just under 12 minutes. Researchers at DeepMind announce a Symbol-Concept Association Network (SCAN), for learning abstractions in the visual domain in a way that mimics human vision and word acquisition. DeepMind also presents an approach to "catastrophic forgetting," using a Variational Autoencoder with Shared Embeddings (VASE) method to learn new information while protecting previously learned representations. Researchers from the University of Maryland and Cornell demonstrate the ability to poison the training data set of a neural net image classifier with innocuous poison images. Research from the University of South Australia and Flinders University attempts to link personality with eye movements. Research from OpenAI, Berkeley, and Edinburgh looks at curiosity-driven learning across 54 benchmark environments (including video games and physics engine simulations), showing that agents learn to play many Atari games without using any rewards, that rally-making behavior emerges in two-player Pong, and more. Finally, Andy shares an interactive app that allows users to “play” with a Generative Adversarial Network (GAN) in a browser; “Franken-algorithms” by Andrew Smith is the paper of the week; “Autonomy: The Quest to Build the Driverless Car” by Burns and Shulgan is the book of the week; and for the videos of the week, Major Voke offers thoughts on AI in the Command and Control of Airpower, and Jonathan Nolan releases “Do You Trust This Computer?”
On this episode of the Defense & Aerospace Report Interviews Podcast, sponsored by L3 Technologies, Samuel Bendett, an associate research analyst in the Center for Naval Analyses’ Russia Program and a Russian studies fellow at the American Foreign Policy Council, discusses Russian autonomous vehicles (including its Storm unmanned ground vehicle concept and its Cephalopod unmanned underwater vehicle), takeaways from the Falcon Hunt unmanned aerial vehicle competition that was held during the 2018 International Army Games in Russia, and the Group of Governmental Experts on Lethal Autonomous Weapons Systems’ upcoming meeting in Geneva, Switzerland, during an August 2018 interview with Defense & Aerospace Report Editor Vago Muradian in Washington.
The black letter law and articles in this episode are:
Cyber Mercenaries: The State, Hackers, and Power, https://www.cambridge.org/core/books/cyber-mercenaries/B685B7555E1C52FBE5DFE6F6594A1C00
Computer Fraud and Abuse Act, 18 U.S.C. § 1030, https://www.law.cornell.edu/uscode/text/18/1030
U.S.-China Cyber Agreement (October 16, 2015), https://fas.org/sgp/crs/row/IN10376.pdf
Tallinn Manual on the International Law Applicable to Cyber Warfare, http://csef.ru/media/articles/3990/3990.pdf
Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (not available as PDF), https://ccdcoe.org/tallinn-manual-20-international-law-applicable-cyber-operations-be-launched.html
United Nations International Group of Governmental Experts on Information Security (GGE), https://www.un.org/disarmament/group-of-governmental-experts/
Mutually Assured Disruption, Report, National Committee on American Foreign Policy (January 12, 2018), https://www.ncafp.org/12606-2/
“Protecting Financial Data in Cyberspace: Precedent for Further Progress on Cyber Norms?”, Just Security (August 24, 2017), https://www.justsecurity.org/44411/protecting-financial-data-cyberspace-precedent-progress-cyber-norms
“Toward a Global Norm Against Manipulating the Integrity of Financial Data”, CEIP, http://carnegieendowment.org/2017/03/27/toward-global-norm-against-manipulating-integrity-of-financial-data-pub-68403
Convention on Cybercrime, https://rm.coe.int/1680081561
Presidential Policy Directive 41, https://obamawhitehouse.archives.gov/the-press-office/2016/07/26/presidential-policy-directive-united-states-cyber-incident
Fact sheet on the PPD, https://obamawhitehouse.archives.gov/the-press-office/2016/07/26/fact-sheet-presidential-policy-directive-united-states-cyber-incident-1
Cyber Executive Order, https://www.whitehouse.gov/presidential-actions/presidential-executive-order-strengthening-cybersecurity-federal-networks-critical-infrastructure/
Cyber Sanctions, https://www.treasury.gov/resource-center/sanctions/Programs/pages/cyber.aspx
Cyber sanctions for Russian hackers in election interference, https://home.treasury.gov/news/press-releases/sm0312
Tim Maurer is the Co-Director of the Cyber Policy Institute at the Carnegie Endowment for International Peace: http://carnegieendowment.org/experts/1086
On this episode, my guest Dr. Waheguru Pal Singh (W.P.S.) Sidhu and I discuss Iran, North Korea, and nuclear proliferation. Dr. Waheguru Pal Singh Sidhu is Visiting Professor at New York University’s Center for Global Affairs and Non-Resident Fellow at NYU’s Center on International Cooperation (CIC), as well as Non-Resident Senior Fellow at Brookings. Prior to coming to CIC, he served as Vice President of Programs at the EastWest Institute in New York, and as Director of the New Issues in Security program at the Geneva Centre for Security Policy (GCSP). Dr. Sidhu has researched, written, and taught extensively on the United Nations and regionalism, peace operations, Southern Asia, confidence-building measures, disarmament, arms control, and non-proliferation issues. His recent publications include: The Iraq Crisis and World Order: Structural, Institutional and Normative Challenges; Arms Control after Iraq: Normative and Operational Challenges; Kashmir: New Voices, New Approaches; and China and India: Cooperation or Conflict? He has also published in leading international journals, including Arms Control Today, Asian Survey, Disarmament Diplomacy, Disarmament Forum, International Peacekeeping, Jane's Intelligence Review, Politique Etrangere, and the Bulletin of the Atomic Scientists. Dr. Sidhu was the consultant to the first, second, and third United Nations Panels of Governmental Experts on Missiles, in 2001-2002, 2004, and 2007-2008 respectively. He was also appointed as a member of the Resource Group set up to assist the United Nations High-Level Panel on Threats, Challenges and Change in 2004. Dr. Sidhu earned his Ph.D. from the University of Cambridge. He holds a Master's in International Relations from the School of International Studies, Jawaharlal Nehru University, New Delhi, and a Bachelor's degree in History from St. Stephen's College, Delhi University, India.