AI with AI explores the latest breakthroughs in artificial intelligence and autonomy, and discusses the technological and military implications. Join Andy Ilachinski and David Broyles as they explain the latest developments in this rapidly evolving field. The views expressed here are those of the c…
For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autonomy. NATO begins work on an AI certification standard. The IEEE introduces a new program that provides free access to AI ethics and governance standards. Reported in February but performed in December, a joint Department of Defense team conducted 12 flight tests (over 17 hours) in which AI agents piloted Lockheed Martin's X-62A VISTA, an F-16 variant. Andy provides a rundown of a large number of recent ChatGPT-related stories. Wolfram “explains” how ChatGPT works. Paul Scharre publishes Four Battlegrounds: Power in the Age of AI. And to come full circle, we began this podcast 6 years ago with the story of AlphaGo beating the world champion, so we close the podcast with news that a non-professional Go player, Kellin Pelrine, beat a top AI system 14 games to one, having discovered a ‘not super-difficult' method for humans to beat the machines. A heartfelt thanks to you all for listening over the years!
Andy and Dave discuss the latest in AI news and research, including the update of the Department of Defense Directive 3000.09 on Autonomy in Weapon Systems. NIST releases the first version of its AI Risk Management Framework. The National AI Research Resource (NAIRR) Task Force publishes its final report, in which it details its plans for a national research infrastructure, as well as its request for $2.6 billion over 6 years to fund the initiatives. DARPA announces the Autonomous Multi-domain Adaptive Swarms-of-Swarms (AMASS) program, a much larger effort (aiming for thousands of autonomous entities) than its previous OFFSET program. And finally, from the Naval Postgraduate School's Energy Academic Group, Kristen Fletcher and Marina Lesse join to discuss their research and efforts in autonomous systems and maritime law and policy, to include a discussion about the DoDD 3000.09 update and the high-altitude balloon incident. https://www.cna.org/our-media/podcasts/ai-with-ai
Andy and Dave discuss the latest in AI news and research, starting with an AI education program that teaches US Air Force personnel the fundamentals of AI, tailored to three types of personnel: leaders, developers, and users. The US Equal Employment Opportunity Commission unveils its draft Strategic Enforcement Plan to target AI-based hiring bias. The US Department of State establishes the Office of the Special Envoy for Critical and Emerging Technology to bring “additional technology policy expertise, diplomatic leadership, and strategic direction to the Department's approach to critical and emerging technologies.” Google calls in its founders, Larry Page and Sergey Brin, to help address the potential threat that ChatGPT and other AI technologies pose to its search business. Researchers from Northwestern University publish research demonstrating that ChatGPT can write fake research-paper abstracts that pass plagiarism checkers, and that human reviewers correctly identified only 68% of the generated abstracts. Wolfram publishes an essay on a way to combine the computational powers of ChatGPT with Wolfram|Alpha. CheckPoint Research demonstrates how cybercriminals, including people without any experience in creating malicious tools, can use ChatGPT for nefarious exploits. Researchers at Carnegie Mellon demonstrate that full-body tracking is now possible using only WiFi signals, with performance comparable to image-based approaches. Microsoft introduces VALL-E, a text-to-speech AI model that can mimic anyone's voice with only three seconds of sample input. The Cambridge Handbook of Responsible AI is the book of the week, with numerous essays on the philosophical, ethical, legal, and societal challenges that AI brings; Cambridge has made the book open-access online. And finally, Sam Bendett joins for an update on the latest AI and autonomy-related information from Russia as well as Ukraine.
Andy and Dave discuss the latest in AI and autonomy news and research, including a report from Stanford's Human-Centered AI (HAI) institute that assesses the progress (or lack thereof) in implementing the three pillars of America's strategy for AI innovation. The Department of Energy is offering a total of $33M for research in leveraging AI/ML for nuclear fusion. China's Navy appears to have launched a naval mothership for aerial drones. China is also set to introduce regulation of “deepfakes,” requiring that subjects give consent and prohibiting the technology for fake news, among many other things. Researchers at Xiamen University and elsewhere publish a “multidisciplinary open peer review dataset” (MOPRD), aiming to provide ways to automate the peer-review process. Google executives issue a “code red” for Google's search business over the success of OpenAI's ChatGPT. New York City schools have blocked access for students and teachers to ChatGPT unless it involves the study of the technology itself. Microsoft plans to launch a version of Bing that integrates ChatGPT into its answers. And the International Conference on Machine Learning bans authors from using AI tools like ChatGPT to write scientific papers (though it still allows the use of such systems to “polish” writing). In February, an AI from DoNotPay will likely be the first to represent a defendant in court, telling the defendant what to say and when. In research, the UCLA Departments of Psychology and Statistics demonstrate that analogical reasoning can emerge from large language models such as GPT-3, showing a strong capacity for abstract pattern induction. Research from Google Research, Stanford, UNC Chapel Hill, and DeepMind shows that certain abilities emerge only in large language models that exceed certain thresholds of parameters and training data. And finally, John H. Miller publishes Ex Machina through the Santa Fe Institute Press, examining the topic of Coevolving Machines and the Origins of the Social Universe. https://www.cna.org/our-media/podcasts/ai-with-ai
Andy and Dave discuss the latest in AI news and research, including the release of the US National Defense Authorization Act for FY2023, which includes over 200 mentions of “AI” and many more requirements for the Department of Defense. DoD has also awarded its cloud-computing contracts, not to one company, but four: Amazon, Google, Microsoft, and Oracle. At the end of November, the San Francisco Board of Supervisors voted to allow the police force to use robots to administer deadly force; however, after a nearly immediate response from a “No Killer Robots” campaign, in early December the board passed a revised version of the policy that prohibits police from using robots to kill people. Israeli company Elbit unveils its LANIUS drone, a “drone-based loitering munition” that can carry lethal or non-lethal payloads and appears to have many functions similar to the ‘slaughterbots,' except for autonomous targeting. Neuralink shows the latest updates on its research into putting a brain-chip interface into humans, with demonstrations of a monkey manipulating a mouse cursor with its thoughts; the company also faces a federal investigation into possible animal-welfare violations. DeepMind publishes AlphaCode in Science, a story that we covered back in February. DeepMind also introduces DeepNash, an autonomous agent that can play Stratego. OpenAI unleashes ChatGPT, a spin-off of GPT-3 optimized for answering questions through back-and-forth dialogue. Meanwhile, Stack Overflow, a website for programmers, temporarily banned users from sharing responses generated by ChatGPT, because the output of the algorithm might look good but has “a high rate of being incorrect.” Researchers at the Weizmann Institute of Science demonstrate that, with a simple neural network, it is possible to reconstruct a “large portion” of the actual training samples. Nomic provides an interactive map to explore over 6M images from Stable Diffusion. Steve Coulson creates “AI-assisted comics” using Midjourney. Stay tuned for AI Debate 3 on 23 December 2022. And the video of the week, from Ricard Solé at the Santa Fe Institute, explores mapping the cognition space of liquid and solid brains. https://www.cna.org/our-media/podcasts/ai-with-ai
Andy and Dave discuss the latest in AI news and research, including a lawsuit against Microsoft, GitHub, and OpenAI for allegedly violating copyright law by reproducing open-source code using AI. The Texas Attorney General files a lawsuit against Google alleging unlawful capture and use of biometric data of Texans without their consent. DARPA flies the final flight of ALIAS, an autonomous system outfitted on a UH-60 Black Hawk. And Rafael's DRONE DOME counter-UAS system wins Pentagon certification. In research, Meta publishes work on Cicero, an AI agent that combines large language models with strategic reasoning to achieve human-level performance in Diplomacy. Meta researchers also publish work on ESMFold, an AI algorithm that predicts structures for some 600 million proteins, “mostly unknown.” And Meta also releases (then takes down due to misuse) Galactica, a 120B-parameter language model for scientific papers. In a similar but less turbulent vein, Explainpaper provides the ability to upload a paper, highlight confusing text, and ask queries to get explanations. CRC Press publishes online for free Data Science and Machine Learning: Mathematical and Statistical Methods, a thorough text at the upper-undergraduate or graduate level. And finally, the video of the week features Andrew Pickering, Professor Emeritus of sociology and philosophy at the University of Exeter, UK, with a video on the Cybernetic Brain and the book of the same name, published in 2011. https://www.cna.org/our-media/podcasts/ai-with-ai
Andy and Dave once again welcome Sam Bendett, research analyst with CNA's Russia Studies Program, to the podcast to discuss the latest unmanned and autonomous systems news from the conflict between Russia and Ukraine. The group discusses the use and role of commercial quadcopters, the recent Black Sea incident involving unmanned systems, and the supply of Iranian systems to Russia. They also discuss the Wagner Group's research and development center and its potential role in the conflict. Will Ukraine deploy lethal autonomous drones against Russia? https://www.newscientist.com/article/2344966-will-ukraine-deploy-lethal-autonomous-drones-against-russia/ PMC Wagner Center: https://www.euronews.com/2022/11/04/russias-wagner-paramilitary-group-opens-first-official-hq-in-st-petersburg Russia's Lancet: https://www.forbes.com/sites/davidhambling/2022/11/04/russian-videos-reveal-new-details-of-loitering-munitions/ Coordinated drone attack at Sevastopol: https://defense-update.com/20221030_coordinated-drone-attack-targets-the-russian-black-sea-fleet-at-sevastopol.html Iranian supply of drones to Russia: https://www.npr.org/2022/11/05/1134523148/ukraine-russia-war-iran-drones Russia's "brain drain" problem: https://www.maravipost.com/russia-vs-ukraine-the-major-brain-drain-amid-conflict/
Andy and Dave discuss the latest in AI-related news and research, including a bill from the EU that will make it easier for people to sue AI companies for harm or damages caused by AI-related technologies. The US Office of S&T Policy releases a Blueprint for an AI Bill of Rights, which further lays the groundwork for potential legislation. The AI Training for the Acquisition Workforce Act is signed into US law, requiring federal acquisition officials to receive training on AI and requiring OMB to work with GSA to develop the curriculum. Several top robotics companies pledge not to add weapons to their technologies and to work actively to prevent their robots from being used for such purposes. Tesla reveals its Optimus robot at its AI Day. DARPA will hold a proposal session on 14 November for its AI Reinforcements effort. OpenAI makes DALL-E available for everybody, and Playground offers access to both DALL-E and Stable Diffusion. OpenAI also makes available the results of an NLP Community Metasurvey, in conjunction with New York University, providing AI researchers' views on a variety of AI-related efforts and trends. And Nathan Benaich and Ian Hogarth release the State of AI Report 2022, which covers everything from research and politics to safety, along with some specific predictions for 2023. In research, DeepMind uses AlphaZero to explore matrix multiplication and discovers a slightly faster algorithm implementation for 4x4 matrices. Two research efforts look at turning text into video: Meta discusses its Make-A-Video for turning text prompts into video, leveraging text-to-image generators like DALL-E, and Google Brain discusses its Imagen Video (along with Phenaki, which produces long videos from a sequence of text prompts). Foundations of Robotics is the open-access book of the week, from Damith Herath and David St-Onge. And the video of the week addresses AI and the Application of AI in Force Structure, with LtGen (ret) Groen, Dr. Sam Tangredi, and Mr. Brett Vaughan joining in on the discussion for a symposium at the US Naval Institute.
Andy and Dave discuss the latest in AI news and research, starting with a publication from the UK's National Cyber Security Centre providing a set of security principles for developers implementing machine learning models. Gartner publishes the 2022 update to its “AI Hype Cycle,” which qualitatively plots the position of various AI efforts along the curve. PromptBase opens its doors, promising to provide users with better “prompts” for text-to-image generators (such as DALL-E) to generate “optimal images.” Researchers explore the properties of vanadium dioxide (VO2), which demonstrates volatile memory-like behavior under certain conditions. MetaAI announces a nascent ability to decode speech from a person's brain activity without surgery (using EEG and MEG). Unitree Robotics, a Chinese tech company, is producing its Aliengo robotic dog, which can carry up to 11 pounds and perform other actions. Researchers at the University of Geneva demonstrate that transformers can build world models with fewer samples, for example, generating “pixel perfect” predictions of Pong after 120 games of training. DeepMind demonstrates the ability to teach a team of agents to play soccer by controlling movement at the level of joint torques and combining it with longer-term goal-directed behavior, with the agents demonstrating jostling for the ball and other behaviors. Researchers at the University of Illinois Urbana-Champaign and MIT demonstrate a composable diffusion model to tweak and improve the output of text-to-image transformers. Google Research publishes results on AudioLM, which generates “natural and coherent continuations” given short prompts. And Michael Cohen, Marcus Hutter, and Michael Osborne publish a paper in AI Magazine arguing that dire predictions about the threat of advanced AI may not have gone far enough in their warnings, offering a series of assumptions on which their arguments depend. https://www.cna.org/our-media/podcasts/ai-with-ai
Andy and Dave discuss the latest in AI news and research, starting with DARPA moving into Phase 2 of its No Manning Required Ship (NOMARS) program, having selected Serco Inc. for its Defiant ship design. The UK releases a roadmap on automated vehicles, Connected & Automated Mobility 2025, and describes new legislation that will place liability for the actions of self-driving vehicles onto manufacturers, not the occupants. The DOD's Chief Digital and AI Office is preparing to roll out Tradewinds, an open solutions marketplace geared toward identifying new technologies and capabilities. The US bans NVIDIA and AMD from selling or exporting certain types of GPUs (mostly for high-end servers) to China and Russia. A report in Nature examines the “reproducibility crisis” involving machine learning in scientific articles, identifying eight types of “data leaks” in research that give cause for concern. Google introduces a new AI image noise-reduction tool that greatly advances the state of the art for low-light and low-resolution images, using RawNeRF, which makes use of the previous neural radiance fields approach, but on raw image data. Hakwan Lau and Oxford University Press make available for free In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. And Sam Bendett joins Andy and Dave to discuss the latest from Russia's Army 2022 Expo and other recent developments around the globe. https://www.cna.org/our-media/podcasts/ai-with-ai
Andy and Dave discuss the latest in AI and autonomy news and research, including an announcement that the Federal Trade Commission is exploring rules for cracking down on harmful commercial surveillance and lax data security, with the public having an opportunity to share input during a virtual public forum on 8 September 2022. The Electronic Privacy Information Center (EPIC), with help from Caroline Kraczon, releases The State of State AI Policy, a catalog of AI-related bills that state and local governments have passed, introduced, or failed during the 2021-2022 legislative season. In robotics, Xiaomi introduces CyberOne, a 5-foot 9-inch robot that can identify “85 types of environmental sounds and 45 classifications of human emotions.” Meanwhile, at the recent Russian arms fair Army-2022, a developer showed off a robot dog with a rocket-propelled grenade strapped to its back. NIST updates its AI Risk Management Framework to the second draft, making it available for review and comment. DARPA launches the SocialCyber project, a hybrid-AI project aimed at helping to protect the integrity of open-source code. BigScience launches BLOOM (BigScience Large Open-science Open-access Multilingual Language Model), a “bigger than GPT-3” model covering 46 languages, created by a group of over 1,000 AI researchers, that anyone can download and tinker with for free. Researchers at MIT develop artificial synapses that shuttle protons, resulting in synapses 10,000 times faster than biological ones. China's Comprehensive National Science Center claims that it has developed “mind-reading AI” capable of measuring loyalty to the Chinese Communist Party. Researchers at the University of Sydney demonstrate, by examining results directly from neural activity, that human brains identify deepfakes better than people's conscious judgments do. Researchers at the University of Glasgow combine AI with human vision to see around corners, reconstructing 16x16-pixel images of simple objects that the observer could not directly see. GoogleAI publishes research on Minerva, using language models to solve quantitative reasoning problems and dramatically improving the state of the art. Researchers from MIT, Columbia, Harvard, and Waterloo publish work on a neural network that solves, explains, and generates university math problems “at a human level.” CSET makes available the Country Activity Tracker for AI, an interactive tool on tech competitiveness and collaboration. And a group of researchers at UC Merced's Cognitive and Information Sciences Program make available Neural Networks in Cognitive Science. https://www.cna.org/our-media/podcasts/ai-with-ai
Andy and Dave discuss the latest in AI news and research, including an announcement from DeepMind that it is freely providing a database of 200+ million protein structures as predicted by AlphaFold. Researchers at the Max Planck Institute for Intelligent Systems demonstrate how a robot dog can learn to walk in about one hour using a Bayesian optimization algorithm. A chess-playing robot breaks the finger of a seven-year-old boy during a chess match in Moscow. A bill before the Senate Armed Services Committee would require the Department of Defense to accelerate the fielding of new technology to defeat drone swarms. The Chief of Naval Operations Navigation Plan 2022 aims to add 150 uncrewed vessels by 2045. The text-to-image transformer DALL-E is now available in beta. Researchers at Columbia University use an algorithm to identify possible state variables from observations of physical systems (such as a double pendulum) and discover “alternate physics”; the algorithm estimates the intrinsic dimension of the observed dynamics (sketched below) and identifies a candidate set of state variables, but in most cases the scientists found it difficult (if not impossible) to map those variables onto known phenomena. Wolfram Media and Etienne Bernard make Introduction to Machine Learning (with examples in Mathematica) available for free. And Jeff Edmonds and Sam Bendett join for a discussion on their latest report, Russian Military Autonomy in Ukraine: Four Months In, a closer look at the use of unmanned systems by both Russia and Ukraine. https://www.cna.org/our-media/podcasts/ai-with-ai
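For the Columbia state-variables item above, a toy sketch of one ingredient, estimating the intrinsic dimension of observed dynamics, may help make the idea concrete. This is an illustrative stand-in, not the paper's pipeline (which works on raw video via autoencoders): it simulates pendulum data, hides the state behind a random nonlinear embedding, and applies a standard Levina-Bickel nearest-neighbor estimator, which should recover roughly two state variables.

```python
# Toy illustration: estimate the number of state variables (intrinsic
# dimension) of pendulum observations. A stand-in for the paper's approach.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def intrinsic_dimension(X, k=10):
    """Levina-Bickel maximum-likelihood estimate of intrinsic dimension."""
    dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    dists = dists[:, 1:]                      # drop each point's zero self-distance
    inv_m = np.log(dists[:, -1:] / dists[:, :-1]).sum(axis=1) / (k - 1)
    return float(np.mean(1.0 / inv_m))

# Simulate many short trajectories of a frictionless pendulum; the true state
# is two-dimensional: (angle theta, angular velocity omega).
rng = np.random.default_rng(0)
dt, pts = 0.01, []
for _ in range(200):
    theta, omega = rng.uniform(-2, 2, size=2)
    for _ in range(50):
        omega -= 9.8 * np.sin(theta) * dt
        theta += omega * dt
        pts.append((theta, omega))
pts = np.asarray(pts)

# Nonlinearly embed the 2-D state into 10-D "observations," hiding the state.
W = rng.normal(size=(2, 10))
obs = np.sin(pts @ W)

print(f"estimated intrinsic dimension: {intrinsic_dimension(obs):.2f}")  # ~2
```

The paper's harder step, decoding what those roughly two variables mean physically, is exactly where the authors report difficulty.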
Dr. Anya Fink from CNA's Russia Studies Program joins the podcast to discuss the impacts of global sanctions on Russia's technology and AI sector, as detailed in the CNA report A Technological Divorce: The impact of sanctions and the end of cooperation on Russia's technology and AI sector.
Andy and Dave discuss the latest in AI news and research, including an update from DARPA on its Machine Common Sense program, with demonstrations of rapid adaptation to changing terrain, carrying dynamic loads, and understanding how to grasp objects [0:55]. The Israeli military fields new tech from Camero-Tech that allows operators to ‘see through walls,' using pulse-based ultra-wideband micro-power radar in combination with an AI-based algorithm for tracking live targets [5:01]. In autonomous shipping [8:13], the Suzaka, a cargo ship powered by Orca AI, makes a nearly 500-mile voyage “without human intervention” for 99% of the trip; the Prism Courage sails from the Gulf of Mexico to South Korea “controlled mostly” by HiNAS 2.0, a system by Avikus, a subsidiary of Hyundai; and Promare's and IBM's Mayflower Autonomous Ship travels from the UK to Nova Scotia. In large language models [10:09], a Chinese research team unveils Bagualu (‘alchemist pot'), a 174-trillion-parameter model, and claims it runs an AI model as sophisticated as a human brain (not quite, though); Meta releases OPT-66B, a 66-billion-parameter model and the largest open-source AI language model to date; and Russia's Yandex opens its 100-billion-parameter YaLM to public access. Researchers from the University of Chicago publish a model that can predict future crimes “one week in advance with about 90% accuracy” (referring to general crime levels, not specific people and exact locations) and also demonstrate the potential effects of bias in police response and enforcement [13:32]. In a similar vein, researchers from Berkeley, MIT, and Oxford publish attempts to forecast future world events using the neural network system Autocast, and show that forecasting performance still comes in far below a human-expertise baseline [16:37]. Angelo Cangelosi and Minoru Asada provide the (graduate-level) book of the week, Cognitive Robotics. And Dr. Anya Fink joins the podcast to discuss the impacts of global sanctions on Russia's technology and AI sector [21:15].
Andy and Dave discuss the latest in AI news and research, starting with the Department of Defense releasing its Responsible AI Strategy. In the UK, the Ministry of Defence publishes its Defence AI Strategy. The Federal Trade Commission warns policymakers about relying on AI to combat online problems and instead urges them to develop legal frameworks to ensure AI tools do not cause additional harm. YouTuber Yannic Kilcher trains an AI on 4chan's “infamously toxic” Politically Incorrect board, creating a predictably toxic bot, GPT-4chan; he then uses the bot to generate 15,000 posts on the board, quickly receiving condemnation from the academic community. Google suspends and then fires an engineer who claimed that one of its chatbots, LaMDA, had achieved sentience; former Google employees Gebru and Mitchell write an opinion piece saying they had warned this would happen. For the fun site of the week, a mini version of DALL-E comes to Hugging Face. And finally, IBM researcher Kush Varshney joins Andy and Dave to discuss his book, Trustworthy Machine Learning, which provides AI researchers with practical tools and concepts for developing machine learning systems. Visit CNA.org to explore the links mentioned in this episode.
CNA colleagues Kaia Haney and Heather Roff join Andy and Dave to discuss Responsible AI. They discuss the recent Inclusive National Security seminar on AI and National Security: Gender, Race, and Algorithms. The keynote speaker, Elizabeth Adams, spoke on the challenges that society faces in integrating AI technologies in an inclusive fashion, and she identified ways in which consumers of AI-enabled products can ask questions and engage on the topic of inclusivity and bias. The group also discusses the many challenges that organizations face in operationalizing these ideas, including a revisit of the findings from recent medical research, which found an algorithm was able to identify the race of a subject from X-rays and CAT scans, even with identifying features removed. Related links: the Inclusive National Security Series seminar AI and National Security: Gender, Race and Algorithms; the Inclusive National Security webpage; and the InclusiveNatSec mailing list sign-up.
Andy and Dave discuss the latest in AI news and research, starting with an announcement that DoD will be updating its Directive 3000.09 on Autonomy in Weapon Systems, with the new Emerging Capabilities Policy Office leading the way [1:25]. The DoD names Diane Staheli as the new chief for Responsible AI [5:19]. NATO launches an AI strategic initiative, Horizon Scanning, to better understand AI and its potential military implications [6:31]. China unveils an autonomous drone carrier ship, though Dave wonders about the use of the terms unmanned and autonomous [8:59]. Stanford University and the Human-Centered AI Center build on their initiative for foundation models by releasing a call to the community for developing norms on the release of foundation models [10:42]. DECIDE-AI continues to develop its reporting guidelines for early-stage clinical evaluation of AI decision-support systems [14:39]. The Army successfully demonstrates four waves of seven drones, launched by a single operator, during EDGE 22 [18:31]. Researchers from Zhejiang University and Hong Kong University of S&T demonstrate a swarm of physical micro flying robots, fully autonomous, able to navigate and communicate as a swarm, with fully onboard perception, localization, and control [19:58]. Google Research introduces a new text-to-image generator, Imagen, which uses diffusion models to increase the size and photorealism of an image [24:20]. Researchers discover that an AI algorithm can identify race from X-ray and CT images, even when correcting for variations such as body-mass index, but cannot explain why or how [31:21]. And Sonantic uses AI to create the voice lines for Val Kilmer in the new movie Top Gun: Maverick [34:18]. RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm on June 7.
Andy and Dave discuss the latest in AI news and research, starting with the European Parliament adopting the final recommendations of the Special Committee on AI in a Digital Age (AIDA), finding that the EU should not always regulate AI as a technology, but should use intervention proportionate to the type of risk, among other recommendations [1:31]. Synchron enrolls the first patient in the U.S. clinical trial of its brain-computer interface, Stentrode, which does not require drilling into the skull or open brain surgery; it is, at present, the only company to receive FDA approval to conduct clinical trials of a permanently implanted BCI [4:14]. MetaAI releases its 175B-parameter transformer, Open Pre-trained Transformers (OPT), for open use, to include the codebase used to train and deploy the model and the logbook of issues and challenges [6:25]. In research, DeepMind introduces Gato, a “single generalist agent,” which, with a single set of weights, is able to complete over 600 tasks, including chatting, playing Atari games, captioning images, and stacking blocks with a robotic arm; one DeepMind scientist used the results to claim that “the game is over” and it's all about scale now, to which others respond that using massive amounts of data as a substitute for intelligence is perhaps “alt intelligence” [8:48]. In the opinion essay of the week, Steve Johnson pens “AI is mastering language, should we trust what it says?” [18:07]. Daedalus's Spring 2022 issue focuses on AI and Society, with nearly 400 pages and over 25 essays on a variety of AI-related topics [19:06]. And finally, Professor Ido Kanter from Bar-Ilan University joins to discuss his latest neuroscience research, which suggests a new model for how neurons learn, using dendritic branches [20:48]. RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm on June 7. Apply: Sr. Research Specialist (Artificial Intelligence Research) - ESDA Division
Andy and Dave discuss the latest in AI news and research, including a report from the Government Accountability Office recommending that the Department of Defense improve its AI strategies and other AI-related guidance [1:25]. Another GAO report finds that the Navy should improve its approach to uncrewed maritime systems, particularly its lack of accounting for the full costs to develop and operate such systems, and recommends the Navy establish an “entity” with oversight of the portfolio [4:01]. The Army is set to launch a swarm of 30 small drones during the 2022 Experimental Demonstration Gateway Exercise (EDGE 22), which will be the largest group of air-launched effects the Army has tested [5:55]. DoD announces its new Chief Digital and AI Officer, Dr. Craig Martell, former head of machine learning at Lyft and former professor at the Naval Postgraduate School [7:47]. And the National Geospatial-Intelligence Agency (NGA) takes over operational control of Project Maven's GEOINT AI services [9:55]. Researchers from Princeton and the University of Chicago create a deep learning model of “superficial face judgments,” that is, of how humans form impressions of what people are like based on their faces; the researchers note that their dataset deliberately reflects bias [12:05]. And researchers from MIT, Cornell, Google, and Microsoft present a new method for completely unsupervised label assignment in images, with STEGO (self-supervised transformer with energy-based graph optimization), allowing the algorithm to find consistent groupings of labels in a largely automated fashion [18:35]. And elicit.org provides a “research discovery” tool, leveraging GPT-3 to provide insights and ideas for research topics [24:24]. Careers: https://us61e2.dayforcehcm.com/CandidatePortal/en-US/CNA/Posting/View/1624
Andy and Dave discuss the latest in AI news and research, including a proposal from the Ada Lovelace Institute with 18 recommendations to strengthen the EU AI Act. [0:57] NVIDIA updates its Neural Radiance Fields to Instant NeRF, which can reconstruct a 3D scene from 2D images nearly 1,000 times faster than other implementations. [2:53] Nearly 100 Chinese-affiliated researchers publish a 200-page position paper, a “roadmap,” on large-scale models. [4:13] In research, GoogleAI introduces PaLM (Pathways Language Model), at 540B parameters, which demonstrates the ability for logical inference and joke explanation. [7:09] OpenAI announces DALL-E 2, the successor to its previous image-from-text generator, which is no longer confused by mislabeled items; interestingly, it demonstrates greater resolution and diversity than OpenAI's similar GLIDE technology, though human raters did not score it as highly, and DALL-E 2 still has challenges with ‘binding attributes.' [11:32] A white paper from Gary Marcus looks at ‘Deep Learning Is Hitting a Wall: What would it take for AI to make real progress?', which includes an examination of a symbol-manipulation system that beat the best deep learning systems at playing the ASCII game NetHack. [16:10] Professor Chad Jenkins from the University of Michigan returns to discuss the latest developments, including the upcoming Department of Robotics and a robotics undergraduate degree. [19:10] https://www.cna.org/CAAI/audio-video
Andy and Dave discuss the latest in AI news and research, including DoD's 2023 budget request for research, development, test, and evaluation at $130B, around 9.5% higher than the previous year. DARPA announces the “In the Moment” (ITM) program, which aims to create rigorous and quantifiable algorithms for evaluating situations where objective ground truth is not available. The European Parliament's Special Committee on AI in a Digital Age (AIDA) adopts its final recommendations, though the report is still in draft (including that the EU should not regulate AI as a technology, but rather focus on risk). Other EP committees debated the proposal for an “AI Act” on 21 March, with speakers including Tegmark, Russell, and many others. The OECD AI Policy Observatory provides an interactive visual database of national AI policies, initiatives, and strategies. In research, a brain implant allows a fully paralyzed patient to communicate solely by “thought,” using neurofeedback. Researchers from Collaborations Pharmaceuticals and King's College London discover that they could repurpose their AI drug-discovery system to instead generate 40,000 possible chemical weapons. And NukkAI holds a bridge competition and claims its NooK AI “beats eight world champions,” though others take exception to the methods. And Kevin Pollpeter, from CNA's China Studies Program, joins to discuss the role (or lack thereof) of Chinese technology in the Ukraine-Russia conflict. https://www.cna.org/news/AI-Podcast
Andy and Dave discuss the latest in AI news and research, including an announcement that Ukraine's defense ministry has begun to use Clearview AI's facial recognition technology, and that Clearview AI has not offered the technology to Russia [1:10]. In similar news, WIRED provides an overview of a topic mentioned in the previous podcast: using open-source information and facial recognition technology to identify Russian soldiers [2:46]. The Department of Defense announces its classified Joint All-Domain Command and Control (JADC2) implementation plan and also provides an unclassified strategy [3:24]. Stanford University Human-Centered AI (HAI) releases its 2022 AI Index Report, with over 200 pages of information and trends related to AI [5:03]. In research, DeepMind, Oxford, and Athens University present Ithaca, a deep neural network for restoring ancient Greek texts that also provides geographic and chronological attribution; they designed the system to work *with* ancient historians, and the combination achieves a lower error rate (18.3%) than either alone [10:24]. NIST continues refining its taxonomy for identifying and managing bias in AI, to include systemic bias, human bias, and statistical/computational bias [13:51]. Springer-Verlag makes Metalearning, by Pavel Brazdil, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren, available for download; it provides a comprehensive introduction to metalearning and automated machine learning [15:28]. And finally, CNA's Dr. Anya Fink joins Andy and Dave for a discussion about the uses of disinformation in the Ukraine-Russia conflict [17:15]. https://www.cna.org/CAAI/audio-video
Andy and Dave discuss the latest in AI news and research, including a GAO report on AI – Status of Developing and Acquiring Capabilities for Weapon Systems [1:01]. The U.S. Army has awarded a contract for the demonstration of an offensive drone swarm capability (the HIVE small Unmanned Aircraft System), seemingly similar to but distinct from DARPA's OFFSET demo [4:11]. A ‘pitch deck' from Clearview AI reveals its intent to expand beyond law enforcement, with the aim of having 100 billion facial photos in its database within a year [5:51]. Tortoise Media releases a global AI index that benchmarks nations based on their level of investment, innovation, and implementation of AI [7:57]. Research from UC Berkeley and the University of Lancaster shows that humans can no longer distinguish between real faces and fake faces generated by GANs [10:30]. MIT, Aberdeen, and the Centre for the Governance of AI examine trends in computation in machine learning, identifying three eras and trends, including a ‘large-scale model' trend in which large corporations run massive training efforts [13:37]. A tweet from the chief scientist at OpenAI, speculating on the ‘slightly conscious' attribute of today's large neural networks, sparks much discussion [17:23]. And a white paper in the International Journal of Astrobiology examines what intelligence might look like at the planetary level, placing Earth as an immature technosphere [19:04]. And Kush Varshney at IBM publishes for open access a book on Trustworthy Machine Learning, examining issues of trust, safety, and much more [21:29]. Finally, CNA Russia Studies Program member Sam Bendett returns for a quick update on autonomy and AI in the Ukraine-Russia conflict [23:30]. https://www.cna.org/CAAI/audio-video
Andy and Dave discuss the latest in AI news and research, starting with the Aircrew Labor In-Cockpit Automation System (ALIAS) program from DARPA, which flew a UH-60A Black Hawk autonomously, without pilots on board, to include autonomous (simulated) obstacle avoidance [1:05]. Another DARPA program, Robotic Autonomy in Complex Environments with Resiliency (RACER), enters its first phase, focused on high-speed autonomous driving in unstructured environments such as off-road terrain [2:39]. The National Science Board releases its State of U.S. Science and Engineering 2022 report, which shows the U.S. continues to lose its leadership position in global science and engineering [4:30]. The Undersecretary of Defense for Research and Engineering, Heidi Shyu, formally releases her technology priorities, 14 areas grouped into three categories: seed areas, effective adoption areas, and defense-specific areas [6:31]. In research, OpenAI creates InstructGPT in an attempt to better align language models to follow human instructions, resulting in a model with 100x fewer parameters than GPT-3 that provided a user-favored output 70% of the time, though it still suffers from toxic output [9:37]. DeepMind releases AlphaCode, which has succeeded in programming competitions with an average ranking in the top 54% across 10 contests with more than 5,000 participants each, though it approaches the problem through more of a brute-force approach [14:42]. DeepMind and EPFL's Swiss Plasma Center also announce they have used reinforcement learning algorithms to control the plasma in a nuclear fusion reactor (commanding the full set of magnetic control coils of a tokamak). Venture City publishes Timelapse of AI (2028 – 3000+), imagining how the next 1,000 years will play out for AI and the human race [18:25]. And finally, with the Russia-Ukraine conflict continuing to evolve, CNA's Russia Program experts Sam Bendett and Jeff Edmonds return to discuss what Russia has in its inventory when it comes to autonomy and how it might be used in this conflict, wrapping up insights from their recent paper on Russian Military Autonomy in a Ukraine Conflict [22:52]. Listener Note: The interview with Sam Bendett and Jeff Edmonds was recorded on Tuesday, February 22 at 1 pm. At the time of recording, Russia had not yet launched a full-scale invasion of Ukraine. https://www.cna.org/news/AI-Podcast
Andy and Dave discuss the latest in AI news and research, including a report from the School of Public Health in Boston that shows why most “data for good” initiatives failed to impact the COVID-19 health crisis [0:45]. The Department of Homeland Security tests the use of robot dogs (from Ghost Robotics) for border patrol duties [5:00]. Researchers find that public trust in AI varies greatly depending on its application [7:52]. Researchers from Stanford University and Toyota Research Institute find extensive label and model errors in training data, such as over 70% of validation scenes (for publicly available autonomous vehicle datasets) containing at least one missing object box [12:05]. And principal researchers Josh Bongard and Mike Levin join Andy and Dave for more discussion on the latest Xenobots research [18:21]. Follow the link below to visit our website and explore the links mentioned in this episode. https://www.cna.org/CAAI/audio-video
Andy and Dave discuss the latest in AI news and research, including an update from the DARPA OFFSET (OFFensive Swarm-Enabled Tactics) program, which demonstrated the use of swarms in a field exercise, to include one event that used 130 physical drone platforms along with 30 simulated ones [0:33]. DARPA's GARD (Guaranteeing AI Robustness against Deception) program has released a toolkit to help AI developers test their models against attacks. Undersecretary of Defense for Research and Engineering Heidi Shyu announced DoD's technical priorities, including AI and autonomy, hypersonics, quantum, and others; Shyu expressed a focus on easy-to-use human/machine interfaces [3:35]. The White House AI Initiative Office opened an AI Public Researchers Portal to help connect AI researchers with various federal resources and grant-funding programs [8:44]. A Tesla driver faces felony charges (likely a first) for a fatal crash in which Autopilot was in use, though the criminal charges do not mention the technology [12:23]. In research, MIT's CSAIL publishes (worrisome) research on convolutional neural networks that still achieve high accuracy even in the absence of “semantically salient features,” such as when most of the image is grayed out; the research also contains a useful list of known image-classifier model flaws [18:29]. David Ha and Yujin Tang, at Google Brain in Tokyo, publish a white paper surveying recent developments in collective intelligence for deep learning [19:46]. Roman Garnett makes available a graduate-level book on Bayesian optimization. And Doug Blackiston returns to chat about the latest discoveries in Xenobots research and kinematic self-replication [21:54].
Andy and Dave discuss the latest in AI news and research, including the signing of the 2022 National Defense Authorization Act, which contains a number of provisions related to AI and emerging technology [0:57]. The Federal Trade Commission wants to tackle data-privacy concerns and algorithmic discrimination and is considering a wide range of options to do so, including new rules and guidelines [4:50]. The European Commission proposes a set of measures to regulate digital labor platforms in the EU. Engineered Arts unveils Ameca, a gray-faced humanoid robot with “natural-looking” expressions and body movements [7:07]. And DARPA launches its AMIGOS project, aimed at automatically converting training manuals and videos into augmented-reality environments [13:16]. In research, scientists at Bar-Ilan University in Israel upend conventional wisdom on neural responses by demonstrating that the duration of the resting time (post-excitation) can exceed 20 milliseconds, that the resting period is sensitive to the origin of the input signal (e.g., left versus right), and that the neuron has a sharp transition from the refractory period to full responsiveness, without an intermediate stutter phase [15:30]. Researchers at Victoria University use brain cells to play Pong using electric signals and demonstrate that the cells learn much faster than current neural networks, reaching after 10 or 15 rallies the point that computer-based AIs reach only after 5,000 rallies [19:37]. MIT researchers present evidence that ML is starting to look like human cognition, comparing various aspects of how neural networks and human brains accomplish their tasks [24:34]. And OpenAI creates GLIDE, a 3.5B-parameter text-to-image generation model that generates even higher-quality images than DALL-E, though it still has trouble with “highly unusual” scenarios [29:30]. The Santa Fe Institute publishes The Complex Alternative: Complexity Scientists on the COVID-19 Pandemic, 800 pages on how complexity interwove through the pandemic [33:50]. And Chris Peter uses an algorithm to create a short movie after it watched Hitchcock's Vertigo 20 times [35:22]. Please visit our website to explore the links mentioned in this episode. https://www.cna.org/CAAI/audio-video
Andy and Dave welcome the hosts of the weekly podcast AI Today, Kathleen Walch and Ronald Schmelzer. On AI Today, Kathleen and Ron discuss topics related to how AI is making impacts around the globe, with a focus on having discussions with industry and business leaders to get their thoughts and perspectives on AI technologies, applications, and implementation challenges. Ron and Kathleen also co-founded Cognilytica, an AI research, education, and advisory firm. The four podcast hosts discuss a variety of topics, including the origins of the AI Today podcast, AI trends in industry and business, AI winters, and the importance of education. https://www.cna.org/CAAI/audio-video
Andy and Dave discuss the latest in AI news and research, starting with the US Department of Defense creating the new position of Chief Digital and AI Officer, subsuming the Joint AI Center, the Defense Digital Service, and the office of the Chief Data Officer [0:32]. Member states of UNESCO adopt the first-ever global agreement on the ethics of AI, which includes recommendations on protecting data, banning social scoring and mass surveillance, helping with monitoring and evaluation, and protecting the environment [3:26]. European Digital Rights and 119 civil society organizations launch a collective call for an AI Act to articulate fundamental rights (for humans) regarding AI technology and research [6:02]. The Future of Life Institute releases Slaughterbots 2.0: “if human: kill()” ahead of the 3rd session in Geneva of the Group of Governmental Experts discussing lethal autonomous weapons systems [7:15]. In research, Xenobots 3.0, the living robots made from frog cells, demonstrate the ability to replicate themselves kinematically, at least for a couple of generations (extended to four generations by using an evolutionary algorithm to model ideal structures for replication) [12:23]. And researchers from DeepMind, Oxford, and Sydney demonstrate the ability to collaborate with machine learning algorithms to discover new results in mathematics (in knot theory and representation theory), though another researcher pushes back on the utility of the claims [17:57]. And finally, Dr. Mike Stumborg joins Dave and Andy to discuss research in human-machine teaming, why it's important, and where the research will be going [21:44].
Andy and Dave discuss the latest in AI news and research, [0:53] starting with OpenAI's announcement that it is making GPT-3 generally available through its API (though developers still require approval for production-scale applications). [3:09] For DARPA's Gremlins program, two Gremlin Air Vehicles “validated all autonomous formation flying positions and safety features,” and one of the autonomous aircraft demonstrated airborne recovery to a C-130. [4:54] After three years, DARPA announces the winners of its Subterranean Robot Challenge, awarding prizes to teams competing in both real-world and virtual environments. [7:03] The Defense Information Systems Agency releases its Strategic Plan for 2022 through 2024, which includes plans to employ AI capabilities for defensive cyber operations. [8:08] The Department of Defense announces a new cloud initiative to replace the failed JEDI contract, with invitations to Amazon, Microsoft, Google, and Oracle to bid. [11:52] In research, DeepMind, Google Brain, and former World Chess Champion Vladimir Kramnik join forces to peer into the guts of AlphaZero, with initial results showing strong evidence for the existence of “human-understandable concepts of surprising complexity” within the neural network (a sketch of this kind of probing appears below). [17:48] Andrea Roli, Johannes Jaeger, and Stu Kauffman pen a white paper on how organisms come to know the world, and from these observations derive fundamental limits on artificial general intelligence. [20:34] MIT Press makes available an elementary introduction to Bayesian Models of Perception and Action, by Wei Ji Ma, Konrad Paul Kording, and Daniel Goldreich. [23:40] And finally, Sam Bendett and Jeff Edmonds drop by for a chat on the latest and greatest in Russian AI and autonomy, including an update on recent military expos and other AI-related events happening in Russia. https://www.cna.org/CAAI/audio-video
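On the AlphaZero item: the general family of techniques for probing a network's internal activations for human concepts can be sketched in a few lines. This is an illustrative linear probe on synthetic stand-in data, not the authors' code; the "activations" and concept labels here are placeholders.

```python
# Illustrative "concept probe": if a simple linear classifier can predict a
# human-defined concept (e.g., a chess notion like "material advantage") from
# a layer's activations, the concept is plausibly encoded in that layer.
# Synthetic data stands in for AlphaZero activations and concept labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 5000, 256
activations = rng.normal(size=(n, d))   # stand-in for hidden-layer activations
direction = rng.normal(size=d)          # pretend the concept is encoded linearly
concept = (activations @ direction + rng.normal(scale=2.0, size=n)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(activations, concept, test_size=0.2,
                                          random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out probe accuracy: {probe.score(X_te, y_te):.2f}")
# High held-out accuracy is evidence (not proof) that the concept is linearly
# decodable from the probed layer; comparing layers shows where it emerges.
```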
Andy and Dave discuss the latest in AI news and research, including the Defense Innovation Unit releasing Responsible AI Guidelines in Practice, which seek to ensure tech contractors adhere to the Department of Defense's existing ethical principles for AI [0:53]. “Meta” (the Facebook re-brand) announces that it will end its use of facial recognition software and delete data on more than a billion people, though it will retain the technology for other products in its metaverse [3:12]. Australia's information and privacy commissioners issue an order to Clearview AI to stop collecting facial biometrics from Australian citizens and to destroy all existing data [5:16]. The U.S. Marine Corps releases its Talent Management 2030 report, which describes the need for more cognitively mature Marines and seeks to “leverage the power of AI” and to be “at the vanguard of service efforts to operationalize AI” [7:39]. DOD releases its 2021 Report on Military and Security Developments Involving the People's Republic of China, which describes China's use of AI technology in influence operations, the digital silk road, military capabilities, and more [10:46]. A report on a competition using unrestricted adversarial examples at the 2021 Conference on Computer Vision and Pattern Recognition includes as co-authors several members of the Army Engineering University of the People's Liberation Army [11:43]. Research from Okinawa and Australia demonstrates that deep reinforcement learning can produce accurate quantum control, even with noisy measurements, using a small particle moving in a double-well potential [14:31]. MIT Press makes available a nearly 700-page book, Algorithms for Decision Making, organized around four sources of uncertainty (outcome, model, state, and interaction) [18:01]. And Dr. Amanda Kerrigan and Kevin Pollpeter join Andy and Dave to discuss their latest research into what China is doing with AI technology, including a bi-weekly newsletter on the topic and a preliminary analysis of China's view of intelligent warfare [20:06]. https://www.cna.org/CAAI/audio-video
Andy and Dave discuss the latest in AI news and research, including: NATO releases its first AI strategy, which included the announcement of a one billion euro “NATO innovation fund.” [0:52] Military research labs in the US and UK collaborate on autonomy and AI in a combined demonstration, integrating algorithms and automated workflows into military operations. [2:58] A report from CSET and MITRE identifies that the Department of Defense already has a number of AI and related experts, but that the current system hides this talent. [6:45] The National AI Research Resource Task Force partners with Stanford's Human-Centered AI and the Stanford Law School to publish Building a National AI Research Resource: A Blueprint for the National Research Cloud. [6:45] And in a trio of “AI fails,” a traffic camera in the UK mistakes a woman for a car and issues a fine to the vehicle's owner; [9:10] the Allen Institute for AI introduces Delphi as a step toward developing AI systems that behave ethically (though it sometimes thinks that it's OK to murder everybody if it creates jobs); [10:07] and a WSJ report reveals that Facebook's automated moderation tools were falling far short on accurate identification of hate speech and videos of violence and incitement. [12:22] Ahmed Elgammal from Rutgers teams up with Playform to compose two movements for Beethoven's Tenth Symphony, for which the composer left only sketches before he died. And finally, Andy and Dave welcome Dr. Heather Wolters and Dr. Megan McBride to discuss their latest research on the Psychology of (Dis)Information, with a pair of publications, one providing a primer on key psychological mechanisms, and another examining case studies and their implications. The Psychology of (Dis)information: A Primer on Key Psychological Mechanisms: https://www.cna.org/CNA_files/PDF/The%20Psychology-of-(Dis)information-A-Primer-on-Key-Psychological-Mechanisms.pdf The Psychology of (Dis)information: Case Studies and Implications: https://www.cna.org/CNA_files/PDF/The-Psychology-of-(Dis)information-Case-Studies-and-Implications.pdf Follow the link below to visit our website and explore the links mentioned in the episode. https://www.cna.org/CAAI/audio-video
Welcome to Season 5.0 of AI with AI! Andy and Dave discuss the latest in AI news and research, including: The White House calls for an AI “bill of rights” and invites public comment via a request for information. Nathan Benaich and Ian Hogarth publish the fourth annual State of AI Report, 2021. [1:50] OpenAI uses reinforcement learning from human feedback and recursive task decomposition to improve algorithms' abilities to summarize books. [3:14] IEEE Spectrum publishes a paper that examines the diminishing returns of deep learning, questioning the long-term viability of the technology. [5:12] In related news, Nvidia and Microsoft release a 530-billion-parameter language model, the Megatron-Turing Natural Language Generation model (MT-NLG). [6:54] DeepMind demonstrates the use of a GAN in improving high-resolution precipitation “nowcasting.” [10:05] Researchers from Waterloo, Guelph, and IIT Madras publish research on deep learning that can identify early warning signals of tipping points. [11:54] Military robot maker Ghost Robotics creates a robot dog with a rifle, the Special Purpose Unmanned Rifle, or SPUR. [14:25] And Dr. Larry Lewis joins Dave and Andy to discuss the latest report from CNA on Leveraging AI to Mitigate Civilian Harm, which describes the causes of civilian harm in military operations, identifies how AI could protect civilians from harm, and identifies ways to lessen the infliction of suffering, injury, and destruction overall. [16:36] Follow the link below to visit our website and explore the links mentioned in the episode. https://www.cna.org/CAAI/audio-video
Andy and Dave discuss the latest in AI news and research, including the UK government's release of its National AI Strategy, a 10-year plan to make the country a global AI superpower [1:28]. Stanford University's One Hundred Year Study on AI project releases its second report, Gathering Strength, Gathering Storms, assessing developments in AI between 2016 and 2021 around fourteen framing questions. [4:57] The UN High Commissioner for Human Rights calls for a moratorium on the sale and use of AI systems that pose serious risks to human rights until adequate safeguards are put into place. [10:07] Jack Poulson at Tech Inquiry maps out US government use of AI-based weapons and surveillance, using publicly available information. [12:07] Researchers at Hebrew University examine the potential of single cortical neurons as deep artificial neural networks, finding that a deep neural network with 5-8 layers is necessary to approximate one. [16:10] Researchers at Stanford review the different architectures of neuronal circuits in the human brain, identifying different circuit motifs. [20:02] Other research at Stanford shows the ability to image and track moving non-line-of-sight objects using a single optical path (shining a laser through a keyhole). [22:05] And researchers at MIT, Nvidia, and Technion demonstrate that a neural network can identify the number and activity of people in a room solely by examining a blank wall in the room. [26:33] Nils Thuerey's research group publishes Physics-Based Deep Learning, introducing physical models into deep learning to reconcile data-centered viewpoints with physical simulations. [30:34] Ori Cohen compiles the Machine and Deep Learning Compendium, an open resource (GitBook) on over 500 topics with summaries, links, and articles. [32:21] The Allen Institute for AI releases a web tool that converts PDF papers into HTML for more rapid web publishing of scientific papers. [33:20] And the Museum of Wild and Newfangled Art: This Show is Curated by a Machine invites viewers to ponder why they think an AI chose the works within. [34:43]
Andy and Dave discuss the latest in AI news and research, including: [1:28] Researchers from several universities establish the AIMe registry, a community-driven reporting platform for providing information and standards of AI research in biomedicine. [4:15] Reuters publishes a report with insight into examples at Google, Microsoft, and IBM where ethics reviews have curbed or canceled projects. [8:11] Researchers at the University of Tübingen create an AI method for significantly accelerating super-resolution microscopy, which makes heavy use of synthetic training data. [13:21] The US Navy establishes Task Force 59 in the Middle East, which will focus on the incorporation of unmanned and AI systems into naval operations. [15:44] The Department of Commerce establishes the National AI Advisory Committee, in accordance with the National AI Initiative Act of 2020. [19:02] Jess Whittlestone and Jack Clark publish a white paper on Why and How Governments Should Monitor AI Development, with predictions of the types of problems that will occur with inaction. [19:02] The Center for Security and Emerging Technology publishes a series of data snapshots related to AI research, drawn from over 105 million publications. [23:53] In research, Google Research, Brain Team, and University of Montreal take a broad look at deep reinforcement learning research and find discrepancies between conclusions drawn from point estimates (fewer runs, due to high computational costs) and those drawn from more thorough statistical analysis, calling for a change in how to evaluate performance in deep RL (a toy illustration follows below). [30:13] Quebec AI Institute publishes a survey of post-hoc interpretability for neural natural language processing. [31:39] MIT Technology Review dedicates its Sep/Oct 2021 issue to The Mind, with articles all about the brain. [32:05] Katy Börner publishes Atlas of Forecasts: Modeling and Mapping Desirable Futures, showing how models, maps, and forecasts inform decision-making in education, science, technology, and policy-making. [33:16] And DeepMind, in collaboration with University College London, offers a comprehensive introduction to modern reinforcement learning, with 13 lectures (~1.5 hours each) on the topic. Follow the link below to visit our website and explore the links mentioned in the episode. https://www.cna.org/CAAI/audio-video CNA Careers Page: https://www.cna.org/careers/
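To make the deep-RL evaluation point concrete with a toy example: with only a handful of runs, a point estimate of mean score can look quite different from an interval that accounts for run-to-run variance. The snippet below is a generic percentile-bootstrap sketch over made-up scores, not the paper's code (which advocates more robust statistics, such as interquartile means, across many runs).

```python
# Toy illustration: point estimates vs. bootstrap confidence intervals
# over a small number of RL training runs. Scores are synthetic.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(loc=1.0, scale=0.5, size=5)  # e.g., 5 runs of one algorithm

def bootstrap_ci(x, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean of x."""
    means = [np.mean(rng.choice(x, size=len(x), replace=True))
             for _ in range(n_resamples)]
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return lo, hi

print("point estimate:", scores.mean())
print("95% bootstrap CI:", bootstrap_ci(scores))
# With so few runs the interval is wide, so ranking algorithms by the
# point estimate alone can easily mislead.
```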
Andy and Dave were recently interviewed on the AI Today podcast. On AI Today, hosts Kathleen Walch and Ron Schmelzer regularly interview thought leaders who are implementing AI and cognitive technology at various companies and agencies; in this episode, they interview Andy Ilachinski and David Broyles, hosts of the AI with AI podcast, which explores the latest breakthroughs in artificial intelligence and autonomy, as well as their military implications. Naturally, the discussion covers some of the biggest trends emerging in AI today, some of the challenges to AI adoption (especially in military applications), and some of the surprising insights and trends Andy and Dave have seen over the four years they have hosted their podcast.
Andy and Dave discuss the latest in AI news and research, including: 0:57: The Allen Institute for AI and others come together to create a publicly available “COVID-19 Challenges and Directions” search engine, building off of the corpus of COVID-related research. 5:06: Researchers with the University of Warwick perform a systematic review of test accuracy for the use of AI in image analysis of breast cancer screening and find that most (34 of 36) AI systems were less accurate than a single radiologist, and all were less accurate than a consensus of two or more radiologists (among other concerning findings). 10:19: A US judge rejects an appeal for the AI system DABUS to own a patent, noting that US federal law requires an “individual” to be an owner, and the legal definition of an “individual” is a natural person. 17:01: The US Patent and Trademark Office uses machine learning to analyze the history of AI in patents. 19:42: BCS publishes Priorities for the National AI Strategy, as the UK seeks to set global AI standards. 20:42: In research, MIT, Northeastern, and U Penn explore the challenges of discerning emotion from a person's facial movements (which largely depends on context), and highlight the reasons why facial recognition algorithms will struggle with this task. 28:02: GoogleAI uses diffusion models to generate high-fidelity images; the approach slowly adds noise to corrupt the training data, and then uses a neural network to reverse that corruption (a toy sketch follows below). 35:07: Springer-Verlag makes AI for a Better Future, by Bernd Carsten Stahl, available for open access. 36:19: Thomas Smith, co-founder of Gado Images, chats with GPT-3 about the COVID-19 pandemic and finds that it provides some interesting responses to his questions. Follow the link below to visit our website and explore the links mentioned in the episode. https://www.cna.org/CAAI/audio-video
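For the curious, the corruption half of the diffusion idea fits in a few lines: under a variance schedule, the noised sample at step t has a closed form. The sketch below is a toy illustration under assumed parameters, not Google's models; the denoising network (omitted) would be trained to predict the added noise, so that sampling can run the corruption process in reverse.

```python
# Toy sketch of the forward (noising) process in a diffusion model.
# The schedule and "image" are illustrative stand-ins.
import numpy as np

def forward_noise(x0, t, betas):
    """Sample x_t directly from x_0 under the variance schedule (closed form)."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])   # cumulative signal retention
    eps = np.random.randn(*x0.shape)            # Gaussian corruption
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

betas = np.linspace(1e-4, 0.02, 1000)  # linear noise schedule over 1000 steps
x0 = np.random.rand(32, 32)            # stand-in for a training image
x_t, eps = forward_noise(x0, t=500, betas=betas)
# Training would teach a network to predict `eps` from (x_t, t);
# generation then reverses the corruption step by step from pure noise.
```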
Andy and Dave discuss the latest in AI news and research, including: 0:46: The GAO releases a more extensive report on US Federal agency use of facial recognition technology, including the purposes for which agencies use it. 3:24: The US Department of Homeland Security Science and Technology Directorate publishes its AI and ML Strategic Plan, with an implementation plan to follow. 5:39: Ada Lovelace Institute, AI Now Institute, and Open Government Partnership publish a global study on Algorithmic Accountability for the Public Sector, which focuses on accountability mechanisms stemming from laws and policy. 9:04: Research from North Carolina State University shows that the benefits of autonomous vehicles will outweigh the risks, with proper regulation. 13:18: Research Section Introduction. 14:24: Researchers at the Allen Institute for AI and the University of Washington demonstrate that artificial agents can learn generalizable visual representations during interactive gameplay, embodied within an environment (AI2-THOR); agents demonstrated knowledge of the principles of containment, object permanence, and concepts of free space. 19:37: Researchers at Stanford University introduce BEHAVIOR (Benchmark for Everyday Household Activities in Virtual, Interactive, and ecOlogical enviRonments), which establishes benchmarks for simulation of 100 activities that humans often perform at home. 24:02: A survey examines the dynamics of research communities and AI benchmarks, suggesting that hybrid, multi-institution, and persevering communities are the ones more likely to improve state-of-the-art performance, among other things. 28:54: Springer-Verlag makes Representation Learning for Natural Language Processing available online. 32:09: Terry Sejnowski and Stephen Wolfram publish a three-hour discussion on AI and other topics. Follow the link below to visit our website and explore the links mentioned in the episode. https://www.cna.org/CAAI/audio-video
Andy and Dave discuss the latest in AI news, including an overview of Tesla's “AI Day,” which, among other things, introduced the Dojo supercomputer specialized for ML, the HydraNet single deep-learning model architecture, and a “humanoid robot,” the Tesla Bot. Researchers at Brown University introduce neurograins, grain-of-salt-sized wireless neural sensors, using nearly 50 of them to record neural activity in a rodent. The Associated Press reports on the flaws in ShotSpotter's AI gunfire detection system, including one case in which such evidence sent a man to jail for almost a year before a judge dismissed the case. The Department of the Navy releases its Science and Technology Strategy for Intelligent Autonomous Systems (publicly available), including an Execution Plan (available only through government channels). The National AI Research Resource Task Force extends its deadline for public comment in order to elicit more responses. The Group of Governmental Experts on Certain Conventional Weapons holds its first 2021 session for the discussion of lethal autonomous weapons systems; its agenda has moved on to promoting a common understanding and definition of LAWS. And Stanford's Center for Research on Foundation Models publishes a manifesto, On the Opportunities and Risks of Foundation Models, seeking to establish high-level principles for massive models (such as GPT-3) upon which many other AI capabilities build. In research, Georgia Institute of Technology, Cornell University, and IBM Research AI examine how the “who” in Explainable AI (e.g., people with or without a background in AI) shapes the perception of AI explanations. And Alvy Ray Smith pens the book of the week, with A Biography of the Pixel, examining the pixel as the “organizing principle of all pictures, from cave paintings to Toy Story.” Follow the link below to visit our website and explore the links mentioned in the episode. https://www.cna.org/CAAI/audio-video
Andy and Dave discuss the latest in AI news, including an upgraded version of the OpenAI technology behind GitHub's Copilot, called Codex, which can not only complete code but also create it (based on natural language inputs from its users). The National Science Foundation is providing $220 million in grants to 11 new National AI Research Institutes (including two fully funded by the NSF). A new DARPA program, Shared-Experience Lifelong Learning (ShELL), seeks to explore how AI systems can share their experiences with each other. The Senate Committee on Homeland Security and Governmental Affairs introduces two AI-related bills: the AI Training Act (to establish a training program to educate the federal acquisition workforce), and the Deepfake Task Force Act (to task DHS to produce a coordinated plan on how a “digital content provenance” standard might assist with decreasing the spread of deepfakes). And the Inspectors General of the NSA and DoD partner to conduct a joint evaluation of NSA's integration of AI into signals intelligence efforts. In research, DeepMind creates the Perceiver IO architecture, which works across a wide variety of input and output spaces, challenging the idea that different kinds of data need different neural network architectures. DeepMind also publishes PonderNet, which learns to adapt the amount of computation based on the complexity of the problem rather than the size of the inputs (a toy sketch follows below). Research from MIT uses the corpus of US patents to predict the rate of technological improvement for all technologies. The European Parliamentary Research Service publishes a report on Innovative Technologies Shaping the 2040 Battlefield. Quanta Magazine publishes an interview with Melanie Mitchell, which includes a deeper discussion of her research on analogies. And Springer-Verlag makes available for free An Introduction to Ethics in Robotics and AI (by Christoph Bartneck, Christoph Lütge, Alan Wagner, and Sean Welsh). Follow the link below to visit our website and explore the links mentioned in the episode. https://www.cna.org/CAAI/audio-video
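A toy sketch of the PonderNet idea: a recurrent step function emits a halting probability at each iteration, so easy inputs can stop early while hard inputs take more steps. Everything below (the shapes, the GRU cell, the whole-batch halting shortcut) is illustrative rather than DeepMind's implementation, and the probability-weighted training loss from the paper is omitted.

```python
# Toy sketch of PonderNet-style adaptive computation (illustrative only).
import torch
import torch.nn as nn

class PonderSketch(nn.Module):
    def __init__(self, dim: int = 64, max_steps: int = 20):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)   # one "pondering" step
        self.halt = nn.Linear(dim, 1)      # per-step halting probability
        self.out = nn.Linear(dim, 1)       # prediction head
        self.max_steps = max_steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.zeros(x.size(0), self.cell.hidden_size)
        for _ in range(self.max_steps):
            h = self.cell(x, h)
            p_halt = torch.sigmoid(self.halt(h)).squeeze(-1)
            # Inference-time shortcut: stop once the whole batch draws "halt".
            # (PonderNet halts per example and trains with a
            # probability-weighted loss, omitted here.)
            if bool((torch.rand_like(p_halt) < p_halt).all()):
                break
        return self.out(h)

model = PonderSketch()
y = model(torch.randn(8, 64))  # batch of 8 inputs, 64 features each
```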
Andy and Dave welcome the hosts of the weekly podcast AI Today, Kathleen Walch and Ronald Schmelzer. On AI Today, Kathleen and Ron discuss topics related to how AI is making impacts around the globe, with a focus on having discussions with industry and business leaders to get their thoughts and perspectives on AI technologies, applications, and implementation challenges. Ron and Kathleen also co-founded Cognilytica, an AI research, education, and advisory firm. The four podcast hosts discuss a variety of topics, including the origins of the AI Today podcast, AI trends in industry and business, AI winters, and the importance of education. Related links: CPMAI Methodology: https://www.cognilytica.com/cpmai/ Cognilytica website: https://www.cognilytica.com/ AI in Government community: https://www.aiingovernment.com/ On Twitter: Cognilytica: @Cognilytica Kathleen Walch: @Kath0134 Ron Schmelzer: @rschmelzer