Host Paul Spain is joined by Jens Anders (Samsung NZ) as they delve into the exciting and ever-evolving world of technology. They discuss the expansive product portfolio Jens oversees at Samsung New Zealand, including everything from mobile phones to home appliances and more. They also dive into tech news, exploring topics like AI regulation, online safety, military AI, and the impact of technology in healthcare. Plus, privacy issues for travellers to the US and Meta's legal dramas. Join us as we navigate these subjects and discover how technology is shaping our lives today. Special thanks to our show partners 2degrees, One New Zealand, Spark New Zealand, HP and Gorilla Technology.
Military AI is no longer science fiction—it's a reality shaping the future of defense. Inspired by Military AI: High Stakes Require Pragmatic, Risk-Based Approach, this episode takes a deep dive into the evolving role of AI in warfare, intelligence, and defense strategy.

We explore how AI is transforming military operations, from data-driven intelligence analysis and predictive modeling to autonomous defense systems like the Iron Dome. But with high stakes come high risks—including AI data manipulation, over-reliance on automation, and ethical concerns surrounding autonomous weapons.

Join us as we analyze the practical and ethical dilemmas posed by AI in defense and why global leadership must shape AI before AI shapes us.

What You'll Learn in This Episode:

1. The Three Key Areas of Military AI Adoption
Efficiency: How AI is streamlining compliance, logistics, and decision-making within the military.
Intelligence & Data: How AI-powered systems analyze vast amounts of data to predict and prevent threats.
Autonomy: The potential—and risks—of AI-driven battlefield automation, from drones to defense systems.

2. The Benefits of AI in Military Strategy
How AI reduces human error and speeds response times in critical situations.
Predictive analytics in intelligence—tracking enemy movements and identifying threats before they emerge.
The potential for AI to minimize casualties by handling high-risk military operations.

3. The Risks & Ethical Dilemmas of Military AI
Data integrity concerns—how adversaries could manipulate AI-driven intelligence.
The danger of AI over-reliance—losing critical human skills in military decision-making.
Autonomous weapons and legal concerns—why military AI must have human-in-the-loop oversight.

4. Global AI Leadership & the Future of AI in Warfare
The geopolitical race for AI supremacy—how the U.S., China, and other nations are developing AI for defense.
Why international law and military ethics must evolve alongside AI advancements.
How nations can adopt a pragmatic, risk-based approach to AI in defense while maintaining global stability.

Key Takeaways:
AI is transforming military strategy, enhancing efficiency, intelligence analysis, and battlefield autonomy.
While AI-driven defense systems can save lives, they also introduce risks, including data manipulation, AI bias, and automation failures.
A pragmatic, risk-based approach is essential to ensure that AI remains a tool for human decision-making, not a replacement for it.
Global AI leadership will determine the future of warfare, requiring ethical frameworks, policy oversight, and international collaboration.

Subscribe to our podcast for expert insights on AI, defense strategy, and global security. Visit The Future of Commerce for the latest research on how AI is shaping military innovation. Share this episode with leaders, defense professionals, and anyone invested in the future of AI-driven security.
Neoborn Caveman rips into OpenAI's unholy alliance with nuclear labs, exposing how the military-industrial complex is birthing its own digital overlord. From ghost code fragments to nuclear-powered AI, watch the chess pieces move into checkmate position.

Buy Neoborn's book on how the once-great Canada is morphing into a totalitarian regime: https://www.amazon.com/dp/B0DRYV6VJJ - it has NC's serious side and thorough research, with a glimpse of hope.

Free speech marinated in comedy...

Hosted on Acast. See acast.com/privacy for more information.
This episode is sponsored by Oracle. Oracle Cloud Infrastructure, or OCI, is a blazing fast and secure platform for your infrastructure, database, application development, plus all your AI and machine learning workloads. OCI costs 50% less for compute and 80% less for networking, so you're saving a pile of money. Thousands of businesses have already upgraded to OCI, including MGM Resorts, Specialized Bikes, and Fireworks AI. Cut your current cloud bill in HALF if you move to OCI now: https://oracle.com/eyeonai

In this episode of Eye on AI, Mark Bailey, Associate Professor at the National Intelligence University, joins Craig Smith to explore the rapidly evolving role of AI in modern warfare—its promises, risks, and the ethical dilemmas it presents. Mark shares his expertise on AI autonomy in military strategy, breaking down the differences between automation and true autonomy. We discuss how AI-driven systems could revolutionize combat by reducing human casualties, improving precision, and enhancing battlefield decision-making. But with these advancements come serious concerns—how do we prevent automation bias? Can we trust AI to make life-or-death decisions? And will AI-driven warfare lower the threshold for conflict, making war more frequent? We also examine the global AI arms race, the impact of AI on defense policies, and the ethical implications of fully autonomous weapons. Mark unpacks key challenges like the black box problem, AI alignment issues, and the long-term consequences of integrating AI into military operations. He also shares insights from his latest book, in which he calls for international AI regulations to prevent an uncontrolled escalation of AI warfare. With AI-driven drone swarms, autonomous targeting systems, and defense innovations shaping the future of global security, this conversation is a must-watch for anyone interested in AI, defense technology, and the moral questions of war in the digital age.
Don't forget to like, subscribe, and hit the notification bell for more discussions on AI, technology, and the future of intelligence! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) AI's Role in Warfare (02:02) Introducing Dr. Mark Bailey (04:02) Automation vs. Autonomy in Military AI (12:02) AI Warfare: A Threat to Global Stability? (17:10) Inside Dr. Bailey's Book: Ethics & AI in War (20:05) AI Reliability in Warfare (23:28) The Future of AI Swarms & Autonomous Warfare (24:17) Who Decides How AI is Used in War? (28:05) The Future of AI & Military Ethics
During 2024, efforts to address the governance of military artificial intelligence (AI) have gained momentum. Yet in the same year, we have also witnessed the growing use of AI decision support systems during armed conflict, and it is becoming clearer that such systems may pose a significant challenge to peace and stability. These developments raise questions about the current approach toward military AI governance. In this post, Elke Schwarz, Professor of Political Theory at Queen Mary University of London, argues that efforts toward governance are complicated by a number of factors intrinsic to contemporary AI systems in targeting decisions. She highlights three in particular: (1) the character of current AI systems, which rests on iteration and impermanence; (2) the dominance of private sector producers in the sector and the financial ethos that grows from this; and (3) the expansive drive implicit in AI systems themselves, especially predictive AI systems in targeting decisions. These realities of AI suggest that the risks are perhaps greater than often acknowledged.
Welcome to another Weekly Update as we come into the end of November! Let's get straight into it!

Rise of Robotheism - Is AI becoming a new religion? While I don't anticipate many people will sign up to worship at the altar of OpenAI, there's a growing trend of tech leaders and people looking to AI to save us. It has gained enough popularity that it even has a label.

AI Physician Replacement - Elon Musk recently went on record saying it won't be long before AI replaces doctors and lawyers, and some recent findings out of Johns Hopkins would give some the impression he's right. However, I think a deeper analysis would argue: not quite.

AI Work Transformation - A Microsoft software engineer recently shared that while AI is doing much of his coding, he still has plenty of work to do. It seems some of the concerns about AI replacing workers aren't holding water, and some research about organizational adoption will further mitigate the risks.

Autonomous Military - The US military is confident its multi-billion-dollar investment in AI will pay dividends, but what kind of metrics do you use to measure success? And what ethical considerations are being taken? This is essential, as we're already seeing fully autonomous weapons being field-tested.

AI Scammer Defense - The elderly are primary targets for international scammers, who ruin the lives of countless people daily. However, I love how one UK telecom company is fighting back with a cleverly named AI, “dAIsy.”

Show Notes: In this Weekly Update, Christopher explores the convergence of AI, technology, and the human experience. He discusses 'robo-theism' and the belief among certain tech leaders that AI could become a new deity. Christopher responds to Elon Musk's comments about the potential of AI to replace doctors and lawyers, also highlighting recent research from Johns Hopkins University. Additionally, he examines the slow adoption of AI by companies due to data and infrastructure challenges.
He further digs into the rise of AI in the military, raising ethical concerns about autonomous weapons. Finally, on a lighter note, he shares how a UK telecom company is using an AI bot named dAIsy to waste the time of phone scammers. 00:00 - Introduction and Welcome 01:30 - Exploring Robo-Theism and AI as a Deity 18:02 - The Future of AI in Medicine and Law 26:15 - AI in Software Development 37:36 - AI in the Military: Ethical and Philosophical Concerns 48:38 - AI vs. Scammers: A Clever Solution 51:16 - Conclusion and Final Thoughts #robotheism #healthcare #futureofwork #ai #military
Associate Professor of Media at UNSW, Michael Richardson examines how technology, culture, and power shape knowledge in war, security and surveillance. He warns that AI's rapid deployment can lead to catastrophic outcomes in warfare, where algorithms determine lethal targets based on biased data and predictive analytics. Listen as Michael calls for a global resistance against militarised AI, and the need for an ethical standard in technology, as the consequences of these advancements could redefine the very nature of warfare and humanity itself. Presented as part of The Ethics Centre's Festival of Dangerous Ideas, supported by UNSW Sydney.See omnystudio.com/listener for privacy information.
Tariffs, Pygmy Goat, Schneider, SQLite and Dixie Flatline, Deepfakes, Military AI, Josh Marpet, and more on the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-428
What do you love about the show? Let us know!

In today's episode of Discover Daily, we explore how Chinese military researchers have adapted Meta's open-source Llama model to create ChatBIT, an AI tool capable of performing at 90% of ChatGPT-4's capability. This development raises critical questions about the control of open-source AI technology and its potential military applications, with Meta's policy director explicitly stating that such military use violates their acceptable use policy.

We then uncover the fascinating secrets behind the Pantheon's 2000-year-old self-healing concrete, where recent MIT research reveals that special lime clasts created through "hot mixing" allow the ancient Roman structure to repair its own cracks. This discovery could reimagine modern construction methods and significantly reduce the environmental impact of concrete production, which currently accounts for 8% of global greenhouse gas emissions.

Lastly, we look at a new discovery about the sea walnut, a transparent comb jelly that can reverse its aging process when faced with stressful conditions. This extraordinary creature, which becomes only the third known species capable of age reversal, transforms from its adult form back to a juvenile stage by regrowing tentacles and adopting different feeding behaviors, offering potential insights into the biology of aging and regenerative medicine.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/chinese-military-builds-llama-cATtO04XQQmPAEHGEmR1AQ
https://www.perplexity.ai/page/the-pantheon-s-2000-year-old-c-Tg_yPp2rQwqcJ0joXeMnSA
https://www.perplexity.ai/page/another-creature-can-reverse-a-zdHTsn0hQZesUn.eEUMw5A

Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere.
Available on iOS and Android. Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn
In the latest edition of "Ethical Articles," Dr. Brianna Rosen, strategy and policy fellow at Oxford University's Blavatnik School of Government, reads her latest commentary for Carnegie Council on key takeaways from the 2024 Responsible AI in the Military Domain (REAIM) summit in Seoul. To read the article, please go to: https://carnegiecouncil.co/military-ai-rosen
House Passes Series of Bills to Counter Chinese Regime
House Passes Bill Aiming to Halt CCP Partnerships
U.S. Think Tank Head Arrested, Set to Be Extradited
Top U.S., China Military Brass Hold First Call in 2 Years
Expert: The Real Impact Lies in the Bills Left Behind
U.S., UK Warn Iran, China Over Supporting Russia's War
Nations Discuss Plans for Military AI at S. Korea Summit
Huawei Challenges iPhone 16 Launch with New Phone
Portugal to Keep Ban on Chinese 5G Equipment
Deflation Fears Spreading in China
BDO Accused of Improperly Liquidating Next Digital
Last Squatter Settlements to Be Removed in Hong Kong
Many Pentagon officials and U.S. lawmakers likely lie awake at night wondering what Chinese leaders think about the use of artificial intelligence in war. On today's episode, Sam Bresnick, a Research Fellow at Georgetown's Center for Security and Emerging Technology, joins Lawfare Managing Editor Tyler McBrien to begin to answer that very question and discuss his new report, “China's Military AI Roadblocks: PRC Perspectives on Technological Challenges to Intelligentized Warfare.”

They discuss how Sam found and analyzed dozens of Chinese-language journal articles about AI and warfare, Beijing's hopes for these new and emerging technologies, and what, in turn, keeps Chinese defense officials up at night as well.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Acknowledgement of Country// Headlines// Last Friday 21st June, Disrupt Land Forces had its first public meeting (at Black Spark, Northcote) to plan actions in protest of 'Land Forces', a massive weapons expo and conference which will be running this year from September 11-13 at the Melbourne Exhibition Centre. Ongoing actions in the previous host-city Magandjin (Brisbane) forced the conference to relocate to Naarm, and Disrupt Land Forces hope this year for the same - if not greater - level of disruption of harms dealers and their investors, both at the event and in the week leading up to it. Before the planning meeting various frontline community members and activists spoke of their experiences at the hands of the global military industrial complex, and the importance of international solidarity against imperialist structures. On today's show, we played a speech from the event by Adolf Mora, West Papuan community member and activist. Content warning: this speech includes mention of suicide in detention. If you are feeling distressed, you can always call lifeline on 13 11 14 or the suicide callback service on 1300 659 467. Aboriginal and Torres Strait Islander listeners can also contact 13 YARN (13 92 76), or Yarning Safe'n'Strong on 1800 959 563.// Since the early days of Israel's genocidal onslaught in Gaza, community activists have been staging pickets outside companies involved in the weapons manufacturing supply chain across so-called Australia, including AW Bell in Dandenong. Amanda Villalobos, local activist and participant in community pickets of AW Bell, joins us this morning to speak about the upcoming picket at AW Bell this Monday the 1st of July, aiming to close operations for the day to disrupt the company's role in the global supply chain of Lockheed Martin's F-35A Lightning II aircraft or Joint Strike Fighter. 
Find out more about the action and keep up to date by heading to Weapons Out Of Naarm on Instagram.// Researcher and organiser Matilda Byrne joins us to speak about Australia's place in the artificial intelligence and autonomous capabilities arms race, which she recently authored an article about for Declassified Australia. By marketing itself as a site for AI innovation and framing this issue as a key area of defence policy, Australia is deeply embedded in global networks of weapons development, further integrating autonomous functions into technologies of war and genocide. Matilda is currently undertaking a PhD at RMIT's Social and Global Studies Centre, where she is a sessional lecturer in international relations, security, global governance and disarmament. She is the National Coordinator of the Australia Stop Killer Robots campaign, based at SafeGround, an Australian not-for-profit that seeks to reduce impacts of legacy and emerging weapons.// Martin Hodgson, senior advocate at the Foreign Prisoners Support Service and co-host of Curtain the Podcast, speaks with us about the colonial crisis of domestic and family violence in so-called Australia. This conversation occurs in the context of Martin's recent appearance at a Parliamentary hearing held as part of the ongoing inquiry into missing and murdered First Nations women and children. While the Senate referred an inquiry into missing and murdered First Nations women and children to the Legal and Constitutional Affairs References Committee on 4th August 2022, the inquiry process continues, with the reporting date extended to 15th August 2024. Content warning: this interview includes discussion of domestic and family violence, femicide and racism. If you are feeling distressed, you can always call lifeline on 13 11 14 or the suicide callback service on 1300 659 467. Aboriginal and Torres Strait Islander listeners can also contact 13 YARN (13 92 76), or Yarning Safe'n'Strong on 1800 959 563.
If you need support or advice, please call 1800 RESPECT on 1800 737 732.// We didn't end up having time to play the opening statement made by Karla McGrady, portfolio manager of Innovation at Our Watch, during last Monday's hearing as part of the Parliamentary inquiry into missing and murdered First Nations women and children, but you can listen to it here. Our Watch is a national leader in the primary prevention of violence against women and their children in Australia, and Karla appeared alongside CEO Patty Kinnersly to provide evidence on the significance of primary prevention and the importance of attending to colonialism and racism in the context of the inquiry.//
In this episode we dive into issues of human-machine teaming with human factors engineer Jan Maarten Schraagen. Having edited the recently released book Responsible Use of AI in Military Systems, Jan Maarten is an expert on how brittle technologies influence joint cognitive system performance. In this episode, in addition to exploring the book, we explore the outcomes of the 2023 REAIM Summit and what we can hope for from the 2024 Summit; talk about multidisciplinarity in the responsible military AI debate; and discuss how we should be thinking about the capability envelope of military AI - that is, how it can or should be restricted in the conditions under which it can operate.

Jan Maarten Schraagen is a cognitive systems engineer at TNO and studies how brittle technologies influence joint cognitive system performance. He is a human factors specialist with broad experience in optimizing work processes and teamwork design. He is particularly interested in making work safe, productive and healthy, and improving resilience in sociotechnical work systems.

Additional resources:
Jan Maarten Schraagen (ed.), Responsible Use of AI in Military Systems, CRC Press, 2024
Ekelhof, M.A.C. (2018). Lifting the Fog of Targeting: “Autonomous Weapons” and Human Control through the Lens of Military Targeting. Naval War College Review, 71(3), 61-94.
Ekelhof, M.A.C. (2019). Moving beyond semantics on autonomous weapons: Meaningful human control in operation. Global Policy, 10(3), 343-348.
Endsley, M.R. (2017). From here to autonomy: Lessons learned from human-automation research. Human Factors, 59(1), 5-27.
Taddeo, M., & Blanchard, A. (2022). A comparative analysis of the definitions of autonomous weapons systems. Science and Engineering Ethics, 28, 37-59.
Herbert Simon, Bounded Rationality, Utility and Probability, NPA, 1990
Peeters, M.M.M., van Diggelen, J., van den Bosch, K., Bronkhorst, A., Neerincx, M.A., Schraagen, J.M., & Raaijmakers, S. (2021). Hybrid collective intelligence in a human–AI society. AI and Society, 36(1), 217-238.
Schraagen, J.M.C., Barnhoorn, J.S., Van Schendel, J., & Van Vught, W. (2022). Supporting teamwork in hybrid multi-team systems. Theoretical Issues in Ergonomics Science, 23(2), 199-220.
Van der Kleij, R., Schraagen, J.M.C., Cadet, B., & Young, H.J. (2022). Developing decision support for cybersecurity threat and incident managers. Computers & Security, 113, 102535.
Schraagen, J.M.C. (2023). Responsible use of AI in military systems: Prospects and challenges. Ergonomics, 66(11), 1719-1729.
In this episode I address the latest harebrained plan to "make the subways safer" - why it won't work, and how 1984 has already come to life.
In this captivating episode of Discover Daily, powered by Perplexity, we explore three fascinating stories from the world of technology. First, we look into Microsoft's pitch of OpenAI's DALL-E to the U.S. Department of Defense for military applications, raising questions about the ethical implications of AI in warfare. Next, we discuss the potential dropping of charges against WikiLeaks founder Julian Assange by the U.S. government, sparking a debate about press freedom and national security. Finally, we examine Meta's development of custom AI processors, such as the Meta Training and Inference Accelerator (MTIA), to enhance its AI capabilities and reduce reliance on third-party hardware.

Discover Daily, the AI-generated podcast brought to you by Perplexity, is your go-to source for the most intriguing stories in tech, science, and beyond. By curating content from Perplexity's Discover feed, this podcast keeps you informed, entertained, and inspired by the latest developments shaping our world. Subscribe to Discover Daily on your favorite podcast platform and download the Perplexity app on Android or iOS to access even more captivating content. Stay curious and join the conversation with Perplexity's Discover Daily podcast.

From Perplexity's Discover feed:
Microsoft pitched DALL·E to military
US is considering dropping Assange case
Meta custom AI processor
Part of the ongoing debate about the lawfulness of autonomy in military systems is the manner in which the technology integrates and interacts with its human masters. The term Meaningful Human Control (or MHC) has garnered particular relevance in this debate. Today we speak with Dr Lena Trabucco about her upcoming Opinio Juris Symposium on Military AI and the Law of Armed Conflict, co-edited with Dr Magdalena Pacholska, on this issue, as well as her work on legal challenges associated with emerging technology more broadly.

Lena is a visiting scholar at the Stockton Center for International Law at the US Naval War College and a research fellow at the Technology, Law and Security Program at American University College of Law and at the University of Copenhagen. Her research focuses on the intersection of international law and emerging military technology, particularly autonomous weapon systems. She has multiple projects examining human control throughout an autonomous weapon system life cycle. Previously, she was a post-doctoral researcher at the Centre for Military Studies at the University of Copenhagen. Lena received a PhD in law from the University of Copenhagen and a PhD in international relations from Northwestern University.

Additional resources:
Opinio Juris Symposium on Military AI and the Law of Armed Conflict
Kevin Jon Heller, 'The Concept of "The Human" in the Critique of Autonomous Weapons', 14 Harvard National Security Journal (2023)
Magdalena Pacholska, 'Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective', Israel Law Review (2022), 1-21
Rebecca Crootof, 'A Meaningful Floor for "Meaningful Human Control"', Temple International & Comparative Law Journal, Vol. 30, 2016
Kenneth Payne's Substack
Killer Robot Cocktail Party (Lena and Brad Boyd's substack)
Welcome to today's episode of the AI Lawyer Talking Tech podcast. In recent years, artificial intelligence has taken the legal world by storm, transforming the way lawyers work and raising a host of new challenges and opportunities. From the courtroom to the boardroom, AI is reshaping the legal landscape in ways we could never have imagined just a few short years ago. In today's episode, we'll be exploring the many ways in which AI is impacting the legal profession, from the use of generative AI in litigation to the ethical considerations surrounding the use of AI by lawyers. We'll also be taking a closer look at some of the latest developments in AI regulation, including Utah's groundbreaking new consumer protection law and the ongoing efforts by the music industry to combat AI exploitation. So join us as we dive deep into the fascinating world of AI and the law. Whether you're a seasoned legal professional or simply someone with an interest in the future of technology, this is an episode you won't want to miss. Let's get started! 
The impact of GenAI on various stages of litigation - 05 Apr 2024 - Thomson Reuters Institute
US appellate judge calls bans on AI use by lawyers 'misplaced' - 05 Apr 2024 - Reuters
Learn to Use Generative AI *Ethically* in Your Practice at Future Is Now Conference - 05 Apr 2024 - 2Civility
Digital ID Standard and Safe Harbour: A comprehensive guide and Verify 365's role in helping firms achieve this standard - 05 Apr 2024 - Legal Futures
LSB chief: Why are lawyers forced to pay for professional bodies? - 04 Apr 2024 - Legal Futures
Today At LMA: Top Picks For Novices, Veterans, And Data Junkies - 05 Apr 2024 - Above The Law
Use of Artificial Intelligence Systems by Insurers; Notice 2024-04 - 05 Apr 2024 - Pennsylvania Code & Bulletin
Unraveling the Legal and Regulatory Maze of Generative AI: 10 Areas to Watch - 05 Apr 2024 - JD Supra
AI Chatbots Will Never Stop Hallucinating - 05 Apr 2024 - Scientific American
Symposium on Military AI and the Law of Armed Conflict: Drone Swarms as Weapons of Mass Destruction - 05 Apr 2024 - Opinio Juris
Music industry heavy hitters Sony and Universal join forces with artists including Katy Perry to combat AI exploitation - 05 Apr 2024 - City A.M.
The Under-30 Female Entrepreneur Redefining LegalTech in Azerbaijan and Beyond - 05 Apr 2024 - Tech Times
Fake AI law firms are sending fake DMCA threats to generate fake SEO gains - 04 Apr 2024 - Web Times
Utah Becomes First State To Enact AI-Centric Consumer Protection Law - 05 Apr 2024 - Skadden
Is Pay or Consent a choice under GDPR? - 05 Apr 2024 - Hogan Lovells
Why PEOs Need to Pay Particular Attention to New Data Privacy Rules – And the 1 Key Move You Should Make Immediately - 05 Apr 2024 - Fisher & Phillips LLP
California issues procurement guidelines for state entities that are acquiring generative AI - 04 Apr 2024 - Hogan Lovells
Lies, Damned Lies, and AI: Lie-Detecting AI May Expose Employers to Liability - 04 Apr 2024 - Venable LLP
Treasury Department Warns Financial Institutions to Prepare for AI-Age Fraud — AI: The Washington Report - 04 Apr 2024 - Mintz Levin
Explore the visionary approach of the US Pentagon, enriched by the perspectives of the Air Force CIO, as they outline the strategy for integrating ChatGPT into the military's operations and future advancements. Invest in AI Box: https://Republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community
In this episode of his "Thinking Out Loud" series, Double D examines a new investigation out of @972mag that details the Israeli use of military AI to perpetrate war crimes in Gaza.

Double D shows how the Orwellian surveillance police state of Israel has been used to feed data into this "Habsora" or "Gospel" AI, turning it into a "Mass Assassination Factory." He highlights quotes from IDF and Israeli Intelligence officials who detail that the immediate purpose of the siege of Gaza has been explicitly to kill and terrorize civilians.

Zooming out, Double D relates this revelation to the coming world where AI is given the autonomy to kill human beings of its own accord; he ruminates on a future where militaries and governments are able to throw off their responsibility for crimes against humanity by blaming artificial intelligence and robotics.

Support our Independent Media work on Patreon for $1 a month!
patreon.com/entitledmillennials
One-time donations through PayPal are GREATLY appreciated!
paypal.me/entitledmillennials
The tech-centric war in Ukraine and the success of ChatGPT have prompted new interest in figuring out how to prevent military AI from going awry. Read this story here.
Facts & Spins for October 24, 2023 Top Stories: The US and European nations push for a delay in Israel's ground invasion of Gaza, The UN claims to have new evidence of Russian war crimes in Ukraine, María Corina Machado claims victory in Venezuela's opposition primary, Milei and Massa head for Argentina's election runoff, Democrats seek a classified briefing on whether House chaos helps China, A second Trump lawyer pleads guilty in the Georgia case, Detroit police find no evidence of a hate crime in the murder of a synagogue president, A search is underway for the suspected killer of a Maryland judge, A key Stellantis pickup truck plant joins the simmering US autoworkers strikes, and the Philippines bans its military from using AI image generators. Sources: https://www.verity.news/
Check out SignalWire at: https://bit.ly/signalwirewan Make compliance easy with Kolide at: https://www.kolide.com/WAN Save time and automate your social media marketing! Check out Tailwind at https://lmg.gg/tailwind and get 50 free bonus Ghostwriter AI credits! Timestamps (Courtesy of NoKi1119) Note: Timing may be off due to sponsor change: 0:00 Chapters. 1:09 Intro. 1:33 Topic #1: CS:GO is no more, long live CS 2. 2:48 History of CS, Source, Luke's experience. 9:36 Subtick, playing with Joe, volumetric smoke, recoil & shield. 16:00 Twitch & FP on CS recoil, Steam reviews, $40M in 40 minutes. 22:06 Steam reviews, discussing CS 2 replacing CS:GO, a negative review. 32:50 BG3, Linus's BG3 review on Discord, BG3's tutorial, game reviews. 48:21 Yvonne's character choice, camera mod, game preference. 56:03 LTTStore's new desk pad. 57:06 Linus recalls a viewer's argument on using YT Superchats. 58:18 Creator's warehouse bread plushies ft. FP poll. 1:01:16 Merch Messages #1. 1:01:22 Would you wipe your controversies if you also wiped what you learnt from them? 1:03:44 Luke disagrees with FP Poll. 1:04:58 Any personal experience dealing with low temperatures on tech? 1:07:23 New FP Poll for breads. 1:07:48 How many times has DLL said anything that caused chaos within LMG? 1:11:00 FP Poll result, bread is the meme. 1:12:09 Topic #2: Newegg's GPU trade-in program. 1:12:34 Trade-in prices, easy process & warranty. 1:20:49 Linus's price take, Luke on working with PayPal, LTT's AMD video, Linus on satire. 1:38:30 Sponsors. 1:42:13 Covering last WAN's supplement sponsorship. 1:45:30 Merch Messages #2. 1:45:36 How bad a monitor would you use for free before spending on one? 1:53:16 Has LTT ever used consultants or contractors? 2:00:56 How do you address technical debt in your projects? 2:06:00 Topic #3: Samsung's Neo G9 monitors cracking. 2:08:10 Luke on the low frequency of curved screen damage, Linus on per-region support. 
2:12:08 Linus calls about whether the Secret Shopper segment was shot, continues to leak. 2:22:16 Linus recalls Samsung's The Wall, purchases TCL's 115" MiniLED TV. 2:30:22 Linus's theory on why this is China-exclusive, LTT shot the Compensator. 2:35:30 Topic #4: Russian zero-day seller offers $20M to hack iOS & Android. 2:39:57 Topic #5: Reddit phases out gold & awards, now pays for karma. 2:42:08 Topic #6: Metaverse's 3D facial scan. 3:02:07 Topic #7: TheFloW teases a PS4/PS5 exploit. 3:03:39 Topic #8: Google will discontinue Podcasts next year. 3:04:52 Topic #9: OpenAI's & Meta's new AI. 3:09:28 Topic #10: Getty's library generates images through an ethical database. 3:10:23 Topic #11: Google Search showcases ChatGPT's "eggs can be melted" response. 3:11:09 Topic #12: Military AI to sort U.S. intelligence. Cont. Topic #9: OpenAI's & Meta's new AI. 3:13:11 Meta's Ray-Ban glasses article. 3:15:04 Luke shows Hotbunlover. 3:15:56 Merch Messages #3 ft. WAN Show After Dark. 3:16:58 How viable do you believe LTT's cleanest setup is? 3:19:48 Difficult challenge for tech that you've come to terms with? 3:20:41 Has Luke looked into aftermarket options for his car's infotainment system? 3:22:27 Has Linus's workload lowered after stepping down? ft. Tech shop sequel, Sea of Stars, cutscenes. 3:39:41 Did Yvonne's medical history help her learn finance better? 3:40:16 Any noticeable LTTStore merch preference per region? 3:40:43 Do you see 12GB VRAM being enough for 2K Ultra settings in the future? 3:45:14 Is it true that Linus can leap over the chair from a standing position? 3:49:16 Arm wrestling on WAN Show? 3:50:18 Have you explored what it'd take to start an ISP? 3:50:52 What is Linus's favorite WAN Show snack? 3:51:22 Do you think you'll see AI reach full sentience in your lifetime? 3:52:32 Why does Luke stream his Starfield gameplay on Twitch and not on FP? 3:54:08 Outro.
Emily and Alex are joined by technology scholar Dr. Lucy Suchman to scrutinize a new book from Henry Kissinger and coauthors Eric Schmidt and Daniel Huttenlocher that declares a new 'Age of AI,' with abundant hype about the capacity of large language models for warmaking. Plus close scrutiny of Palantir's debut of an artificial intelligence platform for combat, and why the company is promising more than the mathy-maths can provide. Dr. Lucy Suchman is a professor emerita of sociology at Lancaster University in the UK. She works at the intersections of anthropology and the field of feminist science and technology studies, focused on cultural imaginaries and material practices of technology design. Her current research extends her longstanding critical engagement with the fields of artificial intelligence and human-computer interaction to the domain of contemporary militarism. She is concerned with the question of whose bodies are incorporated into military systems, how, and with what consequences for social justice and the possibility of a less violent world. This episode was recorded on July 21, 2023. Watch the video on PeerTube. References: Wall Street Journal: OpEd derived from 'The Age of AI' (Kissinger, Schmidt & Huttenlocher). American Prospect: Meredith Whittaker & Lucy Suchman's review of Kissinger et al's book. VICE: Palantir Demos AI To Fight Wars But Says It Will Be Totally Ethical About It Don't Worry About It. Fresh AI Hell: American Psychological Association: how to cite ChatGPT: https://apastyle.apa.org/blog/how-to-cite-chatgpt Spam reviews & children's books: https://twitter.com/millbot/status/1671008061173952512?s=20 An analysis we like, comparing AI to the fossil fuel industry: https://hachyderm.io/@dalias/110528154854288688 AI Heaven from Dolly Parton: https://consequence.net/2023/07/dolly-parton-ai-hologram-comments/ You can check out future livestreams at https://twitch.tv/DAIR_Institute. 
Follow us! Emily: Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social Alex: Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
Hugging Face CEO testifies to US House Science Committee about the importance of open-source AI, companies offer subscription services for military AI, YouTube tests new feature called “Test & Compare”, Byju's denies claims of board members' resignation, Oversight Board criticizes Meta for refusing to take down Brazilian pro-insurrection video, Omeat granted USDA approval to commercially sell cultivated chicken products, Apple releases new minor updates for its operating systems, Senators question Mark Zuckerberg about the leak of Meta's open-source LLM LLaMA, and Apple's updates fix non-security bug for charging with Lightning to USB 3 Camera Adapter.
In this episode, Dr Lauren Sanders speaks to Dr Paul Scharre, Vice President and Director of Studies at CNAS (the Center for a New American Security), a leading author on autonomous weapons and the impact of AI in the military. He has an extensive background in drafting and implementing policy relating to the current and future use of autonomy and artificial intelligence in the military, and has squeezed us in amid his book launch commitments for his most recent book, Four Battlegrounds: Power in the Age of Artificial Intelligence. It addresses the issue of global power in the face of the battle to control artificial intelligence, and how the next industrial revolution will impact the future of war. Additional Resources: Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence. Norton, 2023. Paul Scharre, Army of None: Autonomous Weapons and the Future of War. Norton, 2019. ChinaTalk (podcast). Epoch AI - Quarterly trend reports and other research reports. Anthropic - Frontier AI Safety. Matthew Sadler and Natasha Regan, Game Changer: AlphaZero's Groundbreaking Chess Strategy and the Promise of AI, 2019. US State Department Statement on Responsible Use of Military AI, 16 Feb 23.
Peter talks to Professor Katarzyna Zysk from IFS in Oslo about Russian military AI development, from the core reasons it is being pursued to the implementation plans and their maturity index of outputs. Most worrying perhaps is the discussion on ethical limitations (or lack thereof) being imposed on developers in Russia, whether in the realm of Lethal Autonomous Weapons or in experimentation with humans outside the decision loops. While many commentators have written off the Russian military following their performance in Ukraine over the past 12 months, this might well be premature. The inevitable recapitalisation of Russia's military will place AI and Emerging and Disruptive Technologies at the core of a new Russian force design. Taking that into account now is essential in de-risking Western decisions about the future.
Jung_E (Korean: 정이), stylised as JUNG_E, is a 2023 South Korean science fiction film written and directed by Yeon Sang-ho, starring Kang Soo-yeon, Kim Hyun-joo and Ryu Kyung-soo. The film features the final film appearance of Kang Soo-yeon, who died ahead of its release. It premiered on January 20, 2023 on Netflix. Jung_E portrays a desolate Earth in the 22nd century that is no longer habitable due to climate change, forcing humans to live in man-made shelters built in space. As the humans settle in around 80 of those shelters, three of them declare themselves the Adrian Republic, attack Earth and the other shelters, and spark a civil war between the Allied Forces and the Adrian Republic. Captain Yun Jung-yi is a legendary mercenary of the Allied Forces who led her team through countless successful missions against the Adrian Republic. She has a young daughter, Yun Seo-hyun, who suffers from a lung tumor, and Jung-yi became a mercenary to afford her daughter's medical treatments. The day Seo-hyun gets the surgery, Jung-yi fails her mission and ends up in a coma. Kronoid, an institute in charge of developing AI technologies, convinces her family to agree to clone her brain, promising that it will cover Jung-yi's treatments and her daughter's education and living expenses. Thirty-five years later, in 2194, Kronoid clones Jung-yi's brain to develop an AI mercenary, code-named Jung_E. Dr. Yun Seo-hyun is the team leader of the Jung_E research project and seeks to memorialize Jung-yi as a hero rather than a failure. Researchers at Kronoid have copied Jung-yi's brain data and placed it in android bodies. Through simulations of Jung-yi's final mission, they try to extract combat memory data that could be used to develop an exceptional combat AI. Just as Captain Jung-yi failed her last mission, the AI JUNG_E also fails in every simulation of it. Seo-hyun learns that her childhood cancer has returned, and she only has three months left to live. 
She also learns through the chairman of Kronoid that there is no need to develop combat AI anymore, because the Adrian Republic and the Allied Forces are entering a treaty. With peace approaching, the company has decided to focus on household services. Seo-hyun discovers one of her team members with a scantily clad Jung_E robot in his apartment, but he claims that Kronoid has tasked him with investigating the use of Jung_E models as sex toys. After the last simulation of project Jung_E, Seo-hyun escapes the facility with a Jung_E robot while other combat robots and security forces chase them. Kim Sang-hoon, the project's director, is revealed to also be a robot with an AI brain. Kim shoots Seo-hyun as she and the Jung_E robot flee via elevated rail. The Jung_E robot fights him off, and he is destroyed when his part of the train falls to the ground. Seo-hyun urges Jung_E to leave her and survive. After surprising Seo-hyun by showing affection, Jung_E escapes into a mountain range, surveying the landscape from atop a rock. ★ Support this podcast on Patreon ★
In this cross-over episode of the Acquisition Talk and China Talk podcasts, we have Greg Allen on to discuss progress in AI/ML defense applications in China and the United States. Greg Allen is the director for the project on AI governance at CSIS, and was formerly the director of strategy and policy at the Joint Artificial Intelligence Center (JAIC). During the episode, we discuss: - Military use cases of AI/ML as they are shaping up in Ukraine - Bureaucratic challenges in the US to fielding AI/ML systems - How far the US is from weaponizing autonomy. This podcast was produced by Eric Lofgren. You can follow me on Twitter @AcqTalk and find more information at https://AcquisitionTalk.com
Truth Social fails to get approval from the Google Play store, Military AI startups are gaining interest and growth, and LG announces its first bendable OLED TV.
#3.2 - Killer Robots Don't Kill, the Pentagon Does. Discussing Military AI with Dr. David Bosold by Panoramas of the Local Globe
As the United States (US) and India gradually deepen cooperation in political, technological, and other areas, cooperation in military Artificial Intelligence (AI) also holds significant potential. In this episode, Megha Pardhi and Husanjot Chahal explore avenues for India-US cooperation in military applications of AI. Husanjot Chahal is a Research Analyst at the Centre for Security and Emerging Technology (CSET) at Georgetown University's Walsh School of Foreign Service. Before CSET, she worked in the World Bank's Corporate Security Division and in New Delhi-based research organizations, including the MP Institute for Defence Studies and Analyses (IDSA). Follow Megha on Twitter: https://twitter.com/pardhimegha21 Follow Husanjot on Twitter: https://twitter.com/hsjchahal Read Husan's paper here - Mapping India's AI Potential. Also, check out Megha's newsletter - China Tech Dispatch. Check out Takshashila's courses: https://school.takshashila.org.in/ You can listen to this show and other awesome shows on the new and improved IVM Podcast App on Android: https://ivm.today/android or iOS: https://ivm.today/ios You can check out our website at https://www.ivmpodcasts.com
On this episode of Chain Reaction, Aaron spoke with Anna Nadibaidze about her new report "Russian Perceptions of Military AI, Automation, and Autonomy." The conversation touched on Russian views of Artificial Intelligence, autonomy, and the ethical use of these types of weapons. The report is available for download on www.fpri.org
Boosting military technology under the JAIC Robert Work, former deputy secretary of defense, discusses funding for the Joint Artificial Intelligence Center to accelerate adoption of artificial intelligence in the military Reviewing the Navy's “divest to invest” strategy Seth Cropsey, senior fellow at the Hudson Institute, discusses potential risks and benefits of the Navy's plan to divest of some ships and invest in others within a few years
Andy and Dave welcome Arthur Holland Michel to the podcast for a discussion on predictability and understandability in military AI. Arthur is an Associate Researcher at the United Nations Institute for Disarmament Research, a Senior Fellow at the Carnegie Council for Ethics in International Affairs, and author of the book Eyes in the Sky: the Secret Rise of Gorgon Stare and How It Will Watch Us All. Arthur recently published The Black Box, Unlocked: Predictability and Understandability in Military AI, and the three discuss the inherent challenges of artificial intelligence and the challenges of creating definitions to enable meaningful global discussion on AI.
Air Vice Marshal Bahadur questions General Pannu and Rajiv Malhotra in light of the latter's recent book on the subject of artificial intelligence and the military. To buy this book, visit www.AIandPower.com See more on Artificial Intelligence here: http://bit.ly/AIandPower Do check out our YouTube channel 'Rajiv Malhotra Official' and do follow us on Facebook '@RajivMalhotra.Official' and Twitter '@InfinityMessage' and '@RajivMessage'. To support this project: https://infinityfoundation.com/donate/ --- Support this podcast: https://anchor.fm/kurukshetra/support
Autonomous machines capable of deadly force are increasingly prevalent in modern warfare, despite numerous ethical concerns. Is there anything we can do to halt the advance of the killer robots? By Frank Pasquale. Help support our independent journalism at theguardian.com/longreadpod
Our latest episode with a summary and discussion of last week's big AI news! This week: Microsoft and partners aim to shrink the 'data desert' limiting accessible AI, Access Now resigns from the Partnership on AI due to lack of change among tech companies, and a radical new technique lets AI learn with practically no data. 0:00 - 0:40 Intro. 0:40 - 5:40 News Summary segment. 5:40 - News Discussion segment. Find this and more in our text version of this news roundup: https://www.skynettoday.com/digests/the-eighty-seventh Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)
In COVID-related AI news, Youyang Gu provides world- and county-level COVID-19 predictions using machine learning, along with a rolling examination of accuracy. In regular AI news, a military coalition of 13 countries meets to discuss the use of and ethics of AI. Orcan Intelligence provides a deeper look into Europeans’ concerns about AI technologies. Ben Lee and the Library of Congress unveil the full open version of the Newspaper Navigator, which provides access to 1.56 million photographs from newspapers. Research from Intel and Binghamton University uses the pulse of the beating heart to identify deep fake videos with a 97% accuracy. And Arthur Holland Michel publishes the Black Box, Unlocked: Predictability and Understandability in Military AI. Click here to visit our website and explore the links mentioned in the episode.
Military AI can easily be fooled, for now - Article Link Here. Our Sun's next 11-year solar cycle should be quieter than its previous one - Article Link Here. A German company will debut the world's first unmanned aerial taxi next year - Article Link Here. Feel free to send me an email at stemnewspodcast@gmail.com with any questions, feedback, recommendations, or ideas for future topics to discuss. Follow STEMnews Podcast on Twitter or LinkedIn
30 April 2020
In this episode, your favourite Canadians use one of their filler episodes to keep you entertained in these grim quarantine days. Stay safe everyone, and remember to wash your hands! We hope you enjoy and perhaps learn something from our conversation! Follow us on social media: Twitter: https://twitter.com/TheBBPodcast1 Instagram: https://www.instagram.com/podcastthebb/ Facebook: https://www.facebook.com/The-BB-Podcast-312757386079841/?modal=admin_todo_tour Snapchat: B_BPodcastListen to the Podcast on: Spotify: https://open.spotify.com/show/50mDzu8sZFI6Ue1FpqDrfy?si=I58owl6NSimDbev_9Y9OPg Apple: https://podcasts.apple.com/ca/podcast/the-b-b-podcast/id1453632365 Google Music: https://podcasts.google.com/?feed=aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2NhbndhbGtlci9mZWVkLnhtbA&ved=0CAAQ4aUDahcKEwjYmtnmiMHnAhUAAAAAHQAAAAAQAQ&hl=en-CA Podbean: https://thebandbpodcast.podbean.com/ Music courtesy of Aim for Theo, go follow them on the socials:https://twitter.com/aimfortheo Thank you for listening and until next time.~B&B
Killer drones powered by AI, autonomous tanks, and cyber attacks. What is the future of warfare? And how can our military use weather as a strategic advantage? In this episode, we'll interview Stephen Padgett to talk about his time in the Navy and how he engineered a system that can help our weapons respond to various weather conditions. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
Technology, including AI, needs protection. The intellectual property of corporations needs to be protected from espionage. Military technology needs protection from foreign agents. Show Notes 01:10 | Introduction; Daniel M. Ogden, J.D. 02:20 | Technology in military history 08:00 | WWII technology; Norden Bombsight 12:20 | Banning AI autonomous weapons 16:48 | Slaughterbots 18:13 | Military AI endgame 18:55 | Other
In breaking news, Andy and Dave discuss the Convention on Conventional Weapons meeting on lethal autonomous weapons systems (LAWs) at the United Nations, where more than 70 countries are participating in the sixth meeting since 2014. Highlights include the priorities for discussion, as well as the UK delegation's role and position. The Pentagon’s AI programs get a boost in the defense budget. DARPA announces the Automating Scientific Knowledge Extraction (ASKE) project, with the lofty goal of building an AI tool that can automatically generate, test, and refine its own scientific hypotheses. Google employees react to and protest the company’s secret, censored search engine (Dragonfly) for China. The Electronic Frontier Foundation releases a white paper on Mitigating the Risks of Military AI, which includes applications outside of the “kill chain.” And Brookings releases the results of a survey that asks people whether AI technologies should be developed for warfare.
The U.S. Department of Defense Fiscal Year (FY) 2019 budget request is considerably higher than last year's, with more funding slated for upgrades of air, land, and sea platforms. In this podcast, Mike MacPherson, Vice President, Strategic Planning, Curtiss-Wright Defense Solutions, discusses the impact of the budget increase on the defense electronics industry and the applications it affects -- radar, unmanned systems, electronic warfare, etc. He also opines on the technology driving innovation in the defense electronics world, such as artificial intelligence/machine learning and cybersecurity. MacPherson also discusses how the DoD needs to speed up its acquisition process to keep pace with technology advances. This podcast is sponsored by the Avionics & Flight Ops show, the premier conference and exhibition for the international avionics and aviation electronics community. For information on attending the event, go to www.ae-expo.eu.
Military AI, the human tendency to collect, and reminiscing about old and fun, yet perhaps a little too addictive, pastimes. Sponsor: Valley Vintage - Soaps, balms and oils made the Vintage way. Links Should AI researchers kill people? The Toys that Made Us - Netflix RuneScape Mobile Ways to Contact Us Connect with us in Slack: slack.techreformation.com Visit our website to search for past shows and topics Shout out at us on Twitter at @techreformation! Review us on Apple Podcasts and recommend us on Overcast, or even better - share Tech Reformation with a friend! Music used by special permission of Matthew Parker. Check him out on SoundCloud and iTunes!
Charles and I discuss the role of Military AI in controlling US politics, personal identity, and reality itself. This one goes deep!
We’re back on track this week! On the show: Adam, Nelson, George, and Victor! This week on FNPL, it looks like the Boy Scouts are finally being not stupid about having gay leaders. The Mormons are butthurt about it. http://www.nytimes.com/2015/07/28/us/boy-scouts-end-nationwide-ban-on-gay-leaders.html Experts: “Military AI is totally bad, mmmkay?” http://www.theguardian.com/technology/2015/jul/27/musk-wozniak-hawking-ban-ai-autonomous-weapons Local Police in Illinois rush the stage at […]