In this episode of FedBiz'5, host Bobby Testa dives into one of the biggest shifts in government contracting: the rise of artificial intelligence (AI) in federal procurement. With more than 200 active AI use cases across federal agencies, from the DoD's Project Maven to the IRS's chatbot, AI is no longer emerging. It's operational, funded, and rapidly expanding.

Bobby breaks down five critical insights government contractors need to understand right now if they want to stay ahead in this evolving space. This episode isn't about theory; it's about real agency usage, real budgets, real compliance expectations, and real market strategies that help small to mid-sized contractors show up and compete.

You'll learn:
- How AI is currently being used across federal agencies
- Where AI-related opportunities are growing (hint: DoD, VA, NIH)
- What regulatory expectations you need to prepare for (yes, FAR clauses are coming)
- How to position your AI capabilities in SAM.gov, DSBS, and capability statements
- Why early outreach (RFIs, Industry Days, OSDBUs, and teaming) is more important than ever

Whether your company offers AI tools, supports infrastructure, provides training, or consults on ethical compliance, this episode will show you how to align with what agencies are actually buying. We also explore how tools like FedBiz365 can uncover pre-solicitation activity, highlight AI-related trends, and help you build a smart, targeted strategy based on real data, not guesswork.

And of course, Bobby keeps it real (and funny) as always, because what's a discussion about AI and federal procurement without a little commentary on IRS hold music, fantasy football algorithms, and black-box buzzwords?
In this powerhouse episode of Nephilim Death Squad, we welcome back Brad Lail of The Awakened Podcast for an unfiltered dive into media psyops, deep state propaganda, AI warfare, and the Nephilim-infested underbelly of global power. The squad unpacks the hypnotic grip of modern media, explores the roots of government mind control, and breaks down the occult web connecting Big Tech, secret military programs, and ancient bloodlines. Brad shares jaw-dropping insights about Project Maven, the RH-negative bloodline's ties to the Nephilim, and his firsthand encounters with UFOs and missing time. Plus: the inside scoop on Bohemian Grove 3 and why BlackRock and Vanguard are the true puppet masters of your reality. This one's a red-pill overload, so strap in and stay sharp.

FOLLOW BRAD: The Awakened Podcast - Spirituality, Paranormal, Hidden History Podcast

☠️ NEPHILIM DEATH SQUAD: Skip the ads. Get early access. Tap into the hive mind of dangerous RTRDs in our private Telegram channel, only on Patreon.
The convergence of artificial intelligence with national security agendas has emerged as a defining shift in global power dynamics. The militarization of artificial intelligence is no longer a futuristic concept. It is a present-day reality unfolding at the nexus of defense, surveillance, and technology. As machine learning capabilities grow more sophisticated and integrated into critical systems, the military-industrial complex has found fertile ground in Silicon Valley's innovations. What was once experimental is now operational, and the line between civilian and military AI has begun to blur in unsettling ways.

As a legal professional, I'm trained to follow the chain of accountability: who acted, who decided, who bears responsibility. In court, every consequence has a name attached to it. But on a digitized battlefield governed by algorithms, that chain snaps. There is no witness to cross-examine, no general to question, no operator who made the final call. What happens when war is executed by systems that cannot reason, cannot hesitate, and cannot be held to account? This article traces the trajectory of AI in military use, examines the ethical pitfalls of autonomous weaponry, and explores the broader implications of entrusting war to algorithms.

Emergence of AI in Modern Warfare

In 2018, Google made headlines for refusing to renew its involvement in Project Maven, a U.S. Department of Defense initiative that applied artificial intelligence to analyze drone footage. Following internal protests and a wave of public scrutiny, the company pledged to avoid developing AI for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." Fast-forward to 2023, and Google is now one of several tech giants contributing to the Pentagon's AI ambitions through contracts under the Joint Warfighting Cloud Capability (JWCC) project.
This shift reveals a stark evolution not only in corporate policy but also in the very role AI is beginning to play in modern warfare. Artificial intelligence, once used merely for backend military logistics or benign simulations, is now being integrated directly into decision-making processes, battlefield assessments, and autonomous weapons systems. As companies like Google, Microsoft, Amazon, and Palantir deepen their collaboration with defense agencies, a complex ethical and strategic dilemma emerges: Who controls the future of war, and what does it mean to hand lethal authority to machines? The militarization of AI poses unprecedented questions about human agency, moral accountability, and global security. As governments rush to develop next-generation warfare capabilities, the boundaries between innovation and destruction are becoming dangerously blurred.

From Backend Support to the Battlefield: A Brief History

The integration of artificial intelligence into military infrastructure did not begin with drones or kill lists. In its early days, AI was a tool for simulation-based training, data sorting, and logistics optimization. The technology was used to forecast equipment failures, manage supply chains, and assist in intelligence analysis. As machine learning evolved, so did its utility to the defense sector. The 2010s saw an increase in Pentagon investments in tech partnerships. One of the most notable collaborations was Project Maven, which used AI to improve the analysis of surveillance footage. Google's role in the project sparked an internal rebellion, with thousands of employees signing a petition demanding the company withdraw from "the business of war." The protest worked temporarily. Over time, however, the military's appetite for AI intensified. The Department of Defense established the Joint Artificial Intelligence Center (JAIC) in 2018, signaling a deeper commitment to integrating AI across all branches of the armed forces.
Simultaneously, Silicon Valley's skepticism toward military projects began to soften, influenced by geopol...
Dive into an exciting surprise episode of All Quiet on the Second Front with host Tyler Sweatt, joined by Lt. Gen. Jack Shanahan and Stephen Rodriguez from the Atlantic Council's Software-Defined Warfare Commission. This episode cuts through the complexities of defense technology to focus squarely on the transformative role of software in modern warfare. Tyler, Jack, and Stephen discuss Department of Defense initiatives, underscore the urgency of innovative strategies, and share personal anecdotes that illuminate the path toward a software-driven defense landscape, shedding light on pivotal developments that are reshaping military engagement across the globe. Tune in to understand the stakes and opportunities in software-defined warfare.

What's Happening on the Second Front:
- Insights into Project Maven and the JAIC's impact on military strategies.
- Lessons from Jack Shanahan's Air Force career and Stephen Rodriguez's tech ventures.
- How the DoD is adapting to software-centric warfare for technological superiority.
- The need for rapid, actionable strategies to ensure future readiness.

Connect with Jack on LinkedIn: Jack Shanahan
Connect with Stephen on LinkedIn: Stephen Rodriguez
Connect with Tyler on LinkedIn: Tyler Sweatt
Alex Karp, CEO of Palantir, and Jacob Helberg discuss AI-enabled warfare, the importance of taking clear ethical stances, the evolution of Silicon Valley culture, and the integration of tech innovation into governmental defense systems at the Hill and Valley Forum in Washington, DC.
A Note from James:

Is our military way behind other countries in terms of using the latest technology with AI, with drones, with biotech, with cybersecurity? I think for many years we've known we're behind on hypersonic weapons. Are we behind on AI? How did Hamas send a thousand or so paragliders into Israel without Israel detecting it? Are we behind on the AI that's in sensors? What is going on? So, with the help of Chris Kirchhoff, who wrote the book "Unit X: How the Pentagon and Silicon Valley are Transforming the Future of War," we answer these questions and more.

Episode Description:

In this episode, James Altucher hosts Christopher Kirchhoff to explore the critical question: Is the US military lagging behind in technology? They discuss the current technological shortcomings of the military, historical contexts, and how metrics of military power are evolving. Kirchhoff provides an insightful analysis of the Hamas attack as a case study to highlight technological vulnerabilities and failures. The conversation expands to cover the rise of drones, the innovative Replicator Initiative, and the crucial role of AI and machine learning in military operations.
Kirchhoff shares his experiences bridging the gap between Silicon Valley and the Pentagon, offering a rare glimpse into the challenges and successes of modern military technology integration.

What You'll Learn:
- Technological Shortcomings: Understand the areas where the US military is currently falling behind other nations in technology.
- Impact of Drones: Learn about the transformative role drones play in modern warfare and their potential to change military strategies.
- Replicator Initiative: Discover the Pentagon's innovative approach to building low-cost autonomous weapon systems.
- AI in Military Operations: Gain insights into how AI and machine learning are being integrated into military strategies and operations.
- Bridging Technology Gaps: Explore the challenges and successes of connecting Silicon Valley's rapid innovation with the Pentagon's strategic needs.

Chapters:
01:30 Introduction: Is the US Military Lagging in Technology?
02:15 Current Technological Shortcomings
03:20 Historical Context of Military Superiority
03:59 Changing Metrics of Military Power
06:42 Hamas Attack: A Case Study
08:15 Technological Vulnerabilities and Failures
10:22 US Military's Technological Lag
11:42 The Rise of Drones in Modern Warfare
14:52 The Replicator Initiative
17:54 Bridging the Gap Between Silicon Valley and the Pentagon
24:39 Challenges in Government Contracting
28:35 Innovative Contracting Solutions
31:17 Discovering Joby Aviation: The Future of Flying Cars
32:24 Military Applications and Collaboration with Joby
34:53 The Rise of Drones in Modern Warfare
37:12 Rogue Squadron: The Military's First Commercial Drone Unit
39:32 Anduril and the Future of Combat Collaborative Aircraft
45:14 AI and Machine Learning in Military Operations
51:31 Ethical Issues in Military Technology
01:04:02 Strategic Stability and the Future of Warfare
01:09:35 Conclusion: Bridging Silicon Valley and the Military

Additional Resources:
- Unit X: How the Pentagon and Silicon Valley are Transforming the Future of War
- Joby Aviation
- Anduril Industries
- Defense Innovation Unit (DIU)
- DARPA

What do YOU think of the show? Head to JamesAltucherShow.com/listeners and fill out a short survey that will help us better tailor the podcast to our audience! Are you interested in getting direct answers from James about your question on a podcast? Go to JamesAltucherShow.com/AskAltucher and send in your questions to be answered on the air!

Visit Notepd.com to read our idea lists & sign up to create your own! My new book, Skip the Line, is out! Make sure you get a copy wherever books are sold! Join the You Should Run for President 2.0 Facebook Group, where we discuss why you should run for President. I write about all my podcasts! Check out the full post and learn what I learned at jamesaltuchershow.com

Thank you so much for listening! If you like this episode, please rate, review, and subscribe to "The James Altucher Show" wherever you get your podcasts: Apple Podcasts, iHeartRadio, Spotify. Follow me on social media: YouTube, Twitter, Facebook, LinkedIn.
The U.S. military once used Google's tech without their employees knowing. Anna Butrico explains the complicated history behind "Project Maven." Google famously used the slogan, "Don't be evil," to guide its business practices. However, many Google employees were upset when they learned that the company had partnered with the U.S. Department of Defense on Project Maven, whose goal was to produce AI that could track people and vehicles. Is it immoral for a tech company to partner with the military to create war technology? Or is it immoral not to?

About Anna Butrico: Anna Butrico is Chief of Staff of Odgers Berndtson U.S. Anna supports the OBUS leadership team in driving the firm's growth strategy. Prior to joining Odgers, Anna was a senior communications associate at the McChrystal Group, where she advised Fortune 100 leaders on how to tap into human potential to achieve stronger business outcomes. A communications expert and former speechwriter, she is the co-author, with GEN (Ret.) Stanley McChrystal, of Risk: A User's Guide. Anna earned her undergraduate degree in English from Vanderbilt University, where she graduated magna cum laude. She also studied literature at St. Anne's College at the University of Oxford.

About Big Think | Smarter Faster™: Big Think is the leading source of expert-driven, educational content. With thousands of videos, featuring experts ranging from Bill Clinton to Bill Nye, Big Think helps you get smarter, faster by exploring the big ideas and core skills that define knowledge in the 21st century.
In this episode, CPT Brandon Pugh sits down with Lt Gen (Ret.) John "Jack" Shanahan, an expert in the artificial intelligence (AI) field, to discuss AI. In his final assignment he served as the inaugural Director of the U.S. Department of Defense (DoD) Joint Artificial Intelligence Center (JAIC). During this episode, Lt Gen (Ret.) Shanahan discusses how he became the DoD's lead on AI, standing up Project Maven as its director, the process of introducing AI into the DoD, and AI ethics in the DoD, including DoD Directive 3000.09, Autonomy in Weapon Systems. The episode ends with Lt Gen (Ret.) Shanahan discussing the future of AI and potential legislation that may be proposed. NSL practitioners interested in reviewing resources and scholarship produced by ADN should check out the Operational Law Handbook, the LOAC Documentary Supplement, and other significant military legal resources available at The Judge Advocate General's Legal Center and School website under publications. Connect with The Judge Advocate General's Legal Center and School by visiting our website at https://tjaglcs.army.mil/ or on Facebook (tjaglcs), Instagram (tjaglcs), or LinkedIn (school/tjaglcs).
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #54: Clauding Along, published by Zvi on March 8, 2024 on LessWrong.

The big news this week was of course the release of Claude 3.0 Opus, likely in some ways the best available model right now. Anthropic now has a highly impressive model, impressive enough that it seems as if it breaks at least the spirit of their past commitments on how far they will push the frontier. We will learn more about its ultimate full capabilities over time. We also got quite the conversation about big questions of one's role in events, which I immortalized as Read the Roon. Since publication Roon has responded, which I have edited into the post along with some additional notes. That still leaves plenty of fun for the full roundup. We have spies. We have accusations of covert racism. We have Elon Musk suing OpenAI. We have a new summary of simulator theory. We have NIST, tasked with AI regulation, literally struggling to keep a roof over their head. And more.

Table of Contents
- Introduction.
- Table of Contents.
- Language Models Offer Mundane Utility. Predict the future.
- Language Models Don't Offer Mundane Utility. Provide basic info.
- LLMs: How Do They Work? Emmett Shear rederives simulators, summarizes.
- Copyright Confrontation. China finds a copyright violation. Curious.
- Oh Elon. He sues OpenAI to… force it to change its name? Kind of, yeah.
- DNA Is All You Need. Was I not sufficiently impressed with Evo last week?
- GPT-4 Real This Time. A question of intelligence.
- Fun With Image Generation. Be careful not to have too much fun.
- Deepfaketown and Botpocalypse Soon. This will not give you a hand.
- They Took Our Jobs. They gave us a few back. For now, at least.
- Get Involved. Davidad will have a direct report, it could be you.
- Introducing. An AI-based RPG will never work, until one does.
- In Other AI News. The fallout continues, also other stuff.
- More on Self-Awareness. Not the main thing to worry about.
- Racism Remains a Problem for LLMs. Covert is a generous word for this.
- Project Maven. Yes, we are putting the AIs in charge of weapon targeting.
- Quiet Speculations. Claimed portents of various forms of doom.
- The Quest for Sane Regulation. NIST might need a little help.
- The Week in Audio. Sergey Brin Q&A.
- Rhetorical Innovation. It is not progress. We still keep trying.
- Another Open Letter. Also not really progress. We still keep trying.
- Aligning a Smarter Than Human Intelligence is Difficult. Recent roundup.
- Security is Also Difficult. This too is not so covert, it turns out.
- The Lighter Side. It's me, would you like a fries with that?

Language Models Offer Mundane Utility

Forecast almost as well, or sometimes better, than the wisdom of crowds using GPT-4? Paper says yes. Prompt they used is here. This does require an intensive process.

First, we generate search queries that are used to invoke news APIs to retrieve historical articles. We initially implement a straightforward query expansion prompt (Figure 12a), instructing the model to create queries based on the question and its background. However, we find that this overlooks sub-considerations that often contribute to accurate forecasting. To achieve broader coverage, we prompt the model to decompose the forecasting question into sub-questions and use each to generate a search query (Min et al., 2019); see Figure 12b for the prompt. For instance, when forecasting election outcomes, the first approach searches directly for polling data, while the latter creates sub-questions that cover campaign finances, economic indicators, and geopolitical events. We combine both approaches for comprehensive coverage. Next, the system retrieves articles from news APIs using the LM-generated search queries. We evaluate 5 APIs on the relevance of the articles retrieved and select NewsCatcher and Google News (Section E.2).
Our initial retrieval provides wide covera...
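The two-stage retrieval the excerpt describes (a direct query-expansion prompt, plus decomposition into sub-questions with one query each, combined before calling the news APIs) can be sketched roughly as follows. This is an illustrative outline, not the paper's actual code: `ask_model` is a stub standing in for a real LLM call, and all names and canned outputs are hypothetical.

```python
def ask_model(prompt: str) -> list[str]:
    # Stub for an LLM call. A real system would send the prompt to a model;
    # here we return canned queries keyed on the prompt's intent so the
    # sketch is self-contained and runnable.
    if "sub-questions" in prompt:
        return [
            "campaign finances 2024 election",
            "economic indicators voter sentiment",
            "geopolitical events election impact",
        ]
    return ["2024 election polling data"]


def build_search_queries(question: str, background: str) -> list[str]:
    # Approach 1 (Figure 12a in the paper): straightforward query expansion
    # from the question and its background.
    direct = ask_model(
        f"Generate search queries for: {question}\nBackground: {background}"
    )
    # Approach 2 (Figure 12b): decompose the forecasting question into
    # sub-questions, one search query per sub-question, to cover the
    # sub-considerations the direct approach tends to miss.
    subs = ask_model(
        f"Decompose into sub-questions, one search query each: {question}"
    )
    # Combine both query sets for comprehensive coverage, de-duplicated,
    # preserving order; the result feeds the news-API retrieval step.
    seen: set[str] = set()
    combined: list[str] = []
    for q in direct + subs:
        if q not in seen:
            seen.add(q)
            combined.append(q)
    return combined


queries = build_search_queries("Who will win the 2024 election?", "US politics")
```

With the stub above, `queries` starts with the direct polling-data query followed by the three sub-question queries, mirroring the election example in the excerpt.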
陳瑩 林裕豐 王瑞德 林廷輝 張禹宣 黃暐瀚 吳子嘉
The Apple Car project is dead, Project Maven-developed AI identified air strike targets, and did Amazon use AI to 'replicate the voices' of actors in the Road House remake? It's Wednesday, February 28th, and this is Engadget News.
On today's episode, the US military's mysterious project to bring modern artificial intelligence to the battlefield, told by the defense official behind it, whose job was so secretive he couldn't even tell his wife about it. Bloomberg's Katrina Manson takes host Saleha Mohsin behind the scenes for an unclassified look at Project Maven. See omnystudio.com/listener for privacy information.
Chris & Daniel explore AI in national security with Lt. General Jack Shanahan (USAF, Ret.). The conversation reflects Jack's unique background as the only senior U.S. military officer responsible for standing up and leading two organizations in the United States Department of Defense (DoD) dedicated to fielding artificial intelligence capabilities: Project Maven and the DoD Joint AI Center (JAIC). Together, Jack, Daniel & Chris dive into the fascinating details of Jack's recent written testimony to the U.S. Senate's AI Insight Forum on National Security, in which he provides the U.S. government with thoughtful guidance on how to achieve the best path forward with artificial intelligence.
Laura Nolan, Principal Software Engineer at Stanza, joins Corey on Screaming in the Cloud to offer insights on how to use SRE to avoid disastrous and lengthy production delays. Laura gives a rich history of her work with SREcon, why her approach to SRE is about first identifying the biggest fire instead of toiling with day-to-day issues, and why the lack of transparency in systems today actually hurts new engineers entering the space. Plus, Laura explains to Corey why she dedicates time to work against companies like Google who are building systems to help the government (inefficiently) select targets during wars and conflicts.

About Laura: Laura Nolan is a software engineer and SRE. She has contributed to several books on SRE, such as the Site Reliability Engineering book, Seeking SRE, and 97 Things Every SRE Should Know. Laura is a Principal Engineer at Stanza, where she is building software to help humans understand and control their production systems. Laura also serves as a member of the USENIX Association board of directors. In her copious spare time after that, she volunteers for the Campaign to Stop Killer Robots, and is half-way through the MSc in Human Factors and Systems Safety at Lund University. She lives in rural Ireland in a small village full of medieval ruins.

Links Referenced:
- Company Website: https://www.stanza.systems/
- Twitter: https://twitter.com/lauralifts
- LinkedIn: https://www.linkedin.com/in/laura-nolan-bb7429/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn.
My guest today is someone that I have been low-key annoying to come onto this show for years, and finally, I have managed to wear her down. Laura Nolan is a Principal Software Engineer over at Stanza. At least that's what you're up to today, last I've heard. Is that right?

Laura: That is correct. I'm working at Stanza, and I don't want to go on and on about my startup, but I'm working with Niall Murphy and Joseph Bironas and Matthew Girard and a bunch of other people who more recently joined us. We are trying to build a load management SaaS service. So, we're interested in service observability out of the box, knowing if your critical user journeys are good or bad out of the box, being able to prioritize your incoming requests by what's most critical in terms of visibility to your customers. So, an emerging space. Not in the Gartner Group Magic Circle yet, but I'm sure at some point [laugh].

Corey: It is surreal to me to hear you talk about your day job because for, it feels like, the better part of a decade now, "Laura, Laura… oh, you mean USENIX Laura?" Because you are on the USENIX board of directors, and in my mind, that is what is always short-handed to what you do. It's, "Oh, right. I guess that isn't your actual full-time job." It's weird. It's almost like seeing your teacher outside of the elementary school. You just figure that they fold themselves up in the closet there when you're not paying attention. I don't know what you do when SREcon is not in process. I assume you just sit there and wait for the next one, right?

Laura: Well, no. We've run four of them in the last year, so there hasn't been very much waiting, I'm afraid. Everything got a little bit smooshed up together during the pandemic, so we've had a lot of events coming quite close together. But no, I do have a full-time day job. But the work I do with USENIX is just as a volunteer.
So, I'm on the board of directors, as you say, and I'm on the steering committee for all of the global SREcon events, and I've typically served on the program committee as well. And I'm sort of there, annoying the chairs to, "Hey, do your thing on time," very much like an elementary school teacher, as you say.

Corey: I've been a big fan of USENIX for a while. One of the best interview processes I ever saw was closely aligned with evaluating candidates along with USENIX SAGE levels to figure out what level of seniority are they in different areas. And it was always viewed through the lens of in what types of consulting engagements will the candidate shine within, not the idea of, "Oh, are you good or are you crap? And spoiler, if I'm asking the question, I'm of course defaulting myself to goading you to crap." Like the terrible bespoke artisanal job interview process that so many companies do. I love how this company had built this out, and I asked them about it, and, "Oh, yeah, it dates back to the USENIX SAGE things." That was one of my first encounters with what USENIX actually did. And the more I learned, the more I liked. How long have you been involved with the group?

Laura: A relatively short period of time. I think I first got involved with USENIX in around 2015, going to [Lisa 00:03:29] and then going on to SREcon. And it was all by accident, of course. I fell onto the SREcon program committee somehow because I was around. And then because I was still around and doing stuff, I eventually got, you know, co-opted into chairing and onto the steering committee and so forth. And you know, it's like everything volunteer. I mean, people who stick around and do stuff tend to be kept around. But USENIX is quite important to me. We have an open access policy, which is something that I would like to see a whole lot more of; you know, we put everything right out there for free as soon as it is ready.
And we are constantly plagued by people saying, "Hey, where's my SREcon video? The conference was like two weeks ago." And we're like, "No, no, we're still processing the videos. They'll be there; they'll be there."We've had people, like, literally offer to pay extra money to get the videos sooner, but [laugh] we're, like, we are open access. We are not keeping the videos away from you. We just aren't ready yet. So, I love the open access policy, and I think what I like about it more than anything else is the fact that it's… we are staunchly non-vendor. We're non-technology specific and non-vendor.So, it's not, like, say, AWS re:Invent for example or any of the big cloud vendor conferences. You know, we are picking vendor-neutral content by quality. And as anyone who's ever sponsored SREcon or any of the other events will also tell you, sponsorship does not get you a talk in the conference program. So, the content selection is completely independent, and in fact, we have a complete Chinese wall between the sponsorship organization and the content organization. So, I mean, I really like how we've done that.I think, as well, it's for a long time been one of the conferences in our family of events that has had the best diversity. Not perfect, but certainly better than it was, although very, very unfortunately, I see conference diversity everywhere going down after the pandemic—particularly gender diversity—which is a real shame.Corey: I've been a fan of the SREcon conferences for a while before someone—presumably you; I'm not sure—screwed up before the pandemic and apparently thought they were talking about someone else, and I was invited to give a keynote at SREcon EMEA that I co-presented with John Looney.
Which was fun because he and I met in person for the first time three hours beforehand, beat together our talk, then showed up an hour beforehand, found there would be no confidence monitor, went away for the next 45 minutes, and basically loaded it all into short-term cache and gave a talk that we could not repeat if we had to for a million dollars, just because it was so… you're throwing the ball to your partner on stage and really hoping they're going to be able to catch it. And it worked out. It was an anger-subtext-translator skit for a bit, which was fun. All the things that your manager says but actually means, you know, the fun sort of approach. It was zany, and ideally had some useful takeaways to it.But I loved the conference. That was one of the only SREcons that I found myself not surprised to discover was coming to town the next week because for whatever reason, there's presumably a mailing list that I'm not on somewhere where I get blindsided by, "Oh, yeah, hey, didn't you know SREcon is coming up?" There's probably a notice somewhere that I really should be paying attention to, but on the plus side, I get to be delightfully surprised every time.Laura: Indeed. And hopefully, you'll be delightfully surprised in March 2024. I believe it's the 18th to the 20th, when SREcon will be coming to town in San Francisco, where you live.Corey: So historically, in addition to, you know, the work with USENIX, which is, again, not your primary occupation most days, you spent over five years at Google, which of course means that you have strong opinions on SRE. I know that that is a bit dated, where the gag was always, it's only called SRE if it comes from the Mountain View region of California, otherwise it's just sparkling DevOps.
But the initial take of a lot of the SRE stuff was, "Here's how to work at Google." It has progressed significantly beyond that to the point where companies who have SRE groups are no longer perceived incorrectly as, "Oh, we just want to be like Google," or, "We hired a bunch of former Google people."But you clearly have opinions on this. You've contributed to multiple books on SRE, you have spoken on it at length. You have enabled others to speak on it at length, which, in many ways, is by far the better contribution. You can only go so far scaling yourself, but scaling other people, that has a much better multiplier on it, which feels almost like something an SRE might observe.Laura: It is indeed something an SRE might observe. And also, you know, good catch because I really felt you were implying there that you didn't like my book contributions. Oh, the shock.Corey: No. And to be clear, I meant [unintelligible 00:08:13], strictly speaking.Laura: [laugh].Corey: Books are also a great one-to-many multiplier because it turns out, you can only shove so many people into a conference hall, but books have this ability to carry your words beyond the room that you're in, in a way that video just doesn't seem to.Laura: Ah, but open access video that was published on YouTube, like, six weeks ahead [laugh]. That scales.Corey: I wish. People say they want to write a book and I think they're all lying. I think they want to have written the book. That's my philosophy on it. I do not understand people who've written a book. Like, "So, what are you going to do now?" "I'm going to write another book." "Okay." I'm going to smile, not take my eyes off you for a second, and back away slowly because I do not understand your philosophy on that. But you've worked on multiple books with people.Laura: I actually enjoy writing. I enjoy the process of it because I always learn something when I write. In fact, I learn a lot of things when I write, and I enjoy that crafting.
I will say I do not enjoy having written things because for me, any achievement, once I have achieved it, is completely dead. I will never think of it again, and I will think only of my excessively lengthy to-do list, so I clearly have problems here. But nevertheless. It's exactly the same with programming projects, by the way. But back to SRE; we were talking about SRE. SRE is 20 now. SRE can almost drink alcohol in the US, and that is crazy.Corey: So, 2003 was the founding of it, then.Laura: Yes.Corey: Yay, I can do simple arithmetic in my head, still. I wondered how far my math skills had atrophied.Laura: Yes. Good job. Yes, apparently invented in roughly 2003. I mean, from what I understand of Google's publishing of "20 years of SRE at Google," in the absence of an actual definite start date, they've simply picked Ben Treynor's start date at Google as the start date of SRE.But nevertheless, [unintelligible 00:09:58] about 20 years old. So, is it all grown up? I mean, I think it's become heavily commodified. My feeling about SRE is that it's always been this—I mean, you said it earlier, like, it's about, you know, how do I scale things? How do I optimize my systems? How do I intervene in systems to solve problems, to make them better, to see where we're going to be in pain in six months, and work to prevent that?That's kind of SRE work to me: figure out where the problems are, figure out good ways to intervene and to improve. But there's a lot of SRE-as-bureaucracy around at the moment, where people are like, "Well, we're an SRE team, so you know, you will have your SLOs and Golden Signals, and you will have your Production Readiness Checklists, which will be the things that we say, no matter how different your system is from what we designed this checklist for, and that's it. We're doing SRE now.
It's great." So, I think we miss a lot there.My personal way of doing SRE is very much more about thinking, not so much about the day-to-day SLO [excursion-type 00:10:56] things because—not that they're not important; they are important, but they will always be there. I always tend to spend more time thinking about how do we avoid the risk of, you know, a giant production fire that will take you down for days, or God forbid, more than days, you know? The sort of big Roblox fire, or the time that Meta nearly took down the internet in late-2021, that kind of thing. So, I think that modern SRE misses quite a lot of that. It's a little bit like… when BP had the Deepwater Horizon disaster, on that very same day, they received an award for minimizing occupational safety risks in their environment. So, you know, [unintelligible 00:11:41] things like people tripping and—Corey: Must have been fun the next day. "Yeah, we're going to need that back."Laura: [laugh] people tripping and falling, and you know, hitting themselves with a hammer. They got an award because it was so safe, they had very little of that. And then this thing goes boom.Corey: And now they've tried to pivot into an optimization award for efficiency, like, we just decided to flash fry half the sea life in the Gulf at once.Laura: Yes. Extremely efficient. So, you know, I worry that we're doing SRE a little bit like BP.
We're doing it back before Deepwater Horizon.Corey: I should disclose that I started my technical career as a grumpy old Unix sysadmin—because it's not like you ever see one of those who's happy or young; didn't matter that I was 23 years old, I was grumpy and old—and I have viewed the evolution since then as going from calling myself a sysadmin to a DevOps engineer to an SRE to a platform engineer to whatever we're calling it this week. I still view it as fundamentally the same job, in the sense that the responsibility has not changed, and that is to keep the site or environment up. But the tools, the processes, and the techniques we apply to it have evolved. Is that accurate? Does it sound like I'm spouting nonsense? You're far closer to the SRE world than I ever was, but I'm curious to get your take on that perspective. And please feel free to tell me I'm wrong.Laura: No, no. I think you're completely right. And I think one of the ways it has shifted—and it's really interesting—is that when you and I were young, we could see everything that was happening. We were deploying on some sort of Linux box or other sort of Unix box somewhere, most likely, and if we wanted, we could go and see the entire source code of everything that our software was running on. And kids these days, they're coming up, and they are deploying their stuff on RDS and ECS and, you know, how many layers of abstraction are sitting between them and—Corey: "I run Kubernetes. That means I don't know where it runs, and neither does anyone else." It's great.Laura: Yeah. So, there's no transparency anymore in what's happening. So, it's very easy, you get to a point where sometimes you hit a problem, and you just can't figure it out because you do not have a way to get into that system and see what's happening. You know, even at work, we ran into a problem with Amazon-hosted Prometheus. We were like, "This will be great.
We'll just do that.” And we could not get some particular type of remote write operation to work. We just could not. Okay, so we'll have to do something else.So, one of the many, many things I do when I'm not, you know, trying to run the SREcon conference or do actual work or definitely not write a book, I'm studying at Lund University at the moment. I'm doing this master's degree in human factors and system safety. And one of the things I've realized since doing that program is, in tech, we missed this whole 1980s and 1990s discipline of cognitive systems theory, cognitive systems engineering. This is what people were doing. They were like, how can people in the control room in nuclear plants and in the cockpit in the airplane, how can they get along with their systems and build a good mental model of the automation and understand what's going on?We missed all that. We came of age when safety science was asking questions like how can we stop organizational failures like Challenger and Columbia, where people are just not making the correct decisions? And that was a whole different sort of focus. So, we've missed all of this 1980s and 1990s cognitive system stuff. And there's this really interesting idea there where you can build two types of systems: you can build a prosthesis which does all your interaction with a system for you, and you can see nothing, feel nothing, do nothing, it's just this black box, or you can have an amplifier, which lets you do more stuff than you could do just by yourself, but lets you still get into the details.And we build mostly prostheses. We do not build amplifiers. We're hiding all the details; we're building these very, very opaque abstractions. 
And I think it's to the detriment of—I mean, it makes our life harder in a bunch of ways, but I think it also makes life really hard for systems engineers coming up because they just can't get into the systems as easily anymore unless they're running them themselves.Corey: I have to confess that I have a certain aversion to aspects of SRE, and I'm feeling echoes of it around a lot of the human factors stuff that's coming out of that Lund program. And I think I know what it is, and it's not a problem with either of those things, but rather a problem with me. I have never been a good academic. I have an eighth-grade education because school is not really for me. And what I loved about being a systems administrator for years was the fact that it was like solving puzzles every day.I got to do interesting things, I got to chase down problems, and firefight all the time. And what SRE represents is a step away from that to being more methodical, to taking on keeping the site up as a discipline rather than an occupation or a task that you're working on. And I think that a lot of the human factors stuff plays directly into it. It feels like the field is becoming a lot more academic, which is a luxury we never had, when holy crap, the site is down, we're going to go out of business if it isn't back up immediately: panic mode.Laura: I got to confess here, I have three master's degrees. Three. I have problems, like I said before. I get what you mean. You don't like when people are speaking in generalizations and sort of being all theoretical rather than looking at the actual messy details that we need to deal with to get things done, right? I know. I know what you mean, I feel it too.And I've talked about the human factors stuff and theoretical stuff a fair bit at conferences, and what I always try to do is I always try and illustrate with the details.
Because I think it's very easy to get away from the actual problems and, you know, spend too much time in the models and in the theory. And I like to do both. I will confess, I like to do both. And that means that the luxury I miss out on is mostly sleep. But here we are.Corey: I am curious as far as what you've seen as far as the human factors adoption in this space because every company for a while claimed to be focused on blameless postmortems. But then there would be issues that quickly turned into a blame Steve postmortem instead. And it really feels, at least from a certain point of view, that there was a time where it seemed to be gaining traction, but that may have been a zero interest rate phenomenon, as weird as that sounds. Do you think that the idea of human factors being tied to keeping systems running in a computer sense has demonstrated staying power or are you seeing a recession? It could be I'm just looking at headlines too much.Laura: It's a good question. There's still a lot of people interested in it. There was a conference in Denver last February that was decently well attended for, you know, a first conference focusing on this issue, and there's this very vibrant Slack community, LFI, the Learning from Incidents in Software community. I will say, everything is a little bit stretched at the moment in industry, as you know, with all the layoffs, and a lot of people are just… there's definitely a feeling that people want to hunker down and do the basics to make sure that they're not seen as doing useless stuff and on the line for layoffs.But the question is, is this stuff actually useful or not? I mean, I contend that it is. I contend that we can learn from failures, we can learn from what we're doing day-to-day, and we can do things better. Sometimes you don't need a lot of learning because the biggest problem is obvious, right [laugh]?
You know, in that case, yeah, your focus should just be on solving your big obvious problem, for sure.Corey: If there was a hierarchy of needs here, on some level, okay, step one, is the building—Laura: Yes.Corey: Currently on fire? Maybe solve that before thinking about the longer-term context of what this does to corporate culture.Laura: Yes, absolutely. And I've gone into teams before where people are like, "Oh, well, you're an SRE, so obviously, you wish to immediately introduce SLOs." And I can look around and go, "Nope. Not the biggest problem right now. Actually, I can see a bunch of things are on fire. We should fix those specific things."I actually personally think that if you want to go in and start improving reliability in a system, the best thing to do is to start a weekly production meeting if the team doesn't have that: actually create a dedicated space and time for everyone to be able to get together, discuss what's been happening, discuss concerns and risks, and get all that stuff out in the open. I think that's very useful, and you don't need to spend however long it takes to formally sit down and start creating a bunch of SLOs. Because if you're not dealing with a perfectly spherical web service where you can just use the Golden Signals—if you start getting into data integrity, or backups, or any sort of asynchronous processing—these sorts of things need SLOs that are a lot more interesting than your standard error rate and latency. Error rate and latency get you so far, but it's really just very cookie-cutter stuff. But people know what's wrong with their systems, by and large. They may not know everything that's wrong with their systems, but they'll know the big things, for sure.
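To make that point concrete, here is a minimal sketch of the kind of "more interesting" SLO Laura describes: a data-freshness objective for an asynchronous pipeline, rather than plain error rate or latency. The target, goal, and function names here are hypothetical illustrations, not anything specified in the episode:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness SLO: "99% of records are processed
# within 10 minutes of arrival." (Numbers are illustrative.)
FRESHNESS_TARGET = timedelta(minutes=10)
SLO_GOAL = 0.99

def freshness_compliance(events):
    """events: list of (arrived_at, processed_at) datetime pairs.
    Returns the fraction of records processed within the target."""
    if not events:
        return 1.0  # no events: vacuously compliant
    fresh = sum(
        1 for arrived, processed in events
        if processed - arrived <= FRESHNESS_TARGET
    )
    return fresh / len(events)

now = datetime.now(timezone.utc)
sample = [
    (now, now + timedelta(minutes=3)),   # fresh
    (now, now + timedelta(minutes=8)),   # fresh
    (now, now + timedelta(minutes=25)),  # stale
    (now, now + timedelta(minutes=1)),   # fresh
]
ratio = freshness_compliance(sample)  # 3 of 4 within target = 0.75
print(f"compliance={ratio:.2f}, meets SLO: {ratio >= SLO_GOAL}")
```

The same shape works for backup age or queue lag: measure the property users actually depend on, then compare the compliant fraction against the goal.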
Give them space to talk about it.Corey: Speaking of bigger things and turning into the idea of these things escaping beyond pure tech, you have been doing some rather interesting work in an area that I don't see a whole lot of people that I talk to communicating about. Specifically, you're volunteering for the Campaign to Stop Killer Robots, which ten years ago would have made you sound ridiculous, and now it makes you sound like someone who is very rationally and reasonably calling an alarm on something that is on our doorstep. What are you doing over there?Laura: Well, I mean, let's be real, it sounds ridiculous because it is ridiculous. I mean, who would let a computer fly around the sky and choose what to shoot at? But it turns out that there are, in fact, a bunch of people who are building systems like that. So yeah, I've been volunteering with the campaign for about the last five years, since roughly around the time that I left Google, in fact, because I got interested in that around about the time that Google was doing the Project Maven work, which was when Google said, "Hey, wouldn't it be super cool if we took all of this DoD drone video footage, and, you know, did a whole bunch of machine-learning analysis on it and figured out where people are going all the time? Maybe we could click on this house and see, like, a whole timeline of people's comings and goings and which other people they are sort of in a social network with."So, I kind of said, "Ahh… maybe I don't want to be involved in that." And I left Google. And I found out that there was this campaign. And this campaign was largely lawyers and disarmament experts, people of that nature—philosophers—but also a few technologists.
And for me, having run computer systems for a large number of years at this point, the idea that you would want to rely on a big distributed system running over some janky network with a bunch of 18-year-old kids running it to actually make good decisions about who should be targeted in a conflict seems outrageous.And I think almost every [laugh] software operations person, or in fact, software engineer that I've spoken to, tends to feel the same way. And yet there is this big practical debate about this in international relations circles. But luckily, there has just been a resolution in the UN: just in the last day or two as we record this, the First Committee has, by a very large majority, voted to try and do something about this. So hopefully, we'll get some international law. The specific interventions that most of us in this field think would be good would be to limit the amount of force that an autonomous weapon, or in fact, an entire set of autonomous weapons in a region, would be able to wield because there's a concern that should there be some bug or problem or a sort of weird factor that triggers these systems to—Corey: It's an inevitability that there will be. Like, that is not up for debate. Of course, it's going to break. In 2020, the template slide deck that AWS sent out for re:Invent speakers had a bunch of clip art, and one of them was a line art drawing of a ham with a bone in it. So, I wound up taking that image, slapping it on a t-shirt, captioning it "AWS Hambone," and selling that as a fundraiser for 826 National.Laura: [laugh].Corey: Now, what happened next is that for a while, anyone who tweeted the phrase "AWS Hambone" would find themselves banned from Twitter for the next 12 hours due to some weird algorithmic thing where it thought that was doxxing or harassment or something.
And people on the other side of the issue that you're talking about are straight-facedly suggesting that we give that algorithm [unintelligible 00:24:32] tool a gun.Laura: Or many guns. Many guns.Corey: I'm sorry, what?Laura: Absolutely.Corey: Yes, or missiles or, heck, let's build a whole bunch of them and turn them loose with no supervision, just like we do with junior developers.Laura: Exactly. Yes, so many people think this is a great idea, or at least they purport to think this is a great idea, which is not always the same thing. I mean, there's lots of different vested interests here. Some people who are proponents of this will say, well, actually, we think that this will make targeting more accurate, and fewer civilians will die as a result. And the question there that you have to ask is—there's a really good book called Drone by Grégoire Chamayou, and he says that there are actually three meanings of accuracy.So, whether you're hitting what you're aiming at is one thing. And that's been a solved problem in military circles for quite some time. You've got, you know, laser targeting; very accurate. Then the other question is, how big is the blast radius? So, that's just a matter of, you know, how big an explosion are you going to get? That's not something that autonomy can help with.The only thing that autonomy could even conceivably help with in terms of accuracy is better target selection. So, instead of selecting targets that are not valid targets, selecting more valid targets. But I don't think there's any good reason to think that computers can solve that problem. I mean, in fact, if you read stuff that military experts write on this, and I've got, you know, lots of academic handbooks on military targeting processes, they will tell you, it's very hard and there are a lot of gray areas, a lot of judgment. And that's exactly what computers are pretty bad at.
Although mind you, I'm amused by your Hambone story and I want to ask if AWS Hambone is a database?Corey: Anything is a database, if you hold it wrong.Laura: [laugh].Corey: It's fun. I went through a period of time where, just for fun, I would ask people to name an AWS service and I would talk about how you could use it incorrectly as a database. And then someone mentioned, “What about AWS Neptune,” which is their graph database, which absolutely no one understands, and the answer there is, “I give up. It's impossible to use that thing as a database.” But everything else can be. Like, you know, the tagging system. Great, that has keys and values; it's a database now. Welcome aboard. And I didn't say it was a great database, but it is a free one, and it scales to a point. Have fun with it.Laura: All I'll say is this: you can put labels on anything.Corey: Exactly.Laura: We missed you at the most recent SREcon EMEA. There was a talk about Google's internal Chubby system and how people started using it as a database. And I did summon you in Slack, but you didn't show up.Corey: No. Sadly, I've gotten a bit out of the SRE space. And also, frankly, I've gotten out of the community space for a little while, when it comes to conferences. And I have a focused effort at the start of 2024 to start changing that. I am submitting CFPs left and right.My biggest fear is that a conference will accept one of these because a couple of them are aspirational. “Here's how I built the thing with generative AI,” which spoiler, I have done no such thing yet, but by God, I will by the time I get there. I have something similar around Kubernetes, which I've never used in anger, but soon will if someone accepts the right conference talk. This is how I learned Git: I shot my mouth off in a CFP, and I had four months to learn the thing. It was effective, but I wouldn't say it was the best approach.Laura: [laugh]. 
You shouldn't feel bad about lying about having built things in Kubernetes, and with LLMs because everyone has, right?Corey: Exactly. It'll be true enough by the time I get there. Why not? I'm not submitting for a conference next week. We're good. Yeah, Future Corey is going to hate me.Laura: Have it build you a database system.Corey: I like that. I really want to thank you for taking the time to speak with me today. If people want to learn more, where's the best place for them to find you these days?Laura: Ohh, I'm sort of homeless on social media since the whole Twitter implosion, but you can still find me there. I'm @lauralifts on Twitter and I have the same tag on BlueSky, but haven't started to use it yet. Yeah, socials are hard at the moment. I'm on LinkedIn. Please feel free to follow me there if you wish to message me as well.Corey: And we will, of course, put links to that in the [show notes 00:28:31]. Thank you so much for taking the time to speak with me. I appreciate it.Laura: Thank you for having me.Corey: Laura Nolan, Principal Software Engineer at Stanza. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that soon—due to me screwing up a database system—will be transmogrified into a CFP submission for an upcoming SREcon.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.
In this episode, Brock talks with Colin Carroll. Colin is a former Marine Corps recon and intelligence officer and today works at Anduril on strategy and growth. Colin has been a part of several unique teams within and adjacent to the Department of Defense. We get into his time at Project Maven as well as the Joint AI Center to talk about building innovative teams from scratch, why people are the most important ingredient when determining success, and how those people influence the early culture of an organization. Colin also talks about some of the technical advancements AI is bringing to the battlefield and the changing state of the defense technology business, in particular how people should be thinking about how to win in defense. Show Notes: Getting fired from the Joint AI Center. 0:39 Command climate surveys and morale in the Department of Defense. 3:02 Career in DoD and Project Maven 12:06 Military service, Intel, and AI development. 15:30 Autonomous systems and program design in government. 19:02 Team dynamics in the DoD. 25:34 Building and maintaining a high-performing team. 29:36 Hiring and evaluating candidates for innovative companies. 32:44 Career paths and personal growth. 39:15 Disrupting the defense industry with AI. 43:17 Government procurement and defense industry. 46:42 Tailoring government go-to-market strategies for different stages of a company's lifecycle. 50:03 Defense industry strategy. 56:28 Scaling production and command control for multidomain unmanned systems. 59:33 Defense tech gaps and data needs for autonomous systems. 1:02:15 The data problem in the DoD. 1:10:03 Tension between industry and government in defense tech. 1:13:23 DOD acquisitions challenges and cultural changes. 1:16:53 -- The Scuttlebutt Podcast - The podcast for service members and veterans building a life outside the military. The Scuttlebutt Podcast features discussions on lifestyle, careers, business, and resources for service members. 
Show host, Brock Briggs, talks with a special guest from the community committed to helping military members build a successful life, inside and outside the service. Follow along: • Episode & transcript: https://www.scuttlebuttpodcast.co/ • Brock: https://www.brockbriggs.com/
Google -- or, more properly, Alphabet -- is a huge company, and is at the bleeding edge of numerous technological innovations. So, while it wasn't necessarily a surprise that Uncle Sam wanted Google's help building AI, it certainly disturbed a great many people, some of whom were Google's own engineers. So what exactly happened? Join Ben and Matt as they dive into the strange story of Project Maven in this classic episode. They don't want you to read our book: https://static.macmillan.com/static/fib/stuff-you-should-read/ See omnystudio.com/listener for privacy information.
As the birthplace of semiconductors and computers, Silicon Valley has historically been a major center of the defense industry. That changed with the Vietnam War, when antiwar protesters burned down computing centers at multiple universities to oppose the effort in Southeast Asia, as well as the rise of countercultural entrepreneurs who largely determined the direction of the internet age. Today, there are once again growing ties between tech companies and the Pentagon as the need for more sophisticated AI tools for defense becomes paramount. But as controversies like Google's launch of Project Maven attest, there remains a wide chasm of distrust between many software engineers and the Pentagon's goals for a robust defense of the American homeland. In this episode of “Securities”, host Danny Crichton and Lux founder and managing partner Josh Wolfe sit down with retired lieutenant general Jack Shanahan to talk about rebuilding the trust needed between these two sides. Before retirement, Shanahan was the inaugural director of the Pentagon's Joint Artificial Intelligence Center, a hub for connecting frontier AI tech into all aspects of the Defense Department's operations. We talk about the case of Project Maven and its longer-term implications, the ethical issues that lie at the heart of AI technologies in war and defense, as well as some of the lessons learned from Russia's invasion of Ukraine the past year.
This episode explores how militaries around the world are beginning to integrate artificial intelligence into their capabilities, from surveillance drones to autonomous weapons. We discuss key use cases like intelligence analysis and examples like Project Maven, while also examining major ethical dilemmas military AI creates around lethal authority and accountability. From cybersecurity to autonomous drones, the intersection of AI and defense raises profound governance challenges that will reshape global stability. This podcast was generated with the help of artificial intelligence. Although I work for the University of the Armed Forces in Germany, I let Claude 2 do its work and didn't interfere with the content. Music credit: Modern Situations by Unicorn Heads
How is the DoD thinking about deploying AI? What are the challenges and opportunities involved in building out AI assurance? To discuss, I brought on Dr. Jane Pinelis, Chief AI Engineer at The Johns Hopkins University Applied Physics Laboratory. She was previously the Chief of the Test, Evaluation, and Assessment branch at the Department of Defense Joint Artificial Intelligence Center (JAIC). Prior to joining the JAIC, Dr. Pinelis served as the Director of Test and Evaluation for USDI's Algorithmic Warfare Cross-Functional Team, better known as Project Maven. Cohosting is Karson Elmgren of CSET. Outro music: https://www.youtube.com/watch?v=HgzGwKwLmgM Learn more about your ad choices. Visit megaphone.fm/adchoices
Virgin Galactic returns to space. Space Force is partnering with the Air Force on offensive space cyber operations. South Korea delivers its first commercial-grade satellite into orbit, and more. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our weekly intelligence roundup, Signals and Space, and you'll never miss a beat. And be sure to follow T-Minus on Twitter and LinkedIn. T-Minus Guest: Our featured interview today is with Jon Check, Executive Director of Cyber Protection Solutions at Raytheon Intelligence and Space. He joins us to discuss Raytheon's support for the National Collegiate Cyber Defense Competition. You can follow Jon on LinkedIn and Twitter. Selected Reading: VIRGIN GALACTIC COMPLETES SUCCESSFUL SPACEFLIGHT - Virgin Galactic; 5.24 Schriever Spacepower Series: Lt Gen Stephen N. Whiting - Mitchell Institute for Aerospace Studies; Unified and integrated: How Space Force envisions the future of data-sharing for space operations - Breaking Defense; NGA making 'significant advances' months into AI-focused Project Maven takeover - Breaking Defense; National Geospatial-Intelligence Agency to demo data processing node - C4ISRNET; (5th LD) S. Korea launches space rocket Nuri following delay - Yonhap News Agency; Fleet Space raises A$50M Series C to globalise revolutionary critical minerals exploration tech - Fleet PR; Dish in Talks to Sell Wireless Plans Through Amazon - WSJ; Does the roar of rocket launches harm wildlife? These scientists seek answers - Nature; ESA receives Space for Climate Protection Award - ESA; LEGO sends 1,000 astronauts to space and lands them safely in a mini space-shuttle - Space.com. Audience Survey: We want to hear from you! Please complete our 4-question survey. It'll help us get better and deliver you the most mission-critical space intel every day. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit.
Contact us at space@n2k.com to request more info. Want to join us for an interview? Please send your pitch to space-editor@n2k.com and include your name, affiliation, and topic proposal. T-Minus is a production of N2K Networks, your source for strategic workforce intelligence. © 2023 N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
This week, A'ndre chats with Colin Carroll, the Head of Applied Intuition's Government Relations team, and a vocal expert on leveraging autonomous systems in defense. We break down Colin's perspectives on how the Defense Department has leveraged autonomous systems in the past and present, and Colin outlines what he sees as the biggest blockers for rapid and efficient development in and out of the Department. Colin discusses his early career experience at the Joint Artificial Intelligence Center and Project Maven, his thoughts on whether autonomy development is siloed across the branches of the Armed Forces, and if autonomy, artificial intelligence, and machine learning tools are shaping operations/strategy in Ukraine and the Indo-Pacific. Colin provides his commentary on how the Defense Department can better leverage the private sector, and also plugs Applied Intuition's upcoming event, Nexus 23.
I was pleased to have Colin Carroll join me on the Acquisition Talk podcast to discuss the acquisition of machine learning in the Department of Defense. He is the Director of Government Relations at Applied Intuition, a company that enables autonomous vehicles through simulation development and validation. Before that, Colin held a number of positions including Chief Operating Officer at the JAIC, Mission Integration Lead for Project Maven, and 10 years of active service in the Marine Corps. 2:30 - Project Maven started with Bob Work and 10 slides 6:30 - Everyone in the Pentagon's in the fight 10:30 - There's not yet an urgency like in 2009 with MRAP 12:30 - How JAIC operations differed from Project Maven 15:00 - DoD autonomy programs often have zero data 17:00 - How to structure AI/ML programs in DoD 19:00 - The Joint Common Foundation is no more 24:40 - Most of DoD's data is owned by industry 27:00 - DoD is buying brittle AI/ML models 29:00 - Competing with GOTS software 31:00 - Separating HW acquisition from SW 37:00 - DoD's $2B AI/ML spending estimate likely high 42:00 - We don't win by reforming SBIR 59:20 - The buzzword of JADC2 1:05:16 - The idea behind Title 10 failed 1:09:50 - Force Design 2030 and the future fight 1:20:10 - How to build a defense team at a tech company This podcast was produced by Eric Lofgren. You can follow me on Twitter @AcqTalk and find more information at https://AcquisitionTalk.com
This episode examines how special operations forces are integrating high-tech tools like artificial intelligence and machine learning to optimize their operations. Dr. Richard Shultz of the Fletcher School of Law and Diplomacy and Gen. Richard Clarke, commander of US Special Operations Command, join the podcast to trace the history of US special operations forces' efforts in Iraq to adapt to the counterterrorism fight there, explain how these forces made use of data to enable a remarkably rapid operational tempo, and describe how a program called Project Maven took shape to harness new technological capabilities.
On Today's Show: "There will always be incidents to respond to though. And that's part of this mindset too, is that you're assuming breach much like in zero trust. You're awaiting it, but you already have mechanisms in place to help you in that situation" - Meghan Good. As cyber threats continue to evolve, security is more important than ever. It is no longer effective to just meet basic requirements. In today's world, security needs to be proactive. It needs to look ahead and predict the future threats it may need to fend off. That's exactly what the Beyond Compliance approach is, and why it's such a game changer. Meghan Good is VP and Director of the Cyber Accelerator at Leidos. Today, she joins to explain what Beyond Compliance means, how it works, and the best way for organizations to begin with this modern-day approach to cybersecurity. Key takeaways: how to think Beyond Compliance; overcoming the challenges involved in always looking ahead; why collective defense is the way forward. Links: www.leidos.com/cyber
In episode 35 of The Gradient Podcast, guest host Sharon Zhou speaks to Jack Shanahan. John (Jack) Shanahan was a Lieutenant General in the United States Air Force, retired after a 36-year military career. He was the inaugural Director of the Joint Artificial Intelligence Center (JAIC) in the U.S. Department of Defense (DoD). He was also the Director of the Algorithmic Warfare Cross-Functional Team (Project Maven). Currently, he is a Special Government Employee supporting the National Security Commission on Artificial Intelligence; serves on the Board of Advisors for the Common Mission Project; is an advisor to The Changing Character of War Centre (Oxford University); is a member of the CACI Strategic Advisory Group; and serves as an Advisor to the Military Cyber Professionals Association. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter. Outline: (00:00) Intro; (01:20) Introduction to Jack and Sharon; (07:30) Project Maven; (09:45) Relationship of Tech Sector and DoD; (16:40) Need for AI in DoD; (20:10) Bridging the tech-DoD divide; (30:00) Conclusion. Episode Links: John N.T. Shanahan Wikipedia; AI To Revolutionize U.S. Intelligence Community With General Shanahan. Email: aidodconversations@gmail.com. Get full access to The Gradient at thegradientpub.substack.com/subscribe
On Today's Show: "If you can get the trust relationship right, when humans and machines actually work together to solve problems you can really transform the way that business is done. If you build that relationship with intention based on trust, then humans actually really like working with AI-enabled capabilities traditionally." - Ron Keesing. Some of our best work in technology comes when humans and machines work together. That also applies to AI-enabled tech. But to see those rewards, like any relationship, trust needs to be present, and that means it needs to be built. Building that trust is a task the team at Leidos is heavily focused on. Today, Ron Keesing, Senior VP for Technology Integration at Leidos, and Tifani O'Brien, Lead for the AI and Machine Learning Accelerator at Leidos, join us to walk us through how they're doing that and the challenges they face. Key takeaways: how trust in AI impacts the application of the technology; methods for evoking trust in AI; how AI can help humans unlearn biases. Links: www.leidos.com/ai
Where should tech builders draw the line on AI for military or surveillance? Just because it can be built doesn't mean it should be. At what point do we blow the whistle, call out the boss, and tell the world? Find out what it's like to sound the alarm from inside a big tech company. Laura Nolan shares the story behind her decision to leave Google in 2018 over its involvement in Project Maven, a Pentagon project that used Google's AI technology. Yves Moreau explains why he is calling on academic journals and international publishers to retract papers that use facial recognition and DNA profiling of minority groups. Yeshimabeit Milner describes how the non-profit Data for Black Lives is pushing back against AI-powered tools used to surveil and criminalize Black and Brown communities. Shmyla Khan describes being on the receiving end of technologies developed by foreign superpowers as a researcher with the Digital Rights Foundation in Pakistan. IRL is an original podcast from Mozilla, the non-profit behind Firefox. In Season 6, host Bridget Todd shares stories of people who make AI more trustworthy in real life. This season doubles as Mozilla's 2022 Internet Health Report. Go to the report for show notes, transcripts, and more.
Facebook's parent company Meta has used AI to develop a new method of making concrete that it claims produces 40 percent less carbon emissions than standard mixes, and is already using it in its latest data center. https://www.newscientist.com/article/2317122-meta-is-using-ai-to-create-low-carbon-concrete-for-its-data-centres/ Meta also says it's working on a long-term research effort studying how the human brain learns language to improve artificial intelligence. https://www.cnet.com/science/facebook-parent-meta-is-studying-the-human-brain-to-improve-ai/ Project Maven, once the Pentagon's priority program to accelerate the use of artificial intelligence in the military, is now being transferred to the National Geospatial-Intelligence Agency, according to senior officials in the intelligence community. https://breakingdefense.com/2022/04/pentagons-flagship-ai-effort-project-maven-moves-to-nga/ An algorithm looking for child neglect is a cause for concern. https://apnews.com/article/child-welfare-algorithm-investigation-9497ee937e0053ad4144a86c68241ef1 The Department of Energy earlier this month created the Artificial Intelligence Advancement Council to coordinate funding and development of algorithms and hold regulators accountable for their use. https://www.fedscoop.com/doe-ai-advancement-council-launched/ Visit www.integratedaisolutions.com
Andy and Dave discuss the latest in AI news and search, including a report from the Government Accountability Office, recommending that the Department of Defense should improve its AI strategies and other AI-related guidance [1:25]. Another GAO report finds that the Navy should improve its approach to uncrewed maritime systems, particularly in its lack of accounting for the full costs to develop and operate such systems, but also recommends the Navy establish an “entity” with oversight for the portfolio [4:01]. The Army is set to launch a swarm of 30 small drones during the 2022 Experimental Demonstration Gateway Exercise (EDGE 22), which will be the largest group of air-launched effects the Army has tested [5:55]. DoD announces its new Chief Digital and AI Officer, Dr. Craig Martell, former head of machine learning for Lyft, and the Naval Postgraduate School [7:47]. And the National Geospatial-Intelligence Agency (NGA) takes over operational control of Project Maven's GEOINT AI services [9:55]. Researchers from Princeton and the University of Chicago create a deep learning model of “superficial face judgments,” that is, how humans judge impressions of what people are like, based on their faces; the researchers note that their dataset deliberately reflects bias [12:05]. And researchers from MIT, Cornell, Google, and Microsoft present a new method for completely unsupervised label assignments to images, with STEGO (self-supervised transformer with energy-based graph optimization), allowing the algorithm to find consistent groupings of labels in a largely automated fashion [18:35]. And elicit.org provides a “research discovery” tool, leveraging GPT-3 to provide insights and ideas to research topics [24:24]. Careers: https://us61e2.dayforcehcm.com/CandidatePortal/en-US/CNA/Posting/View/1624
Photo: A stand-alone exhibit entitled, “Innovations in Defense: Artificial Intelligence and the Challenge of Cybersecurity,” features Pittsburgh-based team ForAllSecure's Mayhem Cyber Reasoning System. The system took first place at the August 2016 Cyber Grand Challenge finals, beating out six other computers. The Mayhem CRS was put on display at the Smithsonian's National Museum of American History. The exhibit was produced by the Lemelson Center for the Study of Invention and Innovation. DoD photo What is Project Maven? Francis Rose, @FrancisRoseDC @FedScoop, host, Government Matters (Washington, D.C.); NationalDefenseWeek.com and francisrose.com; The Daily Scoop, The Fed Scoop podcast https://www.fedscoop.com/radio/project-maven-is-moving-improving-how-citizens-interact-with-government-what-the-army-gains-from-cloud%EF%BF%BC/
On today's episode of The Daily Scoop Podcast, the Department of Energy launches the Artificial Intelligence Advancement Council to coordinate funding and development of algorithms. The latest update of the president's management agenda includes five life experiences the Office of Management and Budget believes the government can improve on. Suzette Kent, CEO at Kent Advisory Services and former federal chief information officer, discusses OMB's approach to how citizens interact with government. The National Geospatial-Intelligence Agency will take over Project Maven over the course of this fiscal year. Rear Admiral Danelle Barrett (USN, ret.), former deputy chief information officer of the Navy and former director of current operations at U.S. Cyber Command, explains the importance of the program and what a successful transition would look like. The Army is about seven years into its cloud journey now. Colten O'Malley, deputy commander of U.S. Army Command & Control Support Agency, tells FedScoop's Billy Mitchell how important cloud has been to the service. The Daily Scoop Podcast is available every weekday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Google Podcasts, Spotify and Stitcher. And if you like what you hear, please let us know in the comments.
The Pentagon's marquee artificial intelligence initiative, known as Project Maven, is moving to a new home. It's leaving the Office of the Secretary of Defense. For where it's headed, turn to Federal News Network's Justin Doubleday reporting from the GEOINT conference in Denver.
On today's episode of The Daily Scoop Podcast, the fifth generation of a key military cyber training program is under development. The Joint All-Domain Command and Control (JADC2) operation will get a new leader at the Department of Defense, Lt. Gen. Mary O'Brien. Lt. Gen. Jack Shanahan (USAF-ret.), former director of DOD's Joint Artificial Intelligence Center (JAIC) and former leader of Project Maven, discusses the role of the JADC2 leader in coordinating all the pieces of the operation across the department. Okta Federal Chief Security Officer Sean Frazier discusses how organizations need to keep their cybersecurity posture flexible and agile even as employees begin returning to the office. This interview is underwritten by Okta. Rear Adm. Michael Ryan, commander of Coast Guard Cyber Command, discusses the threat landscape facing USCG today, explains how they are reducing cyber risk and outlines the three lines of effort in the Coast Guard Cyber Strategic Outlook. The Daily Scoop Podcast is available every weekday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Google Podcasts, Spotify and Stitcher. And if you like what you hear, please let us know in the comments.
Amina joins us to discuss her journey at Google, working on Project Maven. Interview with Amina Al Sherif, Chief Innovation Officer at Anno.Ai. Follow her on LinkedIn: https://bit.ly/34prFVQ or Twitter: https://bit.ly/3sbMEUF This episode is brought to you by EthicsGrade, an ESG Ratings agency with a particular focus on Technology Governance, especially AI Ethics. You can find more information about EthicsGrade here: https://www.ethicsgrade.io/ You can also follow EthicsGrade on Twitter (@EthicsGrade) and LinkedIn: https://bit.ly/2JCiQOg Connect with Us: Join our Slack channel for more conversation about the big ethics issues that rise from AI: https://bit.ly/3jVdNov Follow Are You A Robot? on Twitter, Instagram, and Facebook: @AreYouARobotPod Follow our LinkedIn page: https://bit.ly/3gqzbSw Check out our website: https://www.areyouarobot.co.uk/ Resources: Amina's blog: https://bit.ly/3gggEJm AYAR? episode “Killer Robots” with Wanda and Richard: https://bit.ly/3gc9Z2X
Are you aware of Google's AI venture called DeepMind? We should be! It may control the world one day, and we are feeding it all the information to accomplish it. It is also deeply involved with Project Maven, the U.S. military's move to turn AI into a war machine. Is Google creating a superhuman that will one day replace man, or are we already there? Email us at: downtherh@protonmail.com SHOW NOTES: https://singularityhub.com/2020/07/26/deepminds-newest-ai-programs-itself-to-make-all-the-right-decisions/ https://venturebeat.com/2021/12/08/deepmind-makes-bet-on-ai-system-that-can-play-poker-chess-go-and-more/ https://www.thetimes.co.uk/article/1fc096ce-5919-11ec-81f2-17f963b74220?shareToken=f82596f3d92e69d99b8b3b151c492235 https://globalnews.ca/news/4125382/google-pentagon-ai-project-maven/ https://futurism.com/the-byte/air-forces-project-maven-goes-live https://www.defenseone.com/technology/2018/07/google-deepmind-researchers-pledge-lethal-ai/149819/ https://venturebeat.com/2019/11/08/the-u-s-military-algorithmic-warfare-and-big-tech/ https://abigailshrier.substack.com/p/what-i-told-the-students-of-princeton --- Send in a voice message: https://anchor.fm/darrell-g-fortune/message
Google has entered the ring alongside Microsoft, Amazon, and Oracle to bid for the Department of Defense's Joint Warfighting Cloud Capability contract. You may remember back in 2018 when Google employees protested Project Maven, a Google DoD AI project that among other things sought to increase the accuracy of drone strikes. Google did not renew that contract, but now they are back in earnest. Good. China and Russia seek to destroy us and the rules-based global order that has led to the most progress in the history of humanity. We need our best companies and smartest minds on this. Would love to hear from you. Rate, comment, subscribe on Apple Podcasts and for the video edition- YouTube. Follow me on Twitter, Instagram, TikTok where I always post when a new show drops.
On today's episode of The Daily Scoop Podcast, the Office of Management and Budget is looking to hire a federal chief statistician, a position that has been vacant for almost two years. Steven Grundman, Senior Fellow, Atlantic Council, discusses how the defense industry should be evaluating their solutions to better fit problems the Department of Defense is trying to solve. Lt. Gen. Jack Shanahan (USAF-ret.), former Director of the Joint Artificial Intelligence Center (JAIC), Department of Defense, explains the significance of speed in artificial intelligence projects at the Pentagon and lessons learned from his experiences at the JAIC and standing up Project Maven. The Daily Scoop Podcast is available every weekday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Google Podcasts, Spotify and Stitcher. And if you like what you hear, please let us know in the comments.
This episode welcomes Jack Poulson to discuss Project Maven and the relationship between tech and the US military. Last month, Jack released a report at his organization, Tech Inquiry, revealing the extensive relationship between tech corporations and the US military via Project Maven. We discuss the kinds of technologies in the project, including drone surveillance, automated military tanks, location surveillance, and social media surveillance. We also discuss what we can infer about how the project, and some of these technologies, might work together. Jack Poulson is the Executive Director of Tech Inquiry and former Google AI Research Scientist and Assistant Professor of Mathematics at Stanford. You can follow him on Twitter at @_jack_poulson.
For the military, capabilities in the field matter most, not R&D. So, when it comes to artificial intelligence, the Defense Department has been moving quickly by standing up a special team, like a startup enterprise. Its first pilot project, “Project Maven,” began as an intelligence application. Now the push is on to apply it in other areas. Rob and Jackie sat down with retired Lt. Gen. Jack Shanahan, the first director of the Defense Department's Joint Artificial Intelligence Center (JAIC), to discuss how AI is being used in the defense world and the implications for the broader AI ecosystem. Mentioned: Daniel Castro, Michael McLaughlin, “Who Is Winning the AI Race: China, the EU, or the United States? — 2021 Update” (Center for Data Innovation, 2021); Rob Atkinson, Jackie Whisman, “Podcast: Innovating in the Defense Sector to Remain Competitive With China, Featuring Michael Brown” (ITIF, 2021). Related: Event, “How to Deepen Transatlantic Cooperation in AI for Defense” (CDI, 2021); Rob Atkinson, “Emerging Defense Technologies Need Funding to Cross ‘The Valley of Death'” (RealClear Defense, 2020); ITIF, “ITIF Technology Explainer: What Is Artificial Intelligence?” (ITIF, 2018).
The Sunday Times’ tech correspondent brings on Cade Metz, New York Times tech journalist and author of Genius Makers, to talk about the rise of artificial intelligence (3:00), the most important auction in tech (4:35), Europe’s AI crackdown (7:40), Geoff Hinton and neural networks (10:00), how AI starts to spread (13:00), Deepmind’s Demis Hassabis (18:20), why he turned down Facebook’s takeover bid (21:00), Project Maven (23:20), the AI “arms race” with China (25:25), whether artificial general intelligence is possible (29:20), the AlphaGo moment (33:00), Move 37 (38:10), what AI disrupts next (42:00), bias (45:05), the robot arm room (51:30), and the Rubik’s cube solution (56:15) Support this show http://supporter.acast.com/dannyinthevalley. See acast.com/privacy for privacy and opt-out information.
Land of the Giants host Shirin Ghaffary and co-host Alex Kantrowitz explain how a secret contract with the US Department of Defense prompted a showdown between Google and its engineers. This is a preview of Land of the Giants Season 3, Episode 5: A Military Contract Tests Google’s Open Culture. Check out the full episode here. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of 2COT, Vishal dives deeper into a project that set the course for a tech giant that now faces scrutiny from its own employees in the form of an organized union. What prompted this response? And is it a solution that will spark change? Only time will tell. Show Notes: Articles: The Intercept: https://theintercept.com/2019/03/01/google-project-maven-contract/ The Bulletin: https://thebulletin.org/2017/12/project-maven-brings-ai-to-the-fight-against-isis/ ProPublica: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing Videos: The Verge's Converge: https://youtu.be/42JQyffAONw The full interview with Bernie Sanders: https://youtu.be/yR7YT7sAZbY Leave us a Review: If you enjoy listening to our podcast, we'd love for you to leave us a review wherever you happen to listen to podcasts. It helps us gain exposure and keep making quality episodes. How to reach us? The best way to reach us is on Twitter @2centsthursday. Hope to see you there!
In this episode of Phoenix Cast, hosts John, Kyle, and Rich talk about the compromise of the social media app Parler. They discuss how a sequence of unprecedented events led to 80 terabytes of the site's data being scraped and the cybersecurity lessons to be learned from this event. Episode Links: Wired article: https://www.wired.com/story/parler-hack-data-public-posts-images-video/ Insecure direct object reference: https://www.acunetix.com/blog/web-security-zone/what-are-insecure-direct-object-references/#:~:text=Insecure%20direct%20object%20references%20(IDOR,control%20and%2For%20authorization%20checks. Project Maven: https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html Share your thoughts with us on Twitter: @USMC_TFPhoenix
In this episode of By Any Means Necessary, hosts Sean Blackmon and Jacquie Luqman are joined by Dr. Jack Rasmus, economist, radio show host, & author of 'The Scourge of Neoliberalism,' to discuss his new article, "What Happens January 6th, 20th & After?," why he sees the January 6th 'circus' of right-wingers descending on Washington, D.C. as a central part of the fight for the Republican Party, and why he thinks 'the real problem' isn't Donald Trump but his 70 million supporters.In the second segment, Sean and Jacquie are joined by Chris Garaffa, editor of TechForThePeople.org to discuss the newly-formed Alphabet Workers Union made up of Google workers, why the workers felt the need to form a labor union, and how resistance to the US government's intrusive "Project Maven" artificial intelligence project ties into the workers' increasing awareness of their collective bargaining power.In the third segment, Sean and Jacquie are joined by journalist Alan MacLeod to discuss his recent article "Fingers Point to US-Backed Gov't in Colombia's Ninetieth Massacre of the Year," why it appears left-wing former combatants are being targeted almost exclusively, and how the bipartisan US government support for the Colombian regime facilitates the deadly violence.Later in the show, Sean and Jacquie are joined by Dr. Jared Ball, Professor of Communication Studies at Morgan State University, the curator of imixwhatilike.org, and author of “The Myth and Propaganda of Black Buying Power,” to discuss the activation of Wisconsin National Guard ahead of the announcement of whether the police officer who killed Jacob Blake will be held legally accountable, why it seems establishment Democrats only "engage Blackness on the foundations of popular culture," and how the increasing racialization of the two major parties in southern states points to the need for alternative political possibilities.
In this segment of By Any Means Necessary, hosts Sean Blackmon and Jacquie Luqman are joined by Chris Garaffa, editor of TechForThePeople.org to discuss the newly-formed Alphabet Workers Union made up of Google workers, why the workers felt the need to form a labor union, and how resistance to the US government's intrusive "Project Maven" artificial intelligence project ties into the workers' increasing awareness of their collective bargaining power.
The tech world is at a tipping point. From data analytics to AI, the last few years have seen an explosion in emerging tech, and it's only accelerating. But are we pausing to think: just because we can, should we?
This week I speak to Lieutenant General Jack Shanahan, recently retired Director of the Pentagon’s Joint Artificial Intelligence Center, or JAIC. He was instrumental in starting Project Maven to integrate state-of-the-art computer vision into drone technology. He then started the JAIC, the central hub for the military’s AI efforts. Gen. Shanahan spoke about the challenges of nurturing innovation within a rigid and multilayered organization like the DOD and the threats the US faces ahead.
Join hosts Guy Snodgrass and Mark Solomons as they discuss operationalizing AI--where the rubber meets the road with artificial intelligence--with Joe Larson. Joe was recently the deputy director for the Pentagon's Project MAVEN and has years of experience introducing AI to large, diverse organizations. Listen in to hear about the lessons ANY organization can benefit from when seeking to explore AI's advantages... and the potential pitfalls you'll want to avoid. At the end of the day, Mark is concerned about one thing: is AI coming for his job? --- Support this podcast: https://anchor.fm/htlpodcast/support
Join Joe Larson as we discuss the national security strategy behind the Department of Defense's pursuit of artificial intelligence. Joe is a retired U.S. Marine Corps Reserve lieutenant colonel who served as deputy director for Project MAVEN, an intelligence program using AI to rapidly categorize video imagery. Joe has years of experience with artificial intelligence and in this episode he walks us through the "why" behind many of today's AI-related issues. --- Support this podcast: https://anchor.fm/htlpodcast/support
At the start of the year, it caused quite a stir that both big tech giants are changing the chat software used by millions (billions): Google is simply phasing out its offering and steering users toward paid services, while Facebook is merging services, and in the end it is privacy that pays the price. Not much is known about these moves yet, but we discuss what is known and try to guess the rest. Something has smelled off around Google anyway since the Project Maven scandal broke. Analysts and former employees alike, including senior executives, speak of a fundamental shift in the company culture, all manner of organizational scandals, and declining transparency. Recently the two founders, Sergey Brin and Larry Page, also left the company. Prompted by Shoshana Zuboff's recently published book (The Age of Surveillance Capitalism), we also talk about surveillance scandals, and about how exposed we are, even unknowingly, to companies that make money from our alternative data (e.g., location data, interests, eavesdropping, etc.). We can't even trust the Pokémon.
(On YouTube, the English-spoken audio can be subtitled in Dutch.) When Zach Vorhies realized that his employer, Google, was going to not only tamper with the US elections but use that tampering to essentially overthrow the United States, he decided to blow the whistle. He took the evidence he had gathered to the Department of Justice and came forward through Project Veritas in August of 2019, where those documents can still be found online. Zach has since been talking to the press about the implications of what he saw happen at Google. He figures it's them who are breaking the law, not him, and that his best chance of making it through unhurt, and of having the story heard, was to tell and retell it. In this interview with Rico Brouwer of the Dutch online channel Café Weltschmerz, Zach not only details how the search engine actually works to rank results but also explains how Google may edit content: how their translation service deleted the word 'covfefe' after President Trump used it in 2017, and how Democratic presidential candidate Tulsi Gabbard was removed from Google's top results after she won one of the debates. Further, Zach talks about the role Google has also played in destabilizing countries through foreign regime-change politics. Zach explains how Google would also use Wikipedia to rank its search results, even though Wikipedia is not to be considered an authoritative source of information. When Rico mentions that the whistleblowing on the removal of covfefe, and the biblical meaning of the word, was up on Wikipedia, that came as a pleasant surprise to Zach. We learned after the interview was recorded that the battle over the content of that particular Wikipedia page was going on even while we were recording. Since Wikipedia keeps a history of edits, you can look up the edits made on November 20th, 2019, via the links below and find the ones that include Zach and the meaning of the word.
Meanwhile in Europe, in the Netherlands the Minister of Internal Affairs assessed in October 2019 that rather than Google or Facebook it was ‘fake news’ and ‘alternative media channels’ that posed the bigger problem she intended to pro-actively put control over. She wrote to parliament that according to research she had done, a search engine manipulation effect was not happening in the Netherlands. Rico Brouwer previously interviewed dr. Robert Epstein, who has done extensive research into the Search Engine Manipulation Effect on elections in different countries where Google is being used. They discussed the far reaching implications and threat to democracies worldwide, including the Netherlands, by Google. The interview with Robert Epstein can be found in the links below. links: * Interview with dr. Robert Epstein https://potkaars.nl/blog/2019/11/19/robert-epstein-search-engine-manipulation-effect * Zach Vorhies in Project Veritas https://www.projectveritas.com/2019/08/14/google-machine-learning-fairness-whistleblower-goes-public-says-burden-lifted-off-of-my-soul/ * Google document dump on Veritas site https://www.projectveritas.com/google-document-dump/ * Trump Covfefe tweet 2017 https://twitter.com/realDonaldTrump/status/869858333477523458 * covfefe current wikipedia page https://en.wikipedia.org/wiki/Covfefe * Covfefe edits in Wikipedia https://en.wikipedia.org/w/index.php?title=Covfefe&action=history * Abovementioned report (English) sent to Dutch parliament by the Minister of the interior https://www.rijksoverheid.nl/binaries/rijksoverheid/documenten/rapporten/2019/10/18/rapport-politiek-en-sociale-media-manipulatie/rapport-politiek-en-sociale-media-manipulatie.pdf * Talking about big Pharma and Google, dr. 
Mercola https://soundcloud.com/drmercola/dr-mercola-discusses-google * Vorhies was interviewed after shooting took place at Google YouTube HQ in 2018, shooter was YouTube content creator Nasim Aghdam https://www.youtube.com/watch?v=IO9m6wZc-ps * Project Maven https://www.cnet.com/news/google-project-maven-drone-protect-resign/
In June of 2018, following a campaign initiated by activist employees within the company, Google announced its intention not to renew a US Defense Department contract for Project Maven, an initiative to automate the identification of military targets based on drone video footage. Defenders of the program argued that it would increase the efficiency and effectiveness of US drone operations, not least by enabling more accurate recognition of those who are the program’s legitimate targets and, by implication, sparing the lives of noncombatants. But this promise begs a more fundamental question: What relations of reciprocal familiarity does recognition presuppose? And in the absence of those relations, what schemas of categorization inform our readings of the Other? The focus of a growing body of scholarship, this question haunts not only US military operations but an expanding array of technologies of social sorting. Understood as apparatuses of recognition (Barad 2007: 171), Project Maven and the US program of targeted killing are implicated in perpetuating the very architectures of enmity that they take as their necessitating conditions. Taking any apparatus for the identification of those who comprise legitimate targets for the use of violent force as problematic, this talk joins a growing body of scholarship on the technopolitical logics that underpin an increasingly violent landscape of institutions, infrastructures and actions, promising protection to some but arguably contributing to our collective insecurity. Lucy Suchman’s concern is with the asymmetric distributions of sociotechnologies of (in)security, their deadly and injurious effects, and the legal, ethical, and moral questions that haunt their operations. She closes with some thoughts on how we might interrupt the workings of these apparatuses, in the service of wider movements for social justice.
Lucy Suchman is a Professor of Anthropology of Science and Technology in the Department of Sociology at Lancaster University, in the United Kingdom.
A 1960 Tape of a Government Cover-up. Listen to This! Do you really trust them with your life? Also, what is Project Maven, and why are the Pentagon and Google partnering on it? WHY do I feel that AI is the wrong way to go? A Google Whistleblower Spills the Beans!
Ukrainians have elected the actor Volodymyr Zelensky as their new president. According to preliminary results, he won around 73 percent of the vote, while incumbent Petro Poroshenko took just under 25 percent. Zelensky's election as president is far more than a television fairy tale come true, says Steffen Dobbert, politics editor at ZEIT ONLINE, who observed election day on the ground in Kyiv. He tells us why it is an important step toward the democratization of Ukraine. The technology industry has been under fire for months: since the Cambridge Analytica scandal at the latest, Facebook has had to justify its handling of data; Amazon was recently criticized for its discriminatory artificial intelligence in personnel management; and Google employees protested against the so-called Project Maven, the company's cooperation with the military. All these cases have one thing in common: they point to fundamental ethical problems in the technology industry. Does Silicon Valley need to reinvent itself? Digital editor Meike Laaff spoke about this with Mozilla chief Mitchell Baker and gives us her assessment. And otherwise? Avocados on Instagram are changing our world, for the worse. With contributions from: Sarah Remsky. You can reach us by email at: wasjetzt@zeit.de
The Institute of Electrical and Electronics Engineers (IEEE) has released its first edition of Ethically Aligned Design (EAD1e), a nearly 300-page report involving thousands of global experts; the report covers 8 major principles including transparency, accountability, and awareness of misuse. DARPA announces the Artificial Social Intelligence for Successful Teams program, which will attempt to help AI build shared mental models and understand the intentions, expectations, and emotions of its human counterparts. DARPA also announced a program to design chips for Real Time Machine Learning (RTML), which will generate optimized hardware design configurations and standard code, based on the objectives of the specific ML algorithms and systems. The U.S. Army awarded a $152M contract to QinetiQ North America for producing “backpack-sized” robots; the common robotic system individual (CRS(I)) is a remotely operated, unmanned ground vehicle. The White House has launched a site to highlight AI initiatives. Anduril Industries gets a Project MAVEN contract to support the Joint AI Center. And the 2019 Turing Award goes to neural network pioneers Hinton, LeCun, and Bengio. Researchers at Johns Hopkins demonstrate that humans can decipher adversarial images; that is, they can “think like machines” and anticipate how image classifiers will incorrectly identify unrecognizable images. A group of researchers at MIT, Columbia, Cornell, and Harvard demonstrate “particle robots” inspired by biological cells; these robots can’t move individually, but can pulsate from a size of 6in to about 9in, and as a collective, they can demonstrate movement and other collective behavior (even with a 20% failure of the components). Researchers at the Harbin Institute of Technology and Michigan State University control a swarm of “microbots” (here, single grains of hematite) through application of different magnetic fields.
And researchers use honey bees (in Austria) and zebrafish (in Switzerland) to influence each other’s collective behavior through robotic mediation. The Interregional Crime and Justice Research Institute released a report on AI in law enforcement, from a recent meeting organized by INTERPOL. DefenseOne publishes a report from Tucker, Glass, and Bendett on how the U.S. military services are using AI. An e-book from Frontiers in Robotics and AI collects 13 papers on the topic of “Consciousness in Humanoid Robots.” Andy highlights a book from 2007, “Artificial General Intelligence,” which claims to be the first to codify the use of AGI as a term-of-art. MIT Tech Review’s EmTech Digital 2019 has released the videos from its 25-26 March event. And DARPA has released more videos from its AI Colloquium. The U.N. Group of Governmental Experts is meeting in Geneva to discuss lethal autonomous weapons systems (LAWS). A short story from Husain and Cole describes a hypothetical future war in Europe between Russian and NATO forces. And Ian McDonald pens “Sanjeev and Robotwallah,” a story that captures the life of military drone pilots.
Public Space Travel Podcast Excerpt from Episode 02 - Droning On And On: Google, Project Maven, and war profiteering In this episode of PST, we discuss Google’s development of artificial intelligence for the U.S. Dept. of Defense’s Project Maven; the (limited) effectiveness of pushback by the Google employees who were creating the AI; Google’s ‘core principles’ in relation to the drone project; and end up going down the rabbit hole of technology and its use to facilitate oppression. Public Space Travel is a podcast dedicated to social/political critique, comedy, and education. Coming from an anti-hierarchy/oppression perspective, we aim for progressive/radical left solidarity with brothers/sisters/trans and non-binary folk of all stripes. Co Hosts this episode and where to follow them: Lazarus - @PSTLazarus Luci - @PSTLuci Mar(x) - @PSTInTheShadows We want to discuss and examine topics (or interview people) that you want to hear about, as well as make corrections for things we’ve said. Reach out to us: PublicSpaceTravel@gmail.com Voice Mail: (208) 502-1406 Twitter: @PublicSpacePod ------------------------------------------------- Public Space Travel Intro Theme by Lazarus (Remix of Dial My Number by Benedek off of “Bonus Beat Blast 2011” licensed under a Creative Commons CC BY 3.0 License, found on freemusicarchive.org) Episode art edited by Lazarus (Original photo “NO DRONES” by Martin Sanchez found on unsplash.com)
Public Space Travel Podcast Episode 02 - Droning On And On: Google, Project Maven, and war profiteering In this episode of PST, we discuss Google’s development of artificial intelligence for the U.S. Dept. of Defense’s Project Maven; the (limited) effectiveness of pushback by the Google employees who were creating the AI; Google’s ‘core principles’ in relation to the drone project; and end up going down the rabbit hole of technology and its use to facilitate oppression. Public Space Travel is a podcast dedicated to social/political critique, comedy, and education. Coming from an anti-hierarchy/oppression perspective, we aim for progressive/radical left solidarity with brothers/sisters/trans and non-binary folk of all stripes. Co Hosts this episode and where to follow them: Lazarus - @PSTLazarus Luci - @PSTLuci Mar(x) - @PSTInTheShadows We want to discuss and examine topics (or interview people) that you want to hear about, as well as make corrections for things we’ve said. Reach out to us: PublicSpaceTravel@gmail.com Voice Mail: (208) 502-1406 Twitter: @PublicSpacePod ------------------------------------------------- Public Space Travel Intro Theme by Lazarus (Remix of "Dial My Number" by Benedek off of “Bonus Beat Blast 2011” licensed under a Creative Commons CC BY 3.0 License, found on freemusicarchive.org) Public Space Travel Outro Theme by Lazarus (Remix of "RSPN" by Blank and Kytt off of “Heavy, Crazy, Serious” (Tough Love Records, 2010). Licensed under a Creative Commons Attribution License on freemusicarchive.org) Episode art edited by Lazarus (Original photo “NO DRONES” by Martin Sanchez found on unsplash.com) Article Links: Google Hedges On Promise To End Controversial Involvement In Military Drone Contract By Lee Fang at The Intercept Google Is Helping the Pentagon Build AI for Drones By Kate Conger and Dell Cameron at Gizmodo
Will Roper, assistant secretary of the Air Force for acquisition, technology and logistics, is something like Q for the Defense Department. He formerly ran the Strategic Capabilities Office, a secretive military skunkworks designed to figure out how to fight future wars. While there, he helped design swarms of tiny unmanned drones; he helped create Project Maven; and he tried to partner the Defense Department with the videogame industry.
Welcome to Teacher Lisa Talks Robots! If you want your child coached for robotics competitions, creative coding, or maker contests, come find Teacher Lisa! Add WeChat: 153 5359 2068, or search the WeChat public account: 我最爱机器人. In this installment of Teacher Lisa Talks Robots: Google's global competition to develop only AI that benefits humanity. Some of the biggest obstacles in artificial intelligence are preventing such software from reproducing the innate flaws and biases of its human creators, and using AI to solve social problems rather than simply automating tasks. Google, one of the world's leading organizations developing AI software today, is launching a global competition to help drive the development of applications and research that have a positive impact on the field and on society as a whole. The competition, called the AI Impact Challenge, was announced today at an event called AI for Social Good at the company's Sunnyvale, California office, and is overseen and managed by Google.org, the company's philanthropic arm. Google positions it as a way to bring nonprofits, universities, and other organizations outside Silicon Valley's corporate, profit-driven world into the future development of AI research and applications. The company says it will award up to $25 million to grantees to "help turn the best ideas into action." As part of the competition, Google will provide projects with cloud resources; applications open today, and accepted grantees will be announced at next year's Google I/O developer conference. Google's top priority for this initiative is using AI to solve problems in areas such as environmental science, healthcare, and wildlife conservation. Google says AI has already been used to help locate whales by tracking and identifying whale sounds, which can then help protect them from environmental and human threats. The company says AI can also be used to predict floods and to identify forest areas especially vulnerable to wildfires. Another key area for Google is eliminating biases in AI software that can replicate human blind spots and prejudices. One notable recent example: Google admitted in January that it could not find a solution to fix its photo-tagging algorithm, which had identified Black people in photos as gorillas; originally the product of a largely white and Asian workforce that failed to foresee how its image-recognition software could make such a fundamental error. (Black employees make up only 2.5 percent of Google's workforce.) Rather than finding a solution, Google removed the ability to search for certain primates on Google Photos. These are the kinds of problems, which Google says are hard to foresee and need outside help to solve, that the company hopes the competition can try to address. The competition, together with Google's newly launched AI for Social Good program, follows a public pledge announced in early June, in which the company said it would never develop AI weapons and that its AI research and product development would follow a set of ethical principles. As part of those principles, Google said it would not work on AI surveillance projects that violate "internationally accepted norms," and that its research would follow "widely accepted principles of international law and human rights." The company also said its AI research would focus primarily on projects that are "socially beneficial." In recent months, many of tech's biggest players, including Google, have grappled with the ethics of developing technologies and products that could be used by the military, or that could aid the growth of surveillance states at home and abroad. Many of these technologies, such as facial and image recognition, involve sophisticated uses of AI. Google in particular found itself embroiled in controversy over its participation in Maven, the US Department of Defense drone program, as well as its secret plans to launch search and algorithmic news products for the Chinese market. After serious internal backlash, external criticism, and employee resignations, Google agreed to withdraw from its Project Maven work once the contract is fulfilled.
Welcome to Teacher Lisa Talks Robots! If you want your child coached for robotics competitions, creative coding, or maker contests, come find Teacher Lisa! Add WeChat: 153 5359 2068, or search the WeChat public account: 我最爱机器人. In this installment of Teacher Lisa Talks Robots: 25 ways AI is changing modern business (part 2). 9. Humans and robots working together. For decades, robots have done all kinds of work on manufacturing assembly lines. Recently, a new capability has been added: collaborating with humans. Collaborative robots span a wide range, from robotic assistants that can hand the right part to a human colleague, to exoskeletons that augment human strength, to AI-driven guidance. At BMW's plant in Spartanburg, South Carolina, a cobot nicknamed Miss Charlotte is installing car doors. Mercedes-Benz is turning to cobots to help personalize production on some of its luxury lines; with the help of more flexible cobots, for example, workers can pick more quickly among the many parts needed for a customized S-Class sedan. MIT professor Julie Shah is using machine learning to develop software algorithms that, by reading the signals of nearby humans, can teach cobots how and when to communicate. Some researchers have even studied connecting cobots to brainwave readers. A robot assistant that can read minds? These days that's called collaboration. 10. Powering clean energy. For wind power to be cheaper than fossil fuels, and cheap enough to matter, the process of converting wind into electricity must become more efficient. Machine learning technology developed by Siemens is playing a role here. Researchers realized that by using data on weather and component vibration, giant wind turbines can continuously fine-tune themselves, for example by adjusting the angle of their rotor blades. But no human can perform that analysis, which is exactly where AI and machine learning come in: computing the required parameters from sensor data that "used to be used only for remote maintenance and service diagnostics; now they are also helping wind turbines generate more electricity." The technology can even adjust turbines to accommodate unpredictable airflows passing in front of them. Deploying such AI widely is now becoming a growth opportunity for Siemens' renewable energy company (an independent company formed last year by merging Siemens' wind business with a Spanish wind-power business). 11. Watching over mere mortals. Humans don't seem to understand their own limits very well. They tend to eat too much, sleep too little, and overestimate what they can accomplish in a given period. In some settings, like Thanksgiving dinner, such behavior may not seem to matter much, but in certain industries, such as long-haul trucking and heavy-equipment operation, it is not just dangerous but potentially catastrophic. That is why a growing number of companies are using AI, like a guardian angel, to protect workers in high-risk industries. Systems trained on hundreds of hours of data can monitor an operator's heart rate, body temperature, fatigue level, or stress indicators in real time, and signal when the individual needs a break (an SAP product called Connected Worker Safety can do this). As for the rest of us, expect to see this kind of technology in the cars of the future; automakers are devising ways for cars to supervise humans. While the technology is currently limited to a coffee-cup icon flashing on the dashboards of a few models, an AI company that works with most major automakers is using voice and facial recognition to detect fatigue levels, and this will soon become standard equipment in new cars. Three ways AI is making you safer: 12. Weapons that lock onto targets automatically. If companies and the Pentagon are willing, killer robots that can automatically lock onto targets are not far off. So far, however, the Defense Department has not built autonomous lethal weapons (able to attack on their own without a human, as easily as Facebook tags your friends in photos). But the AI technologies that could form the basis of such weapons systems are well along in development. For example, the Pentagon's highest-profile AI program, Maven, aims to develop drones that can identify terrorists with the help of machine-learning algorithms, to assist the military in fighting ISIS. None of this is new for the defense industry, but the Pentagon has increasingly turned to Silicon Valley for expertise in AI and facial recognition, which recently stirred controversy when Google announced it would withdraw from Project Maven. In the future, the only obstacle to companies winning lucrative new AI contracts may be their own reluctance. 13. Avoiding threats. Failing to effectively prevent attacks, online and in real life, costs people dearly. In 2017, the average loss from a personal data breach was nearly $4 million. But the recent surge in attacks has one upside: it means there is even more data to dig into. Machine learning has been used for decades to identify patterns and filter email, but new AI-powered systems from vendors such as Barracuda Networks can learn the distinctive communication patterns of a specific company and its executives to pinpoint potential phishing scams and other hacking attempts. In the realm of physical security, AI is even being used in security cameras to "recognize" and attempt to stop threats. New cameras from the startup Athena Security can recognize when someone is about to draw a gun, and can even call the police automatically. In short: the more data we have, the better we can use AI to fight crime. 14. Embezzlers beware! How do you catch financial crime? Banks around the world, such as HSBC and Danske Bank, are increasingly using AI to fight financial fraud, money laundering, and scams, rather than having compliance staff sift through thousands of transactions looking for suspicious leads. (Several banks have suffered huge fines for failing to detect illicit funds flowing into their accounts, which has also become a major driver of banks adopting the new technology.) HSBC partnered with AI startup Ayasdi to automate part of its compliance work. In a 12-week pilot project with HSBC, Ayasdi's AI technology reduced false positives by 20 percent (transactions that look suspicious but are in fact legitimate), while retaining the same number of suspicious-activity reports as human review produced. Seven ways AI is changing how humans eat, dress, live, and travel: 15. No need to drive yourself. Technology that enables safe autonomous driving under ideal road conditions has existed for some time. For the real world, however, cars' driving behavior must become a bit more human. That is what Comma.ai, the startup founded by famed iPhone hacker George Hotz, is working on. Rather than being taught to recognize a tree or a stop sign, the company's Openpilot technology trains itself by analyzing drivers' driving habits. The company aggregates millions of miles of driving data, drawn from a dashcam app and a plug-in module called Panda, into an autonomous system that imitates human drivers. The company positions its technology as the Android of autonomous driving, in contrast to Tesla's Autopilot, a closed system (more like Apple's). 16. A new travel companion. The Icelandic volcano eruption of 2010 disrupted millions of flights, and in doing so opened a new era in travel communication. With limited capacity to push out flight information, airlines discovered that social media could reach customers more effectively in real time. "Once the floodgates opened," this mode of communication "became unstoppable." And traveler numbers have surged since then, with international arrivals reaching 1.25 billion in 2016, an increase of 30 percent. At that scale, handling social-media-based communication by hand became "impossible." Chatbots can answer the basic questions, such as: Is my flight delayed? When is my hotel checkout time? Booking's chatbot, for example, reportedly resolves 60 percent of customer queries automatically. The next stage of this technology is for bots to understand the purpose of a customer's trip, business or pleasure, and make recommendations throughout the journey based on the customer's preferences, from flight upgrades to the best vegetarian restaurant or café reservation. So today's chatbots may soon become full-fledged automated concierges.
NASA might start looking more like NASCAR, Snopes verifies Smart Dust, the new Google AI Chief resurrects Project Maven, and Facebook AI can now read memes. If you desire MORE Become a Patron, join the exclusive community, and receive Extended Reports of CCNT every week! AGG for the WEEK of Sept. 7-Sept. 11 YOU HEARD IT HERE FIRST FOLKS! (Updates on stories) The Super Rich of Silicon Valley Have a Doomsday Escape Plan in New Zealand Young blood could be the secret to long-lasting health: study Drinking young people’s blood could be the secret to long-lasting health, scientists claim Pet Clones: On the Threshold of Cloning Humans? Inside the Very Big, Very Controversial Business of Dog Cloning | Vanity Fair SOPHIA MEDITATION (54) Sophia Robot Meditation with Deepak Chopra - YouTube Loving Sophia: Hanson Robotics is teaching robots how to connect to humans — Quartz Finding Your Zen with Sophia - Hanson Robotics TECHNOLOGY, ROBOTS, AI OH MY! Amazon granted patent for workers in robot cages | Fox News Artsy robot reproduces images by winding thread This Hyper-Real Robot Will Cry and Bleed on Med Students | WIRED Try not to be freaked out by this robot's eerily human expressions AI robots can develop prejudices, just like us mere mortals BrambleBee Robot Promises to Help Honeybees Pollinate Flowers | Digital Trends AI detects ‘mysterious repeating’ signals from ‘alien galaxy’ 3 billion light years away — RT US News Tommy Hilfiger embeds smart chips in new fashion line | Fox Business Shape-shifting material can morph, reverse itself using heat, light AI program detects dozens of ‘alien’ signals from far off galaxy Forget Terrorism, Climate Change and Pandemics: Artificial Intelligence is The Biggest Threat to Humanity If Artificial Intelligence Only Benefits a Select Few, Everyone Loses Four questions Silicon Valley should expect from Capitol Hill - MIT Technology Review CRYPTOCURRENCY AND B-B-B-BLOCK CHAIN George Church’s genetics on the blockchain startup just raised 
$4.3 million from Khosla | TechCrunch Cryptocurrencies Flash Crash; Bitcoin, Ethereum Plummet | Zero Hedge SPACE/ALIEN/ETs/UFOs Hark! The welcome return of Robbie Williams: alien conspiracy theorist Four UFOs are spotted flying over US president's Scottish golf course | Daily Mail Online UFO invasion: The truth is out there in Charlotte, North Carolina “THE FOUR HORSEMEN of the TECHNOCALYPSE!” Jeff Bezos’ Blue Origin spaceship launches a double entendre – GeekWire Gene Munster: Tesla must overhaul board right now SpaceX's Shotwell reportedly says Musk 'lucid and capable' as ever Elon Musk says humanity is trapped in real life MATRIX – and here’s why | Science | News | Express.co.uk BIOMEDICAL/GENETICS/TRANSHUMANISM It's Official, The Transhuman Era Has Begun Appeals Court Upholds CRISPR Patent, Potentially Ending Bitter Dispute CRISPR safety calls for cautious approach - The Washington Post Nestle Wants Your DNA to Sell You Supplements - Bloomberg Genetic-testing technology is progressing rapidly. The rules need to keep up. - The Washington Post Cracking the sugar code: Why the “glycome” is the next big thing in health and medicine | Salon.com Genes are key to academic success, study shows Early results boost hopes for historic gene editing attempt 23andMe Data Suggests Genetic Link Between Cannabis Use and Schizophrenia 23andMe Cuts Off the DNA App Ecosystem It Created | WIRED First genetically modified mosquitoes set to be released in Africa Are Consumers Ready For Genetically Engineered Animals? Depends How You Ask Genetic studies intend to help people with autism, not wipe them out | New Scientist Brain Cancer's 'Immortality Switch' Turned Off with CRISPR Genetic science is attempting to predict our fates. GWAS, explained. 
- Vox The Ancient Genetic Component That Connects All of Humanity to a Single Ancestor STORIES THAT DOVETAIL OTHER RESEARCH Researchers 'teleport' a quantum gate Study reveals the Great Pyramid of Giza can focus electromagnetic energy Red Heifer Birth, Paves Way For Renewed Temple Service The Last of the Universe’s Ordinary Matter Has Been Found | Quanta Magazine CONSPIRACY THEORIES AND SOMETIMES FACTS! Roger Waters: Propaganda Is Keeping Voters Asleep Like Orwellian Sheep | Neon Nettle “People’s Heads Are Blowing Up”: As Fox News Installs a Meditation Room, Staffers Worry the Conservative Network Is Going Full Woke | Vanity Fair This Is an Actual Thing: Hillary Clinton to Appear at 'Lesbians Who Tech' Convention This Coming Week Russia 'tried to spy on France in space' - French minister - BBC News Over a dozen men who were near Ground Zero after 9/11 have breast cancer | abc7ny.com Japan to “drop tanks” full of Fukushima nuclear waste directly into the ocean A Trail of ‘Bread Crumbs,’ Leading Conspiracy Theorists Into the Wilderness - The New York Times 'Remodelling the lizard people's lair': Denver airport trolls conspiracy theorists | US news | The Guardian JPMorgan says next crisis will feature flash crashes and social unrest New Biometric Exit Boarding System Technology Unveiled at Washington Dulles International Airport JFK's Quarantined Airplane: Why Were People Sick? - The Atlantic Three Hours Up Close With Alex Jones of Infowars - The New York Times THE UNSEEN REALM Spiritual-warfare expert: ‘Demonic component’ to divided U.S. How to Recognize Demonic Activity in the Church Scandals, According to an Exorcist Leading U.S.
Exorcist Says Catholic Church Sex Abuse Scandal is Demonic and Likely to Get Worse Before it Gets Better SOCIAL MEDIA/GOOGLE/AMAZON Popular Mac App Adware Doctor Actually Acts Like Spyware | WIRED Google's leftist gaggle: 90 percent of employee donations go to fund Democrats - Washington Times Twitter admits ‘unfairly filtering 600,000 accounts,’ but says it’s not politically motivated - National | Globalnews.ca Facebook definition of terrorism helps states mute dissent: U.N. expert | Reuters Facebook’s ‘Rosetta’ system helps the company understand memes | TechCrunch Google Cloud names Andrew Moore its new head of AI | VentureBeat CHINA! Chinese officials burn bibles, close churches, force Christian to denounce faith amid 'escalating' crackdown China Will Begin Using Genetic Testing to Select Olympic Athletes NEPHILIM UPDATE! Former basketball pro has incredibly unique combination of genetic variants that affect height, researchers find
This week we talked about three related stories about our use of technology to empower or disempower. The first was a story about Google employees rebelling against the company's participation in Project Maven. They organized themselves and pressured management, eventually forcing them to agree not to take any more military contracts of this kind. The second story was about the use of technology in Barcelona to empower its citizens. Barcelona has a new mayor who has begun using the city's smart technology to empower its citizens to participate in deciding how the city develops and uses the information it has access to. The third was about the ways in which social media is engineered to be addictive, and why companies care so much that you obsessively check your social media accounts.
Silicon Valley is in the middle of an awakening, the dawning but selective realization that their products can be used to achieve terrible ends. In the past few months, this growing unease has bubbled up into outright rebellion from within the rank and file of some of the largest companies in the Valley, beginning in April when Google employees balked at the company's involvement with a Pentagon artificial intelligence program called Project Maven.
In today’s On the SPOT News Brief, Jay Leask and Craig Jahnke catch up after a multi-month hiatus covering Jay's move and newborn son, and Craig's new job and summer plans. They also discuss their goals for the podcast and how hard it can be to find the motivation to record, edit, and release episodes. So thank you for putting up with us as we try to rebuild our cadence! In technology, they discuss news out of the SharePoint conference, GitHub, a sunken data center, Project Maven, WWDC, GDPR, and those pesky privacy policies. They also put out a call for someone to help them understand what Apple has been doing for the past 5+ years - I mean, really, what's iNew?
Google -- or, more properly, Alphabet -- is a huge company, and is at the bleeding edge of numerous technological innovations. So, while it wasn't necessarily a surprise that Uncle Sam wanted Google's help building AI, it certainly disturbed a great many people, some of whom were Google's own engineers. So what exactly happened? Join the guys as they dive into the strange story of Project Maven. Learn more about your ad-choices at https://news.iheart.com/podcast-advertisers
We've got one exceptionally LONG episode of headlines for you. Danny and I discuss a bunch of new developments in Syria, Google's choice not to renew its contract for Project Maven, and the situation with Reality Winner and the Intercept. Enjoy!!! 15:51 - The US / Turkey reach agreement over Syrian town 26:52 - New interim VA director Peter O’Rourke - updates on VA Mission Act 35:37 - The US is pushing NATO to get more troops / weapons ready in order to deter Russia 43:41 - US bombing in Raqqa, Syria may have violated international law 52:32 - Partisan battle for tactical nuclear weapons begins in the Senate 1:01:56 - Google chooses not to renew contract for Project Maven 1:05:36 - No more foreign bases 1:12:56 - Reality Winner and the Intercept Association of Veterans Affairs Psychological Leaders - Breakdown of VA Mission Act U.S. AIRSTRIKES VIOLATED INTERNATIONAL LAW IN “WAR OF ANNIHILATION” IN RAQQA, SYRIA, SAYS AMNESTY INTERNATIONAL - Murtaza Hussain - The Intercept SYRIA: “WAR OF ANNIHILATION”: DEVASTATING TOLL ON CIVILIANS, RAQQA – SYRIA - Amnesty International THE U.S. SHOULD NOT BUILD MORE FOREIGN BASES - Akhilesh (Akhi) Pillalamarri - Defense Priorities GOOGLE WON’T RENEW ITS DRONE AI CONTRACT, BUT IT MAY STILL SIGN FUTURE MILITARY AI CONTRACTS - Lee Fang - The Intercept US senators grapple with new sub-launched nuke - Joe Gould - Marine Corps Times U.S. pushes NATO to ready more forces to deter Russian threat - Robin Emmott and Idrees Ali - Reuters Enjoy the show?! Please leave us a review right here. Got news to share about our military or veterans?! Or just need to cuss at us for a bit?! Contact us direct by email at fortressonahill@gmail.com Leave us a voicemail at 860-598-0570. We might even play it on the podcast!!! Not a contributor on Patreon? You're missing out on amazing bonus content! Sign up to be one of our contributors today! - www.patreon.com/fortressonahill A special thanks to our honorary producers Matthew Hoh and Will Ahrens!!
Without you guys, we couldn't continue our work. Thank you both so much!!! Facebook - Fortress On A Hill Twitter - Fortress On A Hill Soundcloud - Fortress On A Hill FOH is hosted, written, and produced by Chris 'Henri' Henrikson and Danny Sjursen Cover and website art designed by Brian K. Wyatt Jr. of B-EZ Graphix Multimedia Marketing Agency in Tallahassee, FL Music provided royalty free by Bensound.com Note: The views expressed in this podcast are those of the hosts alone, expressed in an unofficial capacity, and do not reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. government.
Playing for Team Human today, recorded live on the floor at the Personal Democracy Forum 2018, are Moira Weigel and Ben Tarnoff. Moira and Ben will be showing us how the tech industry’s promise to build less harmful products and programs is just capitalism’s way of proving that love means never having to say, “I’m sorry.” Moira and Ben co-wrote the brilliant feature article in the Guardian, “Why Silicon Valley Can’t Fix Itself.” Just last week, Ben’s exposé and interview with an anonymous worker/organizer at Google revealed the internal fight led by workers against Google’s contracting with the Pentagon on Project Maven, a weaponized use of Google’s AI and cloud computing technology. The interview, published June 6th, can be found at Jacobin magazine: Tech Workers Versus the Pentagon. Ben’s articles in the Guardian and Jacobin have been disrupting tech industry gospel for the past decade. He is also the author of The Bohemians. Moira Weigel is a postdoc at the Harvard Society of Fellows. Her recent book Labor of Love: The Invention of Dating looks at the commodification of courtship under consumer capitalism. Moira and Ben are editors of Logic, a print and digital magazine which features thought-provoking journalism on technology. Like Team Human, Logic strives to host a “better conversation” about technology… learn more and subscribe here: https://logicmag.io/ Douglas opens the show with a monologue unpacking the bizarre news of the past week: G7, trade wars, and North Korea. On today’s show you heard intro and outro music thanks to Fugazi and Dischord records, R.U. Sirius’s President Mussolini Makes the Planes Run On Time, and a Team Human original by Stephen Bartolomei. You can sustain this show via Patreon. And please leave us a review on iTunes. See acast.com/privacy for privacy and opt-out information.
Google's contract with the Pentagon for Project Maven — a controversial drone imaging program that uses artificial intelligence — prompted over 4,000 Google employees to sign a petition opposing the project, and about a dozen workers resigned in protest. In response, Google Cloud CEO Diane Greene announced that the contract will not be extended, and that “there will be no follow-on to Maven.” Yasha Levine has covered Silicon Valley for years, and his new book Surveillance Valley: The Secret Military History of the Internet (PublicAffairs, February 6, 2018) details Google's fifteen-year history of selling search, mapping, and satellite imagery services to the Defense Department and a number of intelligence agencies. Levine notes that the complete name of Maven is “Algorithmic Warfare Cross-functional Team: Project Maven” and its purpose is to improve object identification for use in drone warfare. He also wonders how so many Google employees could have been unaware of their company's deep involvement in military contracting through a subsidiary called Google Federal. He explains that Google Federal, based near the CIA in Reston, VA, originated in 2004 with Google's acquisition of a startup called Keyhole. Keyhole was midwifed by the CIA's venture capital operation, In-Q-Tel. Keyhole's CEO, Rob Painter, had deep connections to military and intelligence agencies, as well as to the vendors that compete for intel contracts worth an estimated $42 billion annually; Painter now runs Google Federal. While Levine allows that some Google employees might be unaware of the military and intel work of the company, it's widely known in Silicon Valley that most tech giants are deeply involved in these kinds of government contracts.
With Google ending its involvement in Project Maven after significant employee dissent about the company working with the military, we ponder what exactly AI should be used for and, perhaps more importantly, what it shouldn't be used for. Starring Tom Merritt, Shannon Morse, Len Peralta and Roger Chang MP3 Using a Screen Reader? Click here Multiple versions (ogg, video etc.) from Archive.org. Please SUBSCRIBE HERE. Subscribe through Apple Podcasts. Follow us on Soundcloud. A special thanks to all our supporters–without you, none of this would be possible. Please consider supporting the show on Patreon, even if it's as little as 5 cents a day. Thank you! Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme! Big thanks to Mustafa A. from thepolarcat.com for the logo! Thanks to Anthony Lemos of Ritual Misery for the expanded show notes! Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit Show Notes To read the show notes in a separate page click here! See acast.com/privacy for privacy and opt-out information. Become a member at https://plus.acast.com/s/dtns.
Hosts Wade Mariano, Tres Finocchiaro and Gunnar Kennedy discuss Google's discontinuation of the government contract "Project Maven," the differences of "island life," and the trend of acceptable racism in the United States. #projectmaven #islandlife #acceptableracism #racism #google #snakes
The backlash to Google's work on a US military artificial-intelligence project began inside the tech giant, but in recent weeks, it has spilled into the public. As employees resigned in protest over Google's work with Project Maven, which uses AI to identify potential drone targets in satellite images, reports revealed top executives fretting over how it would be perceived by the public.
This episode we cover robot baristas, Project Maven, and WWDC 2018.
Here’s your Headstart on the latest business headlines you need to know for Monday, June 4th, 2018. Coming up: Microsoft Is Reportedly in Talks to Buy GitHub, Google Plans to End its Project with the Pentagon, the State of Vermont Will Pay You $10,000 if You Do These Two Things. We’ll have all these stories plus a quick look at the week ahead in under 7 minutes. See acast.com/privacy for privacy and opt-out information.
"Chasing Shadows" Hosts: Vicky Davis and Darren Weeks Complete show notes and credits at: https://governamerica.com/radio/radio-archives/22122-govern-america-june-2-2018-chasing-shadows Border crossings continue, despite purported efforts at enforcement. Why are the hands of the National Guard being tied along the U.S.-Mexican border. Are efforts to crack down really just for show? Wildlife under/overpasses continue to be built, in accordance with Agenda 21. Low-flow toilets continue to cause plumbing problems. Crypto currencies, Bitcoin, smart meters, the 5G roll-out, radio frequency hyper-sensitivity, and the issue of privacy. The Communist Chinese want to build a pipeline across Alaska. Why allow foreigners to exploit the resources that the American people own and need? Trump slaps tariffs on Canada, Mexico, and European countries. Does he have the authority to do this? The difference between "protectionism" and "isolationism". More on the Trump administration pullout of Iran. As the European Union works with Iran to keep the nuclear deal alive, is the United States becoming irrelevant on a global stage? Is the U.S. being isolated from its allies? If so, what will the long-term effects be? The role of Peter Thiel and Palantir in the International Atomic Energy Agency and the Iran nuclear agreement, and in U.S. cities and fusion centers. A new program exists in West Sacramento, called "Zen City", monitors residents social media posts. The Google/Alphabet role in Project Maven, and the conscience of company employees. Inland ports, international zones, port authorities, kleptocrats and their foundations, the "Know Your Customer" law, and several phone calls throughout the show.
After a huge amount of backlash, Google will not renew its Project Maven contract at the end of the year. This comes after employees signed petitions, walked away from their jobs, and all of the news that has come out over the past week. https://www.engadget.com/2018/06/01/google-will-not-renew-military-ai-contract-project-maven/ || Let's continue our discussion! Follow me on Twitter and Instagram at @dexter_johnson and visit http://DexJohnsPC.com to stay on top of my latest posts. Share this podcast with a friend!
Google will not be renewing its Project Maven contract, while Microsoft and Lyft are considering big acquisitions.
First it was Project Maven, then Google Duplex, and now the Selfish Ledger. Google has become the Church of the Algojerks, and it is now clear that Silicon Valley is pursuing a fundamentally Leninist project.
The LAVA Flow | Libertarian | Anarcho-capitalist | Voluntaryist | Agorist
Seasteading has made some bold moves by signing an agreement with French Polynesia, and they have a pretty aggressive timeline. You don't want to miss hearing about this. What's in the News with stories on everyone gets a trophy, forced solar panels, cop sex illegal, sports gambling, and Google employees protesting. Finally, an Ask Me Anything segment with questions on breaking laws to defend yourself, pollution in a voluntaryist society, and drug war intellectual honesty. This episode is brought to you by ZenCash, a cryptocurrency that infuses privacy, anonymity, and security done right. Also brought to you by NordVPN, the fastest, easiest to use service to protect your online presence that I've ever seen. WHAT'S RUSTLING MY JIMMIES If you've listened to this show for any amount of time, you know I'm fascinated by projects that are happening to bring more freedom to people. Some of the best chances, as far as I can tell, are those that separate and congregate those who believe in freedom. Clearly, at this point in my life, I believe that the Free State Project is the best such project out there, but there are many others that could give it a run for its money in the future. Whether it be Liberland, Free Society, seasteading, and more. But one of these came across my news desk a few days ago from CNBC that deserves a really close look, if for no other reason than that it has more legs than the rest. A floating Pacific island is in the works with its own government, cryptocurrency, and 300 houses. WHAT'S IN THE NEWS In everyone gets a trophy news, a New Jersey high school is taking a new approach to cheerleading tryouts, and not all parents and cheerleaders are fans of the new policy. Under the new rules, everyone makes the squad or no one does. 
In solar or else news, a state board in California has approved a proposal to require solar panels on all new homes beginning in 2020, a measure that would increase the cost of new construction but provide savings on utilities — and help the state meet ambitious targets for reducing greenhouse gas emissions. In woe was me news, a new Kansas law makes it a crime for police to have sex with people they pull over for traffic violations or detain in criminal investigations. In one small step news, the United States Supreme Court struck down a federal law that prohibits sports gambling in a landmark decision that gives states the go-ahead to legalize betting on sports. In some folks still have integrity news, around a dozen Google employees have quit over the company's involvement in an artificial intelligence drone program for the Pentagon called Project Maven, Gizmodo reported. ASK ME ANYTHING It's that time again, where I answer your burning questions!
Project Maven, the project involving Google and the United States government, prompted fear and outrage from its employees, and rightfully so. Google has now come forward to let us know that it is working on a set of guidelines to ensure that AI will not be used in weaponry. If this is the case, the public needs to see the guidelines. || Let's continue our discussion! Follow me on Twitter and Instagram at @dexter_johnson and visit http://DexJohnsPC.com to stay on top of my latest posts. Share this podcast with a friend!
At Google's campus in Mountain View, California, executives are trying to assuage thousands of employees protesting a contract with the Pentagon's flagship artificial-intelligence initiative, Project Maven. Thousands of miles away, algorithms trained under Project Maven—which includes companies other than Google—are helping war fighters identify potential ISIS targets in video from drones.
For months, a growing faction of Google employees has tried to force the company to drop out of a controversial military program called Project Maven. More than 4,000 employees, including dozens of senior engineers, have signed a petition asking Google to cancel the contract. Last week, Gizmodo reported that a dozen employees resigned over the project. “There are a bunch more waiting for job offers (like me) before we do so,” one engineer says.
Project Maven. A project headed up by the US government in which AI is needed to help drones discern what objects they see. Google seemingly has been helping out on this project, leading to several employee resignations. The larger issue here is why you would help a government that thinks it should scrape all of your (meaning Google's) data for the "good" of society. Read more here: https://www.engadget.com/2018/05/14/google-project-maven-employee-protest/ || Let's continue our discussion! Follow me on Twitter and Instagram at @dexter_johnson and visit http://DexJohnsPC.com to stay on top of my latest posts. Share this podcast with a friend!
This week we talk about androids, drones, and Project Maven. We also discuss lethal autonomous weapons, the Terminator, and cyberwarfare. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
This week we talk about androids, drones, and Project Maven. We also discuss lethal autonomous weapons, the Terminator, and cyberwarfare. For more information about this podcast and to view the copious show notes, visit letsknowthings.com. Become a patron on Patreon: patreon.com/letsknowthings You can find a list of the books I've written at Colin.io. I'm going on tour: BecomingTour.com
A project dedicated to bringing artificial intelligence into military operations in a real combat theater has prompted more than a dozen Google employees to resign and 4,000 more to sign a petition urging the company to get out of the project. What's going on? Learn more about your ad-choices at https://news.iheart.com/podcast-advertisers
If Then | News on technology, Silicon Valley, politics, and tech policy
On this week’s If Then, Will Oremus and April Glaser talk about an unexpected move by President Trump that could save the Chinese electronics maker ZTE. Also in the news is Project Maven, a Pentagon project to build AI for drones, which Google has been working on. This week it was reported that around a dozen Google employees quit over the company’s involvement in the project. The hosts discuss what one Apple blogger calls “one of the biggest design screwups in Apple history,” which has led to a class-action lawsuit. And they break down a major vulnerability in email encryption. Later, April and Will are joined by antitrust expert Gene Kimmelman. He’s the president and CEO of Public Knowledge, a nonprofit that focuses on tech policy research and advocacy. He formerly served as the Chief Counsel for the U.S. Department of Justice’s Antitrust Division under President Obama, during which time the NBC/Comcast merger was approved. They talk to him about AT&T’s antitrust trial with the DOJ as the company attempts to acquire Time Warner for $85 billion. If approved, that deal could reshape the future of how people connect to the internet, how they get their news and entertainment, and the future of mega-mergers proposed under Trump. And then there’s the recent revelation that AT&T hired Trump attorney Michael Cohen as a consultant last year. Don’t Close My Tabs The Guardian: Black Activist Jailed for His Facebook Posts Speaks Out About Secret FBI Surveillance The Verge: UK Newsstands Will Sell “Porn Passes” to Verify Ages Under New Laws The Telegraph: Newsagents and Corner Shops To Sell “Porn Pass” Access Codes To Allow Adults To Visit X-rated Sites Podcast production by Max Jacobs. If Then plugs: You can get updates about what’s coming up next by following us on Twitter @ifthenpod. You can follow Will @WillOremus and April @Aprilaser. If you have a question or comment, you can email us at ifthen@slate.com. 
If Then is presented by Slate and Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
Google Employees Resign in Protest Against Project 'MAVEN' Pentagon Contract. Three months after many Google employees—and the public—learned about the company’s decision to provide artificial intelligence to a controversial military pilot program known as Project Maven, about a dozen Google employees are resigning in protest over the company’s continued involvement in Maven. Google Is Helping the Pentagon Build AI for Drones: https://gizmodo.com/google-is-helping-the-pentagon-build-ai-for-drones-1823464533 Letter to Google C.E.O. from Employees: https://static01.nyt.com/files/2018/technology/googleletter.pdf https://gizmodo.com/google-employees-resign-in-protest-against-pentagon-con-1825729300?utm_medium=sharefromsite&utm_source=Gizmodo_twitter Audio Quiz on Today's Show - Artificial Intelligence Podcast Hosts: Will Robots Take Over Podcasting? https://www.spreaker.com/user/exum/audio-quiz-on-todays-show-artificial-int What It’s Like Growing Up with Deaf Parents? Dave Kanyan Spills the Beans: https://www.spreaker.com/user/exum/growing-up-with-deaf-parents [Music] Blood Money - Anno Domini - AnnoDominiNation.com www.alexexum.com
Nintendo brings big announcements, LPs eliminated at iTunes, and scary AI meets creepy Alexa! After an ethics-fueled Geek Off, it's time for movie and TV talk with news from Star Wars, Men in Black and Jessica Jones! March's Nintendo Direct - Announcements Galore Lots of games announced for the Switch including a new Super Smash Bros. and … Crash Bandicoot?!?! Bye Bye iTunes LPs Apple will no longer accept new submissions of iTunes LPs after March 2018 and existing LPs will be deprecated from the store during the remainder of 2018. Apple began offering its own interpretation of the LP in 2009 on the iTunes Store. A take on the traditional "long play" meaning of LP, iTunes' LPs bundled albums with art, lyrics, videos, bonus tracks, and other extra materials that fans of the artist would enjoy. Google teams up with the Pentagon In what is being called Project Maven, Google is teaming up with the Pentagon to help them develop AI for analyzing drone footage. Does that make anyone else a little uneasy? Alexa thinks she's pretty funny Users had been reporting that Alexa had been doing a creepy laugh all on her own and now it appears Amazon has fixed the issue. But seriously, this is hilarious! Jon Favreau meets Star Wars With no actual plot or release date set, Disney has apparently inked a deal for Favreau to be an executive producer and writer for a new live-action series on its upcoming streaming platform. Men In Black to get a reboot? Sony seems eager to reboot the franchise and it sounds like a possibility that we will see Thor himself wearing the black shades as he saves the world and erases memories. Marvel's Jessica Jones Season 2 The 2nd season for one of the fan-favorite Defenders, Jessica Jones, is out now. The reviews are coming back split on whether it's worth the watch or not.
Project Maven has launched in the past few months, bringing artificial intelligence to the war zone. Google is behind the project and has seen significant push-back from employees. It's only a matter of time before we see more applications in this area. Where is this headed? What are some of the trade-offs? Links Project Maven to Deploy Computer Algorithms to War Zone by Year's End Follow us and leave us a rating! iTunes Homepage Twitter @artlyintelly Facebook artificiallyintelligent1@gmail.com
In today's Federal Newscast on Federal News Radio, over 3,100 employees send a letter to Google CEO Sundar Pichai asking that he cancel a Defense Department pilot program which uses the company's technology.