Podcasts about USSTRATCOM

  • 19 podcasts
  • 25 episodes
  • 41m average duration
  • 1 new episode per month
  • Latest episode: Oct 8, 2024

Popularity (2017–2024)



Latest podcast episodes about USSTRATCOM

The 'X' Zone Radio Show
Rob McConnell Interviews - CHRISTOPHER LORIC - SESG Explorer

Oct 8, 2024 · 56:01


Christopher grew up and attended school in Norman, OK. He earned his MBA in 1990 and his MA in National Security and Strategic Studies in 2004 (recertified in 2013). He joined the US Navy and went on active duty on 14 Feb 1993, deployed with the US Marine Corps twice, including Iraq in 2006, and served on the 7th Fleet Staff during 9/11. He served on three Combatant Command staffs: US Forces Korea (Subunified), USSTRATCOM, and USAFRICOM. His last duty was with NATO HQ ACT in Norfolk, VA. After retirement, he moved to East Tennessee in 2021. Loric earned multiple military awards, including the Defense Meritorious Service Medal (4x), Meritorious Service Medal (2x), and Navy/Marine Corps Commendation Medal (4x), along with other service and deployed awards. He is married to a wonderful lady, Sirirug. Become a supporter of this podcast: https://www.spreaker.com/podcast/the-x-zone-radio-tv-show--1078348/support.

The Nuclear View
Perspectives on the USSTRATCOM Annual Deterrence Symposium

Aug 28, 2024 · 42:08


In this episode, Adam and Curtis discuss the recent STRATCOM deterrence symposium and a panel featuring Congressman John Garamendi, who advocated for fewer nukes and the elimination of the ICBM. In addition to offering responses to the congressman's arguments, they review the different perspectives of realists and idealists when it comes to nuclear deterrence.

S2 Underground
The Wire - April 11, 2024

Apr 11, 2024 · 2:01


//The Wire//2100Z April 11, 2024//
//ROUTINE//
//BLUF: TENSIONS REMAIN HIGH IN ANTICIPATION OF IRANIAN EMBASSY STRIKE RESPONSE. INCREASE IN STRATCOM ACTIVITY OBSERVED.//
-----BEGIN TEARLINE-----
-International Events-
Middle East: Anticipation remains high regarding the potential Iranian response to the Embassy strike in Damascus. Russia, Germany, and other European nations have called for all parties to exercise restraint as the conflict between Israel and Iran becomes more direct in response to recent events.
AC: It's too soon to tell what the result of this more recent flare-up will be, but claims as to the apocalyptic potential are probably overhyped. Iran may choose to respond with substantial military strikes. Likewise, Israel may respond to the response with an even more substantial tit-for-tat move. However, even considering the seriousness of the situation, a no-holds-barred total war is unlikely at this stage. No major combatant is truly prepared for the war to expand significantly, but the potential for mistakes and diplomatic missteps always increases the chances of unplanned escalation.
-HomeFront-
USA: A slight increase in the baseline of USSTRATCOM activity was observed last night following an increase in HFGCS radio traffic overnight. This morning approximately 4-6x E-6B Mercury aircraft were active around the continental United States as well, along with a few other strategic airframes.
-----END TEARLINE-----
Analyst Comments: As with all strategic indicators, these data points could be nothing…but could also be something. Normally, this level of traffic is an indicator of a training exercise. The lack of any substantial traffic from the Russian BearNet also indicates a more benign use of American strategic assets. However, increases to the strategic baseline are a useful metric to track, as being aware of the movement of strategic assets can be a good indicator of potentially very serious world events.
Analyst: S2A1
//END REPORT//

NucleCast
David Rehbein - Dispelling the Arms Control Myth; Not All Nukes are the Same

Jan 9, 2024 · 34:56


This episode features Dave Rehbein, a retired Army FA-52 nuclear officer, discussing the differences between tactical and strategic nuclear weapons. They discuss the misconception that all nuclear weapons are the same and explore the effects and scale of different types of nuclear weapons. They also touch on the importance of understanding radiation and fallout, as well as the potential scenarios in which tactical nuclear weapons may be employed. The conversation concludes with a discussion of the W76-2 and the need for a clear definition of tactical nuclear weapons.
Takeaways:
There are significant differences between tactical and strategic nuclear weapons, including their effects, scale, and intended use.
Understanding radiation and fallout is crucial in assessing the impact of nuclear weapons.
Tactical nuclear weapons can be employed in specific scenarios to achieve strategic objectives.
The W76-2, while providing a low-yield option, has limitations in terms of delivery and responsiveness.
Prior to becoming an independent consultant, Dave was the primary technical liaison for Lawrence Livermore National Laboratory (LLNL) at the United States Strategic Command (USSTRATCOM) at Offutt AFB, NE. He served in that capacity for 15 years. Dave joined LLNL after retiring from the U.S. Army, where he was a Corps of Engineers officer with specialized expertise in Nuclear Operations and Research. His areas of expertise are nuclear weapons technology, weapons effects, and deterrence theory. In his final active-duty Army assignment, Dave served as the Commander of the US Army Element of US Strategic Command and as the Chief of Force Assessments at USSTRATCOM.
Socials:
Follow on Twitter at @NucleCast
Follow on LinkedIn: https://linkedin.com/company/nuclecastpodcast
Subscribe to the RSS feed: https://rss.com/podcasts/nuclecast-podcast/
Rate: https://podcasts.apple.com/us/podcast/nuclecast/id1644921278
Email comments and topic/guest suggestions to NucleCast@anwadeter.org

NucleCast
Major Frank W. Perry, Jr. - Helos in the Missile Field

Jul 6, 2023 · 30:55


Major Frank W. Perry, Jr. is the Chief, Helicopter Operations Division, Twentieth Air Force. In this position Major Perry advises the Commander, Twentieth Air Force on operational and administrative issues for the Air Force's sole Helicopter Group. He leads planning, coordination, and Operational Plan support for three operational helicopter squadrons, integrating USSTRATCOM and USNORTHCOM missions for over 400 personnel.
Prior to this assignment, Major Perry was a student at Air Command and Staff College at Air University, where he performed studies as part of the Russia Research Task Force, garnering the Thomas “Dutch” Miller Award for most relevant research in his field.
Major Perry enlisted in the Air Force as a Ground Radio Technician in 2004 and received his commission through Officer Training School in 2009. Following graduation, he served in a variety of operational assignments as a UH-1N instructor pilot, and subsequently as an Mi-17 instructor and evaluator pilot. Major Perry has deployed in support of Operations IRAQI FREEDOM, ENDURING FREEDOM, and FREEDOM'S SENTINEL.
EPISODE NOTES:
Follow NucleCast on Twitter at @NucleCast
Email comments and guest nominations to NucleCast@anwadeter.org
Subscribe to the NucleCast podcast
Rate the show on Apple Podcasts

The Good Trouble Show with Matt Ford
UFOs and Nukes. Two former USAF Officers on when UFOs shut down their nuclear missiles

Mar 22, 2023 · 58:55


The United States Air Force is hiding something from Congress and the Biden Administration: UFOs have been tampering with our nukes. Two former Minuteman ICBM nuclear launch control officers, Robert Salas and David Schindele, go on the record in this rebroadcast of our interview from last year.
The Good Trouble Show:
On all major podcasting platforms, search for "The Good Trouble Show with Matt Ford"
Patreon: https://www.patreon.com/TheGoodTroubleShow
YouTube: https://www.youtube.com/@TheGoodTroubleShow
Twitter: https://twitter.com/GoodTroubleShow
Instagram: @goodtroubleshow
TikTok: https://www.tiktok.com/@goodtroubleshow
Facebook: https://www.facebook.com/The-Good-Trouble-Show-With-Matt-Ford-106009712211646
#thegoodtroubleshow #politics #latenight

The Nonlinear Library
LW - Would it be good or bad for the US military to get involved in AI risk? by Grant Demaree

Jan 2, 2023 · 2:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Would it be good or bad for the US military to get involved in AI risk?, published by Grant Demaree on January 1, 2023 on LessWrong.
Meant as a neutral question. I'm not sure whether this would be good or bad on net: Suppose key elements of the US military took x-risk from misaligned strong AI very seriously. Specifically, I mean:
Key scientists at the Defense Threat Reduction Agency. They have a giant budget (~$3B/year) and are literally responsible for x-risks. Current portfolio is focused on nuclear risks with some biosecurity.
Influential policy folks at the Office of the Undersecretary of Defense for Policy. Think dignified career civil servants, 2 levels below the most senior political appointees.
Commander's Initiative Group at USSTRATCOM. Folks who have the commander's ear, tend to be well-respected, and have a huge effect on which ideas are taken seriously.
Why this would be good:
The military has far more discretionary budget than anyone else in the world. You could multiply the resources dedicated to AI safety research tenfold overnight.
The military is a huge source of AI risk (in the sense that advancing AI capabilities faster obviously helps the US in competition with China). If key influencers took the risk seriously, they might be more judicious about their capabilities research.
A key policy goal is preventing the sharing of AI capabilities research. The military is very good at keeping things secret and has policy levers to make private companies do the same.
The military is a huge source of legitimacy with the general public. And it seems easier than other routes to legitimacy. I think less than 10 key people actually need to be persuaded on the merits, and everyone else will follow suit.
If the right person agrees, it's literally possible to get one of the best researchers from this community appointed to lead AI safety research for a major government agency, in the same sense that Wernher von Braun led the space program. You just have to be really familiar with the civil service's intricate rules for hiring.
Why this would be bad:
If someone presents the ideas badly, it's possible to poison the well for later. You could build permanent resistance in the civil service to AI safety ideas. And it's really easy to make that mistake: presentations that work in these agencies are VERY different from what works in the tech community.
Even if the agency is persuaded, they could make a noisy and expensive but ultimately useless effort.
A big government project (with lots of middle managers) adds a moral maze element to alignment research.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - Would it be good or bad for the US military to get involved in AI risk? by Grant Demaree

Jan 2, 2023 · 2:42


Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Would it be good or bad for the US military to get involved in AI risk?, published by Grant Demaree on January 1, 2023 on LessWrong.
Meant as a neutral question. I'm not sure whether this would be good or bad on net: Suppose key elements of the US military took x-risk from misaligned strong AI very seriously. Specifically, I mean:
Key scientists at the Defense Threat Reduction Agency. They have a giant budget (~$3B/year) and are literally responsible for x-risks. Current portfolio is focused on nuclear risks with some biosecurity.
Influential policy folks at the Office of the Undersecretary of Defense for Policy. Think dignified career civil servants, 2 levels below the most senior political appointees.
Commander's Initiative Group at USSTRATCOM. Folks who have the commander's ear, tend to be well-respected, and have a huge effect on which ideas are taken seriously.
Why this would be good:
The military has far more discretionary budget than anyone else in the world. You could multiply the resources dedicated to AI safety research tenfold overnight.
The military is a huge source of AI risk (in the sense that advancing AI capabilities faster obviously helps the US in competition with China). If key influencers took the risk seriously, they might be more judicious about their capabilities research.
A key policy goal is preventing the sharing of AI capabilities research. The military is very good at keeping things secret and has policy levers to make private companies do the same.
The military is a huge source of legitimacy with the general public. And it seems easier than other routes to legitimacy. I think less than 10 key people actually need to be persuaded on the merits, and everyone else will follow suit.
If the right person agrees, it's literally possible to get one of the best researchers from this community appointed to lead AI safety research for a major government agency, in the same sense that Wernher von Braun led the space program. You just have to be really familiar with the civil service's intricate rules for hiring.
Why this would be bad:
If someone presents the ideas badly, it's possible to poison the well for later. You could build permanent resistance in the civil service to AI safety ideas. And it's really easy to make that mistake: presentations that work in these agencies are VERY different from what works in the tech community.
Even if the agency is persuaded, they could make a noisy and expensive but ultimately useless effort.
A big government project (with lots of middle managers) adds a moral maze element to alignment research.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Air Force Radio News
Air Force Radio News 16 December 2022

Dec 16, 2022


Today's Story: New Leader for US Strategic Command

The Nonlinear Library
AF - What does it take to defend the world against out-of-control AGIs? by Steve Byrnes

Oct 25, 2022 · 52:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What does it take to defend the world against out-of-control AGIs?, published by Steve Byrnes on October 25, 2022 on The AI Alignment Forum.
Intended audience: People very familiar with AGI safety / alignment discourse. Lots of jargon, lots of unspoken & unjustified background assumptions.
Confidence level: What I currently believe and why. I mainly work on technical alignment and don't consider myself an expert on AGI deployment / governance issues. Interested in feedback and pushback. Pretty please with cherries on top do not make important real-world decisions based on this post.
Tl;dr: Almost all of my posts are about technical aspects of the alignment problem. This post is instead assuming for the sake of argument that some group will manage to sculpt an AGI's motivations such that it's either under control / corrigible / docile, and/or has prosocial motivations, or is safe for some other reason. But this post is also assuming that it's possible to mess up and make an AGI that seeks power and escapes control. (This post is focused on out-of-control AGI accidents; I'll generally ignore bad actor / misuse and other problems.) The point of this post is to think through whether and how under-control “good” AGIs can defend the world against omnicidal out-of-control “bad” AGIs (including by preventing them from coming into existence in the first place), or if not, what can be done about that problem. This is a topic of ongoing debate in the community: An example pessimist would be Eliezer Yudkowsky, who thinks that we're basically doomed unless one of the first groups with AGI performs a so-called “pivotal act” (more on which in Section 3.5.1) that aggressively prevents any other groups on Earth from making misaligned AGIs. An example (relative) optimist would be Paul Christiano, who argues in The strategy-stealing assumption that, if a big tech company with a giant compute cluster trains a friendly aligned powerful AGI in year Y, we probably have little cause for global concern if it happens that, in year Y+2, some small group in an office park somewhere messes up and makes a misaligned power-seeking AGI, because whatever power- or resource-grabbing strategies that the latter can come up with and execute, the former probably would have come up with and executed those same strategies already—or even better strategies. This post explains why I put myself in the pessimistic camp on this issue. I think Paul's “strategy-stealing assumption” is a very bad assumption, and I will argue more generally that we're pretty much doomed even if the first groups able to develop AGI manage to keep it under control. And this is a major contributor to how my overall current P(AGI doom) winds up so high, like 90%+.
Other underlying assumptions: I have lots of beliefs about future AI, including (1) takeoff speed is measured in years (or less), as opposed to decades or centuries; (2) Transformative AI will look like one or more AGI agents with motivations, able to figure things out and get stuff done. I have argued for these assumptions (and others) in various other posts, but here I'm mostly trying to avoid relying on those assumptions in the first place. I'm not sure how well I succeeded; these worldview assumptions are in the back of my mind and probably bleeding through.
1. Ten example scenarios where powerful good AGIs would fail to defend against out-of-control bad AGIs
(This section is intended as a series of grimly-amusing vignettes to help set the stage for the somewhat-more-careful analysis in the rest of this post, not as slam-dunk irrefutable arguments.)
1. The tech company has a powerful AI and knows how to keep it under human control. The tech company CEO goes up to the General of USSTRATCOM [the branch of the US military that deals with nuclear war] and says “We know how to...

The Nonlinear Library: Alignment Forum Weekly
AF - What does it take to defend the world against out-of-control AGIs? by Steve Byrnes

Oct 25, 2022 · 52:01


Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What does it take to defend the world against out-of-control AGIs?, published by Steve Byrnes on October 25, 2022 on The AI Alignment Forum.
Intended audience: People very familiar with AGI safety / alignment discourse. Lots of jargon, lots of unspoken & unjustified background assumptions.
Confidence level: What I currently believe and why. I mainly work on technical alignment and don't consider myself an expert on AGI deployment / governance issues. Interested in feedback and pushback. Pretty please with cherries on top do not make important real-world decisions based on this post.
Tl;dr: Almost all of my posts are about technical aspects of the alignment problem. This post is instead assuming for the sake of argument that some group will manage to sculpt an AGI's motivations such that it's either under control / corrigible / docile, and/or has prosocial motivations, or is safe for some other reason. But this post is also assuming that it's possible to mess up and make an AGI that seeks power and escapes control. (This post is focused on out-of-control AGI accidents; I'll generally ignore bad actor / misuse and other problems.) The point of this post is to think through whether and how under-control “good” AGIs can defend the world against omnicidal out-of-control “bad” AGIs (including by preventing them from coming into existence in the first place), or if not, what can be done about that problem. This is a topic of ongoing debate in the community: An example pessimist would be Eliezer Yudkowsky, who thinks that we're basically doomed unless one of the first groups with AGI performs a so-called “pivotal act” (more on which in Section 3.5.1) that aggressively prevents any other groups on Earth from making misaligned AGIs. An example (relative) optimist would be Paul Christiano, who argues in The strategy-stealing assumption that, if a big tech company with a giant compute cluster trains a friendly aligned powerful AGI in year Y, we probably have little cause for global concern if it happens that, in year Y+2, some small group in an office park somewhere messes up and makes a misaligned power-seeking AGI, because whatever power- or resource-grabbing strategies that the latter can come up with and execute, the former probably would have come up with and executed those same strategies already—or even better strategies. This post explains why I put myself in the pessimistic camp on this issue. I think Paul's “strategy-stealing assumption” is a very bad assumption, and I will argue more generally that we're pretty much doomed even if the first groups able to develop AGI manage to keep it under control. And this is a major contributor to how my overall current P(AGI doom) winds up so high, like 90%+.
Other underlying assumptions: I have lots of beliefs about future AI, including (1) takeoff speed is measured in years (or less), as opposed to decades or centuries; (2) Transformative AI will look like one or more AGI agents with motivations, able to figure things out and get stuff done. I have argued for these assumptions (and others) in various other posts, but here I'm mostly trying to avoid relying on those assumptions in the first place. I'm not sure how well I succeeded; these worldview assumptions are in the back of my mind and probably bleeding through.
1. Ten example scenarios where powerful good AGIs would fail to defend against out-of-control bad AGIs
(This section is intended as a series of grimly-amusing vignettes to help set the stage for the somewhat-more-careful analysis in the rest of this post, not as slam-dunk irrefutable arguments.)
1. The tech company has a powerful AI and knows how to keep it under human control. The tech company CEO goes up to the General of USSTRATCOM [the branch of the US military that deals with nuclear war] and says “We know how to...

From the Crows' Nest
[EMSO] Implementing EMSSS I-Plan for the Future Fight with BG. Anthony

Dec 1, 2021 · 15:23


Host Ken Miller sits down with Brig. Gen. AnnMarie K. Anthony, Deputy Director, Operations for Joint Electromagnetic Spectrum Operations & Mobilization Assistant, Director of Operations, U.S. Strategic Command (USSTRATCOM), at the 58th International Symposium and Annual Convention. Ken discusses Gen. Anthony's operational perspective on EMSO from her breakout session on the EMSSS I-Plan and her main message to the audience today. They also expand on her vision for USSTRATCOM, the Joint Electromagnetic Warfare Center, and Joint Electromagnetic Preparedness for Advanced Combat. To learn more about the 58th International Symposium and Annual Convention and today's topics, and to stay updated on EMSO and EW developments, visit our website. Thank you to our episode sponsor, Samtec.

Data Reveal
S1Ep04. An Inflection Point in Data with Chris Wilson

Sep 22, 2021 · 50:59


Welcome, Chris. 02:20
Chris's career journey: Joined the Army, his time in active duty, and his work with the Threat Reduction Agency. 03:29
Going to Stratcom, the Missile Defense Agency, USSTRATCOM, and the Qlick Federal team. 06:12
Defense industrial base manufacturers ensure supply chains because any issue can cause an immediate impact on real-world missions. 08:36
Experiencing 9/11 at a Superfund site and his unit's work. 10:26
Impactful things after 9/11: the communication expedition for data and communication between federal agencies and across different systems. 15:19
How do you measure interoperability: being able to communicate with the data in different mission sets. 19:51
Having everyone operating from a single source of truth is crucial. 21:50
How do we protect individual rights while protecting the sovereignty of the nation and society as a collective? 24:07
Importance of leadership in bringing data across the multitude of agencies' systems and operational domains. 30:32
Are we at a point where we can visualize information in these sorts of ‘war rooms' like in the movies? 34:05
Two driving forces are the functionals leading up by highlighting what we don't know and top parts of government leading down by saying, ‘This is where we have to get to.' 36:30
Suicide and sexual harassment prevention examples. 39:13
Building teams around data and the role of the translator who sits between the data scientist and the business or mission owner. 44:21
From a leadership standpoint, if you're going to lead up you want translators on your team. 46:18
Define success: what are you looking to achieve? 47:39
Chris' AHA moment: Key thing to touch on in the future is data literacy. 49:39
Connect with Chris Wilson
LinkedIn: linkedin.com/in/christopher-wilson-pmp-45993a3
Connect with Mark, Andrew, and Courtney.
Mark Fedeli
LinkedIn: linkedin.com/in/markfedeli
Twitter: @markfedeli
Andrew Churchill
LinkedIn: linkedin.com/in/fachurchill/
Twitter: @FAChurchill
Courtney Hastings
LinkedIn: linkedin.com/in/courtneyhastings
Twitter: @chatrhstrategic

Space Strategy
'A Domain for Commerce': Moving from a discovery architecture to a sustained and commerce-centric architecture

Jun 24, 2021 · 57:28


In this episode, Senior Fellow in Defense Studies Peter Garretson interviews General James Cartwright, Harold Brown Chair in Defense Policy Studies at the Center for Strategic and International Studies, former commander of USSTRATCOM, and former Vice Chairman of the Joint Chiefs of Staff. They discuss the Atlantic Council's The Future of Security in Space report and why a 30-year vision and strategy is a necessity. They also touch on the massive shifts in scale and focus, the movement from discovery-centric to commerce-centric approaches and architectures, and the role policy and the military can play in promoting a rules-based order. They explore the necessity of taking a Cislunar approach, and the posture necessary to leverage the private sector to remain competitive. They examine reusable space logistics architectures, including rocket cargo, and the imperative for the Secretary of the Air Force to work the seam between Air and Space to make sure requirements allow interoperability and transit between the two domains. They assess who has agency in the Joint Requirements System to build the architecture of tomorrow. Gen Cartwright provides insights on how USSPACECOM will be a supported domain and how space is unique and different because of our reliance on machines and eventually AI. Finally, Gen Cartwright stresses the importance for the nation to focus on the transition to a commerce-centric approach to the Space domain and the critical role of the Space Force in supporting commerce.
General Cartwright: https://www.csis.org/people/james-e-cartwright
OpEd: https://breakingdefense.com/2021/03/the-space-rush-new-us-strategy-must-bring-order-regulation/
Atlantic Council Report: https://www.atlanticcouncil.org/wp-content/uploads/2021/04/TheFutureofSecurityinSpace.pdf
USCC testimony (starting page 13): https://www.uscc.gov/sites/default/files/2019-10/April%2025%202019%20Hearing%20Transcript.pdf
Work on Space Domain Awareness to Cislunar: https://www.afrl.af.mil/News/Photos/igphoto/2002556344/mediaid/4752579/
Work to extend PNT to Cislunar: https://www.xplore.com/press/releases/2020/04.06.2020_xplore_receives_usaf_award.html
AFRL's new Primer on Cislunar Space: https://www.afrl.af.mil/Portals/90/Documents/RV/A%20Primer%20on%20Cislunar%20Space_Dist%20A_PA2021-1271.pdf?ver=vs6e0sE4PuJ51QC-15DEfg%3d%3d
Rocket Cargo: https://www.airforcemag.com/rocket-cargo-air-force-fourth-vanguard/ and https://www.defensedaily.com/wp-content/uploads/post_attachment/157919.pdf

Cyber and Technology with Mike
01 April 2021 Cyber and Tech News

Apr 1, 2021 · 9:25


In today's podcast we cover four crucial cyber and technology topics, including:
1. U.S. President extends order allowing blocking of cyber criminal assets
2. Akamai warns of bigger DDoS attacks used for extortion
3. USSTRATCOM tweets alarming message
4. Kansas man charged with shutting down key water treatment processes
I'd love feedback; feel free to send your comments and feedback to cyberandtechwithmike@gmail.com

Ready For Takeoff - Turn Your Aviation Passion Into A Career

Brig. Gen. Paul W. Tibbets IV is Deputy Commander, Air Force Global Strike Command and Deputy Commander, Air Forces Strategic-Air, U.S. Strategic Command, Barksdale Air Force Base, Louisiana. AFGSC organizes, trains, equips and maintains combat-ready forces that provide strategic deterrence, global strike and combat support to USSTRATCOM and other geographic combatant commands. The command comprises more than 33,700 professionals operating at two numbered air forces; 11 active-duty, Air National Guard and Air Force Reserve wings; the Joint Global Strike Operations Center; and the Nuclear Command, Control and Communications Center. Weapons systems assigned to AFGSC include all U.S. Air Force intercontinental ballistic missiles (Minuteman III) and bombers (B-1, B-2 and B-52), UH-1N helicopters, the E-4B National Airborne Operations Center aircraft and the U.S. Air Force Nuclear Command, Control and Communications (NC3) weapons system. General Tibbets received his commission through the U.S. Air Force Academy in 1989. Following graduation, he served in a variety of operational assignments as a B-1 pilot, and subsequently as a B-2 pilot. The general has commanded at the squadron and wing levels, and flew combat missions in support of operations in Southwest Asia, the Balkans and Afghanistan. His staff assignments include Executive Officer to the Commander, Eighth Air Force; Chief of the Nuclear and CBRN Defense Policy Branch at NATO Headquarters; Deputy Director of Operations for AFGSC; and Deputy Director for Nuclear Operations at U.S. Strategic Command. Prior to his current assignment, he served as the Commander of the 509th Bomb Wing at Whiteman AFB, Missouri. General Tibbets is a command pilot with more than 4,000 flying hours.

A Better Peace: The War Room Podcast
THE SIGNIFICANCE OF THE ODNI: AN INTERVIEW WITH JAMES CLAPPER

Mar 13, 2020 · 23:56


The President and all policymakers should have the unvarnished truth as best as the intelligence community can serve it up. A BETTER PEACE welcomes former Director of National Intelligence (DNI) James Clapper to discuss the role of the ODNI and the current state of the position. Clapper joins guest host Genevieve Lester, Chair of Strategic Intelligence at the U.S. Army War College. They examine the strategic importance of the DNI position, the individual chosen to fill it, and the impact on the intelligence community. James Clapper is the former Director of National Intelligence. Genevieve Lester is the DeSerio Chair of Strategic Intelligence at the U.S. Army War College. The views expressed in this presentation are those of the speakers and do not necessarily reflect those of the U.S. Army War College, U.S. Army, or Department of Defense.
Photo Description: The Director of National Intelligence (DNI), the Honorable James R. Clapper (left), prepares to speak during a town hall with members of the intelligence community and U.S. Strategic Command's (USSTRATCOM) intelligence staff at USSTRATCOM Headquarters, Offutt Air Force Base, Neb., Aug. 23, 2016.
Photo Credit: U.S. Air Force photo by Staff Sgt. Jonathan Lovelady
Other Posts in the "Intelligence" series:
THE ROLE OF INTELLIGENCE TODAY
POLICY SUCCESS VS. INTEL FAILURE?
IMPACT (OR NOT) OF INTEL ON STRATEGIC DECISION MAKING
STRATEGIC ATTACKS AND THEIR FALLOUT
NEEDLES IN HAYSTACKS: ANALYZING TODAY’S FLOOD OF INFORMATION
WHERE DOES INTELLIGENCE GO FROM HERE? AN INTERVIEW WITH JAMES CLAPPER
THE DOD-CIA RELATIONSHIP: ARE WE MILITARIZING STRATEGIC INTELLIGENCE?
THE SIGNIFICANCE OF THE ODNI: AN INTERVIEW WITH JAMES CLAPPER

CommissionED: The Air Force Officer Podcast
012 - 13N Nuclear & Missile Operations Officer with Maj Greg Carter

Dec 4, 2019 · 82:50


In this episode Colin interviews Maj Greg "Metta" Carter about the 13N Nuclear & Missile Operations Officer career field. Listen to learn about the Air Force's nuclear enterprise, the importance of staff work to officer development, and enlisted commissioning programs.
01:48 - Greg's prior enlisted background
06:15 - Enlisted commissioning programs
21:37 - What is nuclear operations?
29:52 - Who is the 13N's customer?
36:33 - Who does a 13N work with?
39:57 - What does 13N career progression look like?
47:05 - Greg's approach to professional development (see The Armed Forces Officer by Richard M. Swain and Albert C. Pierce: https://ndupress.ndu.edu/Portals/68/Documents/Books/AFO/Armed-Forces-Officer.pdf)
49:03 - Staff work as an officer
53:07 - USSTRATCOM
57:47 - Greg's leadership philosophy
01:02:11 - Advice for airmen interested in 13N
01:07:07 - Greg's future plans/Foreign Affairs
01:11:20 - Greg shares some great war stories
01:15:47 - How Greg got his callsign
01:19:04 - What it means to be an Air Force officer (email Greg at greg.e.carter.mil@mail.mil)
Email your questions and comments to airforceofficerpodcast@gmail.com. Join the discussion about the podcast, the Air Force, officership, and the Profession of Arms at https://www.facebook.com/groups/airforceofficerpodcast/.
Like us on Facebook: https://www.facebook.com/AirForceOfficerPodcast/
Follow us on Instagram: airforceofficerpodcast
Follow us on Twitter: afofficerpod
Follow us on Reddit: u/afofficerpod
Share your officer stories of all flavors using #officerAF.

Air Force Radio News
Air Force Radio News 30 April 2019

Apr 30, 2019


Today's stories: The Romanian Space Agency became the 20th nation to sign a space situational awareness agreement with U.S. Strategic Command. Also, Air Force Chief of Staff General David Goldfein visits participants of the Space Flag exercise.

Air Force Radio News
Air Force Radio News 02 November 2018 B

Nov 2, 2018


Today's story: U.S. Strategic Command launched Global Thunder 2019, an annual Command and Control exercise that provides training opportunities to assess all USSTRATCOM mission areas and joint and field training operational readiness, with a specific focus on nuclear readiness.

Extraordinary Women Radio with Kami Guildner
Susan J. Helms: Astronaut, Lieutenant General of USAF and Colorado Women’s Hall of Fame 2018 Inductee – 039

Feb 1, 2018 · 45:19


Today – I'm so excited to bring this very special Extraordinary Woman Radio interview with Astronaut Susan J. Helms - the first U.S. military woman in space! Susan is a retired Air Force lieutenant general and astronaut who was a crewmember on four space shuttle missions. She holds the world record for the longest space walk (8 hours and 56 minutes), and was the first woman to serve on the International Space Station. Susan is part of my series featuring the Colorado Women's Hall of Fame 2018 Inductees. This is an interview full of stories of space travel and the lessons that accompanied it. Apparently, you can learn a lot about life when you spend nearly 5 months aboard the International Space Station with only 3 others. There's lots of time to ponder the wonders of the world and the wonders of the Universe. We talk about humanity, life, following your dreams, working hard, training hard and being ready. Here are a few of the golden nuggets from Susan: "Humankind is really all one family as opposed to different countries." "Confidence and competency goes such a long way, as a currency, especially for women struggling with biases." "Payoff comes from the journey, not the destination."
Lieutenant General Helms was commissioned from the US Air Force Academy in 1980, part of the first class to admit women into the ranks of the cadet corps. Upon graduation, she served as an F-15 and F-16 weapons separation engineer and a flight test engineer. Following completion of her Master's in Aeronautics and Astronautics at Stanford University, she served on the Faculty of the US Air Force Academy in the Department of Aeronautics. She was subsequently selected to attend the USAF Test Pilot School, Flight Test Engineer Course, Edwards AFB, CA, completing the year-long school as a Distinguished Graduate. After graduation, she served as project officer on the CF-18 aircraft as a U.S. Air Force Exchange Officer to the Canadian Aerospace Engineering Test Establishment, at Cold Lake AFB, Alberta, Canada. As a flight test engineer, Lt Gen (R) Helms has flown in 30 types of U.S. and Canadian military aircraft. Selected by NASA in January 1990, Lieutenant General Helms became an astronaut in July 1991. On Jan. 13, 1993, then an Air Force major and a member of the space shuttle Endeavour crew, she became the first U.S. military woman in space. She flew on STS-54 (1993), STS-64 (1994), STS-78 (1996) and STS-101 (2000), and served aboard the International Space Station (ISS) as a member of the Expedition-2 crew (2001). After 12 years at NASA, Lieutenant General Helms transferred to Air Force Space Command in 2002. Over the next 12 years, she served in numerous staff positions and commanded the 45th Space Wing at Cape Canaveral AFS, FL. Her staff assignments include tours at Headquarters Air Force Space Command, Air Education and Training Command, and U.S. Strategic Command, where she was the Director of Plans and Policy (J5). Upon promotion to Lieutenant General, she commanded the 14th Air Force (AF Space Command) and the Joint Functional Component Command for Space (US Strategic Command), in a dual-hat position at Vandenberg AFB, CA. As the U.S. Air Force's operational space component to USSTRATCOM, Lt Gen Helms led more than 20,500 personnel responsible for providing missile warning, space superiority, space situational awareness, satellite operations, space launch and range operations.
As Commander, JFCC SPACE, she directed all assigned and attached space forces providing tailored, responsive, local and global space effects in support of national and combatant commander objectives. Lieutenant General Helms retired from military service in 2014. Since retirement, General Helms has spent her time on Board work, consulting and speeches. General Helms is currently on the Board of Trustees for the Aerospace Corporation,

Hudson Institute Events Podcast
U.S. Strategic Command Commander's Perspective on 21st Century Deterrence

Sep 20, 2017 · 56:12


On September 20, Hudson Institute hosted General John E. Hyten, who provided an overview of his command vision, outlined his priorities, and explained how deterrence has evolved since the end of the Cold War and how it remains vital to preventing war and preserving peace.

Pritzker Military Museum & Library Podcasts
Admiral Cecil D. Haney, Commander of U.S. Strategic Command

Apr 29, 2016 · 74:08


In his current assignment, Admiral Haney serves as the senior commander of unified military forces from all four branches of the military assigned to USSTRATCOM, and is the leader, steward and advocate of the nation's strategic capabilities. In a special,...
