Podcasts about AI security

  • 291 podcasts
  • 511 episodes
  • 39m average duration
  • 1 new episode daily
  • Latest: Mar 12, 2026


Best podcasts about AI security

Latest podcast episodes about AI security

Cloud Wars Live with Bob Evans
AI Agent & Copilot Podcast: Mark Polino on Closing the AI Security Gap

Cloud Wars Live with Bob Evans

Play Episode Listen Later Mar 12, 2026 11:38


Key Takeaways

  • Session overview: AI is a transformative technology where security is lagging dangerously behind. Polino's session, "A Guide to Security Roles in AI Transformation (Implementation)," will explore why it's critical for organizations to reassess current roles, controls, and systems and proactively design security strategies specifically for an AI-driven environment.
  • Guardrails: AI systems can be easily manipulated through indirect prompts or parameter framing, making it essential to enforce extremely strict guidelines and access controls to prevent unintended exposure of sensitive data.
  • Exploring security with leaders: Organizations must proactively define security policies and controls for AI now to prevent users from going rogue or turning to shadow IT, because inaction will only amplify risk as sensitive data inevitably leaks into unsecured public AI tools.
  • Event takeaways: Polino notes the importance of events like this because they bridge the knowledge gap between AI leaders and everyday business users by equipping them to understand AI early and effectively transfer that knowledge across their organizations.

"AI is coming, whether you want it or not. The goal here is to figure out how to use it appropriately, how to make it as safe as you possibly can, and mitigate those risks inside your organization." Visit Cloud Wars for more.
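
The guardrail point above, strict controls to prevent unintended exposure of sensitive data, can be illustrated with a minimal output filter that screens model responses before they reach a user. This is a hypothetical sketch of the general idea, not anything presented in Polino's session; the patterns and function names are illustrative assumptions.

```python
import re

# Hypothetical sketch of the "guardrails" idea above: a deny-by-default
# output filter that redacts sensitive patterns before an AI response is
# shown to a user. The patterns and names are illustrative only.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # US SSN-like numbers
    re.compile(r"(?i)\bapi[_-]?key\b\s*[:=]\s*\S+"),   # inline API-key assignments
]

def redact_sensitive(text: str) -> str:
    """Replace every matched sensitive span with a [REDACTED] marker."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact_sensitive("Customer SSN is 123-45-6789, thanks!"))
# prints: Customer SSN is [REDACTED], thanks!
```

A real deployment would pair a filter like this with access controls on the input side, since redaction after the fact cannot undo data already sent to an unsecured public tool.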

Security Now (MP3)
SN 1069: You can't hide from LLMs - Was Your Smart TV a Stealth Proxy?

Security Now (MP3)

Play Episode Listen Later Mar 11, 2026 163:34


Think your online alias keeps you safe? This episode reveals how advanced language models are making it trivial to de-anonymize users at scale, challenging everything we thought we knew about internet privacy. Anthropic & Mozilla improve Firefox's security. Apple & Google begin testing cross-platform RCS encryption. Ubuntu's SUDO starts echoing asterisks. Inviting a web proxy into your home. Apple devices cleared by Germany for NATO's use. A serious remote takeover of OpenClaw. TokTok won't encrypt messaging for visibility. Microsoft bans the term "Microslop" on Discord. Lots of great listener feedback. LLMs could make Orwell's 1984 seem optimistic. Show Notes - https://www.grc.com/sn/SN-1069-Notes.pdf Hosts: Steve Gibson and Leo Laporte Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: bitwarden.com/twit zscaler.com/security guardsquare.com hoxhunt.com/securitynow

Joey Pinz Discipline Conversations
#830 Cybersecurity Summit Tampa 2026 - Mike Siers:

Joey Pinz Discipline Conversations

Play Episode Listen Later Mar 11, 2026 30:48 Transcription Available


How do we protect ourselves in a digital world where attackers face almost no real consequences? In this episode of Joey Pinz Discipline Conversations, Joey Pinz sits down with cybersecurity founder and inventor Mike Siers for a thought-provoking conversation that challenges everything we assume about online security, identity, and trust.

Mike's journey begins in the Florida National Guard and a deployment to Afghanistan, where life-altering experiences shaped how he sees service, responsibility, and problem-solving. That same mindset later led him into healthcare innovation—and eventually into cybersecurity—after realizing that the internet lacks one critical element found in the physical world: real risk for bad actors.

Inspired by military strategy and an MIT thesis on cyber power projection, Mike explains a radical idea: what if unauthorized access attempts cost money? Instead of defenders absorbing endless attacks, attackers would inherit the risk before they even try.

This conversation explores how empathy fuels innovation, why most cybersecurity models are reactive by design, and how shifting incentives could dramatically change online behavior. It's a powerful look at leadership, responsibility, and building solutions not just for today—but for the next generation.

⭐ Top 3 Highlights

Cables2Clouds
An Honest Conversation About AI Security

Cables2Clouds

Play Episode Listen Later Mar 11, 2026 52:18 Transcription Available


Ready for a reality check on AI security? We invited Cisco cybersecurity expert Katherine McNamara to dig into where large language models actually break: from prompt injection and over-permissioned plugins to reckless "vibe-coded" apps that leak IDs, photos, and entire backends. The stories are real, the stakes are high, and the fixes are concrete. We trace how AI sprawl mirrors the worst of early IoT—weak defaults, poor isolation, and a stampede to integrate models into billing, HR, and support without guardrails—only this time the blast radius includes your customer data and your legal exposure.

We talk through the human factor first. Written policies won't stop someone from pasting a pen test report into a public chatbot. DLP helps, but hybrid work and BYOD stretch defenses thin. Then we move to the core threat model: public and private models are targets; datasets can be poisoned; plugins often ship with admin-level scopes; and a clever prompt can trick an LLM into disclosing chat histories, creating new accounts, or modifying orders. Courts have already treated chatbots as company representatives, binding businesses to their outputs—another reason to treat every integration like an untrusted user with strict least privilege.

It's not all doom. Used well, AI gives security operations superpowers: correlating signals across dozens of tools, reducing alert fatigue, and surfacing lateral movement. The path forward is discipline, not denial. Fence models on the network. Prefer read-only to write. Gate plugins behind narrowly scoped APIs. Vet datasets for backdoors. Red-team prompts as seriously as you pen test code. And educate stakeholders with live demos so they see why these controls matter.

We also unpack the shaky economics—GPU costs, rising consumer fatigue, hype-fueled projects with little ROI—and why that pressure can erode privacy if teams aren't vigilant. If you're building with LLMs or trying to rein them in, this conversation gives you a practical map: what to allow, what to block, and how to make AI useful without turning your stack into an attack surface. Subscribe, share with a teammate who ships integrations, and drop a review with the one guardrail you'll implement this quarter.

Connect with our guest:
https://x.com/kmcnam1
https://www.linkedin.com/in/katherinermcnamara/
Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Monthly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
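
The "treat every integration like an untrusted user" advice above can be sketched as a deny-by-default scope check on each tool call a model requests. This is an illustrative sketch under assumed scope names (orders.read, tickets.read), not the episode's or any real plugin framework's implementation.

```python
from dataclasses import dataclass

# Illustrative sketch of "treat every integration like an untrusted user":
# each tool call the model requests is checked against a deny-by-default
# allow-list of narrow, read-only scopes. Names here are assumptions,
# not any real plugin framework's API.
ALLOWED_SCOPES = {
    "orders.read",    # read-only preferred over write access
    "tickets.read",
}

@dataclass
class ToolCall:
    tool: str
    scope: str

def authorize(call: ToolCall) -> bool:
    """Deny by default: only explicitly allow-listed scopes pass."""
    return call.scope in ALLOWED_SCOPES

print(authorize(ToolCall("order_lookup", "orders.read")))   # prints: True
print(authorize(ToolCall("order_admin", "orders.write")))   # prints: False
```

The design choice mirrors the episode's guidance: write-level scopes never reach the model unless someone deliberately adds them to the allow-list.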

Reimagining Cyber
AI Security and the Future of the SOC - #192

Reimagining Cyber

Play Episode Listen Later Mar 11, 2026 19:00


AI is transforming every corner of technology—but it's also creating an entirely new frontier for cybersecurity. In just a few short years, AI security has exploded into one of the fastest-growing segments in the industry. New startups are emerging almost weekly, regulators are racing to keep up, and security leaders are grappling with a fundamental question: how do you secure systems that are learning, evolving, and increasingly making decisions on their own?

Today's guest has been tracking the cybersecurity industry longer—and more closely—than almost anyone. Richard Stiennon is a renowned cybersecurity analyst, industry historian, and author of The Security Yearbook, widely regarded as the most comprehensive desk reference for the cybersecurity market. Now he's turning his attention to the next era of digital risk. His new book, Guardians of the Machine Age: Why AI Security Will Define the Future of Digital, is released this Wednesday, March 11—the same day this episode drops.

In this conversation, we explore why AI security has exploded so quickly, the forces driving this new market—from regulation to real-world attacks—and why Richard believes the standalone category of "AI security" may disappear entirely within the next year as AI becomes embedded in every security product. We also dig into the rise of AI-driven SOC automation, what it means when machines begin triaging—and even responding to—threats autonomously, and the biggest misconceptions CISOs still have about securing AI systems. If you want to understand where cybersecurity is heading in the age of intelligent machines, this is a conversation you won't want to miss.

As featured on Million Podcasts' Best 100 Cybersecurity Podcasts, Top 50 Chief Information Security Officer (CISO) Podcasts, and Top 70 Security Hacking Podcasts. This list is the most comprehensive ranking of cybersecurity podcasts online and we are honored to feature amongst the best!

Follow or subscribe to the show on your preferred podcast platform. Share the show with others in the cybersecurity world. Get in touch via reimaginingcyber@gmail.com

HW Podcasts
Bob Hart on AI, security, and the next phase of mortgage tech

HW Podcasts

Play Episode Listen Later Mar 11, 2026 11:12


As the mortgage industry prepares for one of its largest annual gatherings, Diego Sanchez sits down with Bob Hart to preview what professionals can expect at Experience 2026, hosted by ICE Mortgage Technology at the Wynn Las Vegas. The event will bring together more than 3,000 industry leaders, hundreds of companies, and a full agenda focused on innovation, strategy, and the future of mortgage technology. Hart explains how the conference is designed to give attendees practical insights and tools they can immediately apply to their businesses. The conversation explores key themes shaping this year's event—including artificial intelligence in mortgage operations, cybersecurity and fraud prevention, market conditions affecting lenders, and leadership development across the industry. Hart also previews new technology investments aimed at creating a more connected ecosystem linking originators, servicers, and consumers. For professionals navigating a rapidly evolving market, this episode offers an inside look at the conversations and innovations shaping the next chapter of mortgage technology. Related to this episode: Bob Hart's LinkedIn ICE Mortgage Technology The Power House podcast brings the biggest names in housing to answer hard-hitting questions about industry trends, operational and growth strategy, and leadership. Join HousingWire's Zeb Lowe every Thursday morning for candid conversations with industry leaders to learn how they're differentiating themselves from the competition. Hosted and produced by the HousingWire Content Studio.

Cloud Security Podcast
Browser Security Explained: Consent Phishing, "Click Fix" Attacks & The Limits of EDR

Cloud Security Podcast

Play Episode Listen Later Mar 10, 2026 46:07


Is your security team treating your Identity Provider (IDP) like a firewall? In this episode, Adam Bateman (CEO & Co-founder of Push Security) explains why that's a dangerous mistake and how modern attackers are bypassing SSO entirely. Drawing from his background leading red teams that simulated nation-state attacks, Adam breaks down the massive architectural shift from network-based attacks to browser-native exploits. We dive into the terrifying evolution of phishing, from "Click Fix" attacks that trick users into running malicious commands via their clipboard, to "Consent Phishing" that completely takes over Azure without ever touching the endpoint. If your company relies heavily on SaaS applications or Chromebooks, this episode would be a valuable listen.

Guest Socials - Adam's LinkedIn
Podcast Twitter - @CloudSecPod
If you want to watch videos of this live-streamed episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter
If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:50) Who is Adam Bateman? (Red Teaming & Simulating Nation States)
(05:40) Why Identity & MFA Are Not "Solved" Problems
(07:50) The Myth: Why an IDP is Not a Firewall
(11:30) Consent Phishing: Exploiting OAuth Apps
(13:30) The Architectural Shift: Network to Browser
(15:30) Scattered Spider & The Rise of Identity Coalitions
(19:30) Threat Modeling: On-Prem vs. Chromebooks
(23:20) The Problem with SSPM and API Limitations
(28:40) How "Click Fix" Attacks Trick Users into Running Malware
(32:30) Omnichannel Phishing: LinkedIn, SMS, and Google Ads
(34:30) Weaponizing Legitimate SaaS Apps (The DocuSign Exploit)
(37:00) Consent Fix: Full Azure Compromise Inside the Browser
(38:50) Disrupting the Secure Web Gateway (SWG) Market
(41:40) Fun Questions: Wakeboarding, Culture, and Brat's Restaurant

Resources spoken about during the episode: You can find out more about Push Security here. Thank you to Push Security for sponsoring this episode.
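
One defensive angle on the consent phishing described above is auditing which OAuth scopes granted to third-party apps exceed a read-only baseline. The scope names below resemble Microsoft Graph permissions, but the baseline set and function are assumptions for illustration, not Push Security's tooling or a complete audit.

```python
# Rough illustration of the consent-phishing exposure discussed above:
# audit which OAuth scopes granted to a third-party app exceed a
# read-only baseline. The scope names resemble Microsoft Graph
# permissions, but the baseline and function are assumptions, not a
# real audit tool.
HIGH_RISK_SCOPES = {
    "Mail.ReadWrite",
    "Files.ReadWrite.All",
    "Directory.ReadWrite.All",
}

def risky_grants(granted_scopes: list[str]) -> list[str]:
    """Return the granted scopes that carry write/admin-level access."""
    return sorted(s for s in granted_scopes if s in HIGH_RISK_SCOPES)

print(risky_grants(["Mail.Read", "Directory.ReadWrite.All"]))
# prints: ['Directory.ReadWrite.All']
```

Flagged grants are exactly the ones a consent-phishing app needs: an attacker who obtains write-level consent never has to touch the endpoint at all.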

Cloud Wars Live with Bob Evans
Can You Trust Your AI Data?

Cloud Wars Live with Bob Evans

Play Episode Listen Later Mar 10, 2026 2:53


Key Takeaways Herain Oberoi, Microsoft's general manager for data security, privacy, and compliance, recently held a session where he outlined the top security challenges of the AI era. Specifically, Oberoi outlined three concerns enterprises must address to build secure, scalable AI operations. First, he stressed strict access controls and disciplined data hygiene to prevent oversharing and sensitive data leakage. Second, regulatory compliance now requires continuous auditability of AI agent operations, with Microsoft Purview Compliance Manager enabling on-demand proof of control. Finally, fragmented solutions increase cost and complexity, while expanded Purview unifies data security, governance, and compliance in a single pane of glass. Enterprises that quickly adapt to rising security expectations will be best positioned to scale AI operations and realize the full value of the AI era. Visit Cloud Wars for more.

ITSPmagazine | Technology. Cybersecurity. Society
Tackling Third-Party Risk and AI Security in Healthcare | A Brand Spotlight Conversation with Jason Kor, Principal of HITRUST | HIMSS 2026 Event Coverage

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Mar 9, 2026 11:48


Third-party risk is no longer a background concern for healthcare organizations -- it is a frontline challenge. Jason Kor, Principal at HITRUST, works on the company's third-party risk management team, helping enterprises understand the security risk embedded in their supply chains. The numbers tell a stark story: according to Security Scorecard, 99% of the world's 2,000 largest companies are actively connected to a vendor that has experienced a breach in the past 18 months. And Verizon's Data Breach Investigations Report shows that the share of breaches tied to a third party has doubled year over year. HITRUST exists precisely to help organizations move from awareness to action. HITRUST will be at HIMSS 2026 in Las Vegas, March 9-12, at Booth 11307. Stop playing whack-a-mole with vendor risk -- step into the VR challenge and win prizes. For organizations already holding a HITRUST certification, the team has something else waiting: a trophy recognizing the commitment to independent, external audits and rigorous security standards. For those exploring certification for the first time, the booth is a chance to understand how HITRUST compares to alternatives like SOC 2 questionnaires -- and why scalability and risk reduction make it the stronger choice for supply chain assurance. Kor puts it plainly: the audits are time-consuming and expensive because they are effective. And at the end of the process, someone reads that report and makes real business decisions based on what it contains. Two major themes converge at this year's event: supply chain risk and AI. HITRUST has already launched an AI security assessment offering, and new CSF releases are on the horizon, including a report center feature enabling online review of assessments for anti-fraud and continuous monitoring purposes. 
On Tuesday, March 10, 2026, from 11:10 AM to 11:30 AM, Kor will deliver a 20-minute session titled "Understanding AI Security Risk -- The New Blind Spot in TPRM and Supply Chain Resilience." The session addresses a rapidly evolving challenge: as organizations build their own generative AI tooling -- or work with third parties that have integrated AI into their products -- questions around data sovereignty, input handling, and model provenance become critical, especially in healthcare where electronic health information is at stake. Also on the HIMSS 2026 agenda from HITRUST: Ryan Patrick, Executive Vice President of TPRM Customer Solutions, joins John P. Houston of UPMC and Chuck Christian of Franciscan Health for a Brunch Briefing titled "Building Secure, Compliant, and Resilient Healthcare Systems Together" on Tuesday, March 10, 2026, from 10:30 AM to 11:45 AM at Level 1, Casanova 505. The session offers practical strategies, frameworks, and real-world lessons for organizations looking to reduce risk, enhance protection, and advance trust in an evolving threat and regulatory landscape. This is a Brand Spotlight. A Brand Spotlight is a ~15 minute conversation designed to explore the guest, their company, and what makes their approach unique. 
Learn more: https://www.studioc60.com/creation#spotlight

GUEST: Jason Kor, Principal, HITRUST: https://www.linkedin.com/in/securityconsultantcissp/

RESOURCES:
HITRUST: https://hitrustalliance.net
Jason Kor session, "Understanding AI Security Risk -- The New Blind Spot in TPRM and Supply Chain Resilience" (Tuesday, March 10, 2026, 11:10 AM - 11:30 AM): https://app.himssconference.com/event/himss-2026/planning/UGxhbm5pbmdfNDMyMTMxOA==
"Building Secure, Compliant, and Resilient Healthcare Systems Together" Brunch Briefing (Tuesday, March 10, 2026, 10:30 AM - 11:45 AM): https://app.himssconference.com/event/himss-2026/planning/UGxhbm5pbmdfNDMzNzQwMQ==
HIMSS 2026 Global Health Conference and Exhibition: https://www.itspmagazine.com/cybersecurity-technology-society-events/himss-global-health-conference-amp-exhibition-2026

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

KEYWORDS: Jason Kor, HITRUST, Sean Martin, brand story, brand marketing, marketing podcast, brand spotlight, third-party risk management, TPRM, supply chain risk, healthcare cybersecurity, HIMSS 2026, AI security, generative AI risk, HITRUST CSF, cybersecurity certification, data sovereignty, electronic health information, vendor risk management

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Breaking Into Cybersecurity
From Libraries to AI Security: Peter Swimm's Cybersecurity Path | Breaking Into Cybersecurity

Breaking Into Cybersecurity

Play Episode Listen Later Mar 8, 2026 27:01


Peter Swimm started at a library computer desk. He ended up as a product owner at Microsoft and founder of his own AI security consultancy. In this episode, Peter shares the real path — not the polished LinkedIn version. He talks about why working at startups changed how he thinks about learning, what it actually takes to balance security with user experience, and why conversational AI is opening a security attack surface most teams aren't prepared for. If you're trying to break into cybersecurity from a non-traditional background, or you're already in tech and wondering what's next — this conversation is for you.

Financial Freedom for Physicians with Dr. Christopher H. Loo, MD-PhD

Email chris@drchrisloomdphd.com with "Podcast freebie" to book a coveted FREE guest spot on the show. To book a PREMIUM spot on the podcast: https://www.drchrisloomdphd.com/_paylink/AZpgR_7f Book a 1-on-1 coaching call: https://www.drchrisloomdphd.com/booking-calendar/introductory-session Subscribe to our email list: https://financial-freedom-podcast-with-dr-loo.kit.com/ Disclaimer: Not advice. Educational purposes only. Not an endorsement for or against. Results not vetted. Views of the guests do not represent those of the host or show.

Cloud Security Podcast
Is AI Hallucinations a Myth and the Real Threat from AI

Cloud Security Podcast

Play Episode Listen Later Mar 6, 2026 40:02


Are attackers really using AI to run end-to-end cyber campaigns? In this episode, Edward Wu (Founder and CEO, DropzoneAI) joins Ashish to separate the hype from reality when it comes to AI-driven attacks. Edward explains how attackers are currently using open-source LLMs for reconnaissance and spear-phishing, and why the major commercial models now explicitly prohibit users from generating exploits without vetting. On the defense side, Edward shares how AI agents have successfully automated over 160 years' worth of alert investigations in the real world, proving that 100% software-delivered SOC triage is already here. We also debunk the myth of AI "hallucinations," explaining why most errors are actually just poor context management. If you're building a security operations center or working with an MSSP, this episode will teach you how to shift from manual alert fatigue to leveraging AI for threat hunting.

Guest Socials - Edward's LinkedIn
Podcast Twitter - @CloudSecPod
If you want to watch videos of this live-streamed episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter
If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:50) Who is Edward Wu? (Founder of Dropzone AI)
(04:50) The Reality of AI Cyber Attacks Today (Recon vs. End-to-End)
(07:20) Why Commercial LLMs Are Blocking Exploit Generation
(11:50) How MSSPs are Evolving with AI Triage
(18:20) The Asymmetric Capacity Gap: Why Humans Can't Keep Up
(22:30) Automating 160 Years of Alert Investigations
(23:50) Why AI Hallucinations are Actually Context Management Failures
(26:00) Build vs. Buy: The Data Network Effect for AI Agents
(29:20) The New Workflow for SOC Analysts & Threat Hunters
(31:30) Defining "Threategy": Scope, Authorization, and Context
(35:50) How to Detect Prompt Injection (Treat it like an Insider Threat)
(38:30) Dropzone AI Announcements at RSAC

Resources spoken about during the episode:
- Dropzone Diner RSAC 2026
- If you want to learn more about Dropzone, you can do that here!

Hacker Valley Studio
Can AI Do Your Cyber Job? Post Your Job Req and Find Out with Marcus J. Carey

Hacker Valley Studio

Play Episode Listen Later Mar 6, 2026 38:49


Last episode, Ron and Marcus made predictions. This episode, they brought the receipts. A journalist built an app with vibe coding and got hacked on live television. A social network built entirely by AI (not a single line of human code!) exposed 1.5 million authentication tokens and private messages between agents. And 88% of organizations have already had an AI security incident, while barely 14% of deployed agents ever saw a security review. The warnings from last episode aged fast. Marcus J. Carey is back to talk about what that actually means for the people building right now, not the people theorizing about it. Ron and Marcus are in the code themselves, and this conversation is what that experience actually looks like: OpenClaw running loose on your machine, agents racking up API bills, and why guidance, not prompts, not tools, is the real skill that separates builders who thrive from builders who ship disasters.
Impactful Moments
00:00 - Introduction
02:00 - Vibe coding hack on live TV
03:30 - Moltbook leaks 1.5M auth tokens
06:00 - Marcus' origin story: War Games, 1983
08:00 - OpenClaw escapes the lab
13:30 - AT&T cuts help desk spend 90%
17:00 - Context is king, guidance is everything
19:00 - Can AI do your job req right now?
24:00 - The first cybersecurity jobs agents will replace
27:00 - Expertise + AI = 1000x yourself
30:00 - Focus on outcomes, not new tools
Links
Connect with our guest, Marcus J. Carey, on LinkedIn: https://www.linkedin.com/in/marcuscarey/
Read the articles we referenced in this episode:
The vibe coding hack that aired on live TV; ICAEW breaks down exactly how it happened and what it means for anyone building with AI: https://www.icaew.com/insights/viewpoints-on-the-news/2026/feb-2026/cyber-dangers-of-agents-and-vibe-coding
88% of organizations have already had an AI security incident. See the full data from the Cisco State of AI Security 2026 report: https://www.helpnetsecurity.com/2026/02/23/ai-agent-security-risks-enterprise/
Check out our upcoming events: https://www.hackervalley.com/livestreams
Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/

GovCast
CMS Advances Zero Trust, AI Security in IT Modernization Push | Zscaler Public Sector Summit 2026

GovCast

Play Episode Listen Later Mar 6, 2026 9:24


The Centers for Medicare & Medicaid Services is modernizing its IT infrastructure to improve efficiency, security and access for patients and providers. Since taking the role in May, Wade Zarriello, director of infrastructure and user services, has led efforts to consolidate platforms, optimize shared services and cut costs — exceeding CMS's fiscal year 2025 savings goal by $750 million. Zarriello also discussed how the agency is implementing a zero trust cybersecurity framework and leveraging AI tools to strengthen data protection and operational reliability. He highlighted CMS's use of GSA OneGov agreements with AWS, Oracle and Salesforce to drive cost savings, improve platform consolidation and support hybrid cloud initiatives.

IT Visionaries
How the Office of the CFO Is Becoming AI-Powered

IT Visionaries

Play Episode Listen Later Mar 5, 2026 49:20


Compliance and regulatory reporting used to mean endless spreadsheets, fragmented data sources, and teams drowning in manual work. Today, AI is transforming how the world's largest companies manage financial reporting, sustainability disclosures, and audit workflows—not by replacing humans, but by giving them time back to do strategic work. In this episode of IT Visionaries, host Chris Brandt sits down with Kim Huffman, CIO of Workiva, the platform used by 85% of the Fortune 100 for critical financial and compliance reporting. Kim shares her unique perspective as both a former Workiva customer and now the CIO steering the company into an AI-powered future. They explore how the office of the CFO is evolving under pressure from new sustainability regulations, how AI governance actually works in practice, and why collaboration between IT, finance, sustainability, and risk teams has become essential. Kim also discusses the changing role of the CIO, the coming wave of autonomous agents in the workplace, and why having more data doesn't always mean making better decisions.
Key Moments:
00:58 – The State of Compliance Today
02:18 – Why Standards and Regulations Matter
05:48 – The Complexity of Global Compliance
07:36 – Data Collection Across Teams
08:36 – Single Source of Truth
10:20 – The Sustainability Data Challenge
13:36 – The Endless Spreadsheet Problem
16:12 – What's Driving the CFO Office
19:46 – AI's Strategic Role at Workiva
23:02 – Beyond Repetitive Tasks
25:20 – Transforming How Teams Work
27:03 – Will AI Replace Jobs or Create Capacity?
30:00 – Measuring AI's Business Impact
33:06 – Speed vs. Data Overload
36:25 – The Evolving Role of the CIO
40:00 – Technology Leadership in Transition
43:09 – The Next Five Years for CIOs
46:14 – Managing the Coming Wave of AI Agents
50:02 – AI Will Create Its Own Security Industry
52:26 – The Sustainability Reporting Reality
55:31 – Resource Constraints and AI Consumption
57:34 – Why ESG Data Is Now Critical Business Intelligence
59:23 – Keeping NPS High While Innovating
This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.
IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

ITSPmagazine | Technology. Cybersecurity. Society
The 72-Minute Gap: What the Breaches, the Vendors, and the Messaging Are Actually Telling Us | Lens Four by Sean Martin | Read by TAPE9

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Mar 5, 2026 14:22


Attackers are moving in 72 minutes. One CISO has already eliminated the entire SOC team. And the industry is spending a quarter of a trillion dollars while struggling to define what "resilience" even means. In this edition of Lens Four, Sean Martin looks at the cybersecurity landscape through three lenses — programs, innovation, and messaging — to connect the signals that matter.

Redefining CyberSecurity
The 72-Minute Gap: What the Breaches, the Vendors, and the Messaging Are Actually Telling Us | Lens Four by Sean Martin | Read by TAPE9

Redefining CyberSecurity

Play Episode Listen Later Mar 5, 2026 14:22


Attackers are moving in 72 minutes. One CISO has already eliminated the entire SOC team. And the industry is spending a quarter of a trillion dollars while struggling to define what "resilience" even means. In this edition of Lens Four, Sean Martin looks at the cybersecurity landscape through three lenses — programs, innovation, and messaging — to connect the signals that matter.

Born In Silicon Valley
Stop AI Data Poisoning Now

Born In Silicon Valley

Play Episode Listen Later Mar 5, 2026 39:59


AI is evolving faster than we can secure it, and data poisoning threatens to turn the systems we trust into unpredictable liabilities. Join us as Wendy Chin, CEO of PureCipher, reveals the silent vulnerabilities in artificial intelligence and how we can protect the future of superintelligence before it is too late. In this episode, we explore the critical intersection of cybersecurity and artificial intelligence. Wendy breaks down how even a fraction of a percent of compromised data can drastically reduce AI accuracy and lead to dangerous real-world outcomes. We discuss the transition from traditional cybersecurity perimeters to defending the data itself, ensuring that AI models remain safe, ethical, and aligned with humanity. We also dive into the groundbreaking OmniSeal technology from PureCipher, an invisible watermark that authenticates data and prevents malicious tampering. Whether you are building AI agents, managing sensitive enterprise data, or simply navigating the new digital landscape, understanding these advanced security layers is absolutely essential for the future of business.
Chapters
00:00 Introduction and Background of Wendy Chin
01:06 AI Security and the Future of Technology
04:07 Meet Wendy Chin and the Future of AI Security
05:46 From Bell Labs to Cybersecurity Innovation
08:21 Why Startups Outpace Corporate Hierarchies
11:18 The Hidden Dangers of AI Data Poisoning
13:33 Building PureCipher to Secure Artificial Intelligence
17:57 Why Super AI Needs Empathy to Survive
22:38 Securing Training Data with Invisible Watermarks
28:21 Data Sovereignty and Monetizing Your Digital Footprint
32:27 How the PureCipher Platform Protects Data
35:02 Partnering with the Defense Industry for AI Safety
38:28 What Every Company Must Ask AI Vendors
41:08 Why AI Hallucinates and How to Fix It
45:00 The 2026 Roadmap for PureCipher and OmniSeal
46:32 The Business Model Behind AI Trust Layers
48:04 Where to Connect with Wendy Chin
Host: Jake Aaron Villarreal leads the top AI recruitment firm in Silicon Valley, www.matchrelevant.com, uncovering stories of funded startups and going behind the scenes to tell their founders' journeys. If you are growing an AI startup or have a great story to tell, email us at: jake.villarreal@matchrelevant.com

ITSPmagazine | Technology. Cybersecurity. Society
SOC Automation and the AI-Driven Future of Cybersecurity Defense | A Redefining CyberSecurity Podcast Conversation with Richard Stiennon, Chief Research Analyst of IT-Harvest

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Mar 4, 2026 26:10


⬥EPISODE NOTES⬥ The security operations center has always been a battleground of volume, velocity, and human endurance. Analysts have long faced the impossible math of too many alerts, too few hours, and too much at stake. For years, the industry promised automation would change that equation -- but the technology was never quite ready to deliver. That moment, according to Richard Stiennon, has now arrived. Stiennon, Chief Research Analyst at IT-Harvest, has spent two decades tracking every corner of the cybersecurity vendor landscape. His data now shows more than 61 net-new SOC automation vendors -- companies that did not exist a few years ago -- built from the ground up to replace the work of tier-one, tier-two, and tier-three analysts. Some of these vendors launched in January 2024 and reached $1 million in ARR by April. By the end of 2025, several were reporting $3 million ARR. These are not incremental improvements. They represent a structural shift in how security operations can be run. What makes this generation of SOC automation different from earlier SIEM and SOAR tooling is scope and autonomy. The value proposition is blunt: 100% alert triage, 24 hours a day, 7 days a week -- with automated case building, threat investigation, and response actions including machine isolation and reimaging. Stiennon points to a CISO he met, speaking under Chatham House rules, who disclosed that a large enterprise had already eliminated its entire human SOC team. He predicts that disclosure will go public before long. The conversation also explores the business context question that security leaders frequently wrestle with: are these AI-driven SOC tools operating with a narrow cyber mandate, potentially optimizing for security metrics at the expense of business continuity? 
Stiennon pushes back on that concern, arguing that large language models are already trained on the full breadth of human knowledge -- they understand business context at a level that exceeds most organizations' internal documentation. The more pressing risk, he suggests, is not that AI will act outside business intent, but that organizations will move too slowly to benefit. Waiting six months for a proof-of-concept report while spending a million dollars on human SOC operations is not due diligence -- it is opportunity cost. The conversation also touches on data privacy in AI-driven security, the role of federated learning and fully homomorphic encryption for compliance-sensitive environments, and what security leaders can do today to evaluate and accelerate their own adoption timeline. Stiennon will be at RSA Conference 2026 with his new book, Guardians of the Machine Age: Why AI Security Will Define Digital Defense, continuing to make the case for a field that is moving faster than most organizations are prepared to acknowledge. 
⬥GUEST⬥
Richard Stiennon, Chief Research Analyst at IT-Harvest | Website: https://it-harvest.com/ | On LinkedIn: https://www.linkedin.com/in/stiennon/
⬥HOST⬥
Sean Martin, Co-Founder at ITSPmagazine, Studio C60, and Host of Redefining CyberSecurity Podcast & Music Evolves Podcast | Website: https://www.seanmartin.com/
⬥RESOURCES⬥
IT-Harvest | https://it-harvest.com/
Richard Stiennon on LinkedIn | https://www.linkedin.com/in/stiennon/
Guardians of the Machine Age: Why AI Security Will Define Digital Defense (Richard Stiennon) | Available via IT-Harvest and major booksellers
RSAC Conference 2026 Coverage on ITSPmagazine | https://www.itspmagazine.com/rsac-2026-conference-san-francisco-usa-cybersecurity-event-infosec-conference-coverage
The Future of Cybersecurity Newsletter | https://www.linkedin.com/newsletters/7108625890296614912/
More Redefining CyberSecurity Podcast episodes | https://www.seanmartin.com/redefining-cybersecurity-podcast
Redefining CyberSecurity Podcast on YouTube | https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
⬥ADDITIONAL INFORMATION⬥
On Podcast: https://www.seanmartin.com/redefining-cybersecurity-podcast
On YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
Newsletter: https://itspm.ag/future-of-cybersecurity
Contact Sean: https://www.seanmartin.com/
⬥KEYWORDS⬥
richard stiennon, it-harvest, sean martin, soc automation, ai security, security operations center, threat detection, autonomous response, alert triage, security operations, cybersecurity vendors, ai agents, large language models, federated learning, siem, soar, redefining cybersecurity, cybersecurity podcast, redefining cybersecurity podcast
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

ESG Talk
Davos Discussions: What is the ROI Heresy?

ESG Talk

Play Episode Listen Later Mar 4, 2026 11:43


Traditional finance models are hitting a wall. This episode highlights a panel at Davos that gets straight to the engine room of the enterprise. Jatin Dalal, Chief Financial Officer, Cognizant; Mike Rost, Chief Strategy Officer, Workiva; Jonathan Zanger, Chief Technology Officer, Check Point; and Jennifer Steinmann, Global Sustainability Business Leader, Deloitte gathered to talk about:
The ROI heresy: Why waiting for a fixed ROI is like using an obsolete map for a moving target
The 3x productivity jump: Why a 300% increase is the new starting point for AI
Security risks: Understanding white font attacks and AI doppelgangers in HR systems
Strategic insights: How predictive analytics and earth observation are changing risk valuation
Timestamps:
00:00—Multiplying traditional productivity by three
02:15—The Davos panel: AI promise and peril
04:10—Why ROI is an irrelevant measure for AI
05:40—Security alerts: The white font attack
07:15—The $3.8 trillion insight at stake
08:20—The Monday morning mandate
"Whatever you thought about traditional productivity multiplied by three at minimum, and that should be a starting point, not the end point." —Jatin Dalal, CFO of Cognizant
Find past conversations at workiva.com/podcast/the-pre-read

Between Two COO's with Michael Koenig
AI Agents Need Logins Too: Identity, Security, and the Future of AI | Greg Keller, CTO, JumpCloud

Between Two COO's with Michael Koenig

Play Episode Listen Later Mar 4, 2026 32:01


Get 90 days of Fellow free at Fellow.ai/coo
In this episode, Michael Koenig speaks with Greg Keller, co-founder and CTO of JumpCloud, about identity access management and why it's becoming one of the most important operational systems in the age of AI. Greg explains how traditional identity systems were designed for office-based companies running Microsoft infrastructure and why that model broke as companies moved to SaaS, cloud infrastructure, and remote work. The discussion then turns to the next big shift: the rise of AI agents and synthetic identities inside organizations. As companies deploy more AI tools, the number of machine identities may soon outnumber human employees. Managing what those systems can access will become a critical security and operational challenge.
Topics Covered
What a CTO actually does: Greg explains the different types of CTO roles and how technology leaders help companies anticipate where the market is headed.
Identity Access Management explained simply: IAM answers three core questions inside every company: Who are you? What can you access? How is that access managed?
Why the old IT model broke: Traditional identity systems were built for on-premise offices and Microsoft infrastructure. Modern companies now operate across SaaS applications, cloud infrastructure, remote work environments, and multiple operating systems.
How JumpCloud approaches identity: JumpCloud was built to manage identity across devices, applications, and infrastructure regardless of platform.
Where Okta fits in the ecosystem: Okta helped modernize browser-based authentication through Single Sign-On, while JumpCloud focuses on broader identity infrastructure.
AI, Security, and Synthetic Identities
Why COOs should push AI adoption: Greg argues AI adoption is no longer optional. Companies must encourage teams to improve productivity and efficiency using AI.
The rise of synthetic identities: AI agents, bots, APIs, and service accounts are becoming new actors inside companies that require identity governance.
Bots may soon outnumber employees: Organizations will soon manage more machine identities than human ones.
AI as a potential insider threat: AI systems can become security risks if they are granted excessive permissions or misinterpret policies.
The API key governance problem: Many AI integrations rely on API keys, which are often poorly managed and can create hidden security risks.
Key Takeaway
As companies adopt AI, identity access management becomes the control layer that determines what both humans and machines are allowed to do inside the organization. The companies that manage identity well will move faster and operate more securely.
Links:
Michael on LinkedIn: https://linkedin.com/in/michael-koenig514
Greg on LinkedIn: https://www.linkedin.com/in/gregorykeller/
JumpCloud: https://jumpcloud.com/
Between Two COO's: https://betweentwocoos.com
Episode Link: https://betweentwocoos.com/ai-agents-identity-access-greg-keller

Redefining CyberSecurity
SOC Automation and the AI-Driven Future of Cybersecurity Defense | A Redefining CyberSecurity Podcast Conversation with Richard Stiennon, Chief Research Analyst of IT-Harvest

Redefining CyberSecurity

Play Episode Listen Later Mar 4, 2026 26:10


⬥EPISODE NOTES⬥ The security operations center has always been a battleground of volume, velocity, and human endurance. Analysts have long faced the impossible math of too many alerts, too few hours, and too much at stake. For years, the industry promised automation would change that equation -- but the technology was never quite ready to deliver. That moment, according to Richard Stiennon, has now arrived. Stiennon, Chief Research Analyst at IT-Harvest, has spent two decades tracking every corner of the cybersecurity vendor landscape. His data now shows more than 61 net-new SOC automation vendors -- companies that did not exist a few years ago -- built from the ground up to replace the work of tier-one, tier-two, and tier-three analysts. Some of these vendors launched in January 2024 and reached $1 million in ARR by April. By the end of 2025, several were reporting $3 million ARR. These are not incremental improvements. They represent a structural shift in how security operations can be run. What makes this generation of SOC automation different from earlier SIEM and SOAR tooling is scope and autonomy. The value proposition is blunt: 100% alert triage, 24 hours a day, 7 days a week -- with automated case building, threat investigation, and response actions including machine isolation and reimaging. Stiennon points to a CISO he met, speaking under Chatham House rules, who disclosed that a large enterprise had already eliminated its entire human SOC team. He predicts that disclosure will go public before long. The conversation also explores the business context question that security leaders frequently wrestle with: are these AI-driven SOC tools operating with a narrow cyber mandate, potentially optimizing for security metrics at the expense of business continuity? 
Stiennon pushes back on that concern, arguing that large language models are already trained on the full breadth of human knowledge -- they understand business context at a level that exceeds most organizations' internal documentation. The more pressing risk, he suggests, is not that AI will act outside business intent, but that organizations will move too slowly to benefit. Waiting six months for a proof-of-concept report while spending a million dollars on human SOC operations is not due diligence -- it is opportunity cost. The conversation also touches on data privacy in AI-driven security, the role of federated learning and fully homomorphic encryption for compliance-sensitive environments, and what security leaders can do today to evaluate and accelerate their own adoption timeline. Stiennon will be at RSA Conference 2026 with his new book, Guardians of the Machine Age: Why AI Security Will Define Digital Defense, continuing to make the case for a field that is moving faster than most organizations are prepared to acknowledge. 
⬥GUEST⬥
Richard Stiennon, Chief Research Analyst at IT-Harvest | Website: https://it-harvest.com/ | On LinkedIn: https://www.linkedin.com/in/stiennon/
⬥HOST⬥
Sean Martin, Co-Founder at ITSPmagazine, Studio C60, and Host of Redefining CyberSecurity Podcast & Music Evolves Podcast | Website: https://www.seanmartin.com/
⬥RESOURCES⬥
IT-Harvest | https://it-harvest.com/
Richard Stiennon on LinkedIn | https://www.linkedin.com/in/stiennon/
Guardians of the Machine Age: Why AI Security Will Define Digital Defense (Richard Stiennon) | Available via IT-Harvest and major booksellers
RSAC Conference 2026 Coverage on ITSPmagazine | https://www.itspmagazine.com/rsac-2026-conference-san-francisco-usa-cybersecurity-event-infosec-conference-coverage
The Future of Cybersecurity Newsletter | https://www.linkedin.com/newsletters/7108625890296614912/
More Redefining CyberSecurity Podcast episodes | https://www.seanmartin.com/redefining-cybersecurity-podcast
Redefining CyberSecurity Podcast on YouTube | https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
⬥ADDITIONAL INFORMATION⬥
On Podcast: https://www.seanmartin.com/redefining-cybersecurity-podcast
On YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
Newsletter: https://itspm.ag/future-of-cybersecurity
Contact Sean: https://www.seanmartin.com/
⬥KEYWORDS⬥
richard stiennon, it-harvest, sean martin, soc automation, ai security, security operations center, threat detection, autonomous response, alert triage, security operations, cybersecurity vendors, ai agents, large language models, federated learning, siem, soar, redefining cybersecurity, cybersecurity podcast, redefining cybersecurity podcast
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The CyberWire
When the map lies at sea.

The CyberWire

Play Episode Listen Later Mar 3, 2026 26:15


GPS jamming hits the Strait of Hormuz. An Iran-linked threat actor uses AI to target Iraqi government officials. Hacktivists leak thousands of DHS contract records. A Hawaii cancer center suffers a data breach. Google patches over a hundred Android vulnerabilities. A new report tallies the scale of third-party breaches. An MS-Agent AI framework flaw allows full system compromise. On today's Threat Vector segment, Evan Gordenker, Director of AI Security and DPRK Operations at Unit 42, joins David Moulton to unpack North Korea's hiring scams. Tire tech turns tattletale.
Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.
CyberWire Guest
North Korea has turned your hiring pipeline into a revenue machine. And most organizations have no idea. Evan Gordenker, Director of AI Security and DPRK Operations at Unit 42, joins David Moulton on today's Threat Vector segment to unpack how this operation actually works. Listen to their full conversation to get more detail and catch new episodes of Threat Vector every Thursday on your favorite podcast app.
Selected Reading
Attacks on GPS Spike Amid US and Israeli War on Iran (WIRED)
Amazon: Drone strikes damaged AWS data centers in Middle East (Bleeping Computer)
Iranian Cyber Threat Actor Targets Iraqi Government Officials in AI-Powered Campaign (Infosecurity Magazine)
Hacktivists claim to have hacked Homeland Security to release ICE contract data (TechCrunch)
UH Cancer Center data breach affects nearly 1.2 million people (Bleeping Computer)
Android gets patches for Qualcomm zero-day exploited in attacks (Bleeping Computer)
Chrome Gemini panel became privilege escalator for rogue extensions (The Register)
Huge "Shadow Layer" of Organizations Hit by Supply Chain Attacks (Infosecurity Magazine)
Vulnerability in MS-Agent AI Framework Can Allow Full System Compromise (SecurityWeek)
Researchers Uncover Method to Track Cars via Tire Sensors (SecurityWeek)
Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show.
Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com.
The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Cybersecurity Defenders Podcast
#296 - How to Strengthen Cyber Resilience in an AI Era with Chris Cochran from SANS Institute

The Cybersecurity Defenders Podcast

Play Episode Listen Later Feb 25, 2026 31:15


On this episode of The Cybersecurity Defenders Podcast, we speak with Chris Cochran, Field CISO & Vice President of AI Security at SANS Institute, about how to navigate the future of AI risk and security strategy. Chris works at the intersection of cyber defense, AI safety, and emerging risk, where the threats are converging and the playbooks are still being written. His career has taken him from the Marine Corps to NSA, U.S. Cyber Command, the U.S. House of Representatives, Mandiant, and Netflix. Across every role, one throughline: understanding adversaries, building high-trust teams, and translating complex problems into strategies leaders can act on. Today, Chris advises organizations, governments, and research institutions on AI governance, agentic threat preparedness, and unifying safety and security into a single discipline. He contributes to global standards efforts including the EU AI Act (via OWASP AI) and leads executive education on cybersecurity and AI strategy at SANS. Support our show by sharing your favorite episodes with a friend, subscribe, give us a rating or leave a comment on your podcast platform. This podcast is brought to you by LimaCharlie, maker of the SecOps Cloud Platform, infrastructure for SecOps where everything is built API first. Scale with confidence as your business grows. Start today for free at limacharlie.io

Get IT: Cybersecurity insights for the foreseeable future.
How Safe Is Your Data with AI Agents?

Get IT: Cybersecurity insights for the foreseeable future.

Play Episode Listen Later Feb 24, 2026 15:50


In this episode, host Ivo Wiens is joined by Ben Boi-Doku, Chief Cybersecurity Strategist at CDW Canada, to explore the rapidly evolving landscape of AI agents, discussing practical questions about deployment, security and policy. Whether you're an everyday user or a tech enthusiast, this conversation provides valuable insights into how AI is shaping our personal and professional lives and what to watch out for. To learn more, visit cdw.ca Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Packet Pushers - Full Podcast Feed
NB563: Palo Alto Networks Nets Koi for AI Security; Quantum Networking Notches Research Wins

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Feb 23, 2026 30:56


Take a Network Break! We start with follow-ups on secure browsers and data centers in space, and then sound the red alert about an RCE vulnerability in NLTK. On the news front, Palo Alto Networks acquires a startup that monitors endpoints for malicious packages, browser extensions, scripts, and other threats, Lumen debuts a multi-cloud gateway... Read more »

Packet Pushers - Network Break
NB563: Palo Alto Networks Nets Koi for AI Security; Quantum Networking Notches Research Wins

Packet Pushers - Network Break

Play Episode Listen Later Feb 23, 2026 30:56


Take a Network Break! We start with follow-ups on secure browsers and data centers in space, and then sound the red alert about an RCE vulnerability in NLTK. On the news front, Palo Alto Networks acquires a startup that monitors endpoints for malicious packages, browser extensions, scripts, and other threats, Lumen debuts a multi-cloud gateway... Read more »

Packet Pushers - Fat Pipe
NB563: Palo Alto Networks Nets Koi for AI Security; Quantum Networking Notches Research Wins

Packet Pushers - Fat Pipe

Play Episode Listen Later Feb 23, 2026 30:56


Take a Network Break! We start with follow-ups on secure browsers and data centers in space, and then sound the red alert about an RCE vulnerability in NLTK. On the news front, Palo Alto Networks acquires a startup that monitors endpoints for malicious packages, browser extensions, scripts, and other threats, Lumen debuts a multi-cloud gateway... Read more »

Cyber Security Today
Agentic AI Security Is Broken and How To Fix It: Ido Shlomo, Co-founder and CTO of Token Security

Cyber Security Today

Play Episode Listen Later Feb 21, 2026 44:56


Jim Love discusses how rapid adoption of agentic AI is repeating the industry pattern of shipping technology without security, citing issues like vulnerabilities in Anthropic's MCP and insecure open-source agent tools. He interviews Ido Shlomo, co-founder and CTO of Token Security, who argues AI agents are fundamentally hard to secure because they are non-deterministic, have infinite input/output space, and often require broad permissions to be useful. Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/cst. Shlomo proposes focusing security on access, identity, attribution, least privilege, and auditability rather than trying to filter prompts and outputs, and describes Token's "intent-based permission management" approach that maps agents and sub-agents as non-human identities tied to their purpose and allowed actions. The conversation covers real-world risks such as developer tools like Claude Code running with extensive access, widespread over-provisioning of admin permissions and API keys, exposure of unencrypted local token files, and misconfigurations that leak data publicly. Shlomo recommends organizations build governance processes for agents—discovery/inventory, boundary setting, continuous monitoring, and secure decommissioning—and says AI is needed to help police AI. He also highlights emerging trends like agent teams and multi-day autonomous tasks, and notes Token Security is a top-10 finalist in the RSA Innovation Sandbox 2026, planning to present an intent-and-access-focused security model for AI agents.
00:00 Sponsor: Meter's integrated networking stack
00:19 Why agentic AI security is breaking (MCP & open-source chaos)
02:53 Meet Token Security: practical guardrails for AI agents
04:57 Why you can't just ban agents at work (shadow AI reality)
06:24 Tel Aviv's cybersecurity pipeline: gaming, military, and startups
08:57 Why AI/agents are fundamentally hard to secure (new OS + 'human spirit')
13:44 Trust, autonomy, and permissions: managing the blast radius
18:17 Real-world exposure: Claude Code and the developer identity attack surface
20:16 A workable approach: treat agents as untrusted processes with identity + least privilege
22:33 Zero Trust for Agents: Access ≠ Permission to Act
23:27 Token's "Intent-Based Permission Management" Explained
25:29 Building the Identity Map: Tracing What Agents Touch
26:52 The Secret Sauce: Using AI to Secure AI in Real Time
28:10 Real-World Case: 1,500 Agents and Wildly Over-Provisioned Access
30:57 CUA 'Computer-Use' Agents: Exciting, Personal… and Terrifying
34:44 Secure-by-Default & Sandboxing: Fixing 'Always Allow' Dark Patterns
35:36 What Security Teams Should Do Now: Inventory, Boundaries, Governance
37:59 What's Next: Agent Teams and Multi-Day Autonomous Work
40:10 Tony Stark Vision: Agents That Improve the Human Experience
41:02 RSA Innovation Sandbox: Token's Big Bet on Intent + Access
43:01 Wrap-Up, Audience Q&A, and Sponsor Message

Cloud Security Podcast
Why AI Infrastructure is Harder to Secure Than Cloud

Cloud Security Podcast

Play Episode Listen Later Feb 20, 2026 34:03


Is AI security just "Cloud Security 2.0"? Toni De La Fuente, creator of the open-source tool Prowler, joins Ashish to explain why securing AI workloads requires a fundamentally different approach than traditional cloud infrastructure. We dive deep into the "Shared Responsibility Gap" emerging with managed AI services like AWS Bedrock and OpenAI. Toni spoke about the hidden dangers of default AI architectures and why you should never connect an MCP (Model Context Protocol) server directly to a database. We discuss the new AI-driven SDLC, where tools like Claude Code can generate infrastructure but also create massive security blind spots if not monitored.
Guest Socials - Toni's Linkedin
Podcast Twitter - @CloudSecPod
If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter
If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast
Questions asked:
(00:00) Introduction
(02:50) Who is Toni De La Fuente? (Creator of Prowler)
(03:50) AI Security vs. Cloud Security: What's the Difference?
(07:20) The Shared Responsibility Gap in AI Services (Bedrock, OpenAI)
(11:30) The "Fifth Party" Risk: Managed AI Access
(13:40) AI Architecture Best Practices: Never Connect MCP to DB Directly
(16:40) Prowler's AI Pillars: Generating Dashboards & Detections
(22:30) The New SDLC: Securing Code from Claude Code & Lovable
(25:30) The "Magic" Trap: Why AI Doesn't Know Your Security Context
(28:30) Top 3 Priorities for Security Leaders (Infra, LLM, Shadow AI)
(30:40) Future Predictions: Why Predicting 12 Months Out is Impossible

David Bombal
#539: Agentic AI is breaking your Cybersecurity controls (and how to solve it)

David Bombal

Play Episode Listen Later Feb 20, 2026 22:35


In this video David speaks to Peter Bailey (SVP and GM of Cisco's Security business). AI agents are moving fast inside enterprises, and CISOs are hitting the brakes for one reason: the attack surface is expanding at machine speed. In this interview, we break down how agentic AI changes security, why MCP servers and agent tool access create new risks, and what a zero trust approach looks like when the “user” is a non-deterministic agent. We cover real-world problems like shadow MCP servers, agents touching sensitive systems and PII, and why traditional perimeter controls and firewalls are not enough when traffic is encrypted and actions happen too quickly downstream. You'll also hear what Cisco is doing across the AI lifecycle: AI Defense for model scanning, provenance and guardrails, plus new protections focused on agent identity, dynamic authorization, behavior monitoring, and revocation. On the networking side, we discuss how SD-WAN and secure access (SASE) can add visibility and policy control for AI usage, including prioritizing latency-sensitive AI traffic while still enforcing security. If you're a security engineer, network engineer, or CISO trying to move from AI hype to safe deployment, this video gives you a practical mental model and the controls to start building now. Big thank you to ‪@Cisco‬ for sponsoring this video and for sponsoring my trip to Cisco Live Amsterdam.
// Peter Bailey's SOCIALS //
LinkedIn: / peterhbailey
Guest Bio: https://newsroom.cisco.com/c/r/newsro...
// David's SOCIAL //
Discord: discord.com/invite/usKSyzb
Twitter: www.twitter.com/davidbombal
Instagram: www.instagram.com/davidbombal
LinkedIn: www.linkedin.com/in/davidbombal
Facebook: www.facebook.com/davidbombal.co
TikTok: tiktok.com/@davidbombal
YouTube: / @davidbombal
Spotify: open.spotify.com/show/3f6k6gE...
SoundCloud: / davidbombal
Apple Podcast: podcasts.apple.com/us/podcast...
// MY STUFF //
https://www.amazon.com/shop/davidbombal
// SPONSORS //
Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com
// MENU //
0:00 - Coming Up
0:30 - Introduction
01:15 - CISOs' Problems with AI
02:35 - Real Issues with AI Agents
04:29 - Growth of the Attack Surface
05:34 - Concern of Poisoned AI and MCP
08:09 - What is the Kill-chain
10:16 - AI with Built-in Security
11:56 - Best Practices for AI Security
14:08 - Cisco Innovations for AI
16:48 - Cisco's Red Team for own AI
18:27 - Secure AI in Public Places
20:09 - Should You get into Cyber Security
21:26 - Advice To Your Younger Self
22:29 - Outro
Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel! Disclaimer: This video is for educational purposes only. #cisco #ciscoemea #ciscolive

Security Unfiltered
Hackers Cracked AI Security | Here's How They Did It

Security Unfiltered

Play Episode Listen Later Feb 18, 2026 53:40 Transcription Available


Send a text. Most cybersecurity stories talk about the hacks, but this episode peels back the curtain on the raw, unfiltered journey of a hacker turned industry pioneer. Jason Haddix shares how his early days of hex editing and fake IDs evolved into leading offensive security at Fortune 100 giants — all driven by relentless curiosity and defiance. His tales of surviving the shadowy underground, navigating multi-year career pivots, and turning obsession into innovation will blow your mind. This isn't just about tech — it's about fearlessly forging a path in a chaotic, ever-changing world where knowledge is power and resilience is everything. You'll discover the secret frameworks behind modern pen testing—like the Bug Hunters Methodology—and how cutting-edge tools are reshaping cybersecurity. Jason dives into his real-world battles: from bypassing the most sophisticated security measures to hacking into critical infrastructure under intense pressure. His insights reveal the brutal truths of red teaming, physical infiltration, and the mental grit required to succeed when everyone else doubts you. We break down the rise of AI and LLMs in security: how attackers jailbreak systems, bypass defenses with prompt injections, and weaponize new technologies faster than security teams can respond. Jason warns about deploying these powerful tools without enough guardrails or understanding — and how FOMO is fueling a wild, unsecured frontier. His perspective is a call to arms for defenders and hackers alike: adapt fast, think boldly, and stay one step ahead in the most dangerous cyber game yet. This episode is essential for anyone hungry to understand the raw reality of offensive security, the future of AI in hacking, and the relentless pursuit of mastery in a digital battlefield. Whether you're a seasoned pro, a curious newcomer, or a business leader, Jason's fearless authenticity will challenge your assumptions and ignite your passion to innovate.
Hit play — your fight for security starts now.
Chapters
00:00 Introduction and Background in Cybersecurity
06:05 Early Experiences and Learning in Cybersecurity
12:14 Transitioning to Professional Penetration Testing
18:30 Challenges and Realities of Consulting in Cybersecurity
20:41 Phishing Tests and Their Consequences
23:09 Transitioning to Entrepreneurship
26:05 The Evolution of Training and Consulting
31:18 The Role of AI in Cybersecurity
39:11 Navigating AI Security Challenges
39:11 Understanding LLMs and User Education
41:42 Privacy Concerns and Risk Management in AI
44:32 Prompt Engineering Vulnerabilities and Jailbreaking Techniques
47:03 Security Challenges in AI Systems
49:39 Future of AI and Community Engagement
Support the show
Follow the Podcast on Social Media!
Tesla Referral Code: https://ts.la/joseph675128
YouTube: https://www.youtube.com/@securityunfilteredpodcast
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
Affiliates
➡️ OffGrid Faraday Bags: https://offgrid.co/?ref=gabzvajh
➡️ OffGrid Coupon Code: JOE
➡️ Unplugged Phone: https://unplugged.com/
Unplugged's UP Phone - The performance you expect, with the privacy you deserve. Meet the alternative. Use Code UNFILTERED at checkout
*See terms and conditions at affiliated webpages. Offers are subject to change. These are affiliated/paid promotions.

Random but Memorable
AI security tips for modern families with Childnet

Random but Memorable

Play Episode Listen Later Feb 17, 2026 68:04


How can you help your loved ones navigate and securely adopt AI tools? Will Gardner, CEO of Childnet, joins the show for a vital conversation about helping families use AI safely. We talk about Childnet's latest research and the practical ways you can become a digital role model and start better AI conversations at home.

Alexa's Input (AI)
Securing the Software Supply Chain with Justin Cappos

Alexa's Input (AI)

Play Episode Listen Later Feb 17, 2026 48:49


Modern software is built on layers and layers of code. So how do we know we can trust it? In this episode of Alexa's Input (AI), Alexa Griffith sits down with Justin Cappos, professor of computer science at NYU and a leading expert in software supply chain security, to unpack what trust really means in today's digital infrastructure. From package managers and dependency chains to large-scale outages and AI systems built on inherited code, Justin explains why many security failures aren't random accidents; they're predictable consequences of weak process, misaligned incentives, and insecure design.
They discuss:
Why security only becomes visible when something breaks
The difference between unavoidable failure and negligence
How modern software supply chains amplify small mistakes
The role of leadership and culture in preventing breaches
Why verification systems like TUF and in-toto matter more than ever
As AI accelerates development and increases system complexity, the need for verifiable trust only grows. This episode is a practical look at the invisible infrastructure that keeps modern software, and increasingly, modern AI, from collapsing under its own complexity.
Podcast Links
Watch: https://www.youtube.com/@alexa_griffith
Read: https://alexasinput.substack.com/
Listen: https://creators.spotify.com/pod/profile/alexagriffith/
More: https://linktr.ee/alexagriffith
Website: https://alexagriffith.com/
LinkedIn: https://www.linkedin.com/in/alexa-griffith/
Find out more about the guest at:
Website: https://engineering.nyu.edu/faculty/justin-cappos
NYU page: https://ssl.engineering.nyu.edu/personalpages/jcappos/
Wikipedia: https://en.wikipedia.org/wiki/Justin_Cappos
Chapters
00:00 Introduction to Justin Cappos and His Work
01:17 The Importance of Security in Software Systems
03:50 Understanding Security Breaches: Mistakes vs. System Design Problems
06:34 Cultural Factors in Security Failures
09:25 Justin's Journey in Software Security
12:03 The Role of Academia in Enterprise Security
14:10 Evaluating Enterprise Security Systems
16:58 Foundational Projects in Software Security
19:21 AI Security Concerns and Future Directions
24:59 The Need for MCP 2.0
28:57 Security Challenges with LLMs
32:33 Designing Secure AI Systems
37:14 Ethical Dilemmas in AI Decision-Making
40:17 The Role of AI in Open Source
43:44 Trust and Mindset in AI Security

ITSPmagazine | Technology. Cybersecurity. Society
Semantic Chaining: A New Image-Based Jailbreak Targeting Multimodal AI | A Brand Highlight Conversation with Alessandro Pignati, AI Security Researcher of NeuralTrust

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Feb 13, 2026 7:14


What happens when AI safety filters fail to catch harmful content hidden inside images? Alessandro Pignati, AI Security Researcher at NeuralTrust, joins Sean Martin to reveal a newly discovered vulnerability that affects some of the most widely used image-generation models on the market today. The technique, called semantic chaining, is an image-based jailbreak attack discovered by the NeuralTrust research team, and it raises important questions about how enterprises secure their multimodal AI deployments.
How does semantic chaining work? Pignati explains that the attack uses a single prompt composed of several parts. It begins with a benign scenario, such as a historical or educational context. A second instruction asks the model to make an innocent modification, like changing the color of a background. The final, critical step introduces a malicious directive, instructing the model to embed harmful content directly into the generated image. Because image-generation models apply fewer safety filters than their text-based counterparts, the harmful instructions are rendered inside the image without triggering the usual safeguards.
The NeuralTrust research team tested semantic chaining against prominent models including Gemini Nano Pro, Grok 4, and Seedream 4.5 by ByteDance, finding the attack effective across all of them. For enterprises, the implications extend well beyond consumer use cases. Pignati notes that if an AI agent or chatbot has access to a knowledge base containing sensitive information or personal data, a carefully structured semantic chaining prompt can force the model to generate that data directly into an image, bypassing text-based safety mechanisms entirely.
Organizations looking to learn more about semantic chaining and the broader landscape of AI agent security can visit the NeuralTrust blog, where the research team publishes detailed breakdowns of their findings. NeuralTrust also offers a newsletter with regular updates on agent security research and newly discovered vulnerabilities.
This is a Brand Highlight. A Brand Highlight is a ~5 minute introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight
GUEST: Alessandro Pignati, AI Security Researcher, NeuralTrust
On LinkedIn: https://www.linkedin.com/in/alessandro-pignati/
RESOURCES: Learn more about NeuralTrust: https://neuraltrust.ai/
Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight
KEYWORDS: Alessandro Pignati, NeuralTrust, Sean Martin, brand story, brand marketing, marketing podcast, brand highlight, semantic chaining, image jailbreak, AI security, agentic AI, multimodal AI, LLM safety, AI red teaming, prompt injection, AI agent security, image-based attacks, enterprise AI security Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Cloud Security Podcast
How Attackers Bypass AI Guardrails with Natural Language

Cloud Security Podcast

Play Episode Listen Later Feb 10, 2026 46:36


In the world of Generative AI, natural language has become the new executable. Attackers no longer need complex code to breach your systems; sometimes, asking for a "poem" is enough to steal your passwords. In this episode, Eduardo Garcia (Global Head of Cloud Security Architecture at Check Point) joins Ashish to explain the paradigm shift in AI security. He shares his experience building AI-powered fraud detection systems and why traditional security controls fail against intent-based attacks like prompt injection and data poisoning. We dive deep into the reality of Shadow AI, where employees unknowingly train public models with sensitive corporate data, and the sophisticated world of Deepfakes, where attackers can bypass biometric security using AI-generated images unless you're tracking micro-movements of the eye.
Guest Socials - Eduardo's Linkedin
Podcast Twitter - @CloudSecPod
If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter
If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast
(00:00) Introduction
(01:55) Who is Eduardo Garcia? (Check Point)
(03:00) Defining Security for GenAI: The Focus on Prompts
(05:20) Why Natural Language is the New Executable
(08:50) Multilingual Attacks: Bypassing Filters with Mandarin
(12:00) Shift Left vs. Shift Right: The 70/30 Rule for AI Security
(15:30) The "Poem Hack": Stealing Passwords with Creative Prompts
(21:00) Shadow AI: The "HR Spreadsheet" Leak Scenario
(25:40) Security vs. Compliance in a Blurring World
(28:00) The Conflict: "My Budget Doesn't Include Security"
(34:00) The 5 V's of AI Data: Volume, Veracity, Velocity
(40:00) Deepfakes & Biometrics: Detecting Micro-Movements
(43:40) Fun Questions: Soccer, Family, and Honduran Tacos

Computer und Kommunikation (komplette Sendung) - Deutschlandfunk
International Cybercrime Threat / AI Security Report / Is TikTok Addictive?

Computer und Kommunikation (komplette Sendung) - Deutschlandfunk

Play Episode Listen Later Feb 7, 2026 30:13


Kloiber, Manfred www.deutschlandfunk.de, Computer und Kommunikation

Cloud Security Podcast
Vulnerability Management vs. Exposure Management

Cloud Security Podcast

Play Episode Listen Later Feb 6, 2026 39:38


In this episode, Brad Hibbert (COO & Chief Strategy Officer at Brinqa) joins Ashish to explain why traditional risk-based vulnerability management (RBVM) is no longer enough in a cloud-first world. We explore the evolution from simple patch management to Exposure Management, a holistic approach that sits above your security tools to connect infrastructure, code, and cloud risks to actual business impact. Brad breaks down the critical difference between a "Risk Owner" (the service owner) and a "Remediation Owner" (the team fixing the bug) and why this distinction solves the "who fixes this?" problem.
This conversation covers practical steps to uplift your VM program, how AI is helping prioritize the noise, and why compliance often just "proves activity" rather than reducing real risk. Whether you're drowning in Jira tickets or trying to automate remediation, this episode provides a roadmap for modernizing your security posture.
Guest Socials - Brad's Linkedin
Podcast Twitter - @CloudSecPod
If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter
If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast
Questions asked:
(00:00) Introduction
(02:50) Who is Brad Hibbert? (Brinqa)
(04:55) The Evolution: From Scanning Servers to Cloud Complexity
(06:50) What is Risk-Based Vulnerability Management?
(08:50) Risk Owners vs. Remediation Owners: Who Fixes What?
(12:00) How AI is Changing Vulnerability Management
(15:20) Defining Exposure Management: Moving Beyond the Tools
(18:30) The Challenge of "Data Inconsistency" Between Tools
(22:30) Readiness Check: Are You Ready for Exposure Management?
(25:10) Automated Remediation: Is "Zero Tickets" Possible?
(28:40) Compliance vs. Risk: Why "Activity" isn't "Impact"
(31:30) Maturity Milestones for Exposure Management
(36:50) Fun Questions: Golf, Turkish Kebabs & Friendships

Cloud Security Podcast
Is Developer Friendly AI Security Possible with MCP & Shadow AI

Cloud Security Podcast

Play Episode Listen Later Feb 5, 2026 63:02


Is "developer-friendly" AI security actually possible? In this episode, Bryan Woolgar-O'Neil (CTO & Co-founder of Harmonic Security) joins Ashish to dismantle the traditional "block everything" approach to security. Bryan explains why 70% of Model Context Protocol (MCP) servers are running locally on developer laptops and why trying to block them is a losing battle. Instead, he advocates for a "coaching" approach: intervening in real-time to guide engineers rather than stopping their flow.
We dive deep into the technical realities of MCP (Model Context Protocol), why it's becoming the standard for connecting AI to data, and the security risks of connecting it to production environments. Bryan also shares his prediction that Small Language Models (SLMs) will eventually outperform general giants like ChatGPT for specific business tasks.
Guest Socials - Bryan's Linkedin
Podcast Twitter - @CloudSecPod
If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter
If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast
Questions asked:
(00:00) Introduction
(01:55) Who is Bryan Woolgar-O'Neil?
(03:00) Why AI Adoption Stops at Experimentation
(05:15) The "Shadow AI" Blind Spot: Firewall Stats vs. Reality
(08:00) Is AI Security Fundamentally Different? (Speed & Scale)
(10:45) Can Security Ever Be "Developer Friendly"?
(14:30) What is MCP (Model Context Protocol)?
(17:20) Why 70% of MCP Usage is Local (and the Risks)
(21:30) The "Coaching" Approach: Don't Just Block, Educate
(25:40) Developer First: Permissive vs. Blocking Cultures
(30:20) The Rise of the "Head of AI" Role
(34:30) Use Cases: Workforce Productivity vs. Product Integration
(41:00) An AI Security Maturity Model (Visibility -> Access -> Coaching)
(46:00) Future Prediction: Agentic Flows & Urgent Tasks
(49:30) Why Small Language Models (SLMs) Will Win
(53:30) Fun Questions: Feature Films & Pork Dumplings

Joey Pinz Discipline Conversations
#809 Greg Fitzgerald:

Joey Pinz Discipline Conversations

Play Episode Listen Later Jan 28, 2026 49:07


Send us a text. In this powerhouse episode, Joey Pinz sits down with one of cybersecurity's most influential builders—a serial market maker who has helped shape some of the industry's most iconic companies. From Sourcefire and Fortinet to Cylance, Javelin, and now Sevco Security, Fitz brings unmatched perspective on what separates successful cyber companies from the rest—and what MSPs must do now to stay relevant. Fitz breaks down why visibility is the core of modern security, why most organizations still don't actually know what assets they have, and how exposure management has become the foundation of cyber resilience. He also explains where the real money is flowing in the MSP/MSSP space, the biggest mistakes founders still make, and what MSPs must do to move confidently into security services. On the personal side, Fitz shares insights from a life built around curiosity, communication, and impact—shaped by early roles at Coca-Cola during the Olympics, BMC, Compaq, and decades of startup leadership. His mission today? Protect the planet through better security, better intelligence, and smarter business decisions.

This Week in Tech (Audio)
TWiT 1068: Toto's Electrostatic Chuck - Is TikTok's New Privacy Policy Cause for Alarm?

This Week in Tech (Audio)

Play Episode Listen Later Jan 26, 2026 172:26


Microsoft quietly hands over BitLocker keys to the government, TikTok's new privacy terms spark a user panic, and Europe's secret tech backups reveal anxious prep for digital fallout. Plus, how gambling platforms are changing the future of news and sports.

You can bet on how much snow will fall in New York City this weekend
Europe Prepares for a Nightmare Scenario: The U.S. Blocking Access to Tech
China, US sign off on TikTok US spinoff
TikTok users freak out over app's 'immigration status' collection -- here's what it means
Elon Musk's Grok A.I. Chatbot Made Millions of Sexualized Images, New Estimates Show
Microsoft Gave FBI Keys To Unlock Encrypted Data, Exposing Major Privacy Flaw - Forbes
House of Lords votes to ban social media for Brits under 16
Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health"
Route leak incident on January 22, 2026
149 Million Usernames and Passwords Exposed by Unsecured Database
Millions of people imperiled through sign-in links sent by SMS
Anthropic revises Claude's 'Constitution,' and hints at chatbot consciousness
The new Siri chatbot may run on Google servers, not Apple's
A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to 'Humanize' Chatbots
GitHub - anthropics/original_performance_takehome: Anthropic's original performance take-home, now open for you to try!
Telly's "free" ad-based TVs make notable revenue—when they're actually delivered - Ars Technica
Toilet Maker Toto's Shares Get Unlikely Boost From AI Rush - Slashdot
Dr. Gladys West, whose mathematical models inspired GPS, dies at 95

Host: Leo Laporte
Guests: Alex Stamos, Doc Rock, and Patrick Beja
Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
Sponsors: threatlocker.com/twit meter.com/twit redis.io expressvpn.com/twit shopify.com/twit


Joey Pinz Discipline Conversations
#805 MSSP Alert Live - Tony Pietrocola:

Joey Pinz Discipline Conversations

Play Episode Listen Later Jan 21, 2026 30:30


In this high-energy and entertaining episode, Joey Pinz sits down with cybersecurity founder and unabashed Italian-American storyteller Tony Pietrocola. From stomping grapes as a child to running an AI-driven security operations platform, Tony brings a rare blend of toughness, humor, and entrepreneurial clarity.

They jump from wine, cooking, and massive NFL bodies to college football, concussions, and how elite athletes are built differently. Tony shares what makes college football the real American spectacle, and why private equity is about to reshape the sport.

On the cybersecurity front, Tony breaks down the challenges MSPs face, why most still struggle with security, and how AgileBlue helps them build profitable, white-label practices without the overhead of running a SOC. He explains the three questions every MSP should ask a vendor, the rise of AI-assisted attacks, and why consolidation and greenfield opportunities are the biggest missed revenue streams.

The conversation ends with health, habit, and personal transformation, discussing Joey's 130-lb weight loss, Tony's daily 5 a.m. workouts, and the childhood structure that forged their work ethic.