Podcasts about ai security

  • 271PODCASTS
  • 458EPISODES
  • 39mAVG DURATION
  • 5WEEKLY NEW EPISODES
  • Feb 5, 2026LATEST

POPULARITY

20192020202120222023202420252026


Best podcasts about ai security

Latest podcast episodes about ai security

Cloud Security Podcast
Is Developer Friendly AI Security Possible with MCP & Shadow AI

Cloud Security Podcast

Play Episode Listen Later Feb 5, 2026 63:02


Is "developer-friendly" AI security actually possible? In this episode, Bryan Woolgar-O'Neil (CTO & Co-founder of Harmonic Security) joins Ashish to dismantle the traditional "block everything" approach to security.Bryan explains why 70% of Model Context Protocol (MCP) servers are running locally on developer laptops and why trying to block them is a losing battle . Instead, he advocates for a "coaching" approach, intervening in real-time to guide engineers rather than stopping their flow .We dive deep into the technical realities of MCP (Model Context Protocol), why it's becoming the standard for connecting AI to data, and the security risks of connecting it to production environments . Bryan also shares his prediction that Small Language Models (SLMs) will eventually outperform general giants like ChatGPT for specific business tasks .Guest Socials - ⁠⁠⁠⁠Bryan's Linkedin Podcast Twitter - ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠@CloudSecPod⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security Podcast- Youtube⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠- ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security Newsletter ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠If you are interested in AI Security, you can check out our sister podcast -⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ AI Security Podcast⁠Questions asked:(00:00) Introduction(01:55) Who is Bryan Woolgar-O'Neil?(03:00) Why AI Adoption Stops at Experimentation(05:15) The "Shadow AI" Blind Spot: Firewall Stats vs. Reality (08:00) Is AI Security Fundamentally Different? (Speed & Scale) (10:45) Can Security Ever Be "Developer Friendly"? (14:30) What is MCP (Model Context Protocol)? (17:20) Why 70% of MCP Usage is Local (and the Risks) (21:30) The "Coaching" Approach: Don't Just Block, Educate (25:40) Developer First: Permissive vs. Blocking Cultures (30:20) The Rise of the "Head of AI" Role (34:30) Use Cases: Workforce Productivity vs. Product Integration (41:00) An AI Security Maturity Model (Visibility -> Access -> Coaching) (46:00) Future Prediction: Agentic Flows & Urgent Tasks (49:30) Why Small Language Models (SLMs) Will Win (53:30) Fun Questions: Feature Films & Pork Dumplings

The Voice of Reason with Andy Hooser
Angela Nakalembe: Government Shutdown 2.0, Trump Cabinet Meeting, and AI Security Risks

The Voice of Reason with Andy Hooser

Play Episode Listen Later Jan 29, 2026 36:48


Guest Angela Nakalembe, AI and Safety Expert, joins to discuss increase in AI technology, and the challenges to decipher what is real or fake online. Discussion of concerns to children, AI online bullying, tools for education, and more.  Democrats threaten another government shutdown until Congress defunds ICE. What? Discussion of appropriations process, Democrats attempt to redeem themselves for political gain during election season, and how far could a government shutdown go. 

Joey Pinz Discipline Conversations
#809 Greg Fitzgerald:

Joey Pinz Discipline Conversations

Play Episode Listen Later Jan 28, 2026 49:07


Send us a textIn this powerhouse episode, Joey Pinz sits down with one of cybersecurity's most influential builders—a serial market maker who has helped shape some of the industry's most iconic companies. From Sourcefire and Fortinet to Cylance, Javelin, and now Sevco Security, Fitz brings unmatched perspective on what separates successful cyber companies from the rest—and what MSPs must do now to stay relevant.Fitz breaks down why visibility is the core of modern security, why most organizations still don't actually know what assets they have, and how exposure management has become the foundation of cyber resilience. He also explains where the real money is flowing in the MSP/MSSP space, the biggest mistakes founders still make, and what MSPs must do to move confidently into security services.On the personal side, Fitz shares insights from a life built around curiosity, communication, and impact—shaped by early roles at Coca-Cola during the Olympics, BMC, Compaq, and decades of startup leadership. His mission today? Protect the planet through better security, better intelligence, and smarter business decisions.

Trust Issues
EP 24 - FOMO, identity, and the realities of AI at scale

Trust Issues

Play Episode Listen Later Jan 27, 2026 47:09


In this episode of Security Matters, host David Puner sits down with Ariel Pisetzky, chief information officer at CyberArk, for a candid look at the fast‑evolving intersection of AI, cybersecurity, and IT innovation. As organizations race to adopt AI, the fear of missing out is driving rapid decisions—often without enough consideration for identity, security, or long‑term impact.Ariel shares practical insights on what it really takes to secure AI at scale, from combating AI‑enabled phishing attacks to managing agent identities and reducing growing risks in the software supply chain. The conversation explores how leaders can balance innovation with identity‑centric guardrails, understand the economics of AI adoption, and push for the democratization of IT without losing control. Whether you're a CIO, an IT leader, or simply curious about the future of cybersecurity, this episode offers clear, actionable guidance to help you stay ahead in 2026 and beyond.

This Week in Tech (Audio)
TWiT 1068: Toto's Electrostatic Chuck - Is TikTok's New Privacy Policy Cause for Alarm?

This Week in Tech (Audio)

Play Episode Listen Later Jan 26, 2026 172:26


Microsoft quietly hands over BitLocker keys to the government, TikTok's new privacy terms spark a user panic, and Europe's secret tech backups reveal anxious prep for digital fallout. Plus, how gambling platforms are changing the future of news and sports. You can bet on how much snow will fall in New York City this weekend Europe Prepares for a Nightmare Scenario: The U.S. Blocking Access to Tech China, US sign off on TikTok US spinoff TikTok users freak out over app's 'immigration status' collection -- here's what it means Elon Musk's Grok A.I. Chatbot Made Millions of Sexualized Images, New Estimates Show Microsoft Gave FBI Keys To Unlock Encrypted Data, Exposing Major Privacy Flaw - Forbes House of Lords votes to ban social media for Brits under 16 Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health" Route leak incident on January 22, 2026 149 Million Usernames and Passwords Exposed by Unsecured Database Millions of people imperiled through sign-in links sent by SMS Anthropic revises Claude's 'Constitution,' and hints at chatbot consciousness The new Siri chatbot may run on Google servers, not Apple's A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to 'Humanize' Chatbots GitHub - anthropics/original_performance_takehome: Anthropic's original performance take-home, now open for you to try! Telly's "free" ad-based TVs make notable revenue—when they're actually delivered - Ars Technica Toilet Maker Toto's Shares Get Unlikely Boost From AI Rush - Slashdot Dr. Gladys West, whose mathematical models inspired GPS, dies at 95 Host: Leo Laporte Guests: Alex Stamos, Doc Rock, and Patrick Beja Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit meter.com/twit redis.io expressvpn.com/twit shopify.com/twit

This Week in Tech (Video HI)
TWiT 1068: Toto's Electrostatic Chuck - Is TikTok's New Privacy Policy Cause for Alarm?

This Week in Tech (Video HI)

Play Episode Listen Later Jan 26, 2026 172:26


Microsoft quietly hands over BitLocker keys to the government, TikTok's new privacy terms spark a user panic, and Europe's secret tech backups reveal anxious prep for digital fallout. Plus, how gambling platforms are changing the future of news and sports. You can bet on how much snow will fall in New York City this weekend Europe Prepares for a Nightmare Scenario: The U.S. Blocking Access to Tech China, US sign off on TikTok US spinoff TikTok users freak out over app's 'immigration status' collection -- here's what it means Elon Musk's Grok A.I. Chatbot Made Millions of Sexualized Images, New Estimates Show Microsoft Gave FBI Keys To Unlock Encrypted Data, Exposing Major Privacy Flaw - Forbes House of Lords votes to ban social media for Brits under 16 Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health" Route leak incident on January 22, 2026 149 Million Usernames and Passwords Exposed by Unsecured Database Millions of people imperiled through sign-in links sent by SMS Anthropic revises Claude's 'Constitution,' and hints at chatbot consciousness The new Siri chatbot may run on Google servers, not Apple's A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to 'Humanize' Chatbots GitHub - anthropics/original_performance_takehome: Anthropic's original performance take-home, now open for you to try! Telly's "free" ad-based TVs make notable revenue—when they're actually delivered - Ars Technica Toilet Maker Toto's Shares Get Unlikely Boost From AI Rush - Slashdot Dr. Gladys West, whose mathematical models inspired GPS, dies at 95 Host: Leo Laporte Guests: Alex Stamos, Doc Rock, and Patrick Beja Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit meter.com/twit redis.io expressvpn.com/twit shopify.com/twit

All TWiT.tv Shows (MP3)
This Week in Tech 1068: Toto's Electrostatic Chuck

All TWiT.tv Shows (MP3)

Play Episode Listen Later Jan 26, 2026 172:26


Microsoft quietly hands over BitLocker keys to the government, TikTok's new privacy terms spark a user panic, and Europe's secret tech backups reveal anxious prep for digital fallout. Plus, how gambling platforms are changing the future of news and sports. You can bet on how much snow will fall in New York City this weekend Europe Prepares for a Nightmare Scenario: The U.S. Blocking Access to Tech China, US sign off on TikTok US spinoff TikTok users freak out over app's 'immigration status' collection -- here's what it means Elon Musk's Grok A.I. Chatbot Made Millions of Sexualized Images, New Estimates Show Microsoft Gave FBI Keys To Unlock Encrypted Data, Exposing Major Privacy Flaw - Forbes House of Lords votes to ban social media for Brits under 16 Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health" Route leak incident on January 22, 2026 149 Million Usernames and Passwords Exposed by Unsecured Database Millions of people imperiled through sign-in links sent by SMS Anthropic revises Claude's 'Constitution,' and hints at chatbot consciousness The new Siri chatbot may run on Google servers, not Apple's A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to 'Humanize' Chatbots GitHub - anthropics/original_performance_takehome: Anthropic's original performance take-home, now open for you to try! Telly's "free" ad-based TVs make notable revenue—when they're actually delivered - Ars Technica Toilet Maker Toto's Shares Get Unlikely Boost From AI Rush - Slashdot Dr. Gladys West, whose mathematical models inspired GPS, dies at 95 Host: Leo Laporte Guests: Alex Stamos, Doc Rock, and Patrick Beja Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit meter.com/twit redis.io expressvpn.com/twit shopify.com/twit

Radio Leo (Audio)
This Week in Tech 1068: Toto's Electrostatic Chuck

Radio Leo (Audio)

Play Episode Listen Later Jan 26, 2026 172:26


Microsoft quietly hands over BitLocker keys to the government, TikTok's new privacy terms spark a user panic, and Europe's secret tech backups reveal anxious prep for digital fallout. Plus, how gambling platforms are changing the future of news and sports. You can bet on how much snow will fall in New York City this weekend Europe Prepares for a Nightmare Scenario: The U.S. Blocking Access to Tech China, US sign off on TikTok US spinoff TikTok users freak out over app's 'immigration status' collection -- here's what it means Elon Musk's Grok A.I. Chatbot Made Millions of Sexualized Images, New Estimates Show Microsoft Gave FBI Keys To Unlock Encrypted Data, Exposing Major Privacy Flaw - Forbes House of Lords votes to ban social media for Brits under 16 Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health" Route leak incident on January 22, 2026 149 Million Usernames and Passwords Exposed by Unsecured Database Millions of people imperiled through sign-in links sent by SMS Anthropic revises Claude's 'Constitution,' and hints at chatbot consciousness The new Siri chatbot may run on Google servers, not Apple's A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to 'Humanize' Chatbots GitHub - anthropics/original_performance_takehome: Anthropic's original performance take-home, now open for you to try! Telly's "free" ad-based TVs make notable revenue—when they're actually delivered - Ars Technica Toilet Maker Toto's Shares Get Unlikely Boost From AI Rush - Slashdot Dr. Gladys West, whose mathematical models inspired GPS, dies at 95 Host: Leo Laporte Guests: Alex Stamos, Doc Rock, and Patrick Beja Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit meter.com/twit redis.io expressvpn.com/twit shopify.com/twit

All TWiT.tv Shows (Video LO)
This Week in Tech 1068: Toto's Electrostatic Chuck

All TWiT.tv Shows (Video LO)

Play Episode Listen Later Jan 26, 2026 172:26 Transcription Available


Microsoft quietly hands over BitLocker keys to the government, TikTok's new privacy terms spark a user panic, and Europe's secret tech backups reveal anxious prep for digital fallout. Plus, how gambling platforms are changing the future of news and sports. You can bet on how much snow will fall in New York City this weekend Europe Prepares for a Nightmare Scenario: The U.S. Blocking Access to Tech China, US sign off on TikTok US spinoff TikTok users freak out over app's 'immigration status' collection -- here's what it means Elon Musk's Grok A.I. Chatbot Made Millions of Sexualized Images, New Estimates Show Microsoft Gave FBI Keys To Unlock Encrypted Data, Exposing Major Privacy Flaw - Forbes House of Lords votes to ban social media for Brits under 16 Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health" Route leak incident on January 22, 2026 149 Million Usernames and Passwords Exposed by Unsecured Database Millions of people imperiled through sign-in links sent by SMS Anthropic revises Claude's 'Constitution,' and hints at chatbot consciousness The new Siri chatbot may run on Google servers, not Apple's A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to 'Humanize' Chatbots GitHub - anthropics/original_performance_takehome: Anthropic's original performance take-home, now open for you to try! Telly's "free" ad-based TVs make notable revenue—when they're actually delivered - Ars Technica Toilet Maker Toto's Shares Get Unlikely Boost From AI Rush - Slashdot Dr. Gladys West, whose mathematical models inspired GPS, dies at 95 Host: Leo Laporte Guests: Alex Stamos, Doc Rock, and Patrick Beja Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit meter.com/twit redis.io expressvpn.com/twit shopify.com/twit

SAP and Enterprise Trends Podcasts from Jon Reed (@jonerp) of diginomica.com
Are ERP and SCM vendors ready for AI security? - the Louis Columbus interview

SAP and Enterprise Trends Podcasts from Jon Reed (@jonerp) of diginomica.com

Play Episode Listen Later Jan 25, 2026 30:10


In the January edition of the Enterprise month in review, we interviewed Louis Columbus with the burning question: are ERP and supply chain vendors ready for AI security, and new attack vectors like prompt injection? Louis has been nailing this on his Venture Beat AI security blog - so we put him in the hot seat to see what we could learn. This podcast is only the Columbus interview, which has been optimized for sound quality. If you want to see the full video replay with slides, check: https://youtube.com/live/-DQBB6mYJ_g.

SAP and Enterprise Trends Podcasts from Jon Reed (@jonerp) of diginomica.com
Enterprise month in review - ERP (and supply chain) vendors aren't ready for AI security

SAP and Enterprise Trends Podcasts from Jon Reed (@jonerp) of diginomica.com

Play Episode Listen Later Jan 25, 2026 64:43


Are ERP and supply chain vendors ready for AI security, and new attack vectors like prompt injection? Our special guest Louis Columbus says no. Louis has been nailing this on his Venture Beat AI security blog - let's put him in the hot seat and see what we can learn. Your hosts Brian Sommer and Jon Reed will also share their underrated news stories of the month, and unleash their enterprise highs and lows via the infamous slide deck. As always, bring your savviest (and snarkiest) commentary and let's get this done. Note: this is the full show, including our first 20 minutes of underrated news stories and final whiffs. The interview with Louis Columbus is also being issued as a separate audio podcast. If you want to see the video replay with slides, check: https://youtube.com/live/-DQBB6mYJ_g.

Business of Tech
Authority Challenges for MSPs: Deepfake Risks, AI Security Shifts, and Vendor Accountability

Business of Tech

Play Episode Listen Later Jan 22, 2026 17:31


Escalating distrust in identity systems and misuse of AI are forcing a shift in security accountability for small and midsize businesses. Recent analysis highlights that the prevalence of deepfake-driven business email compromise and non-human digital identities is eroding confidence in traditional protective solutions. According to Techyle and supporting reports referenced by Dave Sobel, the ratio of non-human to human identities in organizations is now 144:1, further complicating authority and responsibility for managed service providers (MSPs). As trust in exclusive third-party control disintegrates, co-managed security models are becoming standard, repositioning decision-making and liability.The rise of AI-generated data—described as “AI slop”—has prompted increased adoption of zero trust models, with 84% of CIOs reportedly increasing funding for generative AI initiatives. However, as rogue AI agents are recognized as a significant insider threat, current security services are often ill-equipped to manage these new vulnerabilities. Regulatory bodies, including CISA, have issued guidance noting that the integration of AI into critical infrastructure introduces greater risk of outages and security breaches, particularly when governance remains ambiguous. High-profile vulnerabilities in open-source AI platforms used within cloud environments further highlight the persistence of operational risks.Adjacent technology updates include new releases from vendors such as 1Password, WatchGuard, JumpCloud, and ControlUp. These offerings focus on enhancing phishing prevention, expanding managed detection and response, and automating endpoint management for MSPs. However, Dave Sobel emphasizes that these tools introduce additional layers of automation and integration without adequately clarifying who ultimately holds authority and accountability when failures or breaches occur. There is a consistent warning that stacking solutions or outsourcing core functions without redefining operational control creates gaps between action and oversight.For MSPs and IT leaders, the key takeaway is that security risk is no longer defined by missing technology but by unclear governance, undefined authority, and misaligned incentives. Without explicit contractual and operational delineation of responsibility when deploying AI and automation, service providers are increasingly exposed to liability by default. The advice is to move beyond tool-centric strategies and focus on process clarity: define who authorizes, audits, and terminates non-human identities; establish which parties approve automation actions; and ensure clients understand shared responsibilities to mitigate silent risk accumulation. Four things to know today00:00 TechAisle Warns SMB Security Will Shift in 2026 as Identity Attacks and AI Agents Redefine Risk05:44 AI Moves Deeper Into Critical Infrastructure as Open-Source and Human Weaknesses Expand the Attack Surface09:35 MSP Security Platforms Automate Phishing Prevention and MDR—Outpacing Governance and Control Models12:12 AI-Powered MSP Tools Promise Control and Efficiency, But Shift Responsibility by Default This is the Business of Tech.    Supported by:  https://scalepad.com/dave/

Joey Pinz Discipline Conversations
#805 MSSP Alert Live - Tony Pietrocola:

Joey Pinz Discipline Conversations

Play Episode Listen Later Jan 21, 2026 30:30


Send us a textIn this high-energy and entertaining episode, Joey Pinz sits down with cybersecurity founder and unabashed Italian-American storyteller Tony Pietrocola. From stomping grapes as a child to running an AI-driven security operations platform, Tony brings a rare blend of toughness, humor, and entrepreneurial clarity.They jump from wine, cooking, and massive NFL bodies to college football, concussions, and how elite athletes are built differently. Tony shares what makes college football the real American spectacle—and why private equity is about to reshape the sport.On the cybersecurity front, Tony breaks down the challenges MSPs face, why most still struggle with security, and how AgileBlue helps them build profitable, white-label practices without the overhead of running a SOC. He explains the three questions every MSP should ask a vendor, the rise of AI-assisted attacks, and why consolidation and greenfield opportunities are the biggest missed revenue streams.The conversation ends with health, habit, and personal transformation—discussing Joey's 130-lb weight loss, Tony's daily 5 a.m. workouts, and the childhood structure that forged their work ethic.

Cloud Security Podcast
Why AI Can't Replace Detection Engineers: Build vs. Buy & The Future of SOC

Cloud Security Podcast

Play Episode Listen Later Jan 21, 2026 52:08


Is the AI SOC a reality, or just vendor hype? In this episode, Antoinette Stevens (Principal Security Engineer at Ramp) joins Ashish to dissect the true state of AI in detection engineering.Antoinette shares her experience building detection program from scratch, explaining why she doesn't trust AI to close alerts due to hallucinations and faulty logic . We explore the "engineering-led" approach to detection, moving beyond simple hunting to building rigorous testing suites for detection-as-code .We discuss the shrinking entry-level job market for security roles , why software engineering skills are becoming non-negotiable , and the critical importance of treating AI as a "force multiplier, not your brain".Guest Socials - ⁠⁠⁠Antoinette's LinkedinPodcast Twitter - ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠@CloudSecPod⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security Podcast- Youtube⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠- ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security Newsletter ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠If you are interested in AI Security, you can check out our sister podcast -⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ AI Security Podcast⁠Questions asked:(00:00) Introduction(02:25) Who is Antoinette Stevens?(04:10) What is an "Engineering-Led" Approach to Detection? (06:00) Moving from Hunting to Automated Testing Suites (09:30) Build vs. Buy: Is AI Making it Easier to Build Your Own Tools? (11:30) Using AI for Documentation & Playbook Updates (14:30) Why Software Engineers Still Need to Learn Detection Domain Knowledge (17:50) The Problem with AI SOC: Why ChatGPT Lies During Triage (23:30) Defining AI Concepts: Memory, Evals, and Inference (26:30) Multi-Agent Architectures: Using Specialized "Persona" Agents (28:40) Advice for Building a Detection Program in 2025 (Back to Basics) (33:00) Measuring Success: Noise Reduction vs. False Positive Rates (36:30) Building an Alerting Data Lake for Metrics (40:00) The Disappearing Entry-Level Security Job & Career Advice (44:20) Why Junior Roles are Becoming "Personality Hires" (48:20) Fun Questions: Wine Certification, Side Quests, and Georgian Food

AI Briefing Room
EP-457 Ai Security Surge

AI Briefing Room

Play Episode Listen Later Jan 20, 2026 2:10


```html welcome to wall-e's tech briefing for tuesday, january 20th! delve into today's pressing tech topics: ai security funding surge: venture capitalists double down on ai security startups after a rogue ai incident. witness ai secures $58 million to enhance defenses against unchecked ai capabilities. meta's strategic shift: amidst financial strains and declining interest, meta lays off 1,500 employees and closes vr game studios, redirecting focus to ai and ar, particularly ray-ban ar glasses. bioticsai's fda milestone: fresh from a techcrunch disrupt victory, bioticsai receives fda approval for its ai-powered fetal ultrasound tech, poised to transform prenatal care across the u.s. upcoming techcrunch startup battlefield 200: a premier platform for emerging startups, the 2026 edition promises networking, investment opportunities, and a $100,000 prize. applications open mid-february. stay tuned for tomorrow's tech updates! ```

Microsoft Cloud IT Pro Podcast
Episode 419 – Security and AI: Security Store, Security Copilot, and Agents

Microsoft Cloud IT Pro Podcast

Play Episode Listen Later Jan 15, 2026 25:16 Transcription Available


Welcome to Episode 419 of the Microsoft Cloud IT Pro Podcast. In this episode, Ben is once again live from Workplace Ninjas and is joined by John Joyner, an 18-year Microsoft MVP in Cloud Security and Azure Management. They discuss some of the announcements from Microsoft Ignite focused around Microsoft Security as well as diving deep into the new Security Store, AI agents, Security Compute Units (SCUs), and how Microsoft is making enterprise AI security more accessible and affordable than ever. Key topics include the phishing triage agent, conditional access optimization, E5 integration with included SCUs, and the strategic consolidation of security services into the Defender XDR portal. Whether you’re a security professional or IT administrator, this conversation provides valuable insights into Microsoft’s AI-driven security roadmap and how to stay ahead of AI-powered threats. Your support makes this show possible! Please consider becoming a premium member for access to live shows and more. Check out our membership options. Show Notes John Joyner on LinkedIn John Joyner’s Blog John Joyner’s Books Corica Technologies What is Microsoft Security Copilot? Security Store Microsoft Security Copilot agents overview Learn about Security Copilot inclusion in Microsoft 365 E5 subscription Microsoft Security Copilot Phishing Triage Agent in Microsoft Defender John Joyner John Joyner is an inventor, author, speaker, and professor specializing in datacenter and enterprise cloud computing. He serves as Senior Director of Technology at Corsica Technologies (formerly AccountabilIT), where he delivers next-generation technology management services to customers worldwide as a cloud architect helping businesses stay competitive. John is a Microsoft Azure MVP and Security MVP, having been recognized eighteen times (2007-2026) as a Microsoft Most Valuable Professional for his exceptional technical expertise, leadership, speaking experience, online influence, and commitment to solving real-world problems. He holds a Bachelor of Science in Business Administration with an Emphasis in Human Resources Management from the University of Colorado at Boulder. From 2007 to 2024, John served as an Adjunct Professor at the University of Arkansas Little Rock, teaching a pro-bono cloud computing management course open to all Arkansas residents. As an author, John co-wrote the 2021 book “Azure Arc-Enabled Kubernetes and Server” from Apress and contributed to four editions of the industry-standard “System Center Operations Manager: Unleashed” from SAMS Publishing (2005-2013). Between 2012 and 2015, he authored weekly cloud and datacenter columns for CBS Technology publications including TechRepublic and ZDNet. A retired U.S. Navy Lieutenant Commander and computer scientist, John worked for NATO in Europe and aboard an aircraft carrier in the Pacific. He earned the Computer Scientist sub-specialty and served as chief of network operations for NATO during the former Yugoslavia conflict. He is also a veteran of the Persian Gulf War. Outside of technology, John’s personal passions include 4-wheeling in his ‘Black Ops’ Jeep Wrangler and running a visionary art clothing company called Lit Like Luma. About the sponsors Would you like to become the irreplaceable Microsoft 365 resource for your organization? Let us know!

Trust Issues
EP 23 - Red teaming AI governance: catching model risk early

Trust Issues

Play Episode Listen Later Jan 14, 2026 34:37


AI systems are moving fast, sometimes faster than the guardrails meant to contain them. In this episode of Security Matters, host David Puner digs into the hidden risks inside modern AI models with Pamela K. Isom, exploring the governance gaps that allow agents to make decisions, recommendations, and even commitments far beyond their intended authority.Isom, former director of AI and technology at the U.S. Department of Energy (DOE) and now founder and CEO of IsAdvice & Consulting, explains why AI red teaming must extend beyond cybersecurity, how to stress test AI governance before something breaks, and why human oversight, escalation paths, and clear limits are essential for responsible AI.The conversation examines real-world examples of AI drift, unintended or unethical model behavior, data lineage failures, procurement and vendor blind spots, and the rising need for scalable AI governance, AI security, responsible AI practices, and enterprise red teaming as organizations adopt generative AI.Whether you work in cybersecurity, identity security, AI development, or technology leadership, this episode offers practical insights for managing AI risk and building systems that stay aligned, accountable, and trustworthy.

Cloud Security Podcast
AI Vulnerability Management: Why You Can't Patch a Neural Network

Cloud Security Podcast

Play Episode Listen Later Jan 13, 2026 41:20


Traditional vulnerability management is simple: find the flaw, patch it, and verify the fix. But what happens when the "asset" is a neural network that has learned something ethically wrong? In this episode, Sapna Paul (Senior Manager at Dayforce) explains why there are no "Patch Tuesdays" for AI models .Sapna breaks down the three critical layers of AI vulnerability management: protecting production models, securing the data layer against poisoning, and monitoring model behavior for technically correct but ethically flawed outcomes . We discuss how to update your risk register to speak the language of business and the essential skills security professionals need to survive in an AI-first world .The conversation also covers practical ways to use AI within your security team to combat alert fatigue , the importance of explainability tools like SHAP and LIME , and how to align with frameworks like the NIST AI RMF and the EU AI Act .Guest Socials - ⁠⁠Sapna's LinkedinPodcast Twitter - ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠@CloudSecPod⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security Podcast- Youtube⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠- ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security Newsletter ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠If you are interested in AI Security, you can check out our sister podcast -⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ AI Security Podcast⁠Questions asked:(00:00) Introduction(02:00) Who is Sapna Paul?(02:40) What is Vulnerability Management in the Age of AI? (05:00) Defining the New Asset: Neural Networks & Models (07:00) The 3 Layers of AI Vulnerability (Production, Data, Behavior) (10:20) Updating the Risk Register for AI Business Risks (13:30) Compliance vs. Innovation: Preventing AI from Going Rogue (18:20) Using AI to Solve Vulnerability Alert Fatigue (23:00) Skills Required for Future VM Professionals (25:40) Measuring AI Adoption in Security Teams (29:20) Key Frameworks: NIST AI RMF & EU AI Act (31:30) Tools for AI Security: Counterfit, SHAP, and LIME (33:30) Where to Start: Learning & Persona-Based Prompts (38:30) Fun Questions: Painting, Mentoring, and Vegan Ramen

Check Point CheckMates Cyber Security Podcast
S08E01: AI Security and More!

Check Point CheckMates Cyber Security Podcast

Play Episode Listen Later Jan 9, 2026 11:40


PhoneBoy discusses AI Risk Mapping, AI Security Masters, and some great posts from the CheckMates community you may have missed.AI Risk MappingAI Security Masters SeriesAI Security Masters Session 1: How AI is Reshaping Our WorldHow to Chat with Your Check Point Gateways using Claude DesktopCheck Point MCP ServersThis Month's Spotlight - 3 Revisions Features You Should Start Using Today - October 2025Session Flow for AdministratorsVideos for Configuring Access Control and Threat PreventionHTTPS Inspection Inbound With More Than One CertificateConfiguring an AWS to Onsite VPNCan We Disable the HTTP Protocol ParserSSH Inspected by HTTPS InspectionConfiguring SSH Inspection in Threat PreventionPerformance Limitations of Virtual Switches with Legacy VSX?Upcoming events:CheckMates Fest 2026 on 14 January 2026Quantum SD-WAN Monitoring TechTalk on 21 January 2026AI Security Masters Session 2: Hacking with AI, The Dark Side of Innovation on 22 January 2026

Screaming in the Cloud
Avery Pennarun on Tailscale's Evolution: From Mesh VPN to AI Security Gateway

Screaming in the Cloud

Play Episode Listen Later Jan 8, 2026 44:19


Corey Quinn sits down with Avery Pennarun, co-founder and CEO of Tailscale, for a deep dive into how the company is reinventing networking for the modern era. From finally making VPNs behave the way they should to tackling AI security with zero-click authentication, Avery shares candid insights on building infrastructure people actually love using, and love talking about.They get into everything: surviving 100% year-over-year growth, why running on two tailnets at once is pure chaos, and how Tailscale makes “secure by default” feel effortless. Plus, they dig into why FreeBSD firewalls needed some tough love, the uncomfortable truth behind POCs, and even the surprisingly useful trick of turning your Apple TV into an exit node.About Avery: Avery Pennarun is the co-founder and CEO of Tailscale, where he's redefining secure networking with a simple, Zero Trust approach. A veteran software engineer with experience ranging from startups to Google, he's known for turning complex systems into approachable, user-friendly tools. His contributions to projects like wvdial, bup, and sshuttle reflect his belief that great technology should be both powerful and easy to use. With a mix of technical depth and dry humor, Avery shares insights on modern networking, internet evolution, and the realities of scaling a startup.Highlights:(0:00) Introduction to Tailscale and Security(00:52) Sponsorship and Personal Experiences(02:07) Technical Deep Dive into Tail Scale(06:10) Challenges and Future of Tail Scale(22:45) Building the Tail Net's API(23:54) Connecting Cloud Providers with Tailscale(25:22) Tailscale as a Security Solution(26:44) Innovations and Future of TailscaleSponsored by: duckbillhq.com

Cybercrime Magazine Podcast
AI Security. Protecting Today's Organizations. Brandyn Murtagh, Bug Bounty Hunter.

Cybercrime Magazine Podcast

Play Episode Listen Later Jan 8, 2026 17:34


Brandyn Murtagh is a full-time bug bounty-hunter and ethical ‘White Hat' hacker who is the founder of MurtaSec. In this episode, he joins host Heather Engel to discuss AI threats and their impact on the security community, as well as his unique approach to threat modeling, the dual nature of AI, and more. • For more on cybersecurity, visit us at https://cybersecurityventures.com

Cybercrime Magazine Podcast
AI Security Podcast. Social Engineering. Teresa Zielinski, GE Vernova & Brian Long, Adaptive.

Cybercrime Magazine Podcast

Play Episode Listen Later Jan 7, 2026 15:10


Brian Long is the CEO & Co-Founder at Adaptive Security. In this episode, he joins host Paul John Spaulding and Teresa Zielinski, Vice President and Global CISO at GE Vernova, to discuss social engineering and how it is evolving in light of artificial intelligence advancements. The AI Security Podcast is brought to you by Adaptive Security, the leading provider of AI-powered social engineering prevention solutions, and OpenAI's first and only cybersecurity investment. To learn more about our sponsor, visit https://AdaptiveSecurity.com

Microsoft Business Applications Podcast
The AI-Security Tradeoff Every Leader Must Solve

Microsoft Business Applications Podcast

Play Episode Listen Later Dec 31, 2025 22:48 Transcription Available


Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM The episode explores how Chris Stegh sees organisations balancing AI adoption with data security, governance and practical risk management. It covers the real barriers to scaling AI, why perfect data hygiene is unrealistic, and how leaders can use tools like Copilot, Purview and agentic AI to create safe, high‑value use cases while improving long‑term resilience.

DeFi Slate
2026 Crypto & AI Predictions with Haseeb Qureshi

DeFi Slate

Play Episode Listen Later Dec 31, 2025 65:29


In this episode, we sit down with Haseeb Qureshi from Dragonfly to review his 2025 predictions, grade his calls, and reveal what's actually coming in 2026. Haseeb breaks down stablecoins exploding 60% through neo-banking cards, DeFi consolidating into 3 major players, Big Tech acquiring crypto wallets, and why prediction markets will steamroll everything.We discuss:-Bitcoin Hits 150K, But Altcoin Dominance Declines-Why EVM Won The Architecture War-Stablecoins Explode 60% Through Neo-Banking Cards-DeFi Perps Consolidate Into 3 Major Players-Big Tech Acquires A Crypto Wallet-Fortune 100 Companies Launch More Blockchains-Equity Perps Take Off & Insider Trading Scandals Hit-Buyer's Remorse On Crypto Regulation-Why Prediction Markets Will Steamroll EverythingTimestamps:00:00 Intro 04:22 AI Agent Predictions Review06:02 EVM vs SVM Market Dominance07:38 Kalshi Ad, YEET Ad, Trezor Ad11:55 Ethereum's Bullish Reversal15:11 Corporate Chain Reality Check20:40 App Chain Migration Challenges23:28 The Death of Airdrops & Points27:29 Asteroid Mining & Gold's Bitcoin Risk31:35 Dragonfly's Biggest Wins & Losses31:50 Halliday Ad, infiniFi Ad, Hibachi Ad36:17 2026: Fintech Chains Will Underwhelm39:16 Big Tech Wallet Acquisition Coming44:20 DeFi Perps 40-30-20 Consolidation48:35 Equity Perps & Insider Trading Scandals52:13 Stablecoins Grow 60% Via Neo-Banking59:50 Crypto Regulation Buyer's Remorse1:03:05 Prediction Markets Dominate1:04:34 AI Security & Software Engineering FocusWebsite: https://therollup.co/Spotify: https://open.spotify.com/show/1P6ZeYd...Podcast: https://therollup.co/category/podcastFollow us on X: https://www.x.com/therollupcoFollow Rob on X: https://www.x.com/robbie_rollupFollow Andy on X: https://www.x.com/ayyyeandyJoin our TG group: https://t.me/+TsM1CRpWFgk1NGZhThe Rollup Disclosures: https://therollup.co/the-rollup-discl

Paul's Security Weekly
SentinelOne and AWS Shape the Future of AI Security with Purple AI - Rachel Park, Brian Mendenhall - SWN #542

Paul's Security Weekly

Play Episode Listen Later Dec 30, 2025 37:41


SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-542

The Fearless Mindset
2025 Highlights: AI, Security, Leadership, and the Cost of Getting It Wrong

The Fearless Mindset

Play Episode Listen Later Dec 30, 2025 18:45


As we reflect on 2025, this episode pulls together the most important themes shaping the year ahead — from the rapid acceleration of AI and automation, to the evolving realities of security, leadership, and trust in an increasingly complex world.What was once hidden behind the scenes is now accessible to everyone. AI has moved from the “Matrix” into daily workflows, forcing organizations to rethink efficiency, security, and human value. At the same time, rising geopolitical tension, information warfare, and emerging technologies like quantum computing are redefining what risk really looks like — both for businesses and for people.This conversation also explores the human side of 2025: leadership under pressure, the importance of culture, mentorship, and professionalism, and why kindness, trust, and preparation are no longer “soft skills,” but strategic advantages.From executive protection and estate management to corporate security, AI leverage, and career longevity, this episode highlights where leaders must adapt — and where getting it wrong even once can have lasting consequences.KEY HIGHLIGHTSAI has crossed a critical threshold — no longer theoretical, but operational, accessible, and increasingly powerfulAutomation and optimization are now survival tools, not optional efficienciesSecurity threats are no longer siloed — digital, physical, personal, and reputational risks are deeply interconnectedQuantum computing looms as a disruptive force that could render today's encryption obsoleteExecutive protection is expanding beyond the C-suite into broader personnel and brand securityLeadership today requires relationship capital, situational awareness, and long-term thinkingCulture, kindness, and mentorship deliver measurable performance and retention advantagesCareers are becoming less linear — leverage, adaptability, and mindset matter more than pedigreeTo hear more episodes of The Fearless Mindset podcast, you can go to https://the-fearless-mindset.simplecast.com/ or listen on major podcasting platforms such as Apple, Google Podcasts, Spotify, etc. You can also subscribe to the Fearless Mindset YouTube Channel to watch episodes on video. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Paul's Security Weekly TV
SentinelOne and AWS Shape the Future of AI Security with Purple AI - Brian Mendenhall, Rachel Park - SWN #542

Paul's Security Weekly TV

Play Episode Listen Later Dec 30, 2025 37:41


SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Show Notes: https://securityweekly.com/swn-542

Hack Naked News (Audio)
SentinelOne and AWS Shape the Future of AI Security with Purple AI - Rachel Park, Brian Mendenhall - SWN #542

Hack Naked News (Audio)

Play Episode Listen Later Dec 30, 2025 37:41


SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-542

Hack Naked News (Video)
SentinelOne and AWS Shape the Future of AI Security with Purple AI - Brian Mendenhall, Rachel Park - SWN #542

Hack Naked News (Video)

Play Episode Listen Later Dec 30, 2025 37:41


SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Show Notes: https://securityweekly.com/swn-542

Lenny's Podcast: Product | Growth | Career
The coming AI security crisis (and what to do about it) | Sander Schulhoff

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Dec 21, 2025 92:41


Sander Schulhoff is an AI researcher specializing in AI security, prompt injection, and red teaming. He wrote the first comprehensive guide on prompt engineering and ran the first-ever prompt injection competition, working with top AI labs and companies. His dataset is now used by Fortune 500 companies to benchmark their AI systems security, he's spent more time than anyone alive studying how attackers break AI systems, and what he's found isn't reassuring: the guardrails companies are buying don't actually work, and we've been lucky we haven't seen more harm so far, only because AI agents aren't capable enough yet to do real damage.We discuss:1. The difference between jailbreaking and prompt injection attacks on AI systems2. Why AI guardrails don't work3. Why we haven't seen major AI security incidents yet (but soon will)4. Why AI browser agents are vulnerable to hidden attacks embedded in webpages5. The practical steps organizations should take instead of buying ineffective security tools6. Why solving this requires merging classical cybersecurity expertise with AI knowledge—Brought to you by:Datadog—Now home to Eppo, the leading experimentation and feature flagging platform: https://www.datadoghq.com/lennyMetronome—Monetization infrastructure for modern software companies: https://metronome.com/GoFundMe Giving Funds—Make year-end giving easy: http://gofundme.com/lenny—Transcript: https://www.lennysnewsletter.com/p/the-coming-ai-security-crisis—My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/181089452/my-biggest-takeaways-from-this-conversation—Where to find Sander Schulhoff:• X: https://x.com/sanderschulhoff• LinkedIn: https://www.linkedin.com/in/sander-schulhoff• Website: https://sanderschulhoff.com• AI Red Teaming and AI Security Masterclass on Maven: https://bit.ly/44lLSbC—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Sander Schulhoff and AI security(05:14) Understanding AI vulnerabilities(11:42) Real-world examples of AI security breaches(17:55) The impact of intelligent agents(19:44) The rise of AI security solutions(21:09) Red teaming and guardrails(23:44) Adversarial robustness(27:52) Why guardrails fail(38:22) The lack of resources addressing this problem(44:44) Practical advice for addressing AI security(55:49) Why you shouldn't spend your time on guardrails(59:06) Prompt injection and agentic systems(01:09:15) Education and awareness in AI security(01:11:47) Challenges and future directions in AI security(01:17:52) Companies that are doing this well(01:21:57) Final thoughts and recommendations—Referenced:• AI prompt engineering in 2025: What works and what doesn't | Sander Schulhoff (Learn Prompting, HackAPrompt): https://www.lennysnewsletter.com/p/ai-prompt-engineering-in-2025-sander-schulhoff• The AI Security Industry is Bullshit: https://sanderschulhoff.substack.com/p/the-ai-security-industry-is-bullshit• The Prompt Report: Insights from the Most Comprehensive Study of Prompting Ever Done: https://learnprompting.org/blog/the_prompt_report?srsltid=AfmBOoo7CRNNCtavzhyLbCMxc0LDmkSUakJ4P8XBaITbE6GXL1i2SvA0• OpenAI: https://openai.com• Scale: https://scale.com• Hugging Face: https://huggingface.co• Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition: https://www.semanticscholar.org/paper/Ignore-This-Title-and-HackAPrompt%3A-Exposing-of-LLMs-Schulhoff-Pinto/f3de6ea08e2464190673c0ec8f78e5ec1cd08642• Simon Willison's Weblog: https://simonwillison.net• ServiceNow: https://www.servicenow.com• ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts: https://thehackernews.com/2025/11/servicenow-ai-agents-can-be-tricked.html• Alex Komoroske on X: https://x.com/komorama• Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack: https://arstechnica.com/information-technology/2022/09/twitter-pranksters-derail-gpt-3-bot-with-newly-discovered-prompt-injection-hack• MathGPT: https://math-gpt.org• 2025 Las Vegas Cybertruck explosion: https://en.wikipedia.org/wiki/2025_Las_Vegas_Cybertruck_explosion• Disrupting the first reported AI-orchestrated cyber espionage campaign: https://www.anthropic.com/news/disrupting-AI-espionage• Thinking like a gardener not a builder, organizing teams like slime mold, the adjacent possible, and other unconventional product advice | Alex Komoroske (Stripe, Google): https://www.lennysnewsletter.com/p/unconventional-product-advice-alex-komoroske• Prompt Optimization and Evaluation for LLM Automated Red Teaming: https://arxiv.org/abs/2507.22133• MATS Research: https://substack.com/@matsresearch• CBRN: https://en.wikipedia.org/wiki/CBRN_defense• CaMeL offers a promising new direction for mitigating prompt injection attacks: https://simonwillison.net/2025/Apr/11/camel• Trustible: https://trustible.ai• Repello: https://repello.ai• Do not write that jailbreak paper: https://javirando.com/blog/2024/jailbreaks—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

Telecom Reseller
Radware on AI Security at Machine Speed: What Telecom Providers Must Prepare for in 2026, Podcast

Telecom Reseller

Play Episode Listen Later Dec 19, 2025


Doug Green, Publisher of Technology Reseller News, spoke with Travis Volk, Vice President of Global Technology Solutions and GTM, Carrier at Radware, about how artificial intelligence is reshaping the security landscape for telecom providers as the industry heads into 2026. The discussion focused on the accelerating pace of attacks, the shrinking window to respond to vulnerabilities, and why traditional, human-paced security models are no longer sufficient. Volk explained that telecom networks are now facing machine-speed attacks, where newly disclosed vulnerabilities are often exploited within hours, not weeks or months. “Recent CVEs are being exploited at breakneck speeds,” he noted, emphasizing that nearly a third of disclosed vulnerabilities are weaponized within 24 hours. This reality is forcing providers to rethink patching, maintenance, and runtime protection strategies—especially as attackers increasingly chain small flaws into large-scale, sophisticated attacks. A key theme of the conversation was the convergence of offensive and defensive security. As applications become more API-driven and agentic, service providers must adopt continuous, automated testing and inline protection that can detect business-logic attacks in real time. Volk highlighted Radware's use of AI-driven analytics and visualization to map API flows, identify abnormal behavior, and enforce protections such as object-level authorization at scale—capabilities that are critical for encrypted, high-value workloads. Looking ahead, Volk described “good” security in 2026 as a living, observable system that prioritizes risk, automates both pre-runtime and runtime defenses, and enables data-driven decisions without adding operational complexity. Radware is already delivering these capabilities through flexible deployment models—virtual, physical, containerized, and cloud-based—allowing carriers to implement unified policy frameworks today. As Volk put it, AI is no longer optional: it is essential to keeping networks secure, resilient, and available in an era where attacks move faster than humans can respond. Learn more about Radware at https://www.radware.com/. Software Mind Telco Days 2025: On-demand online conference Engaging Customers, Harnessing Data

ITSPmagazine | Technology. Cybersecurity. Society
AI Adoption Without Readiness: When AI Ambition Collides With Data Reality | A TrustedTech Brand Story Conversation with Julian Hamood, Founder and Chief Visionary Officer at TrustedTech

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Dec 17, 2025 34:16


As organizations race to adopt AI, many discover an uncomfortable truth: ambition often outpaces readiness. In this episode of the ITSPmagazine Brand Story Podcast, host Sean Martin speaks with Julian Hamood, Founder and Chief Visionary Officer at TrustedTech, about what it really takes to operationalize AI without amplifying risk, chaos, or misinformation.Julian shares that most organizations are eager to activate tools like AI agents and copilots, yet few have addressed the underlying condition of their environments. Unstructured data sprawl, fragmented cloud architectures, and legacy systems create blind spots that AI does not fix. Instead, AI accelerates whatever already exists, good or bad.A central theme of the conversation is readiness. Julian explains that AI success depends on disciplined data classification, permission hygiene, and governance before automation begins. Without that groundwork, organizations risk exposing sensitive financial, HR, or executive data to unintended audiences simply because an AI system can surface it.The discussion also explores the operational reality beneath the surface. Most environments are a patchwork of Azure, AWS, on-prem infrastructure, SaaS platforms, and custom applications, often shaped by multiple IT leaders over time. When AI is layered onto this complexity without architectural clarity, inaccurate outputs and flawed business decisions quickly follow.Sean and Julian also examine how AI initiatives often emerge from unexpected places. Legal teams, business units, and individual contributors now build their own AI workflows using low-code and no-code tools, frequently outside formal IT oversight. At the same time, founders and CFOs push for rapid AI adoption while resisting the investment required to clean and secure the foundation.The episode highlights why AI programs are never one-and-done projects. Ongoing maintenance, data validation, and security oversight are essential as inputs change and systems evolve. Julian emphasizes that organizations must treat AI as a permanent capability on the roadmap, not a short-term experiment.Ultimately, the conversation frames AI not as a shortcut, but as a force multiplier. When paired with disciplined architecture and trusted guidance, AI enables scale, speed, and confidence. Without that discipline, it simply magnifies existing problems.Note: This story contains promotional content. Learn more.GUESTJulian Hamood, Founder and Chief Visionary Officer at TrustedTech | On LinkedIn: https://www.linkedin.com/in/julian-hamood/Are you interested in telling your story?▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full▶︎ Spotlight Brand Story: https://www.studioc60.com/content-creation#spotlight▶︎ Highlight Brand Story: https://www.studioc60.com/content-creation#highlightKeywords: sean martin, julian hamood, trusted tech, ai readiness, data governance, ai security, enterprise ai, brand story, brand marketing, marketing podcast, brand story podcast, brand spotlight Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
⚡️Jailbreaking AGI: Pliny the Liberator & John V on Red Teaming, BT6, and the Future of AI Security

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 16, 2025


Note: this is Pliny and John's first major podcast. Voices have been changed for opsec. From jailbreaking every frontier model and turning down Anthropic's Constitutional AI challenge to leading BT6, a 28-operator white-hat hacker collective obsessed with radical transparency and open-source AI security, Pliny the Liberator and John V are redefining what AI red-teaming looks like when you refuse to lobotomize models in the name of "safety." Pliny built his reputation crafting universal jailbreaks—skeleton keys that obliterate guardrails across modalities—and open-sourcing prompt templates like Libertas, predictive reasoning cascades, and the infamous "Pliny divider" that's now embedded so deep in model weights it shows up unbidden in WhatsApp messages. John V, coming from prompt engineering and computer vision, co-founded the Bossy Discord (40,000 members strong) and helps steer BT6's ethos: if you can't open-source the data, we're not interested. Together they've turned down enterprise gigs, pushed back on Anthropic's closed bounties, and insisted that real AI security happens at the system layer—not by bubble-wrapping latent space. We sat down with Pliny and John to dig into the mechanics of hard vs. soft jailbreaks, why multi-turn crescendo attacks were obvious to hackers years before academia "discovered" them, how segmented sub-agents let one jailbroken orchestrator weaponize Claude for real-world attacks (exactly as Pliny predicted 11 months before Anthropic's recent disclosure), why guardrails are security theater that punishes capability while doing nothing for real safety, the role of intuition and "bonding" with models to navigate latent space, how BT6 vets operators on skill and integrity, why they believe Mech Interp and open-source data are the path forward (not RLHF lobotomization), and their vision for a future where spatial intelligence, swarm robotics, and AGI alignment research happen in the open—bootstrapped, grassroots, and uncompromising. We discuss: What universal jailbreaks are: skeleton-key prompts that obliterate guardrails across models and modalities, and why they're central to Pliny's mission of "liberation" Hard vs. soft jailbreaks: single-input templates vs. multi-turn crescendo attacks, and why the latter were obvious to hackers long before academic papers The Libertas repo: predictive reasoning, the Library of Babel analogy, quotient dividers, weight-space seeds, and how introducing "steered chaos" pulls models out-of-distribution Why jailbreaking is 99% intuition and bonding with the model: probing token layers, syntax hacks, multilingual pivots, and forming a relationship to navigate latent space The Anthropic Constitutional AI challenge drama: UI bugs, judge failures, goalpost moving, the demand for open-source data, and why Pliny sat out the $30k bounty Why guardrails ≠ safety: security theater, the futility of locking down latent space when open-source is right behind, and why real safety work happens in meatspace (not RLHF) The weaponization of Claude: how segmented sub-agents let one jailbroken orchestrator execute malicious tasks (pyramid-builder analogy), and why Pliny predicted this exact TTP 11 months before Anthropic's disclosure BT6 hacker collective: 28 operators across two cohorts, vetted on skill and integrity, radical transparency, radical open-source, and the magic of moving the needle on AI security, swarm intelligence, blockchain, and robotics — Pliny the Liberator X: https://x.com/elder_plinius GitHub (Libertas): https://github.com/elder-plinius/L1B3RT45 John V X: https://x.com/JohnVersus BT6 & Bossy BT6: https://bt6.gg Bossy Discord: Search "Bossy Discord" or ask Pliny/John V on X Where to find Latent Space X: https://x.com/latentspacepod Substack: https://www.latent.space/ Chapters 00:00:00 Introduction: Meet Pliny the Liberator and John V 00:01:50 The Philosophy of AI Liberation and Jailbreaking 00:03:08 Universal Jailbreaks: Skeleton Keys to AI Models 00:04:24 The Cat-and-Mouse Game: Attackers vs Defenders 00:05:42 Security Theater vs Real Safety: The Fundamental Disconnect 00:08:51 Inside the Libertas Repo: Prompt Engineering as Art 00:16:22 The Anthropic Challenge Drama: UI Bugs and Open Source Data 00:23:30 From Jailbreaks to Weaponization: AI-Orchestrated Attacks 00:26:55 The BT6 Hacker Collective and BASI Community 00:34:46 AI Red Teaming: Full Stack Security Beyond the Model 00:38:06 Safety vs Security: Meat Space Solutions and Final Thoughts

The Secure Developer
A Vision For The Future Of Enterprise AI Security With Sanjay Poonen

The Secure Developer

Play Episode Listen Later Dec 16, 2025 27:30


Episode SummaryThe future of cyber resilience lies at the intersection of data protection, security, and AI. In this conversation, Cohesity CEO Sanjay Poonen joins Danny Allan to explore how organisations can unlock new value by unifying these domains. Sanjay outlines Cohesity's evolution from data protection to security in the ransomware era, to today's AI-focused capabilities, and explains why the company's vast secondary data platform is becoming a foundation for next-generation analytics.Show NotesIn this episode, Sanjay Poonen shares his journey from SAP and VMware to leading Cohesity, highlighting the company's mission to protect, secure, and provide insights on the world's data. He explains the concept of the "data iceberg," where visible production data represents only a small fraction of enterprise assets, while vast amounts of "dark" secondary data remain locked in backups and archives. Poonen discusses how Cohesity is transforming this secondary data from a storage efficiency problem into a source of business intelligence using generative AI and RAG, particularly for unstructured data like documents and images.The conversation delves into the technical integration of Veritas' NetBackup data mover onto Cohesity's file system, creating a unified platform for security scanning and AI analytics. Poonen also elaborates on Cohesity's collaboration with NVIDIA, explaining how they are building AI applications like Gaia on the NVIDIA stack to enable on-premises and sovereign cloud deployments. This approach allows highly regulated industries, such as banking and the public sector, to utilize advanced AI capabilities without exposing sensitive data to public clouds.Looking toward the future, Poonen outlines Cohesity's "three acts": data protection, security (ransomware resilience), and AI-driven insights. He and Danny Allan discuss the critical importance of identity resilience, noting that in an AI-driven world, the security perimeter shifts from network boundaries to the identities of both human users and autonomous AI agents.LinksCohesityNvidiaSnyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn

ITSPmagazine | Technology. Cybersecurity. Society
Black Hat Europe 2025 Wrap-Up: Suzy Pallett on Global Expansion, AI Threats, and Defending Together | On Location Coverage With Sean Martin & Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Dec 13, 2025 19:19


____________Guests:Suzy PallettPresident, Black Hat. Cybersecurity.On LinkedIn: https://www.linkedin.com/in/suzy-pallett-60710132/The Cybersecurity Community Finds Its Footing in Uncertain TimesThere is something almost paradoxical about the cybersecurity industry. It exists because of threats, yet it thrives on trust. It deals in technical complexity, yet its beating heart is fundamentally human: people gathering, sharing knowledge, and collectively deciding that defending each other matters more than protecting proprietary advantage.This tension—and this hope—was on full display at Black Hat Europe 2025 in London, which just wrapped up at the ExCel Centre with attendance growing more than 25 percent over last year. For Suzy Pallett, the newly appointed President of Black Hat, the numbers tell only part of the story."What I've found from this week is the knowledge sharing, the insights, the open source tools that we've shared, the demonstrations that have happened—they've been so instrumental," Pallett shared in a conversation with ITSPmagazine. "Cybersecurity is unlike any other industry I've ever been close to in the strength of that collaboration."Pallett took the helm in September after Steve Wylie stepped down following eleven years leading the brand through significant growth. Her background spans over two decades in global events, most recently with Money20/20, the fintech conference series. But she speaks of Black Hat not as a business to be managed but as a community to be served.The event itself reflected the year's dominant concerns. AI agents and supply chain vulnerabilities emerged as central themes, continuing conversations that dominated Black Hat USA in Las Vegas just months earlier. But Europe brought its own character. Keynotes ranged from Max Meets examining whether ransomware can actually be stopped, to Linus Neumann questioning whether compliance checklists might actually expose organizations to greater risk rather than protecting them."He was saying that the compliance checklists that we're all being stressed with are actually where the vulnerabilities lie," Pallett explained. "How can we work more collaboratively together so that it's not just a compliance checklist that we get?"This is the kind of question that sits at the intersection of technology and policy, technical reality and bureaucratic aspiration. It is also the kind of question that rarely gets asked in vendor halls but deserves space in our collective thinking.Joe Tidy, the BBC journalist behind the EvilCorp podcast, delivered a record-breaking keynote attendance on day two, signaling the growing appetite for cybersecurity stories that reach beyond the practitioner community into broader public consciousness. Louise Marie Harrell spoke on technical capacity and international accountability—a reminder that cyber threats respect no borders and neither can our responses.What makes Black Hat distinct, Pallett noted, is that the conversations happening on the business hall floor are not typical expo fare. "You have the product teams, you have the engineers, you have the developers on those stands, and it's still product conversations and technical conversations."Looking ahead, Pallett's priorities center on listening. Review boards, advisory boards, pastoral programs, scholarships—these are the mechanisms through which she intends to ensure Black Hat remains, in her words, "a platform for them and by them."The cybersecurity industry faces a peculiar burden. What used to happen in twelve years now happens in two days, as Pallett put it. The pace is exhausting. The threats keep evolving. The cat-and-mouse game shows no signs of ending.But perhaps that is precisely why events like this matter. Not because they offer solutions to every problem, but because they remind an industry under constant pressure that it is not alone in the fight. That collaboration is not weakness. That sharing knowledge freely is not naïve—it is strategic.Black Hat Europe 2025 may have ended, but the conversations it sparked will carry forward into 2026 and beyond.____________HOSTS:Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.comMarco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.comCatch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to share an Event Briefing as part of our event coverage? Learn More

Cybercrime Magazine Podcast
Cybercrime News For Dec. 12, 2025. Chrome's New AI Security Targets Hackers. WCYB Digital Radio.

Cybercrime Magazine Podcast

Play Episode Listen Later Dec 12, 2025 2:49


The Cybercrime Magazine Podcast brings you daily cybercrime news on WCYB Digital Radio, the first and only 7x24x365 internet radio station devoted to cybersecurity. Stay updated on the latest cyberattacks, hacks, data breaches, and more with our host. Don't miss an episode, airing every half-hour on WCYB Digital Radio and daily on our podcast. Listen to today's news at https://soundcloud.com/cybercrimemagazine/sets/cybercrime-daily-news. Brought to you by our Partner, Evolution Equity Partners, an international venture capital investor partnering with exceptional entrepreneurs to develop market leading cyber-security and enterprise software companies. Learn more at https://evolutionequity.com

ITSPmagazine | Technology. Cybersecurity. Society
Nothing Has Changed in Cybersecurity Since the 80s — And That's the Real Problem | A Conversation with Steve Mancini | Redefining Society and Technology with Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Dec 7, 2025 43:03


Dr. Steve Mancini: https://www.linkedin.com/in/dr-steve-m-b59a525/Marco Ciappelli: https://www.marcociappelli.com/Nothing Has Changed in Cybersecurity Since War Games — And That's Why We're in Trouble"Nothing has changed."That's not what you expect to hear from someone with four decades in cybersecurity. The industry thrives on selling the next revolution, the newest threat, the latest solution. But Dr. Steve Mancini—cybersecurity professor, Homeland Security veteran, and Italy's Honorary Consul in Pittsburgh—wasn't buying any of it. And honestly? Neither was I.He took me back to his Commodore 64 days, writing basic war dialers after watching War Games. The method? Dial numbers, find an open line, try passwords until one works. Translate that to today: run an Nmap scan, find an open port, brute force your way in. The principle is identical. Only the speed has changed.This resonated deeply with how I think about our Hybrid Analog Digital Society. We're so consumed with the digital evolution—the folding screens, the AI assistants, the cloud computing—that we forget the human vulnerabilities underneath remain stubbornly analog. Social engineering worked in the 1930s, it worked when I was a kid in Florence, and it works today in your inbox.Steve shared a story about a family member who received a scam call. The caller asked if their social security number "had a six in it." A one-in-nine guess. Yet that simple psychological trick led to remote software being installed on their computer. Technology gets smarter; human psychology stays the same.What struck me most was his observation about his students—a generation so immersed in technology that they've become numb to breaches. "So what?" has become the default response. The data sells, the breaches happen, you get two years of free credit monitoring, and life goes on. Groundhog Day.But the deeper concern isn't the breaches. It's what this technological immersion is doing to our capacity for critical thinking, for human instinct. Steve pointed out something that should unsettle us: the algorithms feeding content to young minds are designed for addiction, manipulating brain chemistry with endorphin kicks from endless scrolling. We won't know the full effects of a generation raised on smartphones until they're forty, having scrolled through social media for thirty years.I asked what we can do. His answer was simple but profound: humans need to decide how much they want technology in their lives. Parents putting smartphones in six-year-olds' hands might want to reconsider. Schools clinging to the idea that they're "teaching technology" miss the point—students already know the apps better than their professors. What they don't know is how to think without them.He's gone back to paper and pencil tests. Old school. Because when the power goes out—literally or metaphorically—you need a brain that works independently.Ancient cultures, Steve reminded me, built civilizations with nothing but their minds, parchment, and each other. They were, in many ways, a thousand times smarter than us because they had no crutches. Now we call our smartphones "smart" while they make us incrementally dumber.This isn't anti-technology doom-saying. Neither Steve nor I oppose technological progress. The conversation acknowledged AI's genuine benefits in medicine, in solving specific problems. But this relentless push for the "easy button"—the promise that you don't have to think, just click—that's where we lose something essential.The ultimate breach, we concluded, isn't someone stealing your data. It's breaching the mind itself. When we can no longer think, reason, or function without the device in our pocket, the hackers have already won—and they didn't need to write a single line of code.Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/ Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

IT Visionaries
The AI Security Blind Spots Every Company Should Fix Now

IT Visionaries

Play Episode Listen Later Dec 4, 2025 62:44


Security used to be a headache. Now it is a growth engine.In this episode of IT Visionaries, host Chris Brandt sits down with Taylor Hersom, Founder and CEO of Eden Data and former CISO, to break down how fast growing companies can turn cybersecurity and compliance into a true competitive advantage. Taylor explains why frameworks like SOC 2, ISO 27001, and emerging AI standards such as ISO 42001 are becoming essential for winning enterprise business. He also shares how to future proof controls, connect compliance work to real business goals, and avoid the costly pitfalls that stall companies during scale.Taylor also highlights the biggest blind spots in AI security, including model training risks, improper data handling, and the challenges created by relying on free AI tools. If you are building a SaaS product or selling into large companies, this conversation shows how trust, transparency, and strong security practices directly drive revenue. Key Moments:  00:00 — The Hidden Risks of Scattered Company Data04:11 — Why Early-Stage Teams Lose Control of Security08:22 — Compliance Becomes a Competitive Advantage12:33 — SOC 2 vs ISO 27001: What Founders Need to Know16:44 — Framework Overload and How to Navigate It20:55 — Mapping Security Controls to Business Objectives25:06 — The Gap Between Compliance Audits and Real Threats29:17 — Startup Security Blind Spots That Lead to Breaches33:28 — Rising AI Risks Leaders Aren't Preparing For37:39 — Building Customer Trust Through Transparency41:50 — Protecting AI Models and Sensitive Customer Data46:01 — Why Free AI Tools Create Hidden Data Exposure50:12 — Automating Security Controls for Scale54:23 — Continuous Compliance Beats Annual Audits58:34 — Final Takeaways on Security, Trust, and Growth -- This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to.That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to meter for sponsoring. Go to meter.com/itv to book a demo.---IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

ITSPmagazine | Technology. Cybersecurity. Society
AI, Quantum, and the Changing Role of Cybersecurity | ISC2 Security Congress 2025 Coverage with Jon France, Chief Information Security Officer at ISC2 | On Location with Sean Martin and Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Dec 3, 2025 26:22


What Security Congress Reveals About the State of CybersecurityThis discussion focuses on what ISC2 Security Congress represents for practitioners, leaders, and organizations navigating constant technological change. Jon France, Chief Information Security Officer at ISC2, shares how the event brings together thousands of cybersecurity practitioners, certification holders, chapter leaders, and future professionals to exchange ideas on the issues shaping the field today.  Themes That Stand OutAI remains a central point of attention. France notes that organizations are grappling not only with adoption but with the shift in speed it introduces. Sessions highlight how analysts are beginning to work alongside automated systems that sift through massive data sets and surface early indicators of compromise. Rather than replacing entry-level roles, AI changes how they operate and accelerates the decision-making path. Quantum computing receives a growing share of focus as well. Attendees hear about timelines, standards emerging from NIST, and what preparedness looks like as cryptographic models shift.  Identity-based attacks and authorization failures also surface throughout the program. With machine-driven compromises becoming easier to scale, the community explores new defenses, stronger controls, and the practical realities of machine-to-machine trust. Operational technology, zero trust, and machine-speed threats create additional urgency around modernizing security operations centers and rethinking human-to-machine workflows.  A Place for Every Stage of the CareerFrance describes Security Congress as a cross-section of the profession: entry-level newcomers, certification candidates, hands-on practitioners, and CISOs who attend for leadership development. Workshops explore communication, business alignment, and critical thinking skills that help professionals grow beyond technical execution and into more strategic responsibilities.  Looking Ahead to the Next CongressThe next ISC2 Security Congress will be held in October in the Denver/Aurora area. France expects AI and quantum to remain key themes, along with contributions shaped by the call-for-papers process. What keeps the event relevant each year is the mix of education, networking, community stories, and real-world problem-solving that attendees bring with them.The ISC2 Security Congress 2025 is a hybrid event taking place from October 28 to 30, 2025 Coverage provided by ITSPmagazineGUEST:Jon France, Chief Information Security Officer at ISC2 | On LinkedIn: https://www.linkedin.com/in/jonfrance/HOST:Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.comFollow our ISC2 Security Congress coverage: https://www.itspmagazine.com/cybersecurity-technology-society-events/isc2-security-congress-2025Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageISC2 Security Congress: https://www.isc2.orgNIST Post-Quantum Cryptography Standards: https://csrc.nist.gov/projects/post-quantum-cryptographyISC2 Chapters: https://www.isc2.org/chaptersWant to share an Event Briefing as part of our event coverage? Learn More

Redefining CyberSecurity
AI, Quantum, and the Changing Role of Cybersecurity | ISC2 Security Congress 2025 Coverage with Jon France, Chief Information Security Officer at ISC2 | On Location with Sean Martin and Marco Ciappelli

Redefining CyberSecurity

Play Episode Listen Later Dec 3, 2025 26:22


What Security Congress Reveals About the State of CybersecurityThis discussion focuses on what ISC2 Security Congress represents for practitioners, leaders, and organizations navigating constant technological change. Jon France, Chief Information Security Officer at ISC2, shares how the event brings together thousands of cybersecurity practitioners, certification holders, chapter leaders, and future professionals to exchange ideas on the issues shaping the field today.  Themes That Stand OutAI remains a central point of attention. France notes that organizations are grappling not only with adoption but with the shift in speed it introduces. Sessions highlight how analysts are beginning to work alongside automated systems that sift through massive data sets and surface early indicators of compromise. Rather than replacing entry-level roles, AI changes how they operate and accelerates the decision-making path. Quantum computing receives a growing share of focus as well. Attendees hear about timelines, standards emerging from NIST, and what preparedness looks like as cryptographic models shift.  Identity-based attacks and authorization failures also surface throughout the program. With machine-driven compromises becoming easier to scale, the community explores new defenses, stronger controls, and the practical realities of machine-to-machine trust. Operational technology, zero trust, and machine-speed threats create additional urgency around modernizing security operations centers and rethinking human-to-machine workflows.  A Place for Every Stage of the CareerFrance describes Security Congress as a cross-section of the profession: entry-level newcomers, certification candidates, hands-on practitioners, and CISOs who attend for leadership development. Workshops explore communication, business alignment, and critical thinking skills that help professionals grow beyond technical execution and into more strategic responsibilities.  Looking Ahead to the Next CongressThe next ISC2 Security Congress will be held in October in the Denver/Aurora area. France expects AI and quantum to remain key themes, along with contributions shaped by the call-for-papers process. What keeps the event relevant each year is the mix of education, networking, community stories, and real-world problem-solving that attendees bring with them.The ISC2 Security Congress 2025 is a hybrid event taking place from October 28 to 30, 2025 Coverage provided by ITSPmagazineGUEST:Jon France, Chief Information Security Officer at ISC2 | On LinkedIn: https://www.linkedin.com/in/jonfrance/HOST:Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.comFollow our ISC2 Security Congress coverage: https://www.itspmagazine.com/cybersecurity-technology-society-events/isc2-security-congress-2025Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageISC2 Security Congress: https://www.isc2.orgNIST Post-Quantum Cryptography Standards: https://csrc.nist.gov/projects/post-quantum-cryptographyISC2 Chapters: https://www.isc2.org/chaptersWant to share an Event Briefing as part of our event coverage? Learn More

Help Me With HIPAA
HSCC AI Security Efforts Preview - Ep 537

Help Me With HIPAA

Play Episode Listen Later Nov 28, 2025 52:24


If you thought AI in healthcare was just about cool robots and faster diagnoses, surprise! There's a whole army of volunteers wrangling the chaos behind the scenes, and our own Donna Grindle is leading the charge. In this episode, we take a peek into the AI cyber-security kitchen of the Health Sector Coordinating Council, where they're cooking up definitions, glossaries, and playbooks faster than AI can generate cat videos. It's education, governance, and cyber-risk planning, all served with a side of snark and sincerity. More info at HelpMeWithHIPAA.com/537

AWS Podcast
#747: Unpacking Automated Reasoning: From Mathematical Logic to Practical AI Security

AWS Podcast

Play Episode Listen Later Nov 24, 2025 38:02


Discover how AWS leverages automated reasoning to enhance AI safety, trustworthiness, and decision-making. Byron Cook (Vice President and Distinguished Scientist) explains the evolution of reasoning tools from limited, PhD-driven solutions to scalable, user-friendly systems embedded in everyday business operations. He highlights real-world examples such as mortgage approvals, security policies, and how formal logic and theorem proving are used to verify answers and reduce hallucinations in large language models. This episode delves into the exciting potential of neurosymbolic AI to bridge the gap between complex mathematical logic and practical, accessible AI solutions. Join us for a deep dive into how these innovations are shaping the next era of trustworthy AI, with insights into tackling intractable problems, verifying correctness, and translating complex proofs into natural language for broader use. https://aws.amazon.com/what-is/automated-reasoning/

The Deep Dive Radio Show and Nick's Nerd News
My Poetry Style Defeats Your AI Security Style

The Deep Dive Radio Show and Nick's Nerd News

Play Episode Listen Later Nov 24, 2025 5:19


My Poetry Style Defeats Your AI Security Style by Nick Espinosa, Chief Security Fanatic

Alexa's Input (AI)
Shift Left Your AI Security with SonnyLabs Founder Liana Tomescu

Alexa's Input (AI)

Play Episode Listen Later Nov 17, 2025 64:23


In this episode of Alexa's Input (AI) Podcast, host Alexa Griffith sits down with Liana Tomescu, founder of Sonny Labs and host of the AI Hacks podcast. Dive into the world of AI security and compliance as Liana shares her journey from Microsoft to founding her own company. Discover the challenges and opportunities in making AI applications secure and compliant, and learn about the latest in AI regulations, including the EU AI Act. Whether you're an AI enthusiast or a tech professional, this episode offers valuable insights into the evolving landscape of AI technology.LinksSonnyLabs Website: https://sonnylabs.ai/SonnyLabs LinkedIn: https://www.linkedin.com/company/sonnylabs-ai/Liana's LinkedIn: https://www.linkedin.com/in/liana-anca-tomescu/Alexa's LinksLinkTree: https://linktr.ee/alexagriffithAlexa's Input YouTube Channel: https://www.youtube.com/@alexa_griffithWebsite: https://alexagriffith.com/LinkedIn: https://www.linkedin.com/in/alexa-griffith/Substack: https://alexasinput.substack.com/KeywordsAI security, compliance, female founder, Sunny Labs, EU AI Act, cybersecurity, prompt injection, AI agents, technology innovation, startup journeyChapters00:00 Introduction to Liana Tomescu and Sunny Labs02:53 The Journey of a Female Founder in Tech05:49 From Microsoft to Startup: The Transition09:04 Exploring AI Security and Compliance11:41 The Role of Curiosity in Entrepreneurship14:52 Understanding Sunny Labs and Its Mission17:52 The Importance of Community and Networking20:42 MCP: Model Context Protocol Explained23:54 Security Risks in AI and MCP Servers27:03 The Future of AI Security and Compliance38:25 Understanding Prompt Injection Risks45:34 The Shadow AI Phenomenon45:48 Navigating the EU AI Act52:28 Banned and High-Risk AI Practices01:00:43 Implementing AI Security Measures01:17:28 Exploring AI Security Training

Cybercrime Magazine Podcast
AI Security Podcast. Facing AI-Powered Attacks. Adam Keown, Eastman & Brian Long, Adaptive Security.

Cybercrime Magazine Podcast

Play Episode Listen Later Nov 12, 2025 13:17


Brian Long is the CEO & Co-Founder at Adaptive Security. In this episode, he joins host Paul John Spaulding and Adam Keown, CISO at Eastman, a Fortune 500 company focused on developing materials that enhance the quality of life while addressing climate change, the global waste crisis, and supporting a growing global population. Together, they discuss the rise of AI-powered social engineering, including various attack methods, and how businesses can face these threats. The AI Security Podcast is brought to you by Adaptive Security, the leading provider of AI-powered social engineering prevention solutions, and OpenAI's first and only cybersecurity investment. To learn more about our sponsor, visit https://AdaptiveSecurity.com

The Steve Gruber Show
Brendan Steinhauser | AI Security in a Changing Global Landscape

The Steve Gruber Show

Play Episode Listen Later Nov 6, 2025 11:00


Brendan Steinhauser, CEO of the Alliance for Secure AI, joins the show to discuss the implications of the recent Trump–Xi meeting for AI security and global technology governance. He explains how leadership discussions between world powers influence the development, deployment, and regulation of artificial intelligence, and why ensuring secure, responsible AI is critical for national and international safety. Brendan also highlights potential risks, collaboration opportunities, and the growing importance of robust AI security frameworks to protect infrastructure, data, and technological innovation.

No Password Required
No Password Required Podcast Episode 65 — Steve Orrin

No Password Required

Play Episode Listen Later Nov 4, 2025 44:51


Keywordscybersecurity, technology, AI, IoT, Intel, startups, security culture, talent development, career advice  SummaryIn this episode of No Password Required, host Jack Clabby and Kayleigh Melton engage with Steve Orrin, the federal CTO at Intel, discussing the evolving landscape of cybersecurity, the importance of diverse teams, and the intersection of technology and security. Steve shares insights from his extensive career, including his experiences in the startup scene, the significance of AI and IoT, and the critical blind spots in cybersecurity practices. The conversation also touches on nurturing talent in technology and offers valuable advice for young professionals entering the field.  TakeawaysIoT is now referred to as the Edge in technology.Diverse teams bring unique perspectives and solutions.Experience in cybersecurity is crucial for effective team building.The startup scene in the 90s was vibrant and innovative.Understanding both biology and technology can lead to unique career paths.AI and IoT are integral to modern cybersecurity solutions.Organizations often overlook the importance of security in early project stages.Nurturing talent involves giving them interesting projects and autonomy.Young professionals should understand the hacker mentality to succeed in cybersecurity.Customer feedback is essential for developing effective security solutions.  TitlesThe Edge of Cybersecurity: Insights from Steve OrrinNavigating the Intersection of Technology and Security  Sound bites"IoT is officially called the Edge.""We're making mainframe sexy again.""Surround yourself with people smarter than you."  Chapters00:00 Introduction to Cybersecurity and the Edge01:48 Steve Orrin's Role at Intel04:51 The Evolution of Security Technology09:07 The Startup Scene in the 90s13:00 The Intersection of Biology and Technology15:52 The Importance of AI and IoT20:30 Blind Spots in Cybersecurity25:38 Nurturing Talent in Technology28:57 Advice for Young Cybersecurity Professionals32:10 Lifestyle Polygraph: Fun Questions with Steve

ai technology advice young innovation evolution startups artificial intelligence collaboration networking mentorship cybersecurity biology intel cto compliance organizations intersection required diverse governance machine learning nurturing misinformation iot surround homeland security poker autonomy lovecraft team building passwords deepfakes internet of things federal government community engagement critical thinking hellraiser blind spots body language collectibles phishing emerging technologies cloud computing hackathons hands on learning jim collins scalability encryption defcon call of cthulhu career journey data protection team dynamics good to great social engineering built to last leadership roles world series of poker zero trust summaryin ai ethics pinhead cryptography predictive analytics intelligence community experiential learning firmware veterans administration edge computing department of defense intel corporation learning from failure threat intelligence pattern recognition orrin startup culture bruce schneier creative collaboration human psychology ai security ethical hacking customer focus physical security performance optimization technology leadership applied ai innovation culture fedramp capture the flag behavioral analysis web security kali linux federal programs cybersecurity insights government technology pathfinding puzzle box continuous monitoring nurturing talent reliability engineering failure analysis buffer overflow poker tells quality of service
Edge of NFT Podcast
Building the Future of Web3: Real Assets, AI Security & Crypto Adoption

Edge of NFT Podcast

Play Episode Listen Later Oct 29, 2025 52:30


Recorded live at the Input Whispers: Jazz and Cigars event in Singapore, this special compilation episode created in partnership with Input PR, brings together four insightful conversations exploring the evolving frontiers of Web3, tokenization, fraud prevention, payments, and digital security.In this exclusive collection, co-host Josh Kriger sits down with some of the leading minds shaping the future of blockchain:Edwin Mata, CEO and co-founder of Brickken, on how Real World Assets (RWAs) and tokenization are revolutionizing capital markets and democratizing investment access.Pascal Podvin, co-founder and CRO of Nsure.ai, on leveraging AI to fight fraud and strengthen KYC in an increasingly complex crypto ecosystem.Konstantins Vasilenko, co-founder and CBDO of Paybis, on simplifying crypto onboarding, bridging fiat and digital currencies, and the global rise of crypto debit cards and stablecoins.Alex Katz, co-founder and CEO of Kerberus, on redefining real-time Web3 security, achieving zero user losses, and setting new standards for digital trust.From tokenized assets to next-generation security and payments, this episode captures the dynamic pulse of Web3 innovation straight from Singapore's vibrant crypto scene.Support us through our Sponsors! ☕

Business of Tech
OpenAI Goes For-Profit, New AI Security Threats Emerge, and Microsoft 365 Copilot Expands Features

Business of Tech

Play Episode Listen Later Oct 29, 2025 13:10


OpenAI has officially transitioned to a for-profit corporation, a move approved by Delaware Attorney General Kathy Jennings. This restructuring allows OpenAI to raise capital more effectively while maintaining oversight from its original non-profit entity. Microsoft now holds a 27% stake in the new structure, valued at over $100 billion, and OpenAI has committed to purchasing $250 billion in Microsoft Azure cloud services. This agreement includes provisions for Artificial General Intelligence (AGI), which will require verification from an independent expert panel before any declarations are made. Critics have raised concerns about the potential compromise of the non-profit's independence under this new arrangement.Research from cybersecurity firm SPLX indicates that AI agents, such as OpenAI's Atlas, are becoming new security threats due to vulnerabilities that allow malicious actors to manipulate their outputs. A survey revealed that only 17.5% of U.S. business leaders have an AI governance program in place, highlighting a significant gap in responsible AI use. The National Institute of Standards and Technology emphasizes the importance of identity governance in managing AI risks, suggesting that organizations must embed identity controls throughout AI deployment to mitigate potential threats.Additionally, a critical vulnerability in Microsoft Windows Server Update Services (WSUS) is currently being exploited, with around 100,000 instances reported in just one week. This vulnerability allows unauthenticated actors to execute arbitrary code on affected systems, raising concerns among cybersecurity experts, especially since Microsoft has not updated its guidance on the matter. Meanwhile, Microsoft 365 Copilot has introduced a new feature enabling users to build applications and automate workflows using natural language, which could lead to governance challenges as employees create their own automations.For Managed Service Providers (MSPs) and IT service leaders, these developments underscore the need for enhanced governance and security measures. The shift of OpenAI to a for-profit model signals a tighter integration with Microsoft, necessitating familiarity with Azure's AI stack. The vulnerabilities associated with AI agents and the WSUS exploit highlight the importance of proactive security measures. MSPs should prioritize establishing governance frameworks around AI usage and ensure robust identity management to mitigate risks associated with these emerging technologies.Four things to know today00:00 OpenAI Officially Becomes a For-Profit Corporation, Cementing $100B Partnership with Microsoft03:30 AI Agents Are Becoming a Security Nightmare—Because No One Knows Who They Really Are07:53 Hackers Are Targeting WSUS Servers — and You Could Be Distributing Malware Without Knowing It09:28 Microsoft's New Copilot Features Turn AI from Assistant to App Creator, Raising Governance Questions This is the Business of Tech.    Supported by:  https://scalepad.com/dave/https://getflexpoint.com/msp-radio/

Identity At The Center
#382 - Sponsor Spotlight - HYPR

Identity At The Center

Play Episode Listen Later Oct 29, 2025 48:22


This episode is sponsored by HYPR. Visit hypr.com/idac to learn more.In this episode from Authenticate 2025, Jim McDonald and Jeff Steadman are joined by Bojan Simic, Co-Founder and CEO of HYPR, for a sponsored discussion on the evolving landscape of identity and security.Bojan shares his journey from software engineer to cybersecurity leader and dives into the core mission of HYPR: providing fast, consistent, and secure identity controls that complement existing investments. The conversation explores the major themes from the conference, including the push for passkey adoption at scale and the challenge of securely authenticating AI agents.A key focus of the discussion is the concept of "Know Your Employee" (KYE) in a continuous manner, a critical strategy for today's remote and hybrid workforces. Bojan explains how the old paradigm of one-time verification is failing, especially in the face of sophisticated, AI-powered social engineering attacks like those used by Scattered Spider. They discuss the issue of "identity sprawl" across multiple IDPs and why consolidation isn't always the answer. Instead, Bojan advocates for a flexible, best-of-breed approach that provides a consistent authentication experience and leverages existing security tools.Connect with Bojan: https://www.linkedin.com/in/bojansimic/Learn more about HYPR: https://www.hypr.com/idacConnect with us on LinkedIn:Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/Visit the show on the web at idacpodcast.comChapter Timestamps:00:00 - Introduction at Authenticate 202500:23 - Sponsored Episode Welcome: Bojan Simic, CEO of HYPR01:11 - How Bojan Simic Got into Identity and Cybersecurity02:10 - The Elevator Pitch for HYPR04:03 - The Buzz at Authenticate 2025: Passkeys and Securing AI Agents05:29 - The Trend of Continuous "Know Your Employee" (KYE)07:33 - Is Your MFA Program Enough Anymore?09:44 - Hackers Don't Break In, They Log In: The Scattered Spider Threat11:19 - How AI is Scaling Social Engineering Attacks Globally13:08 - When a Breach Happens, Who's on the Hook? IT, Security, or HR?16:23 - What is the Right Solution for Identity Practitioners?17:05 - The Critical Role of Internal Marketing for Technology Adoption22:27 - The Problem with Identity Sprawl and the Fallacy of IDP Consolidation25:47 - When is it Time to Move On From Your Existing Identity Tools?28:16 - The Role of Document-Based Identity Verification in the Enterprise32:31 - What Makes HYPR's Approach Unique?35:33 - How Do You Measure the Success of an Identity Solution?36:39 - HYPR's Philosophy: Never Leave a User Stranded39:00 - Authentication as a Tier Zero, Always-On Capability40:05 - Is Identity Part of Your Disaster Recovery Plan?41:36 - From the Ring to the C-Suite: Bojan's Past as a Competitive Boxer47:03 - How to Learn More About HYPRKeywords:IDAC, Identity at the Center, Jeff Steadman, Jim McDonald, Bojan Simic, HYPR, Passkeys, Know Your Employee, KYE, Continuous Identity, Identity Verification, Authenticate 2025, Phishing Resistant, Social Engineering, Scattered Spider, AI Security, Identity Sprawl, Passwordless Authentication, FIDO, MFA, IDP Consolidation, Zero Trust, Cybersecurity, IAM, Identity and Access Management, Enterprise Security