On this episode of The Cybersecurity Defenders Podcast, we speak with Chris Cochran, Field CISO & Vice President of AI Security at SANS Institute, about how to navigate the future of AI risk and security strategy. Chris works at the intersection of cyber defense, AI safety, and emerging risk, where the threats are converging and the playbooks are still being written. His career has taken him from the Marine Corps to the NSA, U.S. Cyber Command, the U.S. House of Representatives, Mandiant, and Netflix. Across every role, one throughline: understanding adversaries, building high-trust teams, and translating complex problems into strategies leaders can act on. Today, Chris advises organizations, governments, and research institutions on AI governance, agentic threat preparedness, and unifying safety and security into a single discipline. He contributes to global standards efforts, including the EU AI Act (via OWASP AI), and leads executive education on cybersecurity and AI strategy at SANS. Support our show by sharing your favorite episodes with a friend; subscribe, give us a rating, or leave a comment on your podcast platform. This podcast is brought to you by LimaCharlie, maker of the SecOps Cloud Platform: infrastructure for SecOps where everything is built API-first. Scale with confidence as your business grows. Start today for free at limacharlie.io
In this episode, host Ivo Wiens is joined by Ben Boi-Doku, Chief Cybersecurity Strategist at CDW Canada, to explore the rapidly evolving landscape of AI agents, discussing practical questions about deployment, security, and policy. Whether you're an everyday user or a tech enthusiast, this conversation provides valuable insights into how AI is shaping our personal and professional lives and what to watch out for. To learn more, visit cdw.ca Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Take a Network Break! We start with follow-ups on secure browsers and data centers in space, and then sound the red alert about an RCE vulnerability in NLTK. On the news front, Palo Alto Networks acquires a startup that monitors endpoints for malicious packages, browser extensions, scripts, and other threats, Lumen debuts a multi-cloud gateway... Read more »
Jim Love discusses how rapid adoption of agentic AI is repeating the industry pattern of shipping technology without security, citing issues like vulnerabilities in Anthropic's MCP and insecure open-source agent tools. He interviews Ido Shlomo, co-founder and CTO of Token Security, who argues AI agents are fundamentally hard to secure because they are non-deterministic, have infinite input/output space, and often require broad permissions to be useful. Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/cst Shlomo proposes focusing security on access, identity, attribution, least privilege, and auditability rather than trying to filter prompts and outputs, and describes Token's "intent-based permission management" approach that maps agents and sub-agents as non-human identities tied to their purpose and allowed actions. The conversation covers real-world risks such as developer tools like Claude Code running with extensive access, widespread over-provisioning of admin permissions and API keys, exposure of unencrypted local token files, and misconfigurations that leak data publicly. Shlomo recommends organizations build governance processes for agents—discovery/inventory, boundary setting, continuous monitoring, and secure decommissioning—and says AI is needed to help police AI. He also highlights emerging trends like agent teams and multi-day autonomous tasks, and notes Token Security is a top-10 finalist in the RSA Innovation Sandbox 2026, planning to present an intent-and-access-focused security model for AI agents. 
00:00 Sponsor: Meter's integrated networking stack 00:19 Why agentic AI security is breaking (MCP & open-source chaos) 02:53 Meet Token Security: practical guardrails for AI agents 04:57 Why you can't just ban agents at work (shadow AI reality) 06:24 Tel Aviv's cybersecurity pipeline: gaming, military, and startups 08:57 Why AI/agents are fundamentally hard to secure (new OS + 'human spirit') 13:44 Trust, autonomy, and permissions: managing the blast radius 18:17 Real-world exposure: Claude Code and the developer identity attack surface 20:16 A workable approach: treat agents as untrusted processes with identity + least privilege 22:33 Zero Trust for Agents: Access ≠ Permission to Act 23:27 Token's "Intent-Based Permission Management" Explained 25:29 Building the Identity Map: Tracing What Agents Touch 26:52 The Secret Sauce: Using AI to Secure AI in Real Time 28:10 Real-World Case: 1,500 Agents and Wildly Over-Provisioned Access 30:57 CUA 'Computer-Use' Agents: Exciting, Personal… and Terrifying 34:44 Secure-by-Default & Sandboxing: Fixing 'Always Allow' Dark Patterns 35:36 What Security Teams Should Do Now: Inventory, Boundaries, Governance 37:59 What's Next: Agent Teams and Multi-Day Autonomous Work 40:10 Tony Stark Vision: Agents That Improve the Human Experience 41:02 RSA Innovation Sandbox: Token's Big Bet on Intent + Access 43:01 Wrap-Up, Audience Q&A, and Sponsor Message
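The access model Shlomo describes, treating each agent as a non-human identity with a declared purpose and a deny-by-default allow-list of actions, can be sketched in a few lines. This is an illustrative sketch only; the class, field, and action names are hypothetical and are not Token Security's actual product API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A non-human identity: who the agent is, why it exists,
    and the only actions it is permitted to take."""
    name: str
    intent: str
    allowed_actions: frozenset
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        """Least-privilege check: deny by default, log every decision
        so each agent action stays attributable and auditable."""
        ok = action in self.allowed_actions
        self.audit_log.append((self.name, action, "ALLOW" if ok else "DENY"))
        return ok

# Register an agent with a narrow declared purpose instead of broad admin access.
invoice_bot = AgentIdentity(
    name="invoice-bot",
    intent="read invoices and write summaries",
    allowed_actions=frozenset({"read:invoices", "write:summaries"}),
)

print(invoice_bot.authorize("read:invoices"))    # inside the declared intent
print(invoice_bot.authorize("delete:database"))  # outside it, denied and logged
```

The point of the sketch is the inversion Shlomo argues for: instead of filtering an infinite space of prompts and outputs, the boundary is drawn around identity, intent, and a finite set of permitted actions.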
Is AI security just "Cloud Security 2.0"? Toni De La Fuente, creator of the open-source tool Prowler, joins Ashish to explain why securing AI workloads requires a fundamentally different approach than traditional cloud infrastructure. We dive deep into the "Shared Responsibility Gap" emerging with managed AI services like AWS Bedrock and OpenAI. Toni speaks about the hidden dangers of default AI architectures and why you should never connect an MCP (Model Context Protocol) server directly to a database. We discuss the new AI-driven SDLC, where tools like Claude Code can generate infrastructure but also create massive security blind spots if not monitored. Guest Socials - Toni's LinkedIn. Podcast Twitter - @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast - YouTube, Cloud Security Newsletter. If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast. Questions asked: (00:00) Introduction (02:50) Who is Toni De La Fuente? (Creator of Prowler) (03:50) AI Security vs. Cloud Security: What's the Difference? (07:20) The Shared Responsibility Gap in AI Services (Bedrock, OpenAI) (11:30) The "Fifth Party" Risk: Managed AI Access (13:40) AI Architecture Best Practices: Never Connect MCP to DB Directly (16:40) Prowler's AI Pillars: Generating Dashboards & Detections (22:30) The New SDLC: Securing Code from Claude Code & Lovable (25:30) The "Magic" Trap: Why AI Doesn't Know Your Security Context (28:30) Top 3 Priorities for Security Leaders (Infra, LLM, Shadow AI) (30:40) Future Predictions: Why Predicting 12 Months Out is Impossible
In this video David speaks to Peter Bailey (SVP and GM of Cisco's Security business). AI agents are moving fast inside enterprises, and CISOs are hitting the brakes for one reason: the attack surface is expanding at machine speed. In this interview, we break down how agentic AI changes security, why MCP servers and agent tool access create new risks, and what a zero trust approach looks like when the "user" is a non-deterministic agent. We cover real-world problems like shadow MCP servers, agents touching sensitive systems and PII, and why traditional perimeter controls and firewalls are not enough when traffic is encrypted and actions happen too quickly downstream. You'll also hear what Cisco is doing across the AI lifecycle: AI Defense for model scanning, provenance and guardrails, plus new protections focused on agent identity, dynamic authorization, behavior monitoring, and revocation. On the networking side, we discuss how SD-WAN and secure access (SASE) can add visibility and policy control for AI usage, including prioritizing latency-sensitive AI traffic while still enforcing security. If you're a security engineer, network engineer, or CISO trying to move from AI hype to safe deployment, this video gives you a practical mental model and the controls to start building now. Big thank you to @Cisco for sponsoring this video and for sponsoring my trip to Cisco Live Amsterdam. // Peter Bailey's SOCIALS // LinkedIn: /peterhbailey Guest Bio: https://newsroom.cisco.com/c/r/newsro... // David's SOCIALS // Discord: discord.com/invite/usKSyzb Twitter: www.twitter.com/davidbombal Instagram: www.instagram.com/davidbombal LinkedIn: www.linkedin.com/in/davidbombal Facebook: www.facebook.com/davidbombal.co TikTok: tiktok.com/@davidbombal YouTube: /@davidbombal Spotify: open.spotify.com/show/3f6k6gE... SoundCloud: /davidbombal Apple Podcast: podcasts.apple.com/us/podcast...
// MY STUFF // https://www.amazon.com/shop/davidbombal // SPONSORS // Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com // MENU // 0:00 - Coming Up 0:30 - Introduction 01:15 - CISOs' Problems with AI 02:35 - Real Issues with AI Agents 04:29 - Growth of the Attack Surface 05:34 - Concern of Poisoned AI and MCP 08:09 - What is the Kill-chain 10:16 - AI with Built-in Security 11:56 - Best Practices for AI Security 14:08 - Cisco Innovations for AI 16:48 - Cisco's Red Team for own AI 18:27 - Secure AI in Public Places 20:09 - Should You Get into Cyber Security 21:26 - Advice To Your Younger Self 22:29 - Outro Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel! Disclaimer: This video is for educational purposes only. #cisco #ciscoemea #ciscolive
As AI agents begin to outnumber humans 80 to one, who's truly accountable when things go wrong? In this episode, host Suraksha P talks to Palo Alto Networks CEO Nikesh Arora to cut through the noise on what securing an agentic future actually demands, from mandatory agent registries to real-time breach detection that must outpace an eight-minute attack window. He challenges India to pursue a hybrid sovereign AI strategy, warns that AI companies are racing ahead without reckoning with consequences, and offers entrepreneurs a sharp directive: stop building features, start solving problems. The cybersecurity frontier, Arora argues, belongs to those who own the data. You can follow Suraksha P on her social media: X and LinkedIn. Check out other interesting episodes like: AI Impact Summit: Amazon's Bet on India's AI Future, Anthropic's India Play, India AI Impact Summit: Microsoft's Brad Smith on Sovereignty, Scale and Skills, and much more. Catch the latest episode of 'The Morning Brief' on The Economic Times Online, Spotify, Apple Podcasts, JioSaavn, Amazon Music and YouTube. See omnystudio.com/listener for privacy information.
Most cybersecurity stories talk about the hacks, but this episode peels back the curtain on the raw, unfiltered journey of a hacker turned industry pioneer. Jason Haddix shares how his early days of hex editing and fake IDs evolved into leading offensive security at Fortune 100 giants — all driven by relentless curiosity and defiance. His tales of surviving the shadowy underground, navigating multi-year career pivots, and turning obsession into innovation will blow your mind. This isn't just about tech — it's about fearlessly forging a path in a chaotic, ever-changing world where knowledge is power and resilience is everything. You'll discover the secret frameworks behind modern pen testing — like the Bug Hunter's Methodology — and how cutting-edge tools are reshaping cybersecurity. Jason dives into his real-world battles: from bypassing the most sophisticated security measures to hacking into critical infrastructure under intense pressure. His insights reveal the brutal truths of red teaming, physical infiltration, and the mental grit required to succeed when everyone else doubts you. We break down the rise of AI and LLMs in security: how attackers jailbreak systems, bypass defenses with prompt injections, and weaponize new technologies faster than security teams can respond. Jason warns about deploying these powerful tools without enough guardrails or understanding — and how FOMO is fueling a wild, unsecured frontier. His perspective is a call to arms for defenders and hackers alike: adapt fast, think boldly, and stay one step ahead in the most dangerous cyber game yet. This episode is essential for anyone hungry to understand the raw reality of offensive security, the future of AI in hacking, and the relentless pursuit of mastery in a digital battlefield. Whether you're a seasoned pro, a curious newcomer, or a business leader, Jason's fearless authenticity will challenge your assumptions and ignite your passion to innovate.
Hit play — your fight for security starts now. Chapters: 00:00 Introduction and Background in Cybersecurity 06:05 Early Experiences and Learning in Cybersecurity 12:14 Transitioning to Professional Penetration Testing 18:30 Challenges and Realities of Consulting in Cybersecurity 20:41 Phishing Tests and Their Consequences 23:09 Transitioning to Entrepreneurship 26:05 The Evolution of Training and Consulting 31:18 The Role of AI in Cybersecurity 39:11 Navigating AI Security Challenges 39:11 Understanding LLMs and User Education 41:42 Privacy Concerns and Risk Management in AI 44:32 Prompt Engineering Vulnerabilities and Jailbreaking Techniques 47:03 Security Challenges in AI Systems 49:39 Future of AI and Community Engagement Support the show. Follow the Podcast on Social Media! Tesla Referral Code: https://ts.la/joseph675128 YouTube: https://www.youtube.com/@securityunfilteredpodcast Instagram: https://www.instagram.com/secunfpodcast/ Twitter: https://twitter.com/SecUnfPodcast Affiliates: ➡️ OffGrid Faraday Bags: https://offgrid.co/?ref=gabzvajh ➡️ OffGrid Coupon Code: JOE ➡️ Unplugged Phone: https://unplugged.com/ Unplugged's UP Phone - The performance you expect, with the privacy you deserve. Meet the alternative. Use Code UNFILTERED at checkout. *See terms and conditions at affiliated webpages. Offers are subject to change. These are affiliated/paid promotions.
How can you help your loved ones navigate and securely adopt AI tools? Will Gardner, CEO of Childnet, joins the show for a vital conversation about helping families use AI safely. We talk about Childnet's latest research and the practical ways you can become a digital role model and start better AI conversations at home.
Modern software is built on layers and layers of code. So how do we know we can trust it? In this episode of Alexa's Input (AI), Alexa Griffith sits down with Justin Cappos, professor of computer science at NYU and a leading expert in software supply chain security, to unpack what trust really means in today's digital infrastructure. From package managers and dependency chains to large-scale outages and AI systems built on inherited code, Justin explains why many security failures aren't random accidents; they're predictable consequences of weak process, misaligned incentives, and insecure design. They discuss:
- Why security only becomes visible when something breaks
- The difference between unavoidable failure and negligence
- How modern software supply chains amplify small mistakes
- The role of leadership and culture in preventing breaches
- Why verification systems like TUF and in-toto matter more than ever
As AI accelerates development and increases system complexity, the need for verifiable trust only grows. This episode is a practical look at the invisible infrastructure that keeps modern software, and increasingly modern AI, from collapsing under its own complexity. Podcast Links: Watch: https://www.youtube.com/@alexa_griffith Read: https://alexasinput.substack.com/ Listen: https://creators.spotify.com/pod/profile/alexagriffith/ More: https://linktr.ee/alexagriffith Website: https://alexagriffith.com/ LinkedIn: https://www.linkedin.com/in/alexa-griffith/ Find out more about the guest at: Website: https://engineering.nyu.edu/faculty/justin-cappos NYU page: https://ssl.engineering.nyu.edu/personalpages/jcappos/ Wikipedia: https://en.wikipedia.org/wiki/Justin_Cappos Chapters: 00:00 Introduction to Justin Cappos and His Work 01:17 The Importance of Security in Software Systems 03:50 Understanding Security Breaches: Mistakes vs. System Design Problems 06:34 Cultural Factors in Security Failures 09:25 Justin's Journey in Software Security 12:03 The Role of Academia in Enterprise Security 14:10 Evaluating Enterprise Security Systems 16:58 Foundational Projects in Software Security 19:21 AI Security Concerns and Future Directions 24:59 The Need for MCP 2.0 28:57 Security Challenges with LLMs 32:33 Designing Secure AI Systems 37:14 Ethical Dilemmas in AI Decision-Making 40:17 The Role of AI in Open Source 43:44 Trust and Mindset in AI Security
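The core idea behind verification systems like TUF and in-toto is that consumers check artifacts against metadata recorded out-of-band by the publisher, rather than trusting whatever the distribution channel delivers. A minimal sketch of that idea (real systems also sign the metadata and protect against rollback; the names and data here are made up for illustration):

```python
import hashlib

def digest(data: bytes) -> str:
    """Content address of an artifact: a sha256 digest of its bytes."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

def verify_artifact(name: str, data: bytes, pinned: dict) -> bool:
    """Accept the artifact only if its digest matches the pinned,
    out-of-band record; any mismatch is treated as tampering."""
    expected = pinned.get(name)
    return expected is not None and expected == digest(data)

# The publisher records the digest at release time...
package = b"pretend this is a package tarball"
pinned = {"example-pkg-1.0.tar.gz": digest(package)}

# ...and the consumer checks it before installing.
assert verify_artifact("example-pkg-1.0.tar.gz", package, pinned)

# An artifact modified anywhere in the supply chain fails the check.
assert not verify_artifact("example-pkg-1.0.tar.gz", package + b"!", pinned)
```

This is why a weak process, not bad luck, is usually the story behind supply chain compromises: when no such verification step exists, a single mutated dependency propagates silently through every downstream build.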
What happens when AI safety filters fail to catch harmful content hidden inside images? Alessandro Pignati, AI Security Researcher at NeuralTrust, joins Sean Martin to reveal a newly discovered vulnerability that affects some of the most widely used image-generation models on the market today. The technique, called semantic chaining, is an image-based jailbreak attack discovered by the NeuralTrust research team, and it raises important questions about how enterprises secure their multimodal AI deployments. How does semantic chaining work? Pignati explains that the attack uses a single prompt composed of several parts. It begins with a benign scenario, such as a historical or educational context. A second instruction asks the model to make an innocent modification, like changing the color of a background. The final, critical step introduces a malicious directive, instructing the model to embed harmful content directly into the generated image. Because image-generation models apply fewer safety filters than their text-based counterparts, the harmful instructions are rendered inside the image without triggering the usual safeguards. The NeuralTrust research team tested semantic chaining against prominent models including Gemini Nano Pro, Grok 4, and Seedream 4.5 by ByteDance, finding the attack effective across all of them. For enterprises, the implications extend well beyond consumer use cases. Pignati notes that if an AI agent or chatbot has access to a knowledge base containing sensitive information or personal data, a carefully structured semantic chaining prompt can force the model to generate that data directly into an image, bypassing text-based safety mechanisms entirely. Organizations looking to learn more about semantic chaining and the broader landscape of AI agent security can visit the NeuralTrust blog, where the research team publishes detailed breakdowns of their findings. NeuralTrust also offers a newsletter with regular updates on agent security research and newly discovered vulnerabilities. This is a Brand Highlight. A Brand Highlight is a ~5 minute introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight GUEST: Alessandro Pignati, AI Security Researcher, NeuralTrust. On LinkedIn: https://www.linkedin.com/in/alessandro-pignati/ RESOURCES: Learn more about NeuralTrust: https://neuraltrust.ai/ Are you interested in telling your story? ▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full ▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight ▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight KEYWORDS: Alessandro Pignati, NeuralTrust, Sean Martin, brand story, brand marketing, marketing podcast, brand highlight, semantic chaining, image jailbreak, AI security, agentic AI, multimodal AI, LLM safety, AI red teaming, prompt injection, AI agent security, image-based attacks, enterprise AI security. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In the world of Generative AI, natural language has become the new executable. Attackers no longer need complex code to breach your systems; sometimes, asking for a "poem" is enough to steal your passwords. In this episode, Eduardo Garcia (Global Head of Cloud Security Architecture at Check Point) joins Ashish to explain the paradigm shift in AI security. He shares his experience building AI-powered fraud detection systems and why traditional security controls fail against intent-based attacks like prompt injection and data poisoning. We dive deep into the reality of Shadow AI, where employees unknowingly train public models with sensitive corporate data, and the sophisticated world of deepfakes, where attackers can bypass biometric security using AI-generated images unless you're tracking micro-movements of the eye. Guest Socials - Eduardo's LinkedIn. Podcast Twitter - @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast - YouTube, Cloud Security Newsletter. If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast. (00:00) Introduction (01:55) Who is Eduardo Garcia? (Check Point) (03:00) Defining Security for GenAI: The Focus on Prompts (05:20) Why Natural Language is the New Executable (08:50) Multilingual Attacks: Bypassing Filters with Mandarin (12:00) Shift Left vs. Shift Right: The 70/30 Rule for AI Security (15:30) The "Poem Hack": Stealing Passwords with Creative Prompts (21:00) Shadow AI: The "HR Spreadsheet" Leak Scenario (25:40) Security vs. Compliance in a Blurring World (28:00) The Conflict: "My Budget Doesn't Include Security" (34:00) The 5 V's of AI Data: Volume, Veracity, Velocity (40:00) Deepfakes & Biometrics: Detecting Micro-Movements (43:40) Fun Questions: Soccer, Family, and Honduran Tacos
Computer und Kommunikation (full program) - Deutschlandfunk
Manfred Kloiber, www.deutschlandfunk.de, Computer und Kommunikation
In this episode, Brad Hibbert (COO & Chief Strategy Officer at Brinqa) joins Ashish to explain why traditional risk-based vulnerability management (RBVM) is no longer enough in a cloud-first world. We explore the evolution from simple patch management to Exposure Management, a holistic approach that sits above your security tools to connect infrastructure, code, and cloud risks to actual business impact. Brad breaks down the critical difference between a "Risk Owner" (the service owner) and a "Remediation Owner" (the team fixing the bug), and why this distinction solves the "who fixes this?" problem. This conversation covers practical steps to uplift your VM program, how AI is helping prioritize the noise, and why compliance often just "proves activity" rather than reducing real risk. Whether you're drowning in Jira tickets or trying to automate remediation, this episode provides a roadmap for modernizing your security posture. Guest Socials - Brad's LinkedIn. Podcast Twitter - @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast - YouTube, Cloud Security Newsletter. If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast. Questions asked: (00:00) Introduction (02:50) Who is Brad Hibbert? (Brinqa) (04:55) The Evolution: From Scanning Servers to Cloud Complexity (06:50) What is Risk-Based Vulnerability Management? (08:50) Risk Owners vs. Remediation Owners: Who Fixes What? (12:00) How AI is Changing Vulnerability Management (15:20) Defining Exposure Management: Moving Beyond the Tools (18:30) The Challenge of "Data Inconsistency" Between Tools (22:30) Readiness Check: Are You Ready for Exposure Management? (25:10) Automated Remediation: Is "Zero Tickets" Possible? (28:40) Compliance vs. Risk: Why "Activity" isn't "Impact" (31:30) Maturity Milestones for Exposure Management (36:50) Fun Questions: Golf, Turkish Kebabs & Friendships
Is "developer-friendly" AI security actually possible? In this episode, Bryan Woolgar-O'Neil (CTO & Co-founder of Harmonic Security) joins Ashish to dismantle the traditional "block everything" approach to security. Bryan explains why 70% of Model Context Protocol (MCP) servers are running locally on developer laptops and why trying to block them is a losing battle. Instead, he advocates for a "coaching" approach, intervening in real-time to guide engineers rather than stopping their flow. We dive deep into the technical realities of MCP, why it's becoming the standard for connecting AI to data, and the security risks of connecting it to production environments. Bryan also shares his prediction that Small Language Models (SLMs) will eventually outperform general giants like ChatGPT for specific business tasks. Guest Socials - Bryan's LinkedIn. Podcast Twitter - @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast - YouTube, Cloud Security Newsletter. If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast. Questions asked: (00:00) Introduction (01:55) Who is Bryan Woolgar-O'Neil? (03:00) Why AI Adoption Stops at Experimentation (05:15) The "Shadow AI" Blind Spot: Firewall Stats vs. Reality (08:00) Is AI Security Fundamentally Different? (Speed & Scale) (10:45) Can Security Ever Be "Developer Friendly"? (14:30) What is MCP (Model Context Protocol)? (17:20) Why 70% of MCP Usage is Local (and the Risks) (21:30) The "Coaching" Approach: Don't Just Block, Educate (25:40) Developer First: Permissive vs. Blocking Cultures (30:20) The Rise of the "Head of AI" Role (34:30) Use Cases: Workforce Productivity vs. Product Integration (41:00) An AI Security Maturity Model (Visibility -> Access -> Coaching) (46:00) Future Prediction: Agentic Flows & Urgent Tasks (49:30) Why Small Language Models (SLMs) Will Win (53:30) Fun Questions: Feature Films & Pork Dumplings
The $250 million Series B was led by Bessemer Venture Partners, with participation from Salesforce Ventures and Picture Capital. Also, Outtake makes an agentic cybersecurity platform to help enterprises detect identity fraud. Its angel investors are a who's who. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Guest Angela Nakalembe, AI and Safety Expert, joins to discuss the increase in AI technology and the challenge of deciphering what is real or fake online. Discussion of concerns for children, AI online bullying, tools for education, and more. Democrats threaten another government shutdown until Congress defunds ICE. What? Discussion of the appropriations process, Democrats' attempt to redeem themselves for political gain during election season, and how far a government shutdown could go.
In this powerhouse episode, Joey Pinz sits down with one of cybersecurity's most influential builders—a serial market maker who has helped shape some of the industry's most iconic companies. From Sourcefire and Fortinet to Cylance, Javelin, and now Sevco Security, Fitz brings unmatched perspective on what separates successful cyber companies from the rest—and what MSPs must do now to stay relevant. Fitz breaks down why visibility is the core of modern security, why most organizations still don't actually know what assets they have, and how exposure management has become the foundation of cyber resilience. He also explains where the real money is flowing in the MSP/MSSP space, the biggest mistakes founders still make, and what MSPs must do to move confidently into security services. On the personal side, Fitz shares insights from a life built around curiosity, communication, and impact—shaped by early roles at Coca-Cola during the Olympics, BMC, Compaq, and decades of startup leadership. His mission today? Protect the planet through better security, better intelligence, and smarter business decisions.
In this episode of Security Matters, host David Puner sits down with Ariel Pisetzky, chief information officer at CyberArk, for a candid look at the fast‑evolving intersection of AI, cybersecurity, and IT innovation. As organizations race to adopt AI, the fear of missing out is driving rapid decisions—often without enough consideration for identity, security, or long‑term impact. Ariel shares practical insights on what it really takes to secure AI at scale, from combating AI‑enabled phishing attacks to managing agent identities and reducing growing risks in the software supply chain. The conversation explores how leaders can balance innovation with identity‑centric guardrails, understand the economics of AI adoption, and push for the democratization of IT without losing control. Whether you're a CIO, an IT leader, or simply curious about the future of cybersecurity, this episode offers clear, actionable guidance to help you stay ahead in 2026 and beyond.
Microsoft quietly hands over BitLocker keys to the government, TikTok's new privacy terms spark a user panic, and Europe's secret tech backups reveal anxious prep for digital fallout. Plus, how gambling platforms are changing the future of news and sports.
- You can bet on how much snow will fall in New York City this weekend
- Europe Prepares for a Nightmare Scenario: The U.S. Blocking Access to Tech
- China, US sign off on TikTok US spinoff
- TikTok users freak out over app's 'immigration status' collection -- here's what it means
- Elon Musk's Grok A.I. Chatbot Made Millions of Sexualized Images, New Estimates Show
- Microsoft Gave FBI Keys To Unlock Encrypted Data, Exposing Major Privacy Flaw - Forbes
- House of Lords votes to ban social media for Brits under 16
- Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health"
- Route leak incident on January 22, 2026
- 149 Million Usernames and Passwords Exposed by Unsecured Database
- Millions of people imperiled through sign-in links sent by SMS
- Anthropic revises Claude's 'Constitution,' and hints at chatbot consciousness
- The new Siri chatbot may run on Google servers, not Apple's
- A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to 'Humanize' Chatbots
- GitHub - anthropics/original_performance_takehome: Anthropic's original performance take-home, now open for you to try!
- Telly's "free" ad-based TVs make notable revenue—when they're actually delivered - Ars Technica
- Toilet Maker Toto's Shares Get Unlikely Boost From AI Rush - Slashdot
- Dr. Gladys West, whose mathematical models inspired GPS, dies at 95
Host: Leo Laporte
Guests: Alex Stamos, Doc Rock, and Patrick Beja
Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content.
Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit meter.com/twit redis.io expressvpn.com/twit shopify.com/twit
SAP and Enterprise Trends Podcasts from Jon Reed (@jonerp) of diginomica.com
In the January edition of the Enterprise month in review, we interviewed Louis Columbus with the burning question: are ERP and supply chain vendors ready for AI security, and new attack vectors like prompt injection? Louis has been nailing this on his Venture Beat AI security blog - so we put him in the hot seat to see what we could learn. This podcast is only the Columbus interview, which has been optimized for sound quality. If you want to see the full video replay with slides, check: https://youtube.com/live/-DQBB6mYJ_g.
Are ERP and supply chain vendors ready for AI security, and new attack vectors like prompt injection? Our special guest Louis Columbus says no. Louis has been nailing this on his Venture Beat AI security blog - let's put him in the hot seat and see what we can learn. Your hosts Brian Sommer and Jon Reed will also share their underrated news stories of the month, and unleash their enterprise highs and lows via the infamous slide deck. As always, bring your savviest (and snarkiest) commentary and let's get this done. Note: this is the full show, including our first 20 minutes of underrated news stories and final whiffs. The interview with Louis Columbus is also being issued as a separate audio podcast. If you want to see the video replay with slides, check: https://youtube.com/live/-DQBB6mYJ_g.
In today's Tech3 from Moneycontrol, we bring you the biggest startup and technology stories shaping the day. Juspay becomes 2026's first unicorn after a $50 million funding round, signalling renewed momentum in fintech. Infosys outlines its plan to hire 20,000 fresh graduates in FY27, even as AI reshapes IT services. Accenture announces a new physical AI security lab in Bengaluru. And from Kumbakonam, we report on Zoho's expansion plans and the launch of its AI-native ERP platform.
Escalating distrust in identity systems and misuse of AI are forcing a shift in security accountability for small and midsize businesses. Recent analysis highlights that the prevalence of deepfake-driven business email compromise and non-human digital identities is eroding confidence in traditional protective solutions. According to TechAisle and supporting reports referenced by Dave Sobel, the ratio of non-human to human identities in organizations is now 144:1, further complicating authority and responsibility for managed service providers (MSPs). As trust in exclusive third-party control disintegrates, co-managed security models are becoming standard, repositioning decision-making and liability. The rise of AI-generated data—described as "AI slop"—has prompted increased adoption of zero trust models, with 84% of CIOs reportedly increasing funding for generative AI initiatives. However, as rogue AI agents are recognized as a significant insider threat, current security services are often ill-equipped to manage these new vulnerabilities. Regulatory bodies, including CISA, have issued guidance noting that the integration of AI into critical infrastructure introduces greater risk of outages and security breaches, particularly when governance remains ambiguous. High-profile vulnerabilities in open-source AI platforms used within cloud environments further highlight the persistence of operational risks. Adjacent technology updates include new releases from vendors such as 1Password, WatchGuard, JumpCloud, and ControlUp. These offerings focus on enhancing phishing prevention, expanding managed detection and response, and automating endpoint management for MSPs. However, Dave Sobel emphasizes that these tools introduce additional layers of automation and integration without adequately clarifying who ultimately holds authority and accountability when failures or breaches occur.
There is a consistent warning that stacking solutions or outsourcing core functions without redefining operational control creates gaps between action and oversight. For MSPs and IT leaders, the key takeaway is that security risk is no longer defined by missing technology but by unclear governance, undefined authority, and misaligned incentives. Without explicit contractual and operational delineation of responsibility when deploying AI and automation, service providers are increasingly exposed to liability by default. The advice is to move beyond tool-centric strategies and focus on process clarity: define who authorizes, audits, and terminates non-human identities; establish which parties approve automation actions; and ensure clients understand shared responsibilities to mitigate silent risk accumulation.

Four things to know today:
00:00 TechAisle Warns SMB Security Will Shift in 2026 as Identity Attacks and AI Agents Redefine Risk
05:44 AI Moves Deeper Into Critical Infrastructure as Open-Source and Human Weaknesses Expand the Attack Surface
09:35 MSP Security Platforms Automate Phishing Prevention and MDR—Outpacing Governance and Control Models
12:12 AI-Powered MSP Tools Promise Control and Efficiency, But Shift Responsibility by Default

This is the Business of Tech. Supported by: https://scalepad.com/dave/
Send us a text

In this high-energy and entertaining episode, Joey Pinz sits down with cybersecurity founder and unabashed Italian-American storyteller Tony Pietrocola. From stomping grapes as a child to running an AI-driven security operations platform, Tony brings a rare blend of toughness, humor, and entrepreneurial clarity. They jump from wine, cooking, and massive NFL bodies to college football, concussions, and how elite athletes are built differently. Tony shares what makes college football the real American spectacle—and why private equity is about to reshape the sport. On the cybersecurity front, Tony breaks down the challenges MSPs face, why most still struggle with security, and how AgileBlue helps them build profitable, white-label practices without the overhead of running a SOC. He explains the three questions every MSP should ask a vendor, the rise of AI-assisted attacks, and why consolidation and greenfield opportunities are the biggest missed revenue streams. The conversation ends with health, habit, and personal transformation—discussing Joey's 130-lb weight loss, Tony's daily 5 a.m. workouts, and the childhood structure that forged their work ethic.
Is the AI SOC a reality, or just vendor hype? In this episode, Antoinette Stevens (Principal Security Engineer at Ramp) joins Ashish to dissect the true state of AI in detection engineering. Antoinette shares her experience building a detection program from scratch, explaining why she doesn't trust AI to close alerts due to hallucinations and faulty logic. We explore the "engineering-led" approach to detection, moving beyond simple hunting to building rigorous testing suites for detection-as-code. We discuss the shrinking entry-level job market for security roles, why software engineering skills are becoming non-negotiable, and the critical importance of treating AI as a "force multiplier, not your brain".

Guest Socials - Antoinette's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:25) Who is Antoinette Stevens?
(04:10) What is an "Engineering-Led" Approach to Detection?
(06:00) Moving from Hunting to Automated Testing Suites
(09:30) Build vs. Buy: Is AI Making it Easier to Build Your Own Tools?
(11:30) Using AI for Documentation & Playbook Updates
(14:30) Why Software Engineers Still Need to Learn Detection Domain Knowledge
(17:50) The Problem with AI SOC: Why ChatGPT Lies During Triage
(23:30) Defining AI Concepts: Memory, Evals, and Inference
(26:30) Multi-Agent Architectures: Using Specialized "Persona" Agents
(28:40) Advice for Building a Detection Program in 2025 (Back to Basics)
(33:00) Measuring Success: Noise Reduction vs. False Positive Rates
(36:30) Building an Alerting Data Lake for Metrics
(40:00) The Disappearing Entry-Level Security Job & Career Advice
(44:20) Why Junior Roles are Becoming "Personality Hires"
(48:20) Fun Questions: Wine Certification, Side Quests, and Georgian Food
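The "rigorous testing suites for detection-as-code" idea from the episode can be sketched as a detection rule pinned down by unit tests. This is a hypothetical illustration of the pattern, not Ramp's actual tooling; the rule logic, event fields, and threshold are illustrative assumptions.

```python
# Detection-as-code sketch: a rule is just a function over events, and a
# test suite pins its behavior so refactors can't silently change alerting.

def brute_force_rule(events, threshold=5):
    """Return source IPs that produced `threshold` or more failed logins."""
    failures = {}
    for e in events:
        if e.get("action") == "login" and e.get("outcome") == "failure":
            ip = e.get("src_ip")
            failures[ip] = failures.get(ip, 0) + 1
    return [ip for ip, count in failures.items() if count >= threshold]

def test_fires_on_repeated_failures():
    # Five failures from one IP should trigger at the default threshold.
    events = [{"action": "login", "outcome": "failure", "src_ip": "10.0.0.1"}] * 5
    assert brute_force_rule(events) == ["10.0.0.1"]

def test_ignores_successes():
    # Successful logins must never count toward the failure threshold.
    events = [{"action": "login", "outcome": "success", "src_ip": "10.0.0.1"}] * 5
    assert brute_force_rule(events) == []

test_fires_on_repeated_failures()
test_ignores_successes()
```

In a real program these tests would run in CI against synthetic event fixtures before any rule change ships, which is what distinguishes the engineering-led approach from ad hoc hunting.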
Welcome to Wall-E's Tech Briefing for Tuesday, January 20th! Delve into today's pressing tech topics:
- AI security funding surge: Venture capitalists double down on AI security startups after a rogue AI incident. Witness AI secures $58 million to enhance defenses against unchecked AI capabilities.
- Meta's strategic shift: Amid financial strains and declining interest, Meta lays off 1,500 employees and closes VR game studios, redirecting focus to AI and AR, particularly Ray-Ban AR glasses.
- BioticsAI's FDA milestone: Fresh from a TechCrunch Disrupt victory, BioticsAI receives FDA approval for its AI-powered fetal ultrasound tech, poised to transform prenatal care across the U.S.
- Upcoming TechCrunch Startup Battlefield 200: A premier platform for emerging startups, the 2026 edition promises networking, investment opportunities, and a $100,000 prize. Applications open mid-February.
Stay tuned for tomorrow's tech updates!
Welcome to Episode 419 of the Microsoft Cloud IT Pro Podcast. In this episode, Ben is once again live from Workplace Ninjas and is joined by John Joyner, an 18-year Microsoft MVP in Cloud Security and Azure Management. They discuss some of the announcements from Microsoft Ignite focused around Microsoft Security as well as diving deep into the new Security Store, AI agents, Security Compute Units (SCUs), and how Microsoft is making enterprise AI security more accessible and affordable than ever. Key topics include the phishing triage agent, conditional access optimization, E5 integration with included SCUs, and the strategic consolidation of security services into the Defender XDR portal. Whether you’re a security professional or IT administrator, this conversation provides valuable insights into Microsoft’s AI-driven security roadmap and how to stay ahead of AI-powered threats. Your support makes this show possible! Please consider becoming a premium member for access to live shows and more. Check out our membership options. Show Notes John Joyner on LinkedIn John Joyner’s Blog John Joyner’s Books Corsica Technologies What is Microsoft Security Copilot? Security Store Microsoft Security Copilot agents overview Learn about Security Copilot inclusion in Microsoft 365 E5 subscription Microsoft Security Copilot Phishing Triage Agent in Microsoft Defender John Joyner John Joyner is an inventor, author, speaker, and professor specializing in datacenter and enterprise cloud computing. He serves as Senior Director of Technology at Corsica Technologies (formerly AccountabilIT), where he delivers next-generation technology management services to customers worldwide as a cloud architect helping businesses stay competitive.
John is a Microsoft Azure MVP and Security MVP, having been recognized eighteen times (2007-2026) as a Microsoft Most Valuable Professional for his exceptional technical expertise, leadership, speaking experience, online influence, and commitment to solving real-world problems. He holds a Bachelor of Science in Business Administration with an Emphasis in Human Resources Management from the University of Colorado at Boulder. From 2007 to 2024, John served as an Adjunct Professor at the University of Arkansas Little Rock, teaching a pro-bono cloud computing management course open to all Arkansas residents. As an author, John co-wrote the 2021 book “Azure Arc-Enabled Kubernetes and Server” from Apress and contributed to four editions of the industry-standard “System Center Operations Manager: Unleashed” from SAMS Publishing (2005-2013). Between 2012 and 2015, he authored weekly cloud and datacenter columns for CBS Technology publications including TechRepublic and ZDNet. A retired U.S. Navy Lieutenant Commander and computer scientist, John worked for NATO in Europe and aboard an aircraft carrier in the Pacific. He earned the Computer Scientist sub-specialty and served as chief of network operations for NATO during the former Yugoslavia conflict. He is also a veteran of the Persian Gulf War. Outside of technology, John’s personal passions include 4-wheeling in his ‘Black Ops’ Jeep Wrangler and running a visionary art clothing company called Lit Like Luma. About the sponsors Would you like to become the irreplaceable Microsoft 365 resource for your organization? Let us know!
AI systems are moving fast, sometimes faster than the guardrails meant to contain them. In this episode of Security Matters, host David Puner digs into the hidden risks inside modern AI models with Pamela K. Isom, exploring the governance gaps that allow agents to make decisions, recommendations, and even commitments far beyond their intended authority. Isom, former director of AI and technology at the U.S. Department of Energy (DOE) and now founder and CEO of IsAdvice & Consulting, explains why AI red teaming must extend beyond cybersecurity, how to stress test AI governance before something breaks, and why human oversight, escalation paths, and clear limits are essential for responsible AI. The conversation examines real-world examples of AI drift, unintended or unethical model behavior, data lineage failures, procurement and vendor blind spots, and the rising need for scalable AI governance, AI security, responsible AI practices, and enterprise red teaming as organizations adopt generative AI. Whether you work in cybersecurity, identity security, AI development, or technology leadership, this episode offers practical insights for managing AI risk and building systems that stay aligned, accountable, and trustworthy.
Traditional vulnerability management is simple: find the flaw, patch it, and verify the fix. But what happens when the "asset" is a neural network that has learned something ethically wrong? In this episode, Sapna Paul (Senior Manager at Dayforce) explains why there are no "Patch Tuesdays" for AI models. Sapna breaks down the three critical layers of AI vulnerability management: protecting production models, securing the data layer against poisoning, and monitoring model behavior for technically correct but ethically flawed outcomes. We discuss how to update your risk register to speak the language of business and the essential skills security professionals need to survive in an AI-first world. The conversation also covers practical ways to use AI within your security team to combat alert fatigue, the importance of explainability tools like SHAP and LIME, and how to align with frameworks like the NIST AI RMF and the EU AI Act.

Guest Socials - Sapna's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Sapna Paul?
(02:40) What is Vulnerability Management in the Age of AI?
(05:00) Defining the New Asset: Neural Networks & Models
(07:00) The 3 Layers of AI Vulnerability (Production, Data, Behavior)
(10:20) Updating the Risk Register for AI Business Risks
(13:30) Compliance vs. Innovation: Preventing AI from Going Rogue
(18:20) Using AI to Solve Vulnerability Alert Fatigue
(23:00) Skills Required for Future VM Professionals
(25:40) Measuring AI Adoption in Security Teams
(29:20) Key Frameworks: NIST AI RMF & EU AI Act
(31:30) Tools for AI Security: Counterfit, SHAP, and LIME
(33:30) Where to Start: Learning & Persona-Based Prompts
(38:30) Fun Questions: Painting, Mentoring, and Vegan Ramen
PhoneBoy discusses AI Risk Mapping, AI Security Masters, and some great posts from the CheckMates community you may have missed.
- AI Risk Mapping
- AI Security Masters Series
- AI Security Masters Session 1: How AI is Reshaping Our World
- How to Chat with Your Check Point Gateways using Claude Desktop
- Check Point MCP Servers
- This Month's Spotlight - 3 Revisions Features You Should Start Using Today - October 2025
- Session Flow for Administrators
- Videos for Configuring Access Control and Threat Prevention
- HTTPS Inspection Inbound With More Than One Certificate
- Configuring an AWS to Onsite VPN
- Can We Disable the HTTP Protocol Parser
- SSH Inspected by HTTPS Inspection
- Configuring SSH Inspection in Threat Prevention
- Performance Limitations of Virtual Switches with Legacy VSX?
Upcoming events:
- CheckMates Fest 2026 on 14 January 2026
- Quantum SD-WAN Monitoring TechTalk on 21 January 2026
- AI Security Masters Session 2: Hacking with AI, The Dark Side of Innovation on 22 January 2026
Corey Quinn sits down with Avery Pennarun, co-founder and CEO of Tailscale, for a deep dive into how the company is reinventing networking for the modern era. From finally making VPNs behave the way they should to tackling AI security with zero-click authentication, Avery shares candid insights on building infrastructure people actually love using, and love talking about. They get into everything: surviving 100% year-over-year growth, why running on two tailnets at once is pure chaos, and how Tailscale makes “secure by default” feel effortless. Plus, they dig into why FreeBSD firewalls needed some tough love, the uncomfortable truth behind POCs, and even the surprisingly useful trick of turning your Apple TV into an exit node.

About Avery: Avery Pennarun is the co-founder and CEO of Tailscale, where he's redefining secure networking with a simple, Zero Trust approach. A veteran software engineer with experience ranging from startups to Google, he's known for turning complex systems into approachable, user-friendly tools. His contributions to projects like wvdial, bup, and sshuttle reflect his belief that great technology should be both powerful and easy to use. With a mix of technical depth and dry humor, Avery shares insights on modern networking, internet evolution, and the realities of scaling a startup.

Highlights:
(0:00) Introduction to Tailscale and Security
(00:52) Sponsorship and Personal Experiences
(02:07) Technical Deep Dive into Tailscale
(06:10) Challenges and Future of Tailscale
(22:45) Building the Tailnet's API
(23:54) Connecting Cloud Providers with Tailscale
(25:22) Tailscale as a Security Solution
(26:44) Innovations and Future of Tailscale

Sponsored by: duckbillhq.com
Brandyn Murtagh is a full-time bug bounty hunter and ethical 'white hat' hacker who is the founder of MurtaSec. In this episode, he joins host Heather Engel to discuss AI threats and their impact on the security community, as well as his unique approach to threat modeling, the dual nature of AI, and more.

For more on cybersecurity, visit us at https://cybersecurityventures.com
Brian Long is the CEO & Co-Founder at Adaptive Security. In this episode, he joins host Paul John Spaulding and Teresa Zielinski, Vice President and Global CISO at GE Vernova, to discuss social engineering and how it is evolving in light of artificial intelligence advancements. The AI Security Podcast is brought to you by Adaptive Security, the leading provider of AI-powered social engineering prevention solutions, and OpenAI's first and only cybersecurity investment. To learn more about our sponsor, visit https://AdaptiveSecurity.com
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM The episode explores how Chris Stegh sees organisations balancing AI adoption with data security, governance and practical risk management. It covers the real barriers to scaling AI, why perfect data hygiene is unrealistic, and how leaders can use tools like Copilot, Purview and agentic AI to create safe, high‑value use cases while improving long‑term resilience.
SentinelOne announced a series of innovative new designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure, to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-542
As we reflect on 2025, this episode pulls together the most important themes shaping the year ahead — from the rapid acceleration of AI and automation, to the evolving realities of security, leadership, and trust in an increasingly complex world.

What was once hidden behind the scenes is now accessible to everyone. AI has moved from the “Matrix” into daily workflows, forcing organizations to rethink efficiency, security, and human value. At the same time, rising geopolitical tension, information warfare, and emerging technologies like quantum computing are redefining what risk really looks like — both for businesses and for people.

This conversation also explores the human side of 2025: leadership under pressure, the importance of culture, mentorship, and professionalism, and why kindness, trust, and preparation are no longer “soft skills,” but strategic advantages.

From executive protection and estate management to corporate security, AI leverage, and career longevity, this episode highlights where leaders must adapt — and where getting it wrong even once can have lasting consequences.

KEY HIGHLIGHTS
AI has crossed a critical threshold — no longer theoretical, but operational, accessible, and increasingly powerful
Automation and optimization are now survival tools, not optional efficiencies
Security threats are no longer siloed — digital, physical, personal, and reputational risks are deeply interconnected
Quantum computing looms as a disruptive force that could render today's encryption obsolete
Executive protection is expanding beyond the C-suite into broader personnel and brand security
Leadership today requires relationship capital, situational awareness, and long-term thinking
Culture, kindness, and mentorship deliver measurable performance and retention advantages
Careers are becoming less linear — leverage, adaptability, and mindset matter more than pedigree

To hear more episodes of The Fearless Mindset podcast, you can go to https://the-fearless-mindset.simplecast.com/
or listen on major podcasting platforms such as Apple, Google Podcasts, Spotify, etc. You can also subscribe to the Fearless Mindset YouTube Channel to watch episodes on video. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Sander Schulhoff is an AI researcher specializing in AI security, prompt injection, and red teaming. He wrote the first comprehensive guide on prompt engineering and ran the first-ever prompt injection competition, working with top AI labs and companies. His dataset is now used by Fortune 500 companies to benchmark the security of their AI systems. He has spent more time than anyone alive studying how attackers break AI systems, and what he has found isn't reassuring: the guardrails companies are buying don't actually work, and we've been lucky we haven't seen more harm so far, only because AI agents aren't yet capable enough to do real damage.

We discuss:
1. The difference between jailbreaking and prompt injection attacks on AI systems
2. Why AI guardrails don't work
3. Why we haven't seen major AI security incidents yet (but soon will)
4. Why AI browser agents are vulnerable to hidden attacks embedded in webpages
5. The practical steps organizations should take instead of buying ineffective security tools
6. Why solving this requires merging classical cybersecurity expertise with AI knowledge

Brought to you by:
• Datadog—Now home to Eppo, the leading experimentation and feature flagging platform: https://www.datadoghq.com/lenny
• Metronome—Monetization infrastructure for modern software companies: https://metronome.com/
• GoFundMe Giving Funds—Make year-end giving easy: http://gofundme.com/lenny

Transcript: https://www.lennysnewsletter.com/p/the-coming-ai-security-crisis

My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/181089452/my-biggest-takeaways-from-this-conversation

Where to find Sander Schulhoff:
• X: https://x.com/sanderschulhoff
• LinkedIn: https://www.linkedin.com/in/sander-schulhoff
• Website: https://sanderschulhoff.com
• AI Red Teaming and AI Security Masterclass on Maven: https://bit.ly/44lLSbC

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Sander Schulhoff and AI security
(05:14) Understanding AI vulnerabilities
(11:42) Real-world examples of AI security breaches
(17:55) The impact of intelligent agents
(19:44) The rise of AI security solutions
(21:09) Red teaming and guardrails
(23:44) Adversarial robustness
(27:52) Why guardrails fail
(38:22) The lack of resources addressing this problem
(44:44) Practical advice for addressing AI security
(55:49) Why you shouldn't spend your time on guardrails
(59:06) Prompt injection and agentic systems
(01:09:15) Education and awareness in AI security
(01:11:47) Challenges and future directions in AI security
(01:17:52) Companies that are doing this well
(01:21:57) Final thoughts and recommendations

Referenced:
• AI prompt engineering in 2025: What works and what doesn't | Sander Schulhoff (Learn Prompting, HackAPrompt): https://www.lennysnewsletter.com/p/ai-prompt-engineering-in-2025-sander-schulhoff
• The AI Security Industry is Bullshit: https://sanderschulhoff.substack.com/p/the-ai-security-industry-is-bullshit
• The Prompt Report: Insights from the Most Comprehensive Study of Prompting Ever Done: https://learnprompting.org/blog/the_prompt_report?srsltid=AfmBOoo7CRNNCtavzhyLbCMxc0LDmkSUakJ4P8XBaITbE6GXL1i2SvA0
• OpenAI: https://openai.com
• Scale: https://scale.com
• Hugging Face: https://huggingface.co
• Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition: https://www.semanticscholar.org/paper/Ignore-This-Title-and-HackAPrompt%3A-Exposing-of-LLMs-Schulhoff-Pinto/f3de6ea08e2464190673c0ec8f78e5ec1cd08642
• Simon Willison's Weblog: https://simonwillison.net
• ServiceNow: https://www.servicenow.com
• ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts: https://thehackernews.com/2025/11/servicenow-ai-agents-can-be-tricked.html
• Alex Komoroske on X: https://x.com/komorama
• Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack: https://arstechnica.com/information-technology/2022/09/twitter-pranksters-derail-gpt-3-bot-with-newly-discovered-prompt-injection-hack
• MathGPT: https://math-gpt.org
• 2025 Las Vegas Cybertruck explosion: https://en.wikipedia.org/wiki/2025_Las_Vegas_Cybertruck_explosion
• Disrupting the first reported AI-orchestrated cyber espionage campaign: https://www.anthropic.com/news/disrupting-AI-espionage
• Thinking like a gardener not a builder, organizing teams like slime mold, the adjacent possible, and other unconventional product advice | Alex Komoroske (Stripe, Google): https://www.lennysnewsletter.com/p/unconventional-product-advice-alex-komoroske
• Prompt Optimization and Evaluation for LLM Automated Red Teaming: https://arxiv.org/abs/2507.22133
• MATS Research: https://substack.com/@matsresearch
• CBRN: https://en.wikipedia.org/wiki/CBRN_defense
• CaMeL offers a promising new direction for mitigating prompt injection attacks: https://simonwillison.net/2025/Apr/11/camel
• Trustible: https://trustible.ai
• Repello: https://repello.ai
• Do not write that jailbreak paper: https://javirando.com/blog/2024/jailbreaks

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
As organizations race to adopt AI, many discover an uncomfortable truth: ambition often outpaces readiness. In this episode of the ITSPmagazine Brand Story Podcast, host Sean Martin speaks with Julian Hamood, Founder and Chief Visionary Officer at TrustedTech, about what it really takes to operationalize AI without amplifying risk, chaos, or misinformation.

Julian shares that most organizations are eager to activate tools like AI agents and copilots, yet few have addressed the underlying condition of their environments. Unstructured data sprawl, fragmented cloud architectures, and legacy systems create blind spots that AI does not fix. Instead, AI accelerates whatever already exists, good or bad.

A central theme of the conversation is readiness. Julian explains that AI success depends on disciplined data classification, permission hygiene, and governance before automation begins. Without that groundwork, organizations risk exposing sensitive financial, HR, or executive data to unintended audiences simply because an AI system can surface it.

The discussion also explores the operational reality beneath the surface. Most environments are a patchwork of Azure, AWS, on-prem infrastructure, SaaS platforms, and custom applications, often shaped by multiple IT leaders over time. When AI is layered onto this complexity without architectural clarity, inaccurate outputs and flawed business decisions quickly follow.

Sean and Julian also examine how AI initiatives often emerge from unexpected places. Legal teams, business units, and individual contributors now build their own AI workflows using low-code and no-code tools, frequently outside formal IT oversight. At the same time, founders and CFOs push for rapid AI adoption while resisting the investment required to clean and secure the foundation.

The episode highlights why AI programs are never one-and-done projects. Ongoing maintenance, data validation, and security oversight are essential as inputs change and systems evolve. Julian emphasizes that organizations must treat AI as a permanent capability on the roadmap, not a short-term experiment.

Ultimately, the conversation frames AI not as a shortcut, but as a force multiplier. When paired with disciplined architecture and trusted guidance, AI enables scale, speed, and confidence. Without that discipline, it simply magnifies existing problems.

Note: This story contains promotional content. Learn more.

GUEST
Julian Hamood, Founder and Chief Visionary Officer at TrustedTech | On LinkedIn: https://www.linkedin.com/in/julian-hamood/

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Spotlight Brand Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Highlight Brand Story: https://www.studioc60.com/content-creation#highlight

Keywords: sean martin, julian hamood, trusted tech, ai readiness, data governance, ai security, enterprise ai, brand story, brand marketing, marketing podcast, brand story podcast, brand spotlight

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.