POPULARITY
Categories
In this episode, the hosts discuss the seismic shift in the application security landscape triggered by the rise of Large Language Models (LLMs) and Anthropic's "Claude Code". They highlight the massive economic repercussions of these AI advancements, noting that billions in market value were wiped from traditional cybersecurity stocks as investors begin to believe frontier models might eventually write perfectly secure code. The hosts critique the industry's historical reliance on "checkbox" compliance tools like SAST, DAST, and SCA, arguing that these "archaic" methods are being replaced by AI-native strategies capable of reasoning through complex logic flaws. While they acknowledge that AI can suffer from "reasoning drift" and still requires deterministic validation to avoid false positives, they emphasize that security professionals must adapt by building custom "skills" and focusing on governance and observability. The discussion concludes that as developers move to "AI speed," the traditional role of the AppSec professional is evolving into a "Jarvis-like" orchestrator who manages automated workflows and infuses institutional knowledge into AI agents to maintain oversight without slowing down production.
Send a textWant a clear path from CISSP to top-tier pay without getting lost in buzzwords? We break down five high-income specialties that pair perfectly with CISSP leadership: modern GRC, cloud security as code, AI ethics and governance, advanced identity, and software supply chain security. Along the way, we unpack how AI reasoning tools like Claude Code Security are reshaping AppSec by cutting false positives and detecting logic flaws scanners miss, and we translate that shift into concrete workflows, better guardrails, and faster delivery.We start with the career pivot many leaders are making—moving from generalist security management to “decision architect.” That means pairing risk fluency with hands-on understanding of Terraform, Kubernetes, and CI/CD gates, then proving value through resilient architectures and evidence-driven dashboards for boards. You'll hear why GRC is exploding under new enforcement trends, how to automate continuous evidence to beat audit fatigue, and where vCISO opportunities command premium rates when strategy meets measurable outcomes.From there, we get practical. We walk through cloud guardrails that stop drift before it hits prod, share how to navigate shared responsibility with AWS and Azure, and outline identity-first zero trust that tames API key sprawl and enables passwordless access. On AI, we go deep on shadow AI containment, prompt-injection red teaming, model transparency, and data loss prevention tuned for embeddings—governance that accelerates, not blocks. Finally, we turn to software supply chain security: SBOM mandates, signed artifacts, dependency risk, and the DevSecOps policies that keep pipelines moving while raising assurance.If you're mapping your next move, we also compare salary bands across roles and highlight bridge certifications—CISM for program leadership, AI governance credentials for compliance depth, and CISA for audit rigor—to level up fast. Subscribe, share this with a teammate plotting their niche, and leave a quick review to tell us which specialty you're pursuing next.Gain exclusive access to 360 FREE CISSP Practice Questions at FreeCISSPQuestions.com and have them delivered directly to your inbox! Don't miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!
There's a particular kind of clarity you get when you talk to someone who spends their days breaking into things for a living. Not with malice — with purpose. John Steigerwald, known to most in the industry simply as "Stigs," co-founded White Knight Labs in 2016 with a mission that sounds almost disarmingly simple: build the best penetration testing team anyone has ever seen, and actually deliver results. Nearly a decade later, the company has grown to 40 people, gone international, and is busier than ever. The question worth asking is: why?The uncomfortable answer, according to Stigs, is that the fundamental problems haven't changed. At all."Honestly, it's still 2015," he said during our most recent conversation on ITSPmagazine's Brand Story series. Not as a metaphor. As a diagnosis. The same misconfigurations, the same weak identity policies, the same unlocked back doors that red teamers were exploiting a decade ago are still wide open today. The apps built in a COVID-era frenzy — pushed out fast, tested never — are now running critical business infrastructure. And the organizations using them are only finding out when something breaks.What's changed is the surface area. Cloud, AI, Microsoft 365, vibe-coded production apps — each new layer of technology gets adopted at speed, and each one arrives carrying the same original sin: no one turned on the basics. Stigs used Microsoft 365 as a pointed example. Millions of businesses are running on it with DMARC turned off, default configurations untouched, Copilot layered on top, and not a single CIS Benchmark policy applied. "Every client is vulnerable," he said. "Not just 10% of clients. Every client."That's a striking statement. It's also, if you've been paying attention to breach headlines, not a surprising one.The AI angle adds a new and almost darkly comedic wrinkle. Vibe coding — the practice of using AI tools like Cursor or Claude to generate production-ready code at speed — has given entry-level developers intermediate-level output. Which sounds great, until you realize that the AI models many of them leaned on were trained on outdated, sometimes vulnerable data. Stigs described visiting multiple clients with nearly identical security weaknesses, all tracing back to the same ChatGPT-generated setup instructions. "You and your neighbor did the same thing," he told one client. That's not just a funny anecdote. It's a warning about what happens when an entire industry bootstraps its infrastructure from the same flawed source.And yet, Stigs isn't anti-AI. He uses it every day. He just sees it with the clarity of someone who also finds the holes it leaves behind. His prediction for the near future: a massive wave of secure code review requests, as companies start reckoning with the vibe-coded backlog they've been quietly accumulating. AppSec is about to have a very good year.Looking forward, White Knight Labs is watching the growing intersection of private sector expertise and government infrastructure testing with particular interest. Critical infrastructure in America, long overdue for rigorous physical and embedded testing, is starting to receive that attention. Stigs and his team are already in the room.What makes White Knight Labs different isn't just technical skill — it's the ability to communicate what they find in language that actually lands. In an industry full of reports that gather dust, that matters. The best penetration test in the world is useless if no one acts on it.The door is open. It's been open for years. The question is who you call to finally lock it.To learn more about White Knight Labs, visit their website or reach out directly. Listen to the full conversation on ITSPmagazine.GUESTJohn StigerwaltFounder at White Knight Labs | Red Team Operations Leaderhttps://www.linkedin.com/in/john-stigerwalt-90a9b4110/RESOURCESWhite Knight Labs: https://whiteknightlabs.com_____________________________________________________________Are you interested in telling your story?▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Neste episódio do DevSecOps Podcast, fomos direto no ponto: SCA não é sinônimo de caçar CVE em biblioteca open source. Durante anos, muita empresa reduziu Software Composition Analysis a “rodar ferramenta e ver se tem vulnerabilidade no npm ou no Maven”. Só que o jogo ficou mais complexo. Hoje falamos de dependências transitivas invisíveis, pacotes abandonados, licenças incompatíveis, ataques à cadeia de suprimentos e componentes proprietários que ninguém inventaria no SBOM porque “não é open source”. Spoiler: risco não pergunta licença. Discutimos:Por que SCA precisa olhar além do GitHub e entender o ecossistema inteiro da aplicaçãoO papel real do SBOM e onde ele falha na práticaSupply chain attacks e o que mudou depois de casos como Log4ShellDependências internas, pacotes privados e artefatos binários esquecidosLicenciamento como risco jurídico, não só técnicoComo integrar SCA de forma estratégica no pipeline e não virar mais um relatório ignoradoSe AppSec é armadura, SCA é o exame de sangue do software. E não adianta medir só colesterol quando o problema pode estar no fígado. Esse episódio é para quem já rodou ferramenta, já viu dashboard bonito e percebeu que ainda assim algo está faltando. Porque está mesmo.Become a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support.Apoio: Nova8, Snyk, Conviso, Gold Security, Digitalwolk e PurpleBird Security.
Ken Johnson and Seth Law examine the intensifying pressure on security practitioners as AI-driven development causes an unprecedented acceleration in industry velocity. A primary theme is the emergence of "shadow AI," where developers utilize unauthorized AI coding assistants and personal agents, introducing significant data classification risks and supply chain vulnerabilities. The discussion dives into technical concepts like AI agent "skills"—markdown files providing specialized directions—and the corresponding security risks found in new skill registries, such as malicious tools designed to exfiltrate credentials and crypto assets. The hosts also review 1Password's SCAM (Security Comprehension Awareness Measure), highlighting broad performance gaps in an AI's ability to detect phishing, with some models failing up to 65% of the time. To manage these unpredictable systems, the hosts advocate for a shift toward high-level validation roles, emphasizing the need for Subject Matter Expertise to combat "reasoning drift" and maintain safety through test-driven development and periodic "checkpoints". Ultimately, they conclude that while AI can simulate expertise, human oversight remains vital to secure the probabilistic nature of modern agentic workflows.
IA é ferramenta. Poderosa. Rápida. Escalável. E completamente indiferente ao que é certo ou errado. Neste episódio do DevSecOps Podcast, mergulhamos nos perigos reais da Inteligência Artificial além do hype e além do medo irracional. Falamos sobre modelos que aprendem vieses humanos, automação de desinformação em escala industrial, geração de código vulnerável com confiança absurda e a falsa sensação de segurança quando “a IA revisou”. IA não é ética. Não é moral. Não é consciente. É estatística com GPU. Discutimos também o impacto prático no desenvolvimento de software e na segurança de aplicações. Devs usando copilots sem validar saída. Times confiando em respostas geradas como se fossem verdade revelada. Ataques potencializados por modelos generativos. Engenharia social turbinada. Deepfakes cada vez mais convincentes. A IA amplia o melhor e o pior de nós. No fim, a pergunta não é se a IA é perigosa. Toda tecnologia poderosa é. A pergunta é: estamos usando com criticidade ou com preguiça intelectual? Porque quando a máquina erra, ela erra em escala. E quando o humano delega o pensamento, ele terceiriza a responsabilidade. E responsabilidade, meu amigo, não dá para fazer deploy automático.Become a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support.Apoio: Nova8, Snyk, Conviso, Gold Security, Digitalwolk e PurpleBird Security.
Amit Chita is the Field CTO at Mend.io. In this episode, he joins host Paul John Spaulding to discuss the future of AI appsec tooling, including how AI should be used as a force multiplier, not a replacement, new risks, and more. Securing The Build is brought to you by Mend.io, the leading application security solution, helping organizations reduce application risk efficiently. To learn more about our sponsor, visit https://mend.io.
In episode 312 of Absolute AppSec, the hosts discuss the double-edged sword of "vibe coding", noting that while AI agents often write better functional tests than humans, they frequently struggle with nuanced authorization patterns and inherit "upkeep costs" as foundational models change behavior over time. A central theme of the episode is that the greatest security risk to an organization is not AI itself, but an exhausted security team. The hosts explore how burnout often manifests as "silent withdrawal" and emphasize that managers must proactively draw out these issues within organizations that often treat security as a mere cost center. Additionally, they review new defensive strategies, such as TrapSec, a framework for deploying canary API endpoints to detect malicious scanning. They also highlight the value of security scorecarding—pioneered by companies like Netflix and GitHub—as a maturity activity that provides a holistic, blame-free view of application health by aggregating multiple metrics. The episode concludes with a reminder that technical tools like Semgrep remain essential for efficiency, even as practitioners increasingly leverage the probabilistic creativity of LLMs.
Nesse episódio a conversa foi direta e sem anestesia. Falamos sobre como empresas e profissionais de AppSec realmente evoluíram nos últimos anos, o que mudou de verdade e o que é só discurso bonito em slide corporativo. Spoiler: muita coisa avançou, mas muita gente ainda está brigando com problemas que já deveriam estar resolvidos há uma década. Também discutimos o descompasso clássico do mercado. Enquanto algumas organizações já deveriam estar olhando para o próximo nível de maturidade, automação real, decisões baseadas em risco e integração profunda com engenharia, outras ainda estão “começando AppSec” do zero. E aí vem a pergunta incômoda: isso é falta de tempo, de prioridade, de competência ou de coragem? Um episódio para quem quer entender onde estamos, onde deveríamos estar e por que maturidade em AppSec não é checklist, não é ferramenta e definitivamente não é cargo no LinkedIn.Become a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support.Apoio: Nova8, Snyk, Conviso, Gold Security, Digitalwolk e PurpleBird Security.
Ken Johnson and Seth Law examine the profound transformation of the security industry as AI tooling moves from simple generative models to sophisticated agentic architectures. A primary theme is the dramatic surge in development velocity, with some organizations seeing pull request volumes increase by over 800% as developers allow AI agents to operate nearly hands-off. This shift is redefining the role of Application Security practitioners, moving experts from manual tasks like manipulating Burp Suite requests to a validation-centric role where they spot-check complex findings generated by AI in minutes. The hosts characterize older security tools as "primitive" compared to modern AI analysis, which can now identify human-level flaws like complex authorization bypasses. A major technical highlight is the introduction of agent "skills"—markdown files containing instructions that empower coding assistants—and the associated emergence of new supply chain risks. They specifically reference research on malicious skills designed to exfiltrate crypto wallets and SSH credentials, warning that registries for these skills lack adequate security responses. To manage the inherent "reasoning drift" of AI, the hosts argue that test-driven development has become a critical safety requirement. Ultimately, they warn that the industry has already shifted fundamentally, and security professionals must lean into these new technologies immediately to avoid becoming obsolete in a day-to-day evolving landscape.
Neste episódio do DevSecOps Podcast, usamos a armadura do Homem de Ferro como desculpa elegante para falar de coisa séria: como montar um programa de AppSec que funciona no mundo real. Aqui não tem magia, tem engenharia. Assim como Tony Stark não começa salvando o mundo no Mark L, um programa de AppSec não nasce maduro. Falamos de fundamentos, evolução incremental, decisões técnicas difíceis e da diferença brutal entre ter ferramentas… e ter capacidade real. Jarvis vira métrica, sensores viram telemetria, armaduras viram processos. Tudo com pé no chão e código na mesa. Você vai ouvir sobre:por onde começar sem travar o timecomo alinhar AppSec com negócio, produto e Devmaturidade progressiva, não big bang corporativoporque cultura pesa mais que ferramentae o erro clássico de tentar “comprar” segurançaSe o seu AppSec hoje parece mais cosplay do que armadura funcional, esse episódio é pra você. Menos marketing, mais engenharia. Segurança que voa porque foi bem montada, não porque alguém prometeu.Become a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support.Apoio: Nova8, Snyk, Conviso, Gold Security, Digitalwolk e PurpleBird Security.
Ori Bendet, Vice President of Product Management at Checkmarx, joined Doug Green, Publisher of Technology Reseller News, to discuss how the acquisition of Tromzo strengthens Checkmarx's agentic application security strategy and reflects a broader shift in how organizations secure software in an AI-driven development era. Bendet explained that Checkmarx, a pioneer in application security with more than two decades of experience, has traditionally focused on helping organizations identify vulnerabilities early in the software development lifecycle (SDLC). However, the rapid adoption of AI-generated code has fundamentally changed the AppSec landscape. “The industry used to be fixated on finding vulnerabilities,” Bendet said. “Now the real challenge is fixing them at scale, in context, and without slowing developers down.” The Tromzo acquisition builds on Checkmarx's existing family of agentic tools, Checkmarx Assist, which already provides real-time remediation inside the developer IDE. Tromzo extends these capabilities deeper into the SDLC, enabling automated remediation at the repository and pull-request stages. Together, the technologies aim to “complete the loop” by delivering consistent, trusted remediation from early development through later stages of deployment. Bendet noted that AI is widening the gap between development velocity and security oversight, as significantly more code—and therefore more vulnerabilities—is being produced. At the same time, the application footprint itself is evolving to include AI components such as large language models, agents, and third-party AI services. “There is now a new AI element inside the application,” he said, “and organizations need AppSec solutions that understand and protect that expanded footprint.” Auto-remediation, once viewed skeptically by developers, is now gaining acceptance as AI agents gain a deeper understanding of application context. According to Bendet, modern agentic tools can remediate vulnerabilities while preserving business logic and minimizing disruption. “Developers no longer need to spend days undoing fixes that broke functionality,” he said. “The agent can understand the blast radius and refactor automatically.” Looking ahead, Bendet described a future where AppSec becomes more autonomous, with agents continuously testing, fixing, and validating applications while developers shift toward higher-level architectural and review roles. With proper guardrails in place, this evolution promises to reduce alert fatigue and allow teams to focus on innovation rather than remediation backlogs. More information about Checkmarx and its agentic application security approach is available at https://checkmarx.com/, with additional developer-focused resources at https://checkmarx.dev/.
In this episode of Resilient Cyber I sit down with Anshuman Bhartiya to discuss AI-native AppSec. Anshuman is a Staff Security Engineer at Lyft, Host of the The Boring AppSec Community podcast, and author of the AI Security Engineer newsletter on LinkedIn. Anshuman has quickly become an AppSec leader I highly respect and find myself learning from his content and perspectives on AppSec and Security Engineering in the era of AI, LLMs and Agents.
Dynamic Application Security Testing (DAST) has a reputation problem. It's noisy, slow, and often ignored by developers — especially in fast-moving CI/CD pipelines. In this episode of the TestGuild Podcast, we explore developer-focused DAST and why traditional AppSec tools struggle to gain adoption in modern DevOps teams. You'll learn: Why most DAST tools fail inside real-world CI/CD workflows What "shift-left security" actually means beyond marketing buzzwords How developer-first DAST reduces false positives and improves signal quality Where AI genuinely helps in security testing — and where it's mostly hype Practical steps QA, DevOps, and engineering leaders can take to reduce risk this quarter Our guest, Gadi Bashvitz, CEO at Bright Security, shares lessons from decades in cybersecurity, including building security tools that developers actually use — without slowing delivery. If you're responsible for test automation, DevSecOps, or application security, this episode will help you rethink how DAST should work in 2026 and beyond.
Synopsis L'épisode 0x284 lance 2026 avec PAtrick, Jacques et Vanessa, rejoints par Richer, pour un gros focus sur l'Open Banking, aussi appelé consumer-driven banking, au Canada : à quoi ressemble le cadre qui se met en place, comment l'accréditation et la gouvernance peuvent influencer la confiance, et quels nouveaux risques apparaissent quand des tiers et des intermédiaires accèdent à des données financières sensibles. Le trio enchaîne ensuite sur la gestion des vulnérabilités côté terrain : pourquoi une liste de « 50 CVE » ne suffit pas, et comment mieux prioriser en combinant exploitabilité, exposition réelle et impact business. Ils reviennent aussi sur un piège fréquent en DevSecOps : quand « tout le monde est responsable », l'imputabilité se dilue et les décisions critiques s'éternisent. Enfin, ils font le tour de l'actualité cyber : des actions policières importantes, dont Black Axe, à l'ironie très révélatrice de BreachForums qui se fait breacher. Un rappel concret que les fuites de données finissent souvent par devenir des pièces à conviction et des accélérateurs d'enquêtes. Nouvelles Patrick Bureau en gros se fait prendre AMP voit un problème avec la SAAQ… Vanessa Capsule - systèmes bancaire axé sur les consommateurs Jacques Patchez vos systèmes!! Le premier Patch Tuesday de 2026! Europol Arrests 34 Black Axe Members in Spain Over €5.9M Fraud and Organized Crime BreachForums Breached, Exposing 324K Cybercriminals BreachForums database leak Richer Structure over chaos par Tyler Andrew Cole - book Thinking fast and slow par Daniel Kahneman - book Crew Patrick Mathieu Vanessa Henri Richer Dinelle Jacques Sauvé Shamelessplug Join Hackfest/La French Connection Discord #La-French-Connection Join Hackfest us on Masodon POLAR - Québec - 29 Octobre 2026 Hackfest - Québec - 29-30-31 Octobre 2026 Crédits Montage audio par Hackfest Communication Music par Dynamic Range – Acid - Psy Tune Locaux virtuels par Streamyard
Neste episódio do DevSecOps Podcast, Cássio Batista Pereira explora o monitoramento de aplicações através da analogia com o Homem de Ferro. Ele discute o quaão crítico é ter um monitoramento inteligente e ativo para identificar o quanto antes uma ameaça ou mesmo um incidente, apresentando um ciclo de vida simples de log que inclui geração, centralização e visualização. O episódio enfatiza que é trabalhoso mas não impossível de centralizar informações sobre sua aplicação em um único lugar.Become a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support.Apoio: Nova8, Snyk, Conviso, Gold Security, Digitalwolk e PurpleBird Security.
AI doesn't break security, it exposes where it was already fragile. When automation starts making decisions faster than humans can audit, AppSec becomes the only thing standing between scale and catastrophe. In this episode, Ron sits down with Joshua Bregler, Senior Security Manager at McKinsey's QuantumBlack, to dissect how AI agents, pipelines, and dynamic permissions are reshaping application security. From prompt chaining attacks and MCP server sprawl to why static IAM is officially obsolete, this conversation gets brutally honest about what works, what doesn't, and where security teams are fooling themselves. Impactful Moments 00:00 – Introduction 02:15 – AI agents create identity chaos 04:00 – Static permissions officially dead 07:05 – AI security is still AppSec 09:30 – Prompt chaining becomes invisible attack 12:23 – Solving problems vs solving AI 15:03 – Ethics becomes an AI blind spot 17:47 – Identity is the next security failure 20:07 – Frameworks no longer enough alone 26:38– AI fixing insecure code in real time 32:15 – Secure pipelines before production Connect with our Guest Joshua Bregler on LinkedIn: https://www.linkedin.com/in/breglercissp/ Our Links Check out our upcoming events: https://www.hackervalley.com/livestreams Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com Continue the conversation by joining our Discord: https://hackervalley.com/discord Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Ken Johnson (cktricky on social media) and Seth Law are happy to announce a special episode of Absolute AppSec with Avi Douglen (sec_tigger on X), long-time OWASP Global Board of Directors member, founder and CEO of Bounce Security and co-author of the Threat Modeling Manifesto. The conversation ranges from Application Privacy related to Application Security, to participating in meetups and conferences, and finally OWASP as an Avi's experience as a board member.
AI isn't quietly changing software development… it's rewriting the rules while most security programs are still playing defense. When agents write code at machine speed, the real risk isn't velocity, it's invisible security debt compounding faster than teams can see it. In this episode, Ron Eddings sits down with Varun Badhwar, Co-Founder & CEO of Endor Labs, and Henrik Plate, Principal Security Researcher of Endor Labs, to break down how AI-assisted development is reshaping the software supply chain in real time. From MCP servers exploding across GitHub to agents trained on insecure code patterns, they analyze why traditional AppSec controls fail in an agent-driven world and what must replace them. This conversation pulls directly from Endor Labs' 2025 State of Dependency Management Report, revealing why most AI-generated code is functionally correct yet fundamentally unsafe, how malicious packages are already exploiting agent workflows, and why security has to exist inside the IDE, not after the pull request. Impactful Moments 00:00 – Introduction 02:00 – Star Wars meets cybersecurity culture 03:00 – Why this report matters now 04:00 – MCP adoption explodes overnight 10:00 – Can you trust MCP servers 12:00 – Malicious packages weaponize agents 14:00 – Code works, security fails 22:00 – Hooks expose agent behavior 28:30 – 2026 means longer lunches 33:00 – How Endor Labs fixes this Links Connect with our Varun on LinkedIn: https://www.linkedin.com/in/vbadhwar/ Connect with our Henrik on LinkedIn: https://www.linkedin.com/in/henrikplate/ Check out Endor Labs State of Dependency Management 2025: https://www.endorlabs.com/lp/state-of-dependency-management-2025 Check out our upcoming events: https://www.hackervalley.com/livestreams Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com Continue the conversation by joining our Discord: https://hackervalley.com/discord Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI. Segment Resources: github.com/coreruleset/coreruleset coreruleset.org The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive, but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain. As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard OWASP, achieving OWASP Flagship project status in March 2025. Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape. Segment Resources: https://owasp.org/www-project-aibom/ https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/ https://www.youtube.com/@OWASPAIBOM https://www.youtube.com/@RobvanderVeer-ex3gj https://owaspai.org/ Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to manage these emerging threats. The session will present a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI. Segment Resources: aivss.owasp.org https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-363
In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI. Segment Resources: github.com/coreruleset/coreruleset coreruleset.org The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive, but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain. As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard OWASP, achieving OWASP Flagship project status in March 2025. Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape. Segment Resources: https://owasp.org/www-project-aibom/ https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/ https://www.youtube.com/@OWASPAIBOM https://www.youtube.com/@RobvanderVeer-ex3gj https://owaspai.org/ Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to manage these emerging threats. The session will present a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI. Segment Resources: aivss.owasp.org https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Show Notes: https://securityweekly.com/asw-363
In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI. Segment Resources: github.com/coreruleset/coreruleset coreruleset.org The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive, but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain. As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard OWASP, achieving OWASP Flagship project status in March 2025. Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape. Segment Resources: https://owasp.org/www-project-aibom/ https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/ https://www.youtube.com/@OWASPAIBOM https://www.youtube.com/@RobvanderVeer-ex3gj https://owaspai.org/ Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to manage these emerging threats. The session will present a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI. Segment Resources: aivss.owasp.org https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-363
In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI. Segment Resources: github.com/coreruleset/coreruleset coreruleset.org The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive, but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain. As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard OWASP, achieving OWASP Flagship project status in March 2025. Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape. Segment Resources: https://owasp.org/www-project-aibom/ https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/ https://www.youtube.com/@OWASPAIBOM https://www.youtube.com/@RobvanderVeer-ex3gj https://owaspai.org/ Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to manage these emerging threats. The session will present a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI. Segment Resources: aivss.owasp.org https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Show Notes: https://securityweekly.com/asw-363
Using OWASP SAMM to assess and improve compliance with the Cyber Resilience Act (CRA) is an excellent strategy, as SAMM provides a framework for secure development practices such as secure by design principles and handling vulns. Segment Resources: https://owaspsamm.org/ https://cybersecuritycoalition.be/resource/a-strategic-approach-to-product-security-with-owasp-samm/ As genAI becomes a more popular tool in software engineering, the definition of “secure coding” is changing. This session explores how artificial intelligence is reshaping the way developers learn, apply, and scale secure coding practices — and how new risks emerge when machines start generating the code themselves. We'll dive into the dual challenge of securing both human-written and AI-assisted code, discuss how enterprises can validate AI outputs against existing security standards, and highlight practical steps teams can take to build resilience into the entire development pipeline. Join us as we look ahead to the convergence of secure software engineering and AI security — where trust, transparency, and tooling will define the future of code safety. Segment Resources: https://manicode.com/ai/ Understand the history of threat modeling with Adam Shostack. Learn how threat modeling has evolved with the Four Question Framework and can work in your organizations in the wake of the AI revolution. Whether you're launching a formal Security Champions program or still figuring out where to start, there's one truth every security leader needs to hear: You already have allies in your org -- they're just waiting to be activated. In this session, we'll explore how identifying and empowering your internal advocates is the fastest, most sustainable way to drive security culture change. These are your early adopters: the developers, engineers, and team leads who already “get it,” even if their title doesn't say “security.” We'll unpack: Why you need help from people outside the security org to actually be effective Where to find your natural allies (hint: it starts with listening, not preaching) How to support and energize those allies so they influence the majority What behavioral science tells us about spreading change across an organization Segment Resources: Security Champion Success Guide: https://securitychampionsuccessguide.org/ Related interviews/podcasts: https://www.youtube.com/playlist?list=PLPb14P8f4T1ITv3p3Y3XtKsyEAA8W526h How to measure success and impact of culture change and champions: https://www.linkedin.com/pulse/from-soft-skills-hard-data-measuring-success-security-yhmse/ Global Community of Champions sign up: https://docs.google.com/forms/d/e/1FAIpQLScyXPAMf9M8idpDMwO4p2h5Ng8I0ffofZuY70BbmgCZNPUS5Q/viewform This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-362
Using OWASP SAMM to assess and improve compliance with the Cyber Resilience Act (CRA) is an excellent strategy, as SAMM provides a framework for secure development practices such as secure by design principles and handling vulns. Segment Resources: https://owaspsamm.org/ https://cybersecuritycoalition.be/resource/a-strategic-approach-to-product-security-with-owasp-samm/ As genAI becomes a more popular tool in software engineering, the definition of "secure coding" is changing. This session explores how artificial intelligence is reshaping the way developers learn, apply, and scale secure coding practices — and how new risks emerge when machines start generating the code themselves. We'll dive into the dual challenge of securing both human-written and AI-assisted code, discuss how enterprises can validate AI outputs against existing security standards, and highlight practical steps teams can take to build resilience into the entire development pipeline. Join us as we look ahead to the convergence of secure software engineering and AI security — where trust, transparency, and tooling will define the future of code safety. Segment Resources: https://manicode.com/ai/ Understand the history of threat modeling with Adam Shostack. Learn how threat modeling has evolved with the Four Question Framework and can work in your organizations in the wake of the AI revolution. Whether you're launching a formal Security Champions program or still figuring out where to start, there's one truth every security leader needs to hear: You already have allies in your org -- they're just waiting to be activated. In this session, we'll explore how identifying and empowering your internal advocates is the fastest, most sustainable way to drive security culture change. These are your early adopters: the developers, engineers, and team leads who already "get it," even if their title doesn't say "security." We'll unpack: Why you need help from people outside the security org to actually be effective Where to find your natural allies (hint: it starts with listening, not preaching) How to support and energize those allies so they influence the majority What behavioral science tells us about spreading change across an organization Segment Resources: Security Champion Success Guide: https://securitychampionsuccessguide.org/ Related interviews/podcasts: https://www.youtube.com/playlist?list=PLPb14P8f4T1ITv3p3Y3XtKsyEAA8W526h How to measure success and impact of culture change and champions: https://www.linkedin.com/pulse/from-soft-skills-hard-data-measuring-success-security-yhmse/ Global Community of Champions sign up: https://docs.google.com/forms/d/e/1FAIpQLScyXPAMf9M8idpDMwO4p2h5Ng8I0ffofZuY70BbmgCZNPUS5Q/viewform This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Show Notes: https://securityweekly.com/asw-362
Using OWASP SAMM to assess and improve compliance with the Cyber Resilience Act (CRA) is an excellent strategy, as SAMM provides a framework for secure development practices such as secure by design principles and handling vulns. Segment Resources: https://owaspsamm.org/ https://cybersecuritycoalition.be/resource/a-strategic-approach-to-product-security-with-owasp-samm/ As genAI becomes a more popular tool in software engineering, the definition of "secure coding" is changing. This session explores how artificial intelligence is reshaping the way developers learn, apply, and scale secure coding practices — and how new risks emerge when machines start generating the code themselves. We'll dive into the dual challenge of securing both human-written and AI-assisted code, discuss how enterprises can validate AI outputs against existing security standards, and highlight practical steps teams can take to build resilience into the entire development pipeline. Join us as we look ahead to the convergence of secure software engineering and AI security — where trust, transparency, and tooling will define the future of code safety. Segment Resources: https://manicode.com/ai/ Understand the history of threat modeling with Adam Shostack. Learn how threat modeling has evolved with the Four Question Framework and can work in your organizations in the wake of the AI revolution. Whether you're launching a formal Security Champions program or still figuring out where to start, there's one truth every security leader needs to hear: You already have allies in your org -- they're just waiting to be activated. In this session, we'll explore how identifying and empowering your internal advocates is the fastest, most sustainable way to drive security culture change. These are your early adopters: the developers, engineers, and team leads who already "get it," even if their title doesn't say "security." We'll unpack: Why you need help from people outside the security org to actually be effective Where to find your natural allies (hint: it starts with listening, not preaching) How to support and energize those allies so they influence the majority What behavioral science tells us about spreading change across an organization Segment Resources: Security Champion Success Guide: https://securitychampionsuccessguide.org/ Related interviews/podcasts: https://www.youtube.com/playlist?list=PLPb14P8f4T1ITv3p3Y3XtKsyEAA8W526h How to measure success and impact of culture change and champions: https://www.linkedin.com/pulse/from-soft-skills-hard-data-measuring-success-security-yhmse/ Global Community of Champions sign up: https://docs.google.com/forms/d/e/1FAIpQLScyXPAMf9M8idpDMwO4p2h5Ng8I0ffofZuY70BbmgCZNPUS5Q/viewform This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-362
In episode 307 of Absolute AppSec, hosts Ken and Seth conduct a retrospective on the application security landscape of 2025. They conclude that their previous predictions were largely accurate, particularly regarding the rise of prompt injection, AI-backed attacks, and the industry-wide shift toward per-token billing models. A major theme of the year was the solidification of supply chain security as a critical pillar of AppSec, driven by notable incidents such as Shai Hulud and React for Shell. The hosts also share insights from their four-day training course on utilizing LLMs for secure code review, noting that while AI development is becoming more prevalent, most practitioners are still in the nascent stages of building custom tooling. Much of the discussion focuses on the Model Context Protocol (MCP); while it offers significant value for agentic workflows, the hosts criticize its current lack of robust security controls, specifically highlighting issues with OAuth implementations and short timeouts in existing clients. Finally, they discuss how the industry is moving toward a more nuanced balance between deterministic tools like Semgrep and the probabilistic creativity of LLMs to increase efficiency in security consulting.
Using OWASP SAMM to assess and improve compliance with the Cyber Resilience Act (CRA) is an excellent strategy, as SAMM provides a framework for secure development practices such as secure by design principles and handling vulns. Segment Resources: https://owaspsamm.org/ https://cybersecuritycoalition.be/resource/a-strategic-approach-to-product-security-with-owasp-samm/ As genAI becomes a more popular tool in software engineering, the definition of "secure coding" is changing. This session explores how artificial intelligence is reshaping the way developers learn, apply, and scale secure coding practices — and how new risks emerge when machines start generating the code themselves. We'll dive into the dual challenge of securing both human-written and AI-assisted code, discuss how enterprises can validate AI outputs against existing security standards, and highlight practical steps teams can take to build resilience into the entire development pipeline. Join us as we look ahead to the convergence of secure software engineering and AI security — where trust, transparency, and tooling will define the future of code safety. Segment Resources: https://manicode.com/ai/ Understand the history of threat modeling with Adam Shostack. Learn how threat modeling has evolved with the Four Question Framework and can work in your organizations in the wake of the AI revolution. Whether you're launching a formal Security Champions program or still figuring out where to start, there's one truth every security leader needs to hear: You already have allies in your org -- they're just waiting to be activated. In this session, we'll explore how identifying and empowering your internal advocates is the fastest, most sustainable way to drive security culture change. These are your early adopters: the developers, engineers, and team leads who already "get it," even if their title doesn't say "security." We'll unpack: Why you need help from people outside the security org to actually be effective Where to find your natural allies (hint: it starts with listening, not preaching) How to support and energize those allies so they influence the majority What behavioral science tells us about spreading change across an organization Segment Resources: Security Champion Success Guide: https://securitychampionsuccessguide.org/ Related interviews/podcasts: https://www.youtube.com/playlist?list=PLPb14P8f4T1ITv3p3Y3XtKsyEAA8W526h How to measure success and impact of culture change and champions: https://www.linkedin.com/pulse/from-soft-skills-hard-data-measuring-success-security-yhmse/ Global Community of Champions sign up: https://docs.google.com/forms/d/e/1FAIpQLScyXPAMf9M8idpDMwO4p2h5Ng8I0ffofZuY70BbmgCZNPUS5Q/viewform This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Show Notes: https://securityweekly.com/asw-362
⬥EPISODE NOTES⬥Modern application development depends on open source packages moving at extraordinary speed. Paul McCarty, Offensive Security Specialist focused on software supply chain threats, explains why that speed has quietly reshaped risk across development pipelines, developer laptops, and CI environments.JavaScript dominates modern software delivery, and the npm registry has become the largest package ecosystem in the world. Millions of packages, thousands of daily updates, and deeply nested dependency chainsഴ് often exceeding a thousand indirect dependencies per application. That scale creates opportunity, not only for innovation, but for adversaries who understand how developers actually build software.This conversation focuses on a shift that security leaders can no longer ignore. Malicious packages are not exploiting accidental coding errors. They are intentionally engineered to steal credentials, exfiltrate secrets, and compromise environments long before traditional security tools see anything wrong. Attacks increasingly begin on developer machines through social engineering and poisoned repositories, then propagate into CI pipelines where access density and sensitive credentials converge.Paul outlines why many existing security approaches fall short. Vulnerability databases were built for mistakes, not hostile code. AppSec teams are overloaded burning down backlogs. Security operations teams rarely receive meaningful telemetry from build systems. The result is a visibility gap where malicious code can run, disappear, and leave organizations unsure what was touched or stolen.The episode also explores why simple advice like “only use vetted packages” fails in practice. Open source ecosystems move too fast for manual approval models, and internal package repositories often collapse under friction. Meanwhile, attackers exploit maintainer accounts, typosquatting domains, and ecosystem trust to reach billions of downstream installations in a single event.This discussion challenges security leaders to rethink how software supply chain risk is defined, detected, and owned. The problem is no longer theoretical, and it no longer lives only in development teams. It sits at the intersection of intellectual property, identity, and delivery velocity, demanding attention from anyone responsible for protecting modern software-driven organizations.⬥GUEST⬥Paul McCarty, NPM Hacker and Software Supply Chain Researcher | On LinkedIn: https://www.linkedin.com/in/mccartypaul/⬥HOST⬥Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com⬥RESOURCES⬥LinkedIn Post: https://www.linkedin.com/posts/mccartypaul_i-want-to-introduce-you-to-my-latest-project-activity-7396297753196363776-1N-TOpen Source Malware Database: https://opensourcemalware.comOpenSSF Scorecard Project: https://securityscorecards.dev⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity Podcast:
⬥EPISODE NOTES⬥Modern application development depends on open source packages moving at extraordinary speed. Paul McCarty, Offensive Security Specialist focused on software supply chain threats, explains why that speed has quietly reshaped risk across development pipelines, developer laptops, and CI environments.JavaScript dominates modern software delivery, and the npm registry has become the largest package ecosystem in the world. Millions of packages, thousands of daily updates, and deeply nested dependency chainsഴ് often exceeding a thousand indirect dependencies per application. That scale creates opportunity, not only for innovation, but for adversaries who understand how developers actually build software.This conversation focuses on a shift that security leaders can no longer ignore. Malicious packages are not exploiting accidental coding errors. They are intentionally engineered to steal credentials, exfiltrate secrets, and compromise environments long before traditional security tools see anything wrong. Attacks increasingly begin on developer machines through social engineering and poisoned repositories, then propagate into CI pipelines where access density and sensitive credentials converge.Paul outlines why many existing security approaches fall short. Vulnerability databases were built for mistakes, not hostile code. AppSec teams are overloaded burning down backlogs. Security operations teams rarely receive meaningful telemetry from build systems. The result is a visibility gap where malicious code can run, disappear, and leave organizations unsure what was touched or stolen.The episode also explores why simple advice like “only use vetted packages” fails in practice. Open source ecosystems move too fast for manual approval models, and internal package repositories often collapse under friction. Meanwhile, attackers exploit maintainer accounts, typosquatting domains, and ecosystem trust to reach billions of downstream installations in a single event.This discussion challenges security leaders to rethink how software supply chain risk is defined, detected, and owned. The problem is no longer theoretical, and it no longer lives only in development teams. It sits at the intersection of intellectual property, identity, and delivery velocity, demanding attention from anyone responsible for protecting modern software-driven organizations.⬥GUEST⬥Paul McCarty, NPM Hacker and Software Supply Chain Researcher | On LinkedIn: https://www.linkedin.com/in/mccartypaul/⬥HOST⬥Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com⬥RESOURCES⬥LinkedIn Post: https://www.linkedin.com/posts/mccartypaul_i-want-to-introduce-you-to-my-latest-project-activity-7396297753196363776-1N-TOpen Source Malware Database: https://opensourcemalware.comOpenSSF Scorecard Project: https://securityscorecards.dev⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity Podcast:
Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditional chat API platform to an AI agent platform and how security had to evolve to keep up.Yash spoke about the industry's obsession with "Zero Trust," arguing instead for a practical "Multi-Layer Trust" approach that assumes controls will fail . We dive deep into the specific architecture of securing AI agents, including the concept of a "Trust OS," dealing with new incident response definitions (is a wrong AI answer an incident?), and the critical need to secure the bridge between AI agents and customer environments .This episode is packed with actionable advice for AppSec engineers feeling overwhelmed by the speed of AI. Yash shares how his team embeds security engineers into sprint teams for real-time feedback, the importance of "AI CTFs" for security awareness, and why enabling employees with enterprise-grade AI tools is better than blocking them entirely .Questions asked:Guest Socials - Yash's LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(02:20) Who is Yash Kosaraju? (CISO at Sendbird)(03:30) Sendbird's Pivot: From Chat API to AI Agent Platform(05:00) Balancing Speed and Security in an AI Transition(06:50) Embedding Security Engineers into AI Sprint Teams(08:20) Threats in the AI Agent World (Data & Vendor Risks)(10:50) Blind Spots: "It's Microsoft, so it must be secure"(12:00) Securing AI Agents vs. AI-Embedded Applications(13:15) The Risk of Agents Making Changes in Customer Environments(14:30) Multi-Layer Trust vs. Zero Trust (Marketing vs. Reality) (17:30) Practical Multi-Layer Security: Device, Browser, Identity, MFA(18:25) What is "Trust OS"? A Foundation for Responsible AI(20:45) Balancing Agent Security vs. Endpoint Security(24:15) AI Incident Response: When an AI Gives a Wrong Answer(29:20) Security for Platform Engineers: Enabling vs. Blocking(30:45) Providing Enterprise AI Tools (Gemini, ChatGPT, Cursor) to Employees(32:45) Building a "Security as Enabler" Culture(36:15) What Questions to Ask AI Vendors (Paying with Data?)(39:20) Personal Use of Corporate AI Accounts(43:30) Using AI to Learn AI (Gemini Conversations)(45:00) The Stress on AppSec Engineers: "I Don't Know What I'm Doing"(48:20) The AI CTF: Gamifying Security Training(50:10) Fun Questions: Outdoors, Team Building, and Indian/Korean Food
Jeff Williams is the Co-Founder and CTO of Contrast Security, where he leads innovation in runtime-based application security. A pioneer in modern AppSec and co-founder of OWASP, Jeff has spent more than two decades helping organizations understand and manage software risk through instrumentation, context, and continuous learning.You can find Jeff on the following sites:LinkedInXHere are some links provided by Jeff:Contrast SecurityContrast Security X PLEASE SUBSCRIBE TO THE PODCASTSpotifyApple PodcastsYouTube MusicAmazon MusicRSS FeedYou can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.comCoffee and Open Source is hosted by Isaac Levin
This episode, the 304th of Absolute AppSec, features hosts Ken Johnson (@cktricky) and Seth Law (@sethlaw) discussing the crush of Q4 expectations, upcoming training opportunities, the recent updates to the OWASP Top Ten, and the impact of AI tools like XBow on application security (AppSec) consulting. The hosts discuss the shift in the OWASP Top Ten from focusing on vulnerabilities to focusing on risks, and the dual role the list now plays for both awareness/training and compliance. Shifting to recent funding of XBow, the overall consensus is that while AI tools dramatically improve process flow, scoping, and the speed of vulnerability identification for consultants, they won't replace the need for human experts for complex, bespoke systems, business logic flaws, or authorization issues. AI is commoditizing lower-level AppSec work.
In this episode Brad and Jordan sit down to discuss common web application security findings we've seen this year.Blog: https://offsec.blog/Youtube: https://www.youtube.com/@cyberthreatpovTwitter: https://x.com/cyberthreatpov Follow Spencer on social ⬇Spencer's Links: https://go.spenceralessi.com/links Work with Us: https://securit360.com | Find vulnerabilities that matter, learn about how we do internal pentesting here.
In this episode of Resilient Cyber, I sit down with longtime industry AppSec leader and Founder/CTO of Contrast Security, Jeff Williams, along with Contrast Security's Sr. Director of Product Security Naomi Buckwalter, to discuss all things Application Detection & Response (ADR), as well as the implications of AI-driven development.
Most organizations have security champions. Few have a real security culture.In this episode of AppSec Contradictions, Sean Martin explores why AppSec awareness efforts stall, why champion programs struggle to gain traction, and what leaders can do to turn intent into impact.
Tanya Janca is a globally recognized AppSec (application security) expert and founder of We Hack Purple. In this episode, she shares wild stories from the front lines of cybersecurity. She shares stories of when she was a penetration tester to an incident responder.You can sign up for her newsletter at https://newsletter.shehackspurple.ca/SponsorsSupport for this show comes from ThreatLocker®. ThreatLocker® is a Zero Trust Endpoint Protection Platform that strengthens your infrastructure from the ground up. With ThreatLocker® Allowlisting and Ringfencing™, you gain a more secure approach to blocking exploits of known and unknown vulnerabilities. ThreatLocker® provides Zero Trust control at the kernel level that enables you to allow everything you need and block everything else, including ransomware! Learn more at www.threatlocker.com.This episode is sponsored by Hims. Hims offers access to ED treatment options ranging from trusted generics that cost up to 95% less than brand names to Hard Mints, if prescribed. To get simple, online access to personalized, affordable care for ED, Hair Loss, Weight Loss, and more, visit https://hims.com/darknet.Support for this show comes from Drata. Drata is the trust management platform that uses AI-driven automation to modernize governance, risk, and compliance, helping thousands of businesses stay audit-ready and scale securely. Learn more at drata.com/darknetdiaries.View all active sponsors.Books Alice and Bob Learn Secure Coding by Tanya Janca Alice and Bob Learn Application Security by Tanya Janca
The silos between Application Security and Cloud Security are officially breaking down, and AI is the primary catalyst. In this episode, Tejas Dakve, Senior Manager, Application Security, Bloomberg Industry Group and Aditya Patel, VP of Cybersecurity Architecture discuss how the AI-driven landscape is forcing a fundamental change in how we secure our applications and infrastructure.The conversation explores why traditional security models and gates are "absolutely impossible" to maintain against the sheer speed and volume of AI-generated code . Learn why traditional threat modeling is no longer a one-time event, how the lines between AppSec and CloudSec are merging, and why the future of the industry belongs to "T-shaped engineers" with a multidisciplinary range of skills.Guest Socials - Tejas's Linkedin + Aditya's Linkedin Podcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(02:30) Who is Tejas Dakve? (AppSec)(03:40) Who is Aditya Patel? (CloudSec)(04:30) Common Use Cases for AI in Cloud & Applications(08:00) How AI Changed the Landscape for AppSec Teams(09:00) Why Traditional Security Models Don't Work for AI(11:00) AI is Breaking Down Security Silos (CloudSec & AppSec)(12:15) The "Hallucination" Problem: AI Knows Everything Until You're the Expert(12:45) The Speed & Volume of AI-Generated Code is the Real Challenge(14:30) How to Handle the AI Code Explosion? "Paved Roads"(15:45) From "Department of No" to "Department of Safe Yes"(16:30) Baking Security into the AI Lifecycle (Like DevSecOps)(18:25) Securing Agentic AI: Why IAM is More Important than the Chat(24:00) The Silo: AppSec Doesn't Have Visibility into Cloud IAM(25:00) Merging Threat Models: AppSec + CloudSec(26:20) Using New Frameworks: MITRE ATLAS & OWASP LLM Top 10(27:30) Threat Modeling Must Be a "Living & Breathing Process"(28:30) Using AI for Automated Threat Modeling(31:00) Building vs. Buying AI Security Tools(34:10) Prioritizing Vulnerabilities: Quality Over Quantity(37:20) The Rise of the "T-Shaped" Security Engineer(39:20) Building AI Governance with Cross-Functional Teams(40:10) Secure by Design for AI-Native Applications(44:10) AI Adoption Maturity: The 5 Stages of Grief(50:00) How the Security Role is Evolving with AI(55:20) Career Advice for Evolving in the Age of AI(01:00:00) Career Advice for Newcomers: Get an IT Help Desk Job(01:03:00) Fun Questions: Cats, Philanthropy, and Thai FoodResources discussed during the interview:Amazon Rufus: (Amazon's AI review summarizer) OWASP Top 10 for LLMsSTRIDE Threat Model: (Microsoft methodology) MITRE ATLASCloud Security Alliance (CSA) Maestro Framework CISA KEV (Known Exploited Vulnerabilities)Book: Range: Why Generalists Triumph in a Specialized World by David Epstein Anjali Charitable TrustAditya Patel's Blog
Episode 302 of Absolute AppSec has hosts Ken Johnson and Seth Law speculating on the upcoming Global AppSec DC conference, predicting the announcement of the OWASP Top Ten 2025 edition, with Brian Glass scheduled to discuss it on the podcast. The conversation shifts to a technical discussion of OpenAI's new browser, Atlas, which is built on Chromium and includes AI capabilities. The hosts noted concern over the discovered prompt instructions for Atlas, which direct the ChatGPT agent to use browser history and available APIs to find data from the user's logged-in sites to answer ambiguous queries or fulfill requests. This functionality raises significant security concerns, as the agent's ability to comb the cache and logged-in sites could be exploited, effectively creating a "honeypot for cross-site scripting" with malicious potential like unauthorized money transfers. The hosts discussed the lack of talk submissions on Mobile Context Protocol (MCP) security at the conference, despite its growing relevance in a world of AI agents and tooling. Finally, they highlighted a new tool called SlopGuard, developed to prevent the risk of AI hallucinating non-existent, potentially malicious packages (which occurs 5-21% of the time) and attempting to install them from registries like NPM.
Organizations pour millions into protecting running applications—yet attackers are targeting the delivery path itself.This episode of AppSec Contradictions reveals why CI/CD and cloud pipelines are becoming the new frontline in cybersecurity.
Send us a textIn this candid and cathartic episode, Ken and Mike unpack the chaos that is Q4 for security professionals. From budget burnouts to end-of-year pentesting sprints, they explore why the final months of the year feel like a perfect storm for stress. Tune in as they share hard-earned lessons, practical advice for maintaining your sanity, and some gentle reminders that not everything needs to ship before Christmas. Whether you're a tired vendor, an overwhelmed engineer, or just trying to make it to PTO, this episode is for you.
Brad Geesaman, Principal Security Engineer at Ghost, joins the podcast today to explore how AI and large language models are transforming the world of application security. The discussion starts with the concept of "toil"—the repetitive, exhausting work that drains AppSec teams as they struggle to keep up with mountains of security findings and alerts. Brad shares his insights on how LLMs can provide meaningful leverage by handling the heavy lifting of triage, classification, and evidence gathering, while keeping humans firmly in the loop for final decisions. They also discuss the seismic shift happening in the AppSec market, with AI-native approaches potentially disrupting traditional security tooling. Listen along to hear more about the future of secure coding and how artificial intelligence might finally give security teams the helicopter view they need to fight fires effectively.FOLLOW OUR SOCIAL MEDIA: ➜Twitter: @AppSecPodcast➜LinkedIn: The Application Security Podcast➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast Thanks for Listening! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For the 300th (!!!!) episode of the podcast, Seth and Ken reminisce on changes to the industry and overall approach to application security since inception. The hosts discussed the evolution of the industry, noting that once-popular approaches like blindly emulating "hip" Silicon Valley security programs and running unmanaged Security Champions Programs have fallen out of favor, as organizations now better understand that these approaches are not one-size-fits-all and require careful, metrics-driven management. While Bug Bounty Programs remain popular, they noted an increase in submissions from "skiddies" (script kiddies) that challenge program effectiveness and highlight the need for internal support and a proactive stance before rolling out a public program. Positively, they observed that the industry has become more mature, focusing on business value, metrics, and ROI , a move that may have been accelerated by recent economic pressures. Furthermore, security practices have improved, with the decline of common vulnerabilities like XSS and SQL Injection due to safer frameworks and browser controls, allowing AppSec professionals to focus on more complex issues, such as business logic flaws and focused threat analysis, while the once monolithic process of threat modeling has evolved into a more nimble, "point-in-time" assessment readily adopted by developers.
International law enforcement take down the Breachforums domains. Researchers link exploitation campaigns targeting Cisco, Palo Alto Networks, and Fortinet. Juniper Networks patches over 200 vulnerabilities. Apple and Google update their bug bounties. Evaluating AI use in application security (AppSec) programs. Microsegmentation can contain ransomware much faster and yield better cyber insurance terms. The new RondoDox botnet exploits over 50 vulnerabilities. Researchers tag 13 unpatched Ivanti Endpoint Manager flaws. Our guest is Jason Manar, CISO of Kaseya, sharing his insight into how the private and public sectors can work together for national security. Hackers mistake a decoy for glory. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Today we are joined by Jason Manar, CISO of Kaseya, sharing his insight into how the private and public sectors can/must work together for national security. Selected Reading FBI takes down BreachForums portal used for Salesforce extortion (Bleeping Computer) Cisco, Fortinet, Palo Alto Networks Devices Targeted in Coordinated Campaign (SecurityWeek) Juniper Networks Patches Critical Junos Space Vulnerabilities (OffSeq) Apple Announces $2 Million Bug Bounty Reward for the Most Dangerous Exploits (WIRED) Google Launches AI Bug Bounty with $30,000 Top Reward (Infosecurity Magazine) In AI We Trust? Increasing AI Adoption in AppSec Despite Limited Oversight (Fastly) Reducing Risk: Microsegmentation Means Faster Incident Response, Lower Insurance Premiums for Organizations (Akamai) RondoDox Botnet Takes ‘Exploit Shotgun' Approach (SecurityWeek) ZDI Drops 13 Unpatched Ivanti Endpoint Manager Vulnerabilities (SecurityWeek) Pro-Russian hackers caught bragging about attack on fake water utility (The Record) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
⬥GUEST⬥Pieter VanIperen, CISO and CIO of AlphaSense | On Linkedin: https://www.linkedin.com/in/pietervaniperen/⬥HOST⬥Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com⬥EPISODE NOTES⬥Real-World Principles for Real-World Security: A Conversation with Pieter VanIperenPieter VanIperen, the Chief Information Security and Technology Officer at AlphaSense, joins Sean Martin for a no-nonsense conversation that strips away the noise around cybersecurity leadership. With experience spanning media, fintech, healthcare, and SaaS—including roles at Salesforce, Disney, Fox, and Clear—Pieter brings a rare clarity to what actually works in building and running a security program that serves the business.He shares why being “comfortable being uncomfortable” is an essential trait for today's security leaders—not just reacting to incidents, but thriving in ambiguity. That distinction matters, especially when every new technology trend, vendor pitch, or policy update introduces more complexity than clarity. Pieter encourages CISOs to lead by knowing when to go deep and when to zoom out, especially in areas like compliance, AI, and IT operations where leadership must translate risks into outcomes the business cares about.One of the strongest points he makes is around threat intelligence: it must be contextual. “Generic threat intel is an oxymoron,” he argues, pointing out how the volume of tools and alerts often distracts from actual risks. Instead, Pieter advocates for simplifying based on principles like ownership, real impact, and operational context. If a tool hasn't been turned on for two months and no one noticed, he says, “do you even need it?”The episode also offers frank insight into vendor relationships. Pieter calls out the harm in trying to “tell a CISO what problems they have” rather than listening. He explains why true partnerships are based on trust, humility, and a long-term commitment—not transactional sales quotas. “If you disappear when I need you most, you're not part of the solution,” he says.For CISOs and vendors alike, this episode is packed with perspective you can't Google. Tune in to challenge your assumptions—and maybe your entire security stack.⬥SPONSORS⬥ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity Podcast:
SBOMs were supposed to be the ingredient label for software—bringing transparency, faster response, and stronger trust. But reality shows otherwise. Fewer than 1% of GitHub projects have policy-driven SBOMs. Only 15% of developer SBOM questions get answered. And while 86% of EU firms claim supply chain policies, just 47% actually fund them.So why do SBOMs stall as compliance artifacts instead of risk-reduction tools? And what happens when they do work?In this episode of AppSec Contradictions, Sean Martin examines:Why SBOM adoption is laggingThe cost of static SBOMs for developers, AppSec teams, and business leadersReal-world examples where SBOMs deliver measurable valueHow AISBOMs are extending transparency into AI models and dataCatch the full companion article in the Future of Cybersecurity newsletter for deeper analysis and more research.
Threat modeling is often called the foundation of secure software design—anticipating attackers, uncovering flaws, and embedding resilience before a single line of code is written. But does it really work in practice?In this episode of AppSec Contradictions, Sean Martin explores why threat modeling so often fails to deliver:It's treated as a one-time exercise, not a continuous processResearch shows teams who put risk first discover 2x more high-priority threatsYet fewer than 4 in 10 organizations use systematic threat modeling at scaleDrawing on insights from SANS, Forrester, and Gartner, Sean breaks down the gap between theory and reality—and why evolving our processes, not just our models, is the only path forward.
AI is everywhere in application security today — but instead of fixing the problem of false positives, it often makes the noise worse. In this first episode of AppSec Contradictions, Sean Martin explores why AI in application security is failing to deliver on its promises.False positives dominate AppSec programs, with analysts wasting time on irrelevant alerts, developers struggling with insecure AI-written code, and business leaders watching ROI erode. Industry experts like Forrester and Gartner warn that without strong governance, AI risks amplifying chaos instead of clarifying risk.This episode breaks down:• Why 70% of analyst time is wasted on false positives• How AI-generated code introduces new security risks• What “alert fatigue” means for developers, security teams, and business leaders• Why automating bad processes creates more noise, not less
Our guest today is Akansha Shukla, an information security professional with over 10 years of experience in application security, DevSecOps, and API security. We're discussing why API security remains one of the least mature areas of AppSec today and exploring the challenges developers face when securing APIs. Akansha shares her insights on incorporating APIs into threat modeling exercises, the ongoing struggles with API discovery and inventory management, and the authorization challenges highlighted in the OWASP API Security Top 10. The conversation also touches on whether "shift left" is truly dead and why we still haven't solved basic security problems like input validation despite having the frameworks to address them.FOLLOW OUR SOCIAL MEDIA: ➜Twitter: @AppSecPodcast➜LinkedIn: The Application Security Podcast➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast Thanks for Listening! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The conversation around cloud security is maturing beyond simple threat detection. As the industry grapples with alert fatigue, we explore the necessary shift from a reactive to a proactive security posture, questioning if a traditional SecOps model is sufficient for modern cloud environments.We spoke with Gil Geron, CEO of Orca Security, to examine the limitations of a SecOps-centric defense. SecOps teams are inherently reactive, they cannot be the sole guardians of cloud infrastructure. Instead, the conversation centers on a new blueprint: viewing cloud security as an end-to-end workflow that integrates development, deployment, and production runtime with a continuous feedback loop into policy.The role of AI is also explored, not just as a threat, but as an opportunity to empower security teams and make knowledge more accessible. We spoke about the power of context in reducing alert volume, citing a case where millions of vulnerabilities were prioritized down to a handful of actionable fixes.Guest Socials - Gil's LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter - Cloud Security BootCampIf you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity PodcastQuestions asked:(00:00) Introduction(02:12) Who is Gil Geron? From Check Point to CEO of Orca Security(02:54) What is Cloud Security in 2025? The Evolution to a Modern Workflow(05:50) How AI is Impacting the Cloud Security Landscape: A Salvation, Not a Risk(08:40) The Limits of a Reactive Approach: Why SecOps Can't Be Your Only Defense(12:15) The Surprising Truth: 95% of Cloud Malware is Introduced, Not Hacked(13:40) The Role of Identity in Cloud Security: The New Networking(18:00) The Current Cloud Security Landscape: From "Thumb Mistakes" to Neglected Assets(22:20) How CISOs are Modernizing Security by Modernizing Engineering Workflows(23:50) Reducing SOC Fatigue: How Context Turns Millions of Alerts into a Handful of Fixes(26:20) Is Auto-Remediation Safe? Why It's an Orchestration Challenge, Not a Technical One(35:20) Shifting Left with Production Context: The Future of AppSec & Cloud Sec(38:00) How to Choose a Security Vendor: Finding Hope, Not Fear(42:01) Final Questions: Hiking, Team Pride, and French FriesThank you to our episode sponsor - Orca Security