POPULARITY
Categories
Neste episódio do DevSecOps Podcast, o papo gira em torno de um tema que muita gente na área de tecnologia sente na pele: a falta de formação realmente sólida em Application Security. Em vez de cursos superficiais ou conteúdos soltos pela internet, discutimos a ideia de uma Pós-graduação focada em AppSec e DevSecOps, pensada para quem quer sair da teoria genérica e mergulhar no que realmente acontece dentro das empresas. Ao longo do episódio, exploramos por que segurança de aplicações exige uma visão ampla que vai além de ferramentas. Falamos sobre arquitetura segura, modelagem de ameaças, revisão de código, segurança em pipelines de CI/CD, cloud, gestão de vulnerabilidades e cultura de segurança no desenvolvimento. A proposta da pós é justamente conectar esses pontos e formar profissionais capazes de pensar segurança dentro do ciclo completo de desenvolvimento. Se você é desenvolvedor, engenheiro de segurança, arquiteto ou líder técnico e quer entender como estruturar um aprendizado sério em AppSec, este episódio traz uma visão clara do que esperar de uma formação avançada na área. Neste episódio você vai encontrar: • Por que o mercado precisa de especialistas em Application Security• A diferença entre aprender ferramentas e aprender segurança de verdade• Os pilares de uma formação sólida em AppSec e DevSecOps• Como conectar desenvolvimento, cloud e segurança no mesmo modelo mental• O tipo de profissional que o mercado realmente está procurando hojeBecome a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support.Apoio: Nova8, Snyk, Conviso, Gold Security, Digitalwolk e PurpleBird Security.
As more developers turn to LLMs to generate code, more appsec teams are turning to LLMs to conduct security code reviews. One of the biggest themes in all the discussion around LLMs, agents, and code is speed -- more code created faster. James Wickett shares why speed continues to pose a challenge to appsec teams and why that's often because teams haven't invested enough in foundational appsec principles. Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-372
As more developers turn to LLMs to generate code, more appsec teams are turning to LLMs to conduct security code reviews. One of the biggest themes in all the discussion around LLMs, agents, and code is speed -- more code created faster. James Wickett shares why speed continues to pose a challenge to appsec teams and why that's often because teams haven't invested enough in foundational appsec principles. Show Notes: https://securityweekly.com/asw-372
As more developers turn to LLMs to generate code, more appsec teams are turning to LLMs to conduct security code reviews. One of the biggest themes in all the discussion around LLMs, agents, and code is speed -- more code created faster. James Wickett shares why speed continues to pose a challenge to appsec teams and why that's often because teams haven't invested enough in foundational appsec principles. Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-372
In episode 315 of Absolute AppSec, Ken Johnson and Seth Law discuss the rapidly evolving challenges of securing software in an era of AI-assisted development. The hosts provide updates on their "Harnessing LLMs for Application Security" training, noting that the field is changing so fast that they must constantly update their exercises to include new agents and advanced tools like Claude Code. A primary concern raised is the "naivete" of many new security tools, where prompts are often automatically generated by AI rather than expertly crafted, causing a loss of essential nuance. The hosts also warn against AI companies building security products without specialized expertise, citing a zero-click exploit in the "Comet" AI browser that could exfiltrate sensitive secrets via calendar summaries. As development teams now ship code at "AI speed," the hosts argue that traditional AppSec methods are too slow, necessitating a strategic pivot toward automated design reviews, governance, and observability rather than just chasing individual vulnerabilities. Despite the inherent risks and the ongoing difficulty of managing AI reasoning drift, they remain optimistic that these tools can eventually unlock more efficient, hands-off AppSec workflows if managed with proper guardrails and deterministic oversight.
As more developers turn to LLMs to generate code, more appsec teams are turning to LLMs to conduct security code reviews. One of the biggest themes in all the discussion around LLMs, agents, and code is speed -- more code created faster. James Wickett shares why speed continues to pose a challenge to appsec teams and why that's often because teams haven't invested enough in foundational appsec principles. Show Notes: https://securityweekly.com/asw-372
Anthropic's Claude Code Security research preview promises AI-powered code analysis and vulnerability detection at scale. The announcement triggered strong reactions across the cybersecurity community and sent several vendor stocks lower. In this episode, we break down what the tool actually does, where it fits in modern AppSec, and whether AI automation threatens traditional security products or simply makes teams more efficient. Expect a practical, no-hype conversation about what changes and what doesn't. ** Links mentioned on the show ** Anthropic’s New Claude AI Security Tool Wipes Out Over $15 Billion From Cybersecurity Stocks https://www.linkedin.com/pulse/anthropics-new-claude-ai-security-tool-wipes-out-17jje/ Making frontier cybersecurity capabilities available to defenders https://www.anthropic.com/news/claude-code-security ** Watch this episode on YouTube ** ** Become a Shared Security Supporter ** Get exclusive access to ad-free episodes, bonus episodes, listen to new episodes before they are released, receive a monthly shout-out on the show, and get a discount code for 15% off merch at the Shared Security store. Become a supporter today! https://patreon.com/SharedSecurity ** Thank you to our sponsors! ** SLNT Visit slnt.com to check out SLNT’s amazing line of Faraday bags and other products built to protect your privacy. As a listener of this podcast you receive 10% off your order at checkout using discount code “sharedsecurity”. ** Subscribe and follow the podcast ** Subscribe on YouTube: https://www.youtube.com/c/SharedSecurityPodcast Follow us on Bluesky: https://bsky.app/profile/sharedsecurity.bsky.social Follow us on Mastodon: https://infosec.exchange/@sharedsecurity Join us on Reddit: https://www.reddit.com/r/SharedSecurityShow/ Visit our website: https://sharedsecurity.net Subscribe on your favorite podcast app: https://sharedsecurity.net/subscribe Sign-up for our email newsletter to receive updates about the podcast, contest announcements, and special offers from our sponsors: https://shared-security.beehiiv.com/subscribe Leave us a rating and review: https://ratethispodcast.com/sharedsecurity Contact us: https://sharedsecurity.net/contact The post Claude Code Security: The AI Shockwave Hitting Cybersecurity appeared first on Shared Security Podcast.
Amit Chita is the Field CTO at Mend.io. In this episode, he joins host Paul John Spaulding to discuss enterprise appsec metrics, including which matter most for organizations, translating technical risk into business impact, and more. Securing The Build is brought to you by Mend.io, the leading application security solution, helping organizations reduce application risk efficiently. To learn more about our sponsor, visit https://mend.io.
Send a textAI just found hundreds of high-severity vulnerabilities hiding in open source, and the market flinched. We dig into what Anthropic's Claude Code Security actually means for security teams, why vendors like CrowdStrike and Okta aren't going away, and how the real change lands on roles, workflows, and the skills you need next. From CI/CD integration to vulnerability discovery at scale, we frame where general models augment specialized tools and where human expertise still anchors the stack.We also get tactical with five CISSP-style AI questions designed to sharpen your instincts. You'll learn how adversaries reverse engineer decision boundaries to drive up false negatives, what adversarial examples look like in practice, and why adversarial training matters. We break down indirect prompt injection—how a crafted document can hijack an LLM to exfiltrate session data—and outline guardrails that actually reduce risk. Then we map AI risk using NIST's AI RMF, focusing on the Measure function to evaluate potential harms to protected classes, and we unpack why federated learning still faces privacy leakage through gradient updates without differential privacy and secure aggregation.If you're in a SOC or building AppSec pipelines, this conversation gives you a blueprint to adapt: automate tier one triage, monitor for model drift, add OOD detection, and treat your models like code with tests, reviews, and rollbacks. If you're planning your career, we share concrete pivot paths into detection engineering with ML, AI governance, and assurance. Want more hands-on practice and mentorship to pass the CISSP the first time and future-proof your skills? Subscribe, share this with a teammate, and leave a review with the next AI topic you want us to tackle.Gain exclusive access to 360 FREE CISSP Practice Questions at FreeCISSPQuestions.com and have them delivered directly to your inbox! Don't miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!
In this episode, the hosts discuss the seismic shift in the application security landscape triggered by the rise of Large Language Models (LLMs) and Anthropic's "Claude Code". They highlight the massive economic repercussions of these AI advancements, noting that billions in market value were wiped from traditional cybersecurity stocks as investors begin to believe frontier models might eventually write perfectly secure code. The hosts critique the industry's historical reliance on "checkbox" compliance tools like SAST, DAST, and SCA, arguing that these "archaic" methods are being replaced by AI-native strategies capable of reasoning through complex logic flaws. While they acknowledge that AI can suffer from "reasoning drift" and still requires deterministic validation to avoid false positives, they emphasize that security professionals must adapt by building custom "skills" and focusing on governance and observability. The discussion concludes that as developers move to "AI speed," the traditional role of the AppSec professional is evolving into a "Jarvis-like" orchestrator who manages automated workflows and infuses institutional knowledge into AI agents to maintain oversight without slowing down production.
Send a textWant a clear path from CISSP to top-tier pay without getting lost in buzzwords? We break down five high-income specialties that pair perfectly with CISSP leadership: modern GRC, cloud security as code, AI ethics and governance, advanced identity, and software supply chain security. Along the way, we unpack how AI reasoning tools like Claude Code Security are reshaping AppSec by cutting false positives and detecting logic flaws scanners miss, and we translate that shift into concrete workflows, better guardrails, and faster delivery.We start with the career pivot many leaders are making—moving from generalist security management to “decision architect.” That means pairing risk fluency with hands-on understanding of Terraform, Kubernetes, and CI/CD gates, then proving value through resilient architectures and evidence-driven dashboards for boards. You'll hear why GRC is exploding under new enforcement trends, how to automate continuous evidence to beat audit fatigue, and where vCISO opportunities command premium rates when strategy meets measurable outcomes.From there, we get practical. We walk through cloud guardrails that stop drift before it hits prod, share how to navigate shared responsibility with AWS and Azure, and outline identity-first zero trust that tames API key sprawl and enables passwordless access. On AI, we go deep on shadow AI containment, prompt-injection red teaming, model transparency, and data loss prevention tuned for embeddings—governance that accelerates, not blocks. Finally, we turn to software supply chain security: SBOM mandates, signed artifacts, dependency risk, and the DevSecOps policies that keep pipelines moving while raising assurance.If you're mapping your next move, we also compare salary bands across roles and highlight bridge certifications—CISM for program leadership, AI governance credentials for compliance depth, and CISA for audit rigor—to level up fast. Subscribe, share this with a teammate plotting their niche, and leave a quick review to tell us which specialty you're pursuing next.Gain exclusive access to 360 FREE CISSP Practice Questions at FreeCISSPQuestions.com and have them delivered directly to your inbox! Don't miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!
There's a particular kind of clarity you get when you talk to someone who spends their days breaking into things for a living. Not with malice — with purpose. John Steigerwald, known to most in the industry simply as "Stigs," co-founded White Knight Labs in 2016 with a mission that sounds almost disarmingly simple: build the best penetration testing team anyone has ever seen, and actually deliver results. Nearly a decade later, the company has grown to 40 people, gone international, and is busier than ever. The question worth asking is: why?The uncomfortable answer, according to Stigs, is that the fundamental problems haven't changed. At all."Honestly, it's still 2015," he said during our most recent conversation on ITSPmagazine's Brand Story series. Not as a metaphor. As a diagnosis. The same misconfigurations, the same weak identity policies, the same unlocked back doors that red teamers were exploiting a decade ago are still wide open today. The apps built in a COVID-era frenzy — pushed out fast, tested never — are now running critical business infrastructure. And the organizations using them are only finding out when something breaks.What's changed is the surface area. Cloud, AI, Microsoft 365, vibe-coded production apps — each new layer of technology gets adopted at speed, and each one arrives carrying the same original sin: no one turned on the basics. Stigs used Microsoft 365 as a pointed example. Millions of businesses are running on it with DMARC turned off, default configurations untouched, Copilot layered on top, and not a single CIS Benchmark policy applied. "Every client is vulnerable," he said. "Not just 10% of clients. Every client."That's a striking statement. It's also, if you've been paying attention to breach headlines, not a surprising one.The AI angle adds a new and almost darkly comedic wrinkle. Vibe coding — the practice of using AI tools like Cursor or Claude to generate production-ready code at speed — has given entry-level developers intermediate-level output. Which sounds great, until you realize that the AI models many of them leaned on were trained on outdated, sometimes vulnerable data. Stigs described visiting multiple clients with nearly identical security weaknesses, all tracing back to the same ChatGPT-generated setup instructions. "You and your neighbor did the same thing," he told one client. That's not just a funny anecdote. It's a warning about what happens when an entire industry bootstraps its infrastructure from the same flawed source.And yet, Stigs isn't anti-AI. He uses it every day. He just sees it with the clarity of someone who also finds the holes it leaves behind. His prediction for the near future: a massive wave of secure code review requests, as companies start reckoning with the vibe-coded backlog they've been quietly accumulating. AppSec is about to have a very good year.Looking forward, White Knight Labs is watching the growing intersection of private sector expertise and government infrastructure testing with particular interest. Critical infrastructure in America, long overdue for rigorous physical and embedded testing, is starting to receive that attention. Stigs and his team are already in the room.What makes White Knight Labs different isn't just technical skill — it's the ability to communicate what they find in language that actually lands. In an industry full of reports that gather dust, that matters. The best penetration test in the world is useless if no one acts on it.The door is open. It's been open for years. The question is who you call to finally lock it.To learn more about White Knight Labs, visit their website or reach out directly. Listen to the full conversation on ITSPmagazine.GUESTJohn StigerwaltFounder at White Knight Labs | Red Team Operations Leaderhttps://www.linkedin.com/in/john-stigerwalt-90a9b4110/RESOURCESWhite Knight Labs: https://whiteknightlabs.com_____________________________________________________________Are you interested in telling your story?▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Neste episódio do DevSecOps Podcast, fomos direto no ponto: SCA não é sinônimo de caçar CVE em biblioteca open source. Durante anos, muita empresa reduziu Software Composition Analysis a “rodar ferramenta e ver se tem vulnerabilidade no npm ou no Maven”. Só que o jogo ficou mais complexo. Hoje falamos de dependências transitivas invisíveis, pacotes abandonados, licenças incompatíveis, ataques à cadeia de suprimentos e componentes proprietários que ninguém inventaria no SBOM porque “não é open source”. Spoiler: risco não pergunta licença. Discutimos:Por que SCA precisa olhar além do GitHub e entender o ecossistema inteiro da aplicaçãoO papel real do SBOM e onde ele falha na práticaSupply chain attacks e o que mudou depois de casos como Log4ShellDependências internas, pacotes privados e artefatos binários esquecidosLicenciamento como risco jurídico, não só técnicoComo integrar SCA de forma estratégica no pipeline e não virar mais um relatório ignoradoSe AppSec é armadura, SCA é o exame de sangue do software. E não adianta medir só colesterol quando o problema pode estar no fígado. Esse episódio é para quem já rodou ferramenta, já viu dashboard bonito e percebeu que ainda assim algo está faltando. Porque está mesmo.Become a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support.Apoio: Nova8, Snyk, Conviso, Gold Security, Digitalwolk e PurpleBird Security.
Ken Johnson and Seth Law examine the intensifying pressure on security practitioners as AI-driven development causes an unprecedented acceleration in industry velocity. A primary theme is the emergence of "shadow AI," where developers utilize unauthorized AI coding assistants and personal agents, introducing significant data classification risks and supply chain vulnerabilities. The discussion dives into technical concepts like AI agent "skills"—markdown files providing specialized directions—and the corresponding security risks found in new skill registries, such as malicious tools designed to exfiltrate credentials and crypto assets. The hosts also review 1Password's SCAM (Security Comprehension Awareness Measure), highlighting broad performance gaps in an AI's ability to detect phishing, with some models failing up to 65% of the time. To manage these unpredictable systems, the hosts advocate for a shift toward high-level validation roles, emphasizing the need for Subject Matter Expertise to combat "reasoning drift" and maintain safety through test-driven development and periodic "checkpoints". Ultimately, they conclude that while AI can simulate expertise, human oversight remains vital to secure the probabilistic nature of modern agentic workflows.
IA é ferramenta. Poderosa. Rápida. Escalável. E completamente indiferente ao que é certo ou errado. Neste episódio do DevSecOps Podcast, mergulhamos nos perigos reais da Inteligência Artificial além do hype e além do medo irracional. Falamos sobre modelos que aprendem vieses humanos, automação de desinformação em escala industrial, geração de código vulnerável com confiança absurda e a falsa sensação de segurança quando “a IA revisou”. IA não é ética. Não é moral. Não é consciente. É estatística com GPU. Discutimos também o impacto prático no desenvolvimento de software e na segurança de aplicações. Devs usando copilots sem validar saída. Times confiando em respostas geradas como se fossem verdade revelada. Ataques potencializados por modelos generativos. Engenharia social turbinada. Deepfakes cada vez mais convincentes. A IA amplia o melhor e o pior de nós. No fim, a pergunta não é se a IA é perigosa. Toda tecnologia poderosa é. A pergunta é: estamos usando com criticidade ou com preguiça intelectual? Porque quando a máquina erra, ela erra em escala. E quando o humano delega o pensamento, ele terceiriza a responsabilidade. E responsabilidade, meu amigo, não dá para fazer deploy automático.Become a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support.Apoio: Nova8, Snyk, Conviso, Gold Security, Digitalwolk e PurpleBird Security.
Amit Chita is the Field CTO at Mend.io. In this episode, he joins host Paul John Spaulding to discuss the future of AI appsec tooling, including how AI should be used as a force multiplier, not a replacement, new risks, and more. Securing The Build is brought to you by Mend.io, the leading application security solution, helping organizations reduce application risk efficiently. To learn more about our sponsor, visit https://mend.io.
In episode 312 of Absolute AppSec, the hosts discuss the double-edged sword of "vibe coding", noting that while AI agents often write better functional tests than humans, they frequently struggle with nuanced authorization patterns and inherit "upkeep costs" as foundational models change behavior over time. A central theme of the episode is that the greatest security risk to an organization is not AI itself, but an exhausted security team. The hosts explore how burnout often manifests as "silent withdrawal" and emphasize that managers must proactively draw out these issues within organizations that often treat security as a mere cost center. Additionally, they review new defensive strategies, such as TrapSec, a framework for deploying canary API endpoints to detect malicious scanning. They also highlight the value of security scorecarding—pioneered by companies like Netflix and GitHub—as a maturity activity that provides a holistic, blame-free view of application health by aggregating multiple metrics. The episode concludes with a reminder that technical tools like Semgrep remain essential for efficiency, even as practitioners increasingly leverage the probabilistic creativity of LLMs.
Nesse episódio a conversa foi direta e sem anestesia. Falamos sobre como empresas e profissionais de AppSec realmente evoluíram nos últimos anos, o que mudou de verdade e o que é só discurso bonito em slide corporativo. Spoiler: muita coisa avançou, mas muita gente ainda está brigando com problemas que já deveriam estar resolvidos há uma década. Também discutimos o descompasso clássico do mercado. Enquanto algumas organizações já deveriam estar olhando para o próximo nível de maturidade, automação real, decisões baseadas em risco e integração profunda com engenharia, outras ainda estão “começando AppSec” do zero. E aí vem a pergunta incômoda: isso é falta de tempo, de prioridade, de competência ou de coragem? Um episódio para quem quer entender onde estamos, onde deveríamos estar e por que maturidade em AppSec não é checklist, não é ferramenta e definitivamente não é cargo no LinkedIn.Become a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support.Apoio: Nova8, Snyk, Conviso, Gold Security, Digitalwolk e PurpleBird Security.
Ken Johnson and Seth Law examine the profound transformation of the security industry as AI tooling moves from simple generative models to sophisticated agentic architectures. A primary theme is the dramatic surge in development velocity, with some organizations seeing pull request volumes increase by over 800% as developers allow AI agents to operate nearly hands-off. This shift is redefining the role of Application Security practitioners, moving experts from manual tasks like manipulating Burp Suite requests to a validation-centric role where they spot-check complex findings generated by AI in minutes. The hosts characterize older security tools as "primitive" compared to modern AI analysis, which can now identify human-level flaws like complex authorization bypasses. A major technical highlight is the introduction of agent "skills"—markdown files containing instructions that empower coding assistants—and the associated emergence of new supply chain risks. They specifically reference research on malicious skills designed to exfiltrate crypto wallets and SSH credentials, warning that registries for these skills lack adequate security responses. To manage the inherent "reasoning drift" of AI, the hosts argue that test-driven development has become a critical safety requirement. Ultimately, they warn that the industry has already shifted fundamentally, and security professionals must lean into these new technologies immediately to avoid becoming obsolete in a day-to-day evolving landscape.
Neste episódio do DevSecOps Podcast, usamos a armadura do Homem de Ferro como desculpa elegante para falar de coisa séria: como montar um programa de AppSec que funciona no mundo real. Aqui não tem magia, tem engenharia. Assim como Tony Stark não começa salvando o mundo no Mark L, um programa de AppSec não nasce maduro. Falamos de fundamentos, evolução incremental, decisões técnicas difíceis e da diferença brutal entre ter ferramentas… e ter capacidade real. Jarvis vira métrica, sensores viram telemetria, armaduras viram processos. Tudo com pé no chão e código na mesa. Você vai ouvir sobre:por onde começar sem travar o timecomo alinhar AppSec com negócio, produto e Devmaturidade progressiva, não big bang corporativoporque cultura pesa mais que ferramentae o erro clássico de tentar “comprar” segurançaSe o seu AppSec hoje parece mais cosplay do que armadura funcional, esse episódio é pra você. Menos marketing, mais engenharia. Segurança que voa porque foi bem montada, não porque alguém prometeu.Become a supporter of this podcast: https://www.spreaker.com/podcast/devsecops-podcast--4179006/support.Apoio: Nova8, Snyk, Conviso, Gold Security, Digitalwolk e PurpleBird Security.
Ori Bendet, Vice President of Product Management at Checkmarx, joined Doug Green, Publisher of Technology Reseller News, to discuss how the acquisition of Tromzo strengthens Checkmarx's agentic application security strategy and reflects a broader shift in how organizations secure software in an AI-driven development era. Bendet explained that Checkmarx, a pioneer in application security with more than two decades of experience, has traditionally focused on helping organizations identify vulnerabilities early in the software development lifecycle (SDLC). However, the rapid adoption of AI-generated code has fundamentally changed the AppSec landscape. “The industry used to be fixated on finding vulnerabilities,” Bendet said. “Now the real challenge is fixing them at scale, in context, and without slowing developers down.” The Tromzo acquisition builds on Checkmarx's existing family of agentic tools, Checkmarx Assist, which already provides real-time remediation inside the developer IDE. Tromzo extends these capabilities deeper into the SDLC, enabling automated remediation at the repository and pull-request stages. Together, the technologies aim to “complete the loop” by delivering consistent, trusted remediation from early development through later stages of deployment. Bendet noted that AI is widening the gap between development velocity and security oversight, as significantly more code—and therefore more vulnerabilities—is being produced. At the same time, the application footprint itself is evolving to include AI components such as large language models, agents, and third-party AI services. “There is now a new AI element inside the application,” he said, “and organizations need AppSec solutions that understand and protect that expanded footprint.” Auto-remediation, once viewed skeptically by developers, is now gaining acceptance as AI agents gain a deeper understanding of application context. According to Bendet, modern agentic tools can remediate vulnerabilities while preserving business logic and minimizing disruption. “Developers no longer need to spend days undoing fixes that broke functionality,” he said. “The agent can understand the blast radius and refactor automatically.” Looking ahead, Bendet described a future where AppSec becomes more autonomous, with agents continuously testing, fixing, and validating applications while developers shift toward higher-level architectural and review roles. With proper guardrails in place, this evolution promises to reduce alert fatigue and allow teams to focus on innovation rather than remediation backlogs. More information about Checkmarx and its agentic application security approach is available at https://checkmarx.com/, with additional developer-focused resources at https://checkmarx.dev/.
In this episode of Resilient Cyber I sit down with Anshuman Bhartiya to discuss AI-native AppSec. Anshuman is a Staff Security Engineer at Lyft, Host of the The Boring AppSec Community podcast, and author of the AI Security Engineer newsletter on LinkedIn. Anshuman has quickly become an AppSec leader I highly respect and find myself learning from his content and perspectives on AppSec and Security Engineering in the era of AI, LLMs and Agents.
Dynamic Application Security Testing (DAST) has a reputation problem. It's noisy, slow, and often ignored by developers — especially in fast-moving CI/CD pipelines. In this episode of the TestGuild Podcast, we explore developer-focused DAST and why traditional AppSec tools struggle to gain adoption in modern DevOps teams. You'll learn: Why most DAST tools fail inside real-world CI/CD workflows What "shift-left security" actually means beyond marketing buzzwords How developer-first DAST reduces false positives and improves signal quality Where AI genuinely helps in security testing — and where it's mostly hype Practical steps QA, DevOps, and engineering leaders can take to reduce risk this quarter Our guest, Gadi Bashvitz, CEO at Bright Security, shares lessons from decades in cybersecurity, including building security tools that developers actually use — without slowing delivery. If you're responsible for test automation, DevSecOps, or application security, this episode will help you rethink how DAST should work in 2026 and beyond.
Synopsis L'épisode 0x284 lance 2026 avec PAtrick, Jacques et Vanessa, rejoints par Richer, pour un gros focus sur l'Open Banking, aussi appelé consumer-driven banking, au Canada : à quoi ressemble le cadre qui se met en place, comment l'accréditation et la gouvernance peuvent influencer la confiance, et quels nouveaux risques apparaissent quand des tiers et des intermédiaires accèdent à des données financières sensibles. Le trio enchaîne ensuite sur la gestion des vulnérabilités côté terrain : pourquoi une liste de « 50 CVE » ne suffit pas, et comment mieux prioriser en combinant exploitabilité, exposition réelle et impact business. Ils reviennent aussi sur un piège fréquent en DevSecOps : quand « tout le monde est responsable », l'imputabilité se dilue et les décisions critiques s'éternisent. Enfin, ils font le tour de l'actualité cyber : des actions policières importantes, dont Black Axe, à l'ironie très révélatrice de BreachForums qui se fait breacher. Un rappel concret que les fuites de données finissent souvent par devenir des pièces à conviction et des accélérateurs d'enquêtes. Nouvelles Patrick Bureau en gros se fait prendre AMP voit un problème avec la SAAQ… Vanessa Capsule - systèmes bancaire axé sur les consommateurs Jacques Patchez vos systèmes!! Le premier Patch Tuesday de 2026! Europol Arrests 34 Black Axe Members in Spain Over €5.9M Fraud and Organized Crime BreachForums Breached, Exposing 324K Cybercriminals BreachForums database leak Richer Structure over chaos par Tyler Andrew Cole - book Thinking fast and slow par Daniel Kahneman - book Crew Patrick Mathieu Vanessa Henri Richer Dinelle Jacques Sauvé Shamelessplug Join Hackfest/La French Connection Discord #La-French-Connection Join Hackfest us on Masodon POLAR - Québec - 29 Octobre 2026 Hackfest - Québec - 29-30-31 Octobre 2026 Crédits Montage audio par Hackfest Communication Music par Dynamic Range – Acid - Psy Tune Locaux virtuels par Streamyard
AI doesn't break security, it exposes where it was already fragile. When automation starts making decisions faster than humans can audit, AppSec becomes the only thing standing between scale and catastrophe. In this episode, Ron sits down with Joshua Bregler, Senior Security Manager at McKinsey's QuantumBlack, to dissect how AI agents, pipelines, and dynamic permissions are reshaping application security. From prompt chaining attacks and MCP server sprawl to why static IAM is officially obsolete, this conversation gets brutally honest about what works, what doesn't, and where security teams are fooling themselves. Impactful Moments 00:00 – Introduction 02:15 – AI agents create identity chaos 04:00 – Static permissions officially dead 07:05 – AI security is still AppSec 09:30 – Prompt chaining becomes invisible attack 12:23 – Solving problems vs solving AI 15:03 – Ethics becomes an AI blind spot 17:47 – Identity is the next security failure 20:07 – Frameworks no longer enough alone 26:38– AI fixing insecure code in real time 32:15 – Secure pipelines before production Connect with our Guest Joshua Bregler on LinkedIn: https://www.linkedin.com/in/breglercissp/ Our Links Check out our upcoming events: https://www.hackervalley.com/livestreams Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com Continue the conversation by joining our Discord: https://hackervalley.com/discord Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Ken Johnson (cktricky on social media) and Seth Law are happy to announce a special episode of Absolute AppSec with Avi Douglen (sec_tigger on X), long-time OWASP Global Board of Directors member, founder and CEO of Bounce Security and co-author of the Threat Modeling Manifesto. The conversation ranges from Application Privacy related to Application Security, to participating in meetups and conferences, and finally OWASP as an Avi's experience as a board member.
AI isn't quietly changing software development… it's rewriting the rules while most security programs are still playing defense. When agents write code at machine speed, the real risk isn't velocity, it's invisible security debt compounding faster than teams can see it. In this episode, Ron Eddings sits down with Varun Badhwar, Co-Founder & CEO of Endor Labs, and Henrik Plate, Principal Security Researcher of Endor Labs, to break down how AI-assisted development is reshaping the software supply chain in real time. From MCP servers exploding across GitHub to agents trained on insecure code patterns, they analyze why traditional AppSec controls fail in an agent-driven world and what must replace them. This conversation pulls directly from Endor Labs' 2025 State of Dependency Management Report, revealing why most AI-generated code is functionally correct yet fundamentally unsafe, how malicious packages are already exploiting agent workflows, and why security has to exist inside the IDE, not after the pull request. Impactful Moments 00:00 – Introduction 02:00 – Star Wars meets cybersecurity culture 03:00 – Why this report matters now 04:00 – MCP adoption explodes overnight 10:00 – Can you trust MCP servers 12:00 – Malicious packages weaponize agents 14:00 – Code works, security fails 22:00 – Hooks expose agent behavior 28:30 – 2026 means longer lunches 33:00 – How Endor Labs fixes this Links Connect with our Varun on LinkedIn: https://www.linkedin.com/in/vbadhwar/ Connect with our Henrik on LinkedIn: https://www.linkedin.com/in/henrikplate/ Check out Endor Labs State of Dependency Management 2025: https://www.endorlabs.com/lp/state-of-dependency-management-2025 Check out our upcoming events: https://www.hackervalley.com/livestreams Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com Continue the conversation by joining our Discord: https://hackervalley.com/discord Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI. Segment Resources: github.com/coreruleset/coreruleset coreruleset.org The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive, but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain. As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard OWASP, achieving OWASP Flagship project status in March 2025. Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape. Segment Resources: https://owasp.org/www-project-aibom/ https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/ https://www.youtube.com/@OWASPAIBOM https://www.youtube.com/@RobvanderVeer-ex3gj https://owaspai.org/ Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to manage these emerging threats. The session will present a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI. Segment Resources: aivss.owasp.org https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-363
In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI. Segment Resources: github.com/coreruleset/coreruleset coreruleset.org The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive, but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain. As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard OWASP, achieving OWASP Flagship project status in March 2025. Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape. Segment Resources: https://owasp.org/www-project-aibom/ https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/ https://www.youtube.com/@OWASPAIBOM https://www.youtube.com/@RobvanderVeer-ex3gj https://owaspai.org/ Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to manage these emerging threats. The session will present a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI. Segment Resources: aivss.owasp.org https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Show Notes: https://securityweekly.com/asw-363
In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI. Segment Resources: github.com/coreruleset/coreruleset coreruleset.org The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive, but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain. As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard OWASP, achieving OWASP Flagship project status in March 2025. Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape. Segment Resources: https://owasp.org/www-project-aibom/ https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/ https://www.youtube.com/@OWASPAIBOM https://www.youtube.com/@RobvanderVeer-ex3gj https://owaspai.org/ Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to manage these emerging threats. The session will present a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI. Segment Resources: aivss.owasp.org https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-363
In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI. Segment Resources: github.com/coreruleset/coreruleset coreruleset.org The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive, but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain. As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard OWASP, achieving OWASP Flagship project status in March 2025. Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape. Segment Resources: https://owasp.org/www-project-aibom/ https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/ https://www.youtube.com/@OWASPAIBOM https://www.youtube.com/@RobvanderVeer-ex3gj https://owaspai.org/ Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to manage these emerging threats. The session will present a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI. Segment Resources: aivss.owasp.org https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Show Notes: https://securityweekly.com/asw-363
Using OWASP SAMM to assess and improve compliance with the Cyber Resilience Act (CRA) is an excellent strategy, as SAMM provides a framework for secure development practices such as secure by design principles and handling vulns. Segment Resources: https://owaspsamm.org/ https://cybersecuritycoalition.be/resource/a-strategic-approach-to-product-security-with-owasp-samm/ As genAI becomes a more popular tool in software engineering, the definition of “secure coding” is changing. This session explores how artificial intelligence is reshaping the way developers learn, apply, and scale secure coding practices — and how new risks emerge when machines start generating the code themselves. We'll dive into the dual challenge of securing both human-written and AI-assisted code, discuss how enterprises can validate AI outputs against existing security standards, and highlight practical steps teams can take to build resilience into the entire development pipeline. Join us as we look ahead to the convergence of secure software engineering and AI security — where trust, transparency, and tooling will define the future of code safety. Segment Resources: https://manicode.com/ai/ Understand the history of threat modeling with Adam Shostack. Learn how threat modeling has evolved with the Four Question Framework and can work in your organizations in the wake of the AI revolution. Whether you're launching a formal Security Champions program or still figuring out where to start, there's one truth every security leader needs to hear: You already have allies in your org -- they're just waiting to be activated. In this session, we'll explore how identifying and empowering your internal advocates is the fastest, most sustainable way to drive security culture change. These are your early adopters: the developers, engineers, and team leads who already “get it,” even if their title doesn't say “security.” We'll unpack: Why you need help from people outside the security org to actually be effective Where to find your natural allies (hint: it starts with listening, not preaching) How to support and energize those allies so they influence the majority What behavioral science tells us about spreading change across an organization Segment Resources: Security Champion Success Guide: https://securitychampionsuccessguide.org/ Related interviews/podcasts: https://www.youtube.com/playlist?list=PLPb14P8f4T1ITv3p3Y3XtKsyEAA8W526h How to measure success and impact of culture change and champions: https://www.linkedin.com/pulse/from-soft-skills-hard-data-measuring-success-security-yhmse/ Global Community of Champions sign up: https://docs.google.com/forms/d/e/1FAIpQLScyXPAMf9M8idpDMwO4p2h5Ng8I0ffofZuY70BbmgCZNPUS5Q/viewform This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-362
Using OWASP SAMM to assess and improve compliance with the Cyber Resilience Act (CRA) is an excellent strategy, as SAMM provides a framework for secure development practices such as secure by design principles and handling vulns. Segment Resources: https://owaspsamm.org/ https://cybersecuritycoalition.be/resource/a-strategic-approach-to-product-security-with-owasp-samm/ As genAI becomes a more popular tool in software engineering, the definition of "secure coding" is changing. This session explores how artificial intelligence is reshaping the way developers learn, apply, and scale secure coding practices — and how new risks emerge when machines start generating the code themselves. We'll dive into the dual challenge of securing both human-written and AI-assisted code, discuss how enterprises can validate AI outputs against existing security standards, and highlight practical steps teams can take to build resilience into the entire development pipeline. Join us as we look ahead to the convergence of secure software engineering and AI security — where trust, transparency, and tooling will define the future of code safety. Segment Resources: https://manicode.com/ai/ Understand the history of threat modeling with Adam Shostack. Learn how threat modeling has evolved with the Four Question Framework and can work in your organizations in the wake of the AI revolution. Whether you're launching a formal Security Champions program or still figuring out where to start, there's one truth every security leader needs to hear: You already have allies in your org -- they're just waiting to be activated. In this session, we'll explore how identifying and empowering your internal advocates is the fastest, most sustainable way to drive security culture change. These are your early adopters: the developers, engineers, and team leads who already "get it," even if their title doesn't say "security." We'll unpack: Why you need help from people outside the security org to actually be effective Where to find your natural allies (hint: it starts with listening, not preaching) How to support and energize those allies so they influence the majority What behavioral science tells us about spreading change across an organization Segment Resources: Security Champion Success Guide: https://securitychampionsuccessguide.org/ Related interviews/podcasts: https://www.youtube.com/playlist?list=PLPb14P8f4T1ITv3p3Y3XtKsyEAA8W526h How to measure success and impact of culture change and champions: https://www.linkedin.com/pulse/from-soft-skills-hard-data-measuring-success-security-yhmse/ Global Community of Champions sign up: https://docs.google.com/forms/d/e/1FAIpQLScyXPAMf9M8idpDMwO4p2h5Ng8I0ffofZuY70BbmgCZNPUS5Q/viewform This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Show Notes: https://securityweekly.com/asw-362
Using OWASP SAMM to assess and improve compliance with the Cyber Resilience Act (CRA) is an excellent strategy, as SAMM provides a framework for secure development practices such as secure by design principles and handling vulns. Segment Resources: https://owaspsamm.org/ https://cybersecuritycoalition.be/resource/a-strategic-approach-to-product-security-with-owasp-samm/ As genAI becomes a more popular tool in software engineering, the definition of "secure coding" is changing. This session explores how artificial intelligence is reshaping the way developers learn, apply, and scale secure coding practices — and how new risks emerge when machines start generating the code themselves. We'll dive into the dual challenge of securing both human-written and AI-assisted code, discuss how enterprises can validate AI outputs against existing security standards, and highlight practical steps teams can take to build resilience into the entire development pipeline. Join us as we look ahead to the convergence of secure software engineering and AI security — where trust, transparency, and tooling will define the future of code safety. Segment Resources: https://manicode.com/ai/ Understand the history of threat modeling with Adam Shostack. Learn how threat modeling has evolved with the Four Question Framework and can work in your organizations in the wake of the AI revolution. Whether you're launching a formal Security Champions program or still figuring out where to start, there's one truth every security leader needs to hear: You already have allies in your org -- they're just waiting to be activated. In this session, we'll explore how identifying and empowering your internal advocates is the fastest, most sustainable way to drive security culture change. These are your early adopters: the developers, engineers, and team leads who already "get it," even if their title doesn't say "security." We'll unpack: Why you need help from people outside the security org to actually be effective Where to find your natural allies (hint: it starts with listening, not preaching) How to support and energize those allies so they influence the majority What behavioral science tells us about spreading change across an organization Segment Resources: Security Champion Success Guide: https://securitychampionsuccessguide.org/ Related interviews/podcasts: https://www.youtube.com/playlist?list=PLPb14P8f4T1ITv3p3Y3XtKsyEAA8W526h How to measure success and impact of culture change and champions: https://www.linkedin.com/pulse/from-soft-skills-hard-data-measuring-success-security-yhmse/ Global Community of Champions sign up: https://docs.google.com/forms/d/e/1FAIpQLScyXPAMf9M8idpDMwO4p2h5Ng8I0ffofZuY70BbmgCZNPUS5Q/viewform This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-362
In episode 307 of Absolute AppSec, hosts Ken and Seth conduct a retrospective on the application security landscape of 2025. They conclude that their previous predictions were largely accurate, particularly regarding the rise of prompt injection, AI-backed attacks, and the industry-wide shift toward per-token billing models. A major theme of the year was the solidification of supply chain security as a critical pillar of AppSec, driven by notable incidents such as Shai Hulud and React for Shell. The hosts also share insights from their four-day training course on utilizing LLMs for secure code review, noting that while AI development is becoming more prevalent, most practitioners are still in the nascent stages of building custom tooling. Much of the discussion focuses on the Model Context Protocol (MCP); while it offers significant value for agentic workflows, the hosts criticize its current lack of robust security controls, specifically highlighting issues with OAuth implementations and short timeouts in existing clients. Finally, they discuss how the industry is moving toward a more nuanced balance between deterministic tools like Semgrep and the probabilistic creativity of LLMs to increase efficiency in security consulting.
Using OWASP SAMM to assess and improve compliance with the Cyber Resilience Act (CRA) is an excellent strategy, as SAMM provides a framework for secure development practices such as secure by design principles and handling vulns. Segment Resources: https://owaspsamm.org/ https://cybersecuritycoalition.be/resource/a-strategic-approach-to-product-security-with-owasp-samm/ As genAI becomes a more popular tool in software engineering, the definition of "secure coding" is changing. This session explores how artificial intelligence is reshaping the way developers learn, apply, and scale secure coding practices — and how new risks emerge when machines start generating the code themselves. We'll dive into the dual challenge of securing both human-written and AI-assisted code, discuss how enterprises can validate AI outputs against existing security standards, and highlight practical steps teams can take to build resilience into the entire development pipeline. Join us as we look ahead to the convergence of secure software engineering and AI security — where trust, transparency, and tooling will define the future of code safety. Segment Resources: https://manicode.com/ai/ Understand the history of threat modeling with Adam Shostack. Learn how threat modeling has evolved with the Four Question Framework and can work in your organizations in the wake of the AI revolution. Whether you're launching a formal Security Champions program or still figuring out where to start, there's one truth every security leader needs to hear: You already have allies in your org -- they're just waiting to be activated. In this session, we'll explore how identifying and empowering your internal advocates is the fastest, most sustainable way to drive security culture change. These are your early adopters: the developers, engineers, and team leads who already "get it," even if their title doesn't say "security." We'll unpack: Why you need help from people outside the security org to actually be effective Where to find your natural allies (hint: it starts with listening, not preaching) How to support and energize those allies so they influence the majority What behavioral science tells us about spreading change across an organization Segment Resources: Security Champion Success Guide: https://securitychampionsuccessguide.org/ Related interviews/podcasts: https://www.youtube.com/playlist?list=PLPb14P8f4T1ITv3p3Y3XtKsyEAA8W526h How to measure success and impact of culture change and champions: https://www.linkedin.com/pulse/from-soft-skills-hard-data-measuring-success-security-yhmse/ Global Community of Champions sign up: https://docs.google.com/forms/d/e/1FAIpQLScyXPAMf9M8idpDMwO4p2h5Ng8I0ffofZuY70BbmgCZNPUS5Q/viewform This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Show Notes: https://securityweekly.com/asw-362
⬥EPISODE NOTES⬥Modern application development depends on open source packages moving at extraordinary speed. Paul McCarty, Offensive Security Specialist focused on software supply chain threats, explains why that speed has quietly reshaped risk across development pipelines, developer laptops, and CI environments.JavaScript dominates modern software delivery, and the npm registry has become the largest package ecosystem in the world. Millions of packages, thousands of daily updates, and deeply nested dependency chainsഴ് often exceeding a thousand indirect dependencies per application. That scale creates opportunity, not only for innovation, but for adversaries who understand how developers actually build software.This conversation focuses on a shift that security leaders can no longer ignore. Malicious packages are not exploiting accidental coding errors. They are intentionally engineered to steal credentials, exfiltrate secrets, and compromise environments long before traditional security tools see anything wrong. Attacks increasingly begin on developer machines through social engineering and poisoned repositories, then propagate into CI pipelines where access density and sensitive credentials converge.Paul outlines why many existing security approaches fall short. Vulnerability databases were built for mistakes, not hostile code. AppSec teams are overloaded burning down backlogs. Security operations teams rarely receive meaningful telemetry from build systems. The result is a visibility gap where malicious code can run, disappear, and leave organizations unsure what was touched or stolen.The episode also explores why simple advice like “only use vetted packages” fails in practice. Open source ecosystems move too fast for manual approval models, and internal package repositories often collapse under friction. Meanwhile, attackers exploit maintainer accounts, typosquatting domains, and ecosystem trust to reach billions of downstream installations in a single event.This discussion challenges security leaders to rethink how software supply chain risk is defined, detected, and owned. The problem is no longer theoretical, and it no longer lives only in development teams. It sits at the intersection of intellectual property, identity, and delivery velocity, demanding attention from anyone responsible for protecting modern software-driven organizations.⬥GUEST⬥Paul McCarty, NPM Hacker and Software Supply Chain Researcher | On LinkedIn: https://www.linkedin.com/in/mccartypaul/⬥HOST⬥Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com⬥RESOURCES⬥LinkedIn Post: https://www.linkedin.com/posts/mccartypaul_i-want-to-introduce-you-to-my-latest-project-activity-7396297753196363776-1N-TOpen Source Malware Database: https://opensourcemalware.comOpenSSF Scorecard Project: https://securityscorecards.dev⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity Podcast:
⬥EPISODE NOTES⬥Modern application development depends on open source packages moving at extraordinary speed. Paul McCarty, Offensive Security Specialist focused on software supply chain threats, explains why that speed has quietly reshaped risk across development pipelines, developer laptops, and CI environments.JavaScript dominates modern software delivery, and the npm registry has become the largest package ecosystem in the world. Millions of packages, thousands of daily updates, and deeply nested dependency chainsഴ് often exceeding a thousand indirect dependencies per application. That scale creates opportunity, not only for innovation, but for adversaries who understand how developers actually build software.This conversation focuses on a shift that security leaders can no longer ignore. Malicious packages are not exploiting accidental coding errors. They are intentionally engineered to steal credentials, exfiltrate secrets, and compromise environments long before traditional security tools see anything wrong. Attacks increasingly begin on developer machines through social engineering and poisoned repositories, then propagate into CI pipelines where access density and sensitive credentials converge.Paul outlines why many existing security approaches fall short. Vulnerability databases were built for mistakes, not hostile code. AppSec teams are overloaded burning down backlogs. Security operations teams rarely receive meaningful telemetry from build systems. The result is a visibility gap where malicious code can run, disappear, and leave organizations unsure what was touched or stolen.The episode also explores why simple advice like “only use vetted packages” fails in practice. Open source ecosystems move too fast for manual approval models, and internal package repositories often collapse under friction. Meanwhile, attackers exploit maintainer accounts, typosquatting domains, and ecosystem trust to reach billions of downstream installations in a single event.This discussion challenges security leaders to rethink how software supply chain risk is defined, detected, and owned. The problem is no longer theoretical, and it no longer lives only in development teams. It sits at the intersection of intellectual property, identity, and delivery velocity, demanding attention from anyone responsible for protecting modern software-driven organizations.⬥GUEST⬥Paul McCarty, NPM Hacker and Software Supply Chain Researcher | On LinkedIn: https://www.linkedin.com/in/mccartypaul/⬥HOST⬥Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com⬥RESOURCES⬥LinkedIn Post: https://www.linkedin.com/posts/mccartypaul_i-want-to-introduce-you-to-my-latest-project-activity-7396297753196363776-1N-TOpen Source Malware Database: https://opensourcemalware.comOpenSSF Scorecard Project: https://securityscorecards.dev⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity Podcast:
Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditional chat API platform to an AI agent platform and how security had to evolve to keep up.Yash spoke about the industry's obsession with "Zero Trust," arguing instead for a practical "Multi-Layer Trust" approach that assumes controls will fail . We dive deep into the specific architecture of securing AI agents, including the concept of a "Trust OS," dealing with new incident response definitions (is a wrong AI answer an incident?), and the critical need to secure the bridge between AI agents and customer environments .This episode is packed with actionable advice for AppSec engineers feeling overwhelmed by the speed of AI. Yash shares how his team embeds security engineers into sprint teams for real-time feedback, the importance of "AI CTFs" for security awareness, and why enabling employees with enterprise-grade AI tools is better than blocking them entirely .Questions asked:Guest Socials - Yash's LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(02:20) Who is Yash Kosaraju? (CISO at Sendbird)(03:30) Sendbird's Pivot: From Chat API to AI Agent Platform(05:00) Balancing Speed and Security in an AI Transition(06:50) Embedding Security Engineers into AI Sprint Teams(08:20) Threats in the AI Agent World (Data & Vendor Risks)(10:50) Blind Spots: "It's Microsoft, so it must be secure"(12:00) Securing AI Agents vs. AI-Embedded Applications(13:15) The Risk of Agents Making Changes in Customer Environments(14:30) Multi-Layer Trust vs. Zero Trust (Marketing vs. Reality) (17:30) Practical Multi-Layer Security: Device, Browser, Identity, MFA(18:25) What is "Trust OS"? A Foundation for Responsible AI(20:45) Balancing Agent Security vs. Endpoint Security(24:15) AI Incident Response: When an AI Gives a Wrong Answer(29:20) Security for Platform Engineers: Enabling vs. Blocking(30:45) Providing Enterprise AI Tools (Gemini, ChatGPT, Cursor) to Employees(32:45) Building a "Security as Enabler" Culture(36:15) What Questions to Ask AI Vendors (Paying with Data?)(39:20) Personal Use of Corporate AI Accounts(43:30) Using AI to Learn AI (Gemini Conversations)(45:00) The Stress on AppSec Engineers: "I Don't Know What I'm Doing"(48:20) The AI CTF: Gamifying Security Training(50:10) Fun Questions: Outdoors, Team Building, and Indian/Korean Food
Jeff Williams is the Co-Founder and CTO of Contrast Security, where he leads innovation in runtime-based application security. A pioneer in modern AppSec and co-founder of OWASP, Jeff has spent more than two decades helping organizations understand and manage software risk through instrumentation, context, and continuous learning.You can find Jeff on the following sites:LinkedInXHere are some links provided by Jeff:Contrast SecurityContrast Security X PLEASE SUBSCRIBE TO THE PODCASTSpotifyApple PodcastsYouTube MusicAmazon MusicRSS FeedYou can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.comCoffee and Open Source is hosted by Isaac Levin
This episode, the 304th of Absolute AppSec, features hosts Ken Johnson (@cktricky) and Seth Law (@sethlaw) discussing the crush of Q4 expectations, upcoming training opportunities, the recent updates to the OWASP Top Ten, and the impact of AI tools like XBow on application security (AppSec) consulting. The hosts discuss the shift in the OWASP Top Ten from focusing on vulnerabilities to focusing on risks, and the dual role the list now plays for both awareness/training and compliance. Shifting to recent funding of XBow, the overall consensus is that while AI tools dramatically improve process flow, scoping, and the speed of vulnerability identification for consultants, they won't replace the need for human experts for complex, bespoke systems, business logic flaws, or authorization issues. AI is commoditizing lower-level AppSec work.
In this episode Brad and Jordan sit down to discuss common web application security findings we've seen this year.Blog: https://offsec.blog/Youtube: https://www.youtube.com/@cyberthreatpovTwitter: https://x.com/cyberthreatpov Follow Spencer on social ⬇Spencer's Links: https://go.spenceralessi.com/links Work with Us: https://securit360.com | Find vulnerabilities that matter, learn about how we do internal pentesting here.
In this episode of Resilient Cyber, I sit down with longtime industry AppSec leader and Founder/CTO of Contrast Security, Jeff Williams, along with Contrast Security's Sr. Director of Product Security Naomi Buckwalter, to discuss all things Application Detection & Response (ADR), as well as the implications of AI-driven development.
Most organizations have security champions. Few have a real security culture.In this episode of AppSec Contradictions, Sean Martin explores why AppSec awareness efforts stall, why champion programs struggle to gain traction, and what leaders can do to turn intent into impact.
Tanya Janca is a globally recognized AppSec (application security) expert and founder of We Hack Purple. In this episode, she shares wild stories from the front lines of cybersecurity. She shares stories of when she was a penetration tester to an incident responder.You can sign up for her newsletter at https://newsletter.shehackspurple.ca/SponsorsSupport for this show comes from ThreatLocker®. ThreatLocker® is a Zero Trust Endpoint Protection Platform that strengthens your infrastructure from the ground up. With ThreatLocker® Allowlisting and Ringfencing™, you gain a more secure approach to blocking exploits of known and unknown vulnerabilities. ThreatLocker® provides Zero Trust control at the kernel level that enables you to allow everything you need and block everything else, including ransomware! Learn more at www.threatlocker.com.This episode is sponsored by Hims. Hims offers access to ED treatment options ranging from trusted generics that cost up to 95% less than brand names to Hard Mints, if prescribed. To get simple, online access to personalized, affordable care for ED, Hair Loss, Weight Loss, and more, visit https://hims.com/darknet.Support for this show comes from Drata. Drata is the trust management platform that uses AI-driven automation to modernize governance, risk, and compliance, helping thousands of businesses stay audit-ready and scale securely. Learn more at drata.com/darknetdiaries.View all active sponsors.Books Alice and Bob Learn Secure Coding by Tanya Janca Alice and Bob Learn Application Security by Tanya Janca
The silos between Application Security and Cloud Security are officially breaking down, and AI is the primary catalyst. In this episode, Tejas Dakve, Senior Manager, Application Security, Bloomberg Industry Group and Aditya Patel, VP of Cybersecurity Architecture discuss how the AI-driven landscape is forcing a fundamental change in how we secure our applications and infrastructure.The conversation explores why traditional security models and gates are "absolutely impossible" to maintain against the sheer speed and volume of AI-generated code . Learn why traditional threat modeling is no longer a one-time event, how the lines between AppSec and CloudSec are merging, and why the future of the industry belongs to "T-shaped engineers" with a multidisciplinary range of skills.Guest Socials - Tejas's Linkedin + Aditya's Linkedin Podcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(02:30) Who is Tejas Dakve? (AppSec)(03:40) Who is Aditya Patel? (CloudSec)(04:30) Common Use Cases for AI in Cloud & Applications(08:00) How AI Changed the Landscape for AppSec Teams(09:00) Why Traditional Security Models Don't Work for AI(11:00) AI is Breaking Down Security Silos (CloudSec & AppSec)(12:15) The "Hallucination" Problem: AI Knows Everything Until You're the Expert(12:45) The Speed & Volume of AI-Generated Code is the Real Challenge(14:30) How to Handle the AI Code Explosion? "Paved Roads"(15:45) From "Department of No" to "Department of Safe Yes"(16:30) Baking Security into the AI Lifecycle (Like DevSecOps)(18:25) Securing Agentic AI: Why IAM is More Important than the Chat(24:00) The Silo: AppSec Doesn't Have Visibility into Cloud IAM(25:00) Merging Threat Models: AppSec + CloudSec(26:20) Using New Frameworks: MITRE ATLAS & OWASP LLM Top 10(27:30) Threat Modeling Must Be a "Living & Breathing Process"(28:30) Using AI for Automated Threat Modeling(31:00) Building vs. Buying AI Security Tools(34:10) Prioritizing Vulnerabilities: Quality Over Quantity(37:20) The Rise of the "T-Shaped" Security Engineer(39:20) Building AI Governance with Cross-Functional Teams(40:10) Secure by Design for AI-Native Applications(44:10) AI Adoption Maturity: The 5 Stages of Grief(50:00) How the Security Role is Evolving with AI(55:20) Career Advice for Evolving in the Age of AI(01:00:00) Career Advice for Newcomers: Get an IT Help Desk Job(01:03:00) Fun Questions: Cats, Philanthropy, and Thai FoodResources discussed during the interview:Amazon Rufus: (Amazon's AI review summarizer) OWASP Top 10 for LLMsSTRIDE Threat Model: (Microsoft methodology) MITRE ATLASCloud Security Alliance (CSA) Maestro Framework CISA KEV (Known Exploited Vulnerabilities)Book: Range: Why Generalists Triumph in a Specialized World by David Epstein Anjali Charitable TrustAditya Patel's Blog
Organizations pour millions into protecting running applications—yet attackers are targeting the delivery path itself.This episode of AppSec Contradictions reveals why CI/CD and cloud pipelines are becoming the new frontline in cybersecurity.
Send us a textIn this candid and cathartic episode, Ken and Mike unpack the chaos that is Q4 for security professionals. From budget burnouts to end-of-year pentesting sprints, they explore why the final months of the year feel like a perfect storm for stress. Tune in as they share hard-earned lessons, practical advice for maintaining your sanity, and some gentle reminders that not everything needs to ship before Christmas. Whether you're a tired vendor, an overwhelmed engineer, or just trying to make it to PTO, this episode is for you.
Brad Geesaman, Principal Security Engineer at Ghost, joins the podcast today to explore how AI and large language models are transforming the world of application security. The discussion starts with the concept of "toil"—the repetitive, exhausting work that drains AppSec teams as they struggle to keep up with mountains of security findings and alerts. Brad shares his insights on how LLMs can provide meaningful leverage by handling the heavy lifting of triage, classification, and evidence gathering, while keeping humans firmly in the loop for final decisions. They also discuss the seismic shift happening in the AppSec market, with AI-native approaches potentially disrupting traditional security tooling. Listen along to hear more about the future of secure coding and how artificial intelligence might finally give security teams the helicopter view they need to fight fires effectively.FOLLOW OUR SOCIAL MEDIA: ➜Twitter: @AppSecPodcast➜LinkedIn: The Application Security Podcast➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast Thanks for Listening! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
International law enforcement take down the Breachforums domains. Researchers link exploitation campaigns targeting Cisco, Palo Alto Networks, and Fortinet. Juniper Networks patches over 200 vulnerabilities. Apple and Google update their bug bounties. Evaluating AI use in application security (AppSec) programs. Microsegmentation can contain ransomware much faster and yield better cyber insurance terms. The new RondoDox botnet exploits over 50 vulnerabilities. Researchers tag 13 unpatched Ivanti Endpoint Manager flaws. Our guest is Jason Manar, CISO of Kaseya, sharing his insight into how the private and public sectors can work together for national security. Hackers mistake a decoy for glory. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Today we are joined by Jason Manar, CISO of Kaseya, sharing his insight into how the private and public sectors can/must work together for national security. Selected Reading FBI takes down BreachForums portal used for Salesforce extortion (Bleeping Computer) Cisco, Fortinet, Palo Alto Networks Devices Targeted in Coordinated Campaign (SecurityWeek) Juniper Networks Patches Critical Junos Space Vulnerabilities (OffSeq) Apple Announces $2 Million Bug Bounty Reward for the Most Dangerous Exploits (WIRED) Google Launches AI Bug Bounty with $30,000 Top Reward (Infosecurity Magazine) In AI We Trust? Increasing AI Adoption in AppSec Despite Limited Oversight (Fastly) Reducing Risk: Microsegmentation Means Faster Incident Response, Lower Insurance Premiums for Organizations (Akamai) RondoDox Botnet Takes ‘Exploit Shotgun' Approach (SecurityWeek) ZDI Drops 13 Unpatched Ivanti Endpoint Manager Vulnerabilities (SecurityWeek) Pro-Russian hackers caught bragging about attack on fake water utility (The Record) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices