Application Security Weekly decrypts development for the Security Professional - exploring how to inject security into their organization’s Software Development Lifecycle (SDLC) in a fluid and transparent way; Learn the tools, techniques, and processes necessary to move at the speed of DevOps (even…
Up first, the ASW news of the week. At Black Hat 2025, Doug White interviews Ted Shorter, CTO of Keyfactor, about the quantum revolution already knocking on cybersecurity's door. They discuss the terrifying reality of quantum computing's power to break RSA and ECC encryption—the very foundations of modern digital life. With 2030 set as the deadline for transitioning away from legacy crypto, organizations face a race against time. Ted breaks down what "full crypto visibility" really means, why it's crucial to map your cryptographic assets now, and how legacy tech—from robotic sawmills to outdated hospital gear—poses serious risks. The interview explores NIST's new post-quantum algorithms, global readiness efforts, and how Keyfactor's acquisitions of InfoSec Global and Cipher Insights help companies start the quantum transition today—not tomorrow. Don't wait for the breach. Watch this and start your quantum strategy now. If digital trust is the goal, cryptography is the foundation. Segment Resources: http://www.keyfactor.com/digital-trust-digest-quantum-readiness https://www.keyfactor.com/press-releases/keyfactor-acquires-infosec-global-and-cipherinsights/ For more information about Keyfactor's latest Digital Trust Digest, please visit: https://securityweekly.com/keyfactorbh Live from BlackHat 2025 in Las Vegas, cybersecurity host Jackie McGuire sits down with Seemant Sehgal, founder of BreachLock, to unpack one of the most pressing challenges facing SOC teams today: alert fatigue—and its even more dangerous cousin, vulnerability fatigue. In this must-watch conversation, Seemant reveals how his groundbreaking approach, Adversarial Exposure Validation (AEV), flips the script on traditional defense-heavy security strategies. Instead of drowning in 10,000+ “critical” alerts, AEV pinpoints what actually matters—using Generative AI to map realistic attack paths, visualize kill chains, and identify the exact vulnerabilities that put an organization's crown jewels at risk. From his days leading cybersecurity at a major global bank to pioneering near real-time CVE validation, Seemant shares insights on scaling offensive security, improving executive buy-in, and balancing automation with human expertise. Whether you're a CISO, SOC analyst, red teamer, or security enthusiast, this interview delivers actionable strategies to fight fatigue, prioritize risks, and protect high-value assets. Key topics covered: - The truth about alert fatigue & why it's crippling SOC efficiency - How AI-driven offensive security changes the game - Visualizing kill chains to drive faster remediation - Why fixing “what matters” beats fixing “everything” - The future of AI trust, transparency, and control in cybersecurity Watch now to discover how BreachLock is redefining offensive security for the AI era. Segment Resources: https://www.breachlock.com/products/adversarial-exposure-validation/ This segment is sponsored by Breachlock. Visit https://securityweekly.com/breachlockbh to learn more about them! Show Notes: https://securityweekly.com/asw-347
In this must-see BlackHat 2025 interview, Doug White sits down with Michael Callahan, CMO at Salt Security, for a high-stakes conversation about Agentic AI, Model Context Protocol (MCP) servers, and the massive API security risks reshaping the cyber landscape. Broadcast live from the CyberRisk TV studio at Mandalay Bay, Las Vegas, the discussion pulls back the curtain on how autonomous AI agents and centralized MCP hubs could supercharge productivity—while also opening the door to unprecedented supply chain vulnerabilities. From “shadow MCP servers” to the concept of an “API fabric,” Michael explains why these threats are evolving faster than traditional security measures can keep up, and why CISOs need to act before it's too late. Viewers will get rare insight into the parallels between MCP exploitation and DNS poisoning, the hidden dangers of API sprawl, and why this new era of AI-driven communication could become a hacker's dream. Blog: https://salt.security/blog/when-ai-agents-go-rogue-what-youre-missing-in-your-mcp-security Survey Report: https://content.salt.security/AI-Agentic-Survey-2025_LP-AI-Agentic-Survey-2025.html This segment is sponsored by Salt Security. Visit https://securityweekly.com/saltbh for a free API Attack Surface Assessment! At Black Hat 2025, live from the Cyber Risk TV studio in Las Vegas, Jackie McGuire sits down with Apiiro Co-Founder & CEO Idan Plotnik to unpack the real-world impact of AI code assistants on application security, developer velocity, and cloud costs. With experience as a former Director of Engineering at Microsoft, Idan dives into what drove him to launch Apiiro — and why 75% of engineers will be using AI assistants by 2028. From 10x more vulnerabilities to skyrocketing API bloat and security blind spots, Idan breaks down research from Fortune 500 companies on how AI is accelerating both innovation and risk. What you'll learn in this interview: - Why AI coding tools are increasing code complexity and risk - The massive cost of unnecessary APIs in cloud environments - How to automate secure code without slowing down delivery - Why most CISOs fail to connect security to revenue (and how to fix it) - How Apiiro's Autofix AI Agent helps organizations auto-fix and auto-govern code risks at scale This isn't just another AI hype talk. It's a deep dive into the future of secure software delivery — with practical steps for CISOs, CTOs, and security leaders to become true business enablers. Watch till the end to hear how Apiiro is helping Fortune 500s bridge the gap between code, risk, and revenue. Apiiro AutoFix Agent. Built for Enterprise Security: https://youtu.be/f-_zrnqzYsc Deep Dive Demo: https://youtu.be/WnFmMiXiUuM This segment is sponsored by Apiiro. Be one of the first to see their new AppSec Agent in action at https://securityweekly.com/apiirobh. Is Your AI Usage a Ticking Time Bomb? In this exclusive Black Hat 2025 interview, Matt Alderman sits down with GitLab CISO Josh Lemos to unpack one of the most pressing questions in tech today: Are executives blindly racing into AI adoption without understanding the risks? Filmed live at the CyberRisk TV Studio in Las Vegas, this eye-opening conversation dives deep into: - How AI is being rapidly adopted across enterprises — with or without security buy-in - Why AI governance is no longer optional — and how to actually implement it - The truth about agentic AI, automation, and building trust in non-human identities - The role of frameworks like ISO 42001 in building AI transparency and assurance - Real-world examples of how teams are using LLMs in development, documentation & compliance Whether you're a CISO, developer, or business exec — this discussion will reshape how you think about AI governance, security, and adoption strategy in your org. Don't wait until it's too late to understand the risks. The Economics of Software Innovation: $750B+ Opportunity at a Crossroads Report: http://about.gitlab.com/software-innovation-report/ For more information about GitLab and their report, please visit: https://securityweekly.com/gitlabbh Live from Black Hat 2025 in Las Vegas, Jackie McGuire sits down with Chris Boehm, Field CTO at Zero Networks, for a high-impact conversation on microsegmentation, shadow IT, and why AI still struggles to stop lateral movement. With 15+ years of cybersecurity experience—from Microsoft to SentinelOne—Chris breaks down complex concepts like you're a precocious 8th grader (his words!) and shares real talk on why AI alone won't save your infrastructure. Learn how Zero Networks is finally making microsegmentation frictionless, how summarization is the current AI win, and what red flags to look for when evaluating AI-infused security tools. If you're a CISO, dev, or just trying to stay ahead of cloud threats—this one's for you. This segment is sponsored by Zero Networks. Visit https://securityweekly.com/zerobh to learn more about them! Show Notes: https://securityweekly.com/asw-346
The EU Cyber Resilience Act joins the long list of regulations intended to improve the security of software delivered to users. Emily Fox and Roman Zhukov share their experience education regulators on open source software and educating open source projects on security. They talk about creating a baseline for security that addresses technical items, maintaining projects, and supporting project owners so they can focus on their projects. Segment resources: github.com/ossf/wg-globalcyberpolicy github.com/orcwg baseline.openssf.org Show Notes: https://securityweekly.com/asw-345
A smaller attack surface should lead to a smaller list of CVEs to track, which in turn should lead to a smaller set of vulns that you should care about. But in practice, keeping something like a container image small has a lot of challenges in terms of what should be considered minimal. Neil Carpenter shares advice and anecdotes on what it takes to refine a container image and to change an org's expectations that every CVE needs to be fixed. Show Notes: https://securityweekly.com/asw-344
Open source software is a massive contribution that provides everything from foundational frameworks to tiny single-purpose libraries. We walk through the dimensions of trust and provenance in the software supply chain with Janet Worthington. And we discuss how even with new code generated by LLMs and new terms like slopsquatting, a lot of the most effective solutions are old techniques. Resources https://www.forrester.com/blogs/make-no-mistake-software-is-a-supply-chain-and-its-under-attack/ https://www.forrester.com/report/the-future-of-software-supply-chain-security/RES184050 Show Notes: https://securityweekly.com/asw-343
Maintaining code is a lot more than keeping dependencies up to date. It involved everything from keeping old code running to changing frameworks to even changing implementation languages. Jonathan Schneider talks about the engineering considerations of refactoring and rewriting code, why code maintenance is important to appsec, and how to build confidence that adding automation to a migration results in code that has the same workflows as before. Resources https://docs.openrewrite.org https://github.com/openrewrite Then, instead of our usual news segment, we do a deep dive on some recent vulns NVIDIA's Triton Inference Server disclosed by Trail of Bits' Will Vandevanter. Will talks about the thought process and tools that go into identify potential vulns, the analysis in determining whether they're exploitable, and the disclosure process with vendors. He makes the important point that even if something doesn't turn out to be a vuln, there's still benefit to the learning process and gaining experience in seeing the different ways that devs design software. Of course, it's also more fun when you find an exploitable vuln -- which Will did here! Resources https://nvidia.custhelp.com/app/answers/detail/a_id/5687 https://github.com/triton-inference-server/server https://blog.trailofbits.com/2025/07/31/hijacking-multi-agent-systems-in-your-pajamas/ https://blog.trailofbits.com/2025/07/28/we-built-the-security-layer-mcp-always-needed/ Show Notes: https://securityweekly.com/asw-342
A successful strategy in appsec is to build platforms with defaults and designs that ease the burden of security choices for developers. But there's an important difference between expecting (or requiring!) developers to use a platform and building a platform that developers embrace. Julia Knecht shares her experience in building platforms with an attention to developer needs, developer experience, and security requirements. She brings attention to the product management skills and feedback loops that make paved roads successful -- as well as the areas where developers may still need or choose their own alternatives. After all, the impact of a paved road isn't in its creation, it's in its adoption. Show Notes: https://securityweekly.com/asw-341
AI is more than LLMs. Machine learning algorithms have been part of infosec solutions for a long time. For appsec practitioners, a key concern is always going to be how to evaluate the security of software or a system. In some cases, it doesn't matter if a human or an LLM generated code -- the code needs to be reviewed for common flaws and design problems. But the creation of MCP servers and LLM-based agents is also adding a concern about what an unattended or autonomous piece of software is doing. Sohrob Kazerounian gives us context on how LLMs are designed, what to expect from them, and where they pose risk and reward to modern software engineering. Resources https://www.vectra.ai/research Show Notes: https://securityweekly.com/asw-340
What are some appsec basics? There's no monolithic appsec role. Broadly speaking, appsec tends to branch into engineering or compliance paths, each with different areas of focus despite having shared vocabularies and the (hopefully!) shared goal of protecting software, data, and users. The better question is, "What do you want to secure?" We discuss the Cybersecurity Skills Framework put together by the OpenSSF and the Linux Foundation and how you might prepare for one of its job families. The important basics aren't about memorizing lists or technical details, but demonstrating experience in working with technologies, understanding how they can fail, and being able to express concerns, recommendations, and curiosity about their security properties. Resources: https://cybersecurityframework.io https://owasp.org/www-project-cheat-sheets/ https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/ https://aflplus.plus/ https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/ Show Notes: https://securityweekly.com/asw-339
Appsec still deals with ancient vulns like SQL injection and XSS. And now LLMs are generating code along side humans. Sandy Carielli and Janet Worthington join us once again to discuss what all this new code means for appsec practices. On a positive note, the prevalence of those ancient vulns seems to be diminishing, but the rising use of LLMs is expanding a new (but not very different) attack surface. We look at where orgs are investing in appsec, who appsec teams are collaborating with, and whether we need security awareness training for LLMs. Resources: https://www.forrester.com/blogs/application-security-2025-yes-ai-just-made-it-harder-to-do-this-right/ Show Notes: https://securityweekly.com/asw-338
Manual secure code reviews can be tedious and time intensive if you're just going through checklists. There's plenty of room for linters and compilers and all the grep-like tools to find flaws. Louis Nyffenegger describes the steps of a successful code review process. It's a process that starts with understanding code, which can even benefit from an LLM assistant, and then applies that understanding to a search for developer patterns that lead to common mistakes like mishandling data, not enforcing a control flow, or not defending against unexpected application states. He explains how finding those kinds of more impactful bugs are rewarding for the reviewer and valuable to the code owner. It involves reading a lot of code, but Louis offers tips on how to keep notes, keep an app's context in mind, and keep code secure. Segment Resources: https://pentesterlab.com/live-training/ https://pentesterlab.com/appsecschool https://deepwiki.com https://daniel.haxx.se/blog/2025/05/29/decomplexification/ Show Notes: https://securityweekly.com/asw-337
Fuzzing has been one of the most successful ways to improve software quality. And it demonstrates how improving software quality improves security. Artur Cygan shares his experience in building and applying fuzzers to barcode scanners, smart contracts, and just about any code you can imagine. We go through the useful relationship between unit tests and fuzzing coverage, nudging fuzzers into deeper code paths, and how LLMs can help guide a fuzzer into using better inputs for its testing. Resources https://blog.trailofbits.com/2024/10/31/fuzzing-between-the-lines-in-popular-barcode-software/ https://github.com/crytic/echidna https://github.com/crytic/medusa https://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-thin-air.html Show Notes: https://securityweekly.com/asw-336
What makes a threat modeling process effective? Do you need a long list of threat actors? Do you need a long list of terms? What about a short list like STRIDE? Has an effective process ever come out of a list? Farshad Abasi joins our discussion as we explain why the answer to most of those questions is No and describe the kinds of approaches that are more conducive to useful threat models. Resources: https://www.eurekadevsecops.com/agile-devops-and-the-threat-modeling-disconnect-bridging-the-gap-with-developer-insights/ https://www.threatmodelingmanifesto.org https://kellyshortridge.com/blog/posts/security-decision-trees-with-graphviz/ In the news, learning from outage postmortems, an EchoLeak image speaks a 1,000 words from Microsoft 365 Copilot, TokenBreak attack targets tokenizing techniques, Google's layered strategy against prompt injection looks like a lot like defending against XSS, learning about code security from CodeAuditor CTF, and more! Show Notes: https://securityweekly.com/asw-335
CISA has been championing Secure by Design principles. Many of the principles are universal, like adopting MFA and having opinionated defaults that reduce the need for hardening guides. Matthew Rogers talks about how the approach to Secure by Design has to be tailored for Operational Technology (OT) systems. These systems have strict requirements on safety and many of them rely on protocols that are four (or more!) decades old. He explains how the considerations in this space go far beyond just memory safety concerns. Segment Resources: https://www.cisa.gov/sites/default/files/2025-01/joint-guide-secure-by-demand-priority-considerations-for-ot-owners-and-operators-508c_0.pdf https://www.youtube.com/watch?v=vHSXu1P4ZTo Show Notes: https://securityweekly.com/asw-334
The recent popularity of MCPs is surpassed only by the recent examples deficiencies of their secure design. The most obvious challenge is how MCPs, and many more general LLM use cases, have erased two decades of security principles behind separating code and data. We take a look at how developers are using LLMs to generate code and continue our search for where LLMs are providing value to appsec. We also consider what indicators we'd look for as signs of success. For example, are LLMs driving useful commits to overburdened open source developers? Are LLMs climbing the ranks of bug bounty platforms? In the news, more examples of prompt injection techniques against LLM features in GitLab and GitHub, the value (and tradeoffs) in rewriting code, secure design lessons from a history of iOS exploitation, checking for all the ways to root, and NIST's approach to (maybe) measuring likely exploited vulns. Show Notes: https://securityweekly.com/asw-333
ArmorCode unveils Anya—the first agentic AI virtual security champion designed specifically for AppSec and product security teams. Anya brings together conversation and context to help AppSec, developers and security teams cut through the noise, prioritize risks, and make faster, smarter decisions across code, cloud, and infrastructure. Built into the ArmorCode ASPM Platform and backed by 25B findings, 285+ integrations, natural language intelligence, and role-aware insights, Anya turns complexity into clarity, helping teams scale securely and close the security skills gap. Anya is now generally available and included as part of the ArmorCode ASPM Platform. Visit https://securityweekly.com/armorcodersac to request a demo! As 'vibe coding", the practice of using AI tools with specialized coding LLMs to develop software, is making waves, what are the implications for security teams? How can this new way of developing applications be made secure? Or have the horses already left the stable? Segment Resources: https://www.backslash.security/press-releases/backslash-security-reveals-in-new-research-that-gpt-4-1-other-popular-llms-generate-insecure-code-unless-explicitly-prompted https://www.backslash.security/blog/vibe-securing-4-1-pillars-of-appsec-for-vibe-coding This segment is sponsored by Backslash. Visit https://securityweekly.com/backslashrsac to learn more about them! The rise of AI has largely mirrored the early days of open source software. With rapid adoption amongst developers who are trying to do more with less time, unmanaged open source AI presents serious risks to organizations. Brian Fox, CTO & Co-founder of Sonatype, will dive into the risks associated with open source AI and best practices to secure it. Segment Resources: https://www.sonatype.com/solutions/open-source-ai https://www.sonatype.com/blog/beyond-open-vs.-closed-understanding-the-spectrum-of-ai-transparency https://www.sonatype.com/resources/whitepapers/modern-development-in-ai-era This segment is sponsored by Sonatype. Visit https://securityweekly.com/sonatypersac to learn more about Sonatype's AI SCA solutions! The surge in AI agents is creating a vast new cyber attack surface with Non-Human Identities (NHIs) becoming a prime target. This segment will explore how SandboxAQ's AQtive Guard Discover platform addresses this challenge by providing real-time vulnerability detection and mitigation for NHIs and cryptographic assets. We'll discuss the platform's AI-driven approach to inventory, threat detection, and automated remediation, and its crucial role in helping enterprises secure their AI-driven future. To take control of your NHI security and proactively address the escalating threats posed by AI agents, visit https://securityweekly.com/sandboxaqrsac to schedule an early deployment and risk assessment. Show Notes: https://securityweekly.com/asw-332
In the news, Coinbase deals with bribes and insider threat, the NCSC notes the cross-cutting problem of incentivizing secure design, we cover some research that notes the multitude of definitions for secure design, and discuss the new Cybersecurity Skills Framework from the OpenSSF and Linux Foundation. Then we share two more sponsored interviews from this year's RSAC Conference. With more types of identities, machines, and agents trying to access increasingly critical data and resources, across larger numbers of devices, organizations will be faced with managing this added complexity and identity sprawl. Now more than ever, organizations need to make sure security is not an afterthought, implementing comprehensive solutions for securing, managing, and governing both non-human and human identities across ecosystems at scale. This segment is sponsored by Okta. Visit https://securityweekly.com/oktarsac to learn more about them! At Mend.io, we believe that securing AI-powered applications requires more than just scanning for vulnerabilities in AI-generated code—it demands a comprehensive, enterprise-level strategy. While many AppSec vendors offer limited, point-in-time solutions focused solely on AI code, Mend.io takes a broader and more integrated approach. Our platform is designed to secure not just the code, but the full spectrum of AI components embedded within modern applications. By leveraging existing risk management strategies, processes, and tools, we uncover the unique risks that AI introduces—without forcing organizations to reinvent their workflows. Mend.io's solution ensures that AI security is embedded into the software development lifecycle, enabling teams to assess and mitigate risks proactively and at scale. Unlike isolated AI security startups, Mend.io delivers a single, unified platform that secures an organization's entire codebase—including its AI-driven elements. This approach maximizes efficiency, minimizes disruption, and empowers enterprises to embrace AI innovation with confidence and control. This segment is sponsored by Mend.io. Visit https://securityweekly.com/mendrsac to book a live demo! Show Notes: https://securityweekly.com/asw-331
Developers are relying on LLMs as coding assistants, so where are the LLM assistants for appsec? The principles behind secure code reviews don't really change based on who write the code, whether human or AI. But more code means more reasons for appsec to scale its practices and figure out how to establish trust in code, packages, and designs. Rey Bango shares his experience with secure code reviews and where developer education fits in among the adoption of LLMs. As businesses rapidly embrace SaaS and AI-powered applications at an unprecedented rate, many small-to-medium sized businesses (SMBs) struggle to keep up due to complex tech stacks and limited visibility into the skyrocketing app sprawl. These modern challenges demand a smarter, more streamlined approach to identity and access management. Learn how LastPass is reimagining access control through “Secure Access Experiences” - starting with the introduction of SaaS Monitoring capabilities designed to bring clarity to even the most chaotic environments. Secure Access Experiences - https://www.lastpass.com/solutions/secure-access This segment is sponsored by LastPass. Visit https://securityweekly.com/lastpassrsac to learn more about them! Cloud Application Detection and Response (CADR) has burst onto the scene as one of the hottest categories in security, with numerous vendors touting a variety of capabilities and making promises on how bringing detection and response to the application-level will be a game changer. In this segment, Gal Elbaz, co-founder and CTO of Oligo Security, will dive into what CADR is, who it helps, and what the future will look like for this game changing technology. Segment Resources - https://www.oligo.security/company/whyoligo To see Oligo in action, please visit https://securityweekly.com/oligorsac Show Notes: https://securityweekly.com/asw-330
We catch up on news after a week of BSidesSF and RSAC Conference. Unsurprisingly, AI in all its flavors, from agentic to gen, was inescapable. But perhaps more surprising (and more unfortunate) is how much the adoption of LLMs has increased the attack surface within orgs. The news is heavy on security issues from MCPs and a novel alignment bypass against LLMs. Not everything is genAI as we cover some secure design topics from the Airborne attack against Apple's AirPlay to more calls for companies to show how they're embracing secure design principles and practices. Apiiro CEO & Co-Founder, Idan Plotnik discusses the AI problem in AppSec. This segment is sponsored by Apiiro. Visit https://securityweekly.com/apiirorsac to learn more about them! Gen AI is being adopted faster than company's policy and data security can keep up, and as LLM's become more integrated into company systems and uses leverage more AI enabled applications, they essentially become unintentional data exfiltration points. These tools do not differentiate between what data is sensitive and proprietary and what is not. This interview will examine how the rapid adoption of Gen AI is putting sensitive company data at risk, and the data security considerations and policies organizations should implement before, if, and when their employees may seek to adopt a Gen AI tools to leverage some of their undeniable workplace benefits. Customer case studies: https://www.seclore.com/resources/customer-case-studies/ Seclore Blog: https://www.seclore.com/blog/ This segment is sponsored by Seclore. Visit https://securityweekly.com/seclorersac to learn more about them! Show Notes: https://securityweekly.com/asw-329
In this live recording from BSidesSF we explore the factors that influence a secure design, talk about how to avoid the bite of UX dragons, and why designs should put classes of vulns into dungeons. But we can't threat model a secure design forever and we can't oversimplify guidance for a design to be "more secure". Kalyani Pawar and Jack Cable join the discussion to provide advice on evaluating secure designs through examples of strong and weak designs we've seen over the years. We highlight the importance of designing systems to serve users and consider what it means to have a secure design with a poor UX. As we talk about the strategy and tactics of secure design, we share why framing this as a challenge in preventing dangerous errors can help devs make practical engineering decisions that improve appsec for everyone. Resources https://owasp.org/Top10/A042021-InsecureDesign/ https://dl.acm.org/doi/10.5555/1251421.1251435 https://www.threatmodelingmanifesto.org https://www.ietf.org/rfc/rfc9700.html https://www.cisa.gov/resources-tools/resources/secure-by-design Show Notes: https://securityweekly.com/asw-328
Secrets end up everywhere, from dev systems to CI/CD pipelines to services, certificates, and cloud environments. Vlad Matsiiako shares some of the tactics that make managing secrets more secure as we discuss the distinctions between secure architectures, good policies, and developer friendly tools. We've thankfully moved on from forced 90-day user password rotations, but that doesn't mean there isn't a place for rotating secrets. It means that the tooling and processes for ephemeral secrets should be based on secure, efficient mechanisms rather than putting all the burden on users. And it also means that managing secrets shouldn't become an unmanaged risk with new attack surfaces or new points of failure. Segment Resources: https://infisical.com/blog/solving-secret-zero-problem https://infisical.com/blog/gitops-secrets-management Show Notes: https://securityweekly.com/asw-327
The breaches will continue until appsec improves. Janet Worthington and Sandy Carielli share their latest research on breaches from 2024, WAFs in 2025, and where secure by design fits into all this. WAFs are delivering value in a way that orgs are relying on them more for bot management and fraud detection. But adopting phishing-resistant authentication solutions like passkeys and deploying WAFs still seem peripheral to secure by design principles. We discuss what's necessary for establishing a secure environment and why so many orgs still look to tools. And with LLMs writing so much code, we continue to look for ways LLMs can help appsec in addition to all the ways LLMs keep recreating appsec problems. Resources https://www.forrester.com/blogs/breaches-and-lawsuits-and-fines-oh-my-what-we-learned-the-hard-way-from-2024/ https://www.forrester.com/blogs/wafs-are-now-the-center-of-application-protection-suites/ https://www.forrester.com/blogs/are-you-making-these-devsecops-mistakes-the-four-phases-you-need-to-know-before-your-code-becomes-your-vulnerability/ In the news, crates.io logging mistake shows the errors of missing redactions, LLMs give us slopsquatting as a variation on typosquatting, CaMeL kicks sand on prompt injection attacks, using NTLM flaws as lessons for authentication designs, tradeoffs between containers and WebAssembly, research gaps in the world of Programmable Logic Controllers, and more! Show Notes: https://securityweekly.com/asw-326
We have a top ten list entry for Insecure Design, pledges to CISA's Secure by Design principles, and tons of CVEs that fall into familiar categories of flaws. But what does it mean to have a secure design and how do we get there? There are plenty of secure practices that orgs should implement are supply chains, authentication, and the SDLC. Those practices address important areas of risk, but only indirectly influence a secure design. We look at tactics from coding styles to design councils as we search for guidance that makes software more secure. Segment resources https://owasp.org/Top10/A042021-InsecureDesign/ https://www.cisa.gov/securebydesign/pledge https://www.cisa.gov/securebydesign https://kccnceu2025.sched.com/event/1xBJR/keynote-rust-in-the-linux-kernel-a-new-era-for-cloud-native-performance-and-security-greg-kroah-hartman-linux-kernel-maintainer-fellow-the-linux-foundation https://newsletter.pragmaticengineer.com/p/how-linux-is-built-with-greg-kroah https://daniel.haxx.se/blog/2025/04/07/writing-c-for-curl/ Show Notes: https://securityweekly.com/asw-325
We take advantage of April Fools to look at some of appsec's myths, mistakes, and behaviors that lead to bad practices. It's easy to get trapped in a status quo of chasing CVEs or discussing which direction to shift security. But scrutinizing decimal points in CVSS scores or rearranging tools misses the opportunity for more strategic thinking. We satirize some worst practices in order to have a more serious discussion about a future where more software is based on secure designs. Segment resources: https://bsidessf2025.sched.com/event/1x8ST/secure-designs-ux-dragons-vuln-dungeons-application-security-weekly https://bsidessf2025.sched.com/event/1x8TU/preparing-for-dragons-dont-sharpen-swords-set-traps-gather-supplies https://www.rfc-editor.org/rfc/rfc3514.html https://www.rfc-editor.org/rfc/rfc1149.html Show Notes: https://securityweekly.com/asw-324
LLMs are helping devs write code, but is it secure code? How are LLMs helping appsec teams? Keith Hoodlet returns to talk about where he's seen value from genAI, where it fits in with tools like source code analysis and fuzzers, and where its limitations mean we'll be relying on humans for a while. Those limitations don't mean appsec should dismiss LLMs as a tool. It means appsec should understand how things like context windows might limit a tool's security analysis to a few files, leaving a security architecture review to humans. Segment resources: https://securing.dev/posts/ai-security-reasoning-and-bias/ https://seclists.org/dailydave/2025/q1/0 https://arxiv.org/pdf/2409.16165 https://arxiv.org/pdf/2410.05229 https://nicholas.carlini.com/writing/2025/thoughts-on-future-ai.html Show Notes: https://securityweekly.com/asw-323
The crypto world is rife with smart contracts that have been outsmarted by attackers, with consequences in the millions of dollars (and more!). Shashank shares his research into scanning contracts for flaws, how the classes of contract flaws have changed in the last few years, and how optimistic we can be about the future of this space. Segment Resources: https://scs.owasp.org https://scs.owasp.org/sctop10/ https://solidityscan.com/web3hackhub https://www.web3isgoinggreat.com Show Notes: https://securityweekly.com/asw-322
Skype hangs up for good, over a million cheap Android devices may be backdoored, parallels between jailbreak research and XSS, impersonating AirTags, network reconnaissance via a memory disclosure vuln in the GFW, and more! Show Notes: https://securityweekly.com/asw-321
Just three months into 2025 and we already have several hundred CVEs for XSS and SQL injection. Appsec has known about these vulns since the late 90s. Common defenses have been known since the early 2000s. Jack Cable talks about CISA's Secure by Design principles and how they're trying to refocus businesses on addressing vuln classes and prioritizing software quality -- with security one of those important dimensions of quality. Segment Resources: https://www.cisa.gov/securebydesign https://www.cisa.gov/securebydesign/pledge https://www.cisa.gov/resources-tools/resources/product-security-bad-practices https://www.lawfaremedia.org/projects-series/reviews-essays/security-by-design https://corridor.dev Show Notes: https://securityweekly.com/asw-321
Google replacing SMS with QR codes for authentication, MS pulls a VSCode extension due to red flags, threat modeling with TRAIL, threat modeling the Bybit hack, malicious models and malicious AMIs, and more! Show Notes: https://securityweekly.com/asw-320
Curl and libcurl are everywhere. Not only has the project maintained success for almost three decades now, but it's done that while being written in C. Daniel Stenberg talks about the challenges in dealing with appsec, the design philosophies that keep it secure, and fostering a community to create one of the most recognizable open source projects in the world. Segment Resources: https://daniel.haxx.se/blog/2025/01/23/cvss-is-dead-to-us/ https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/ https://thenewstack.io/curls-daniel-stenberg-on-securing-180000-lines-of-c-code/ Show Notes: https://securityweekly.com/asw-320
Applying forgivable vs. unforgivable criteria to reDoS vulns, what backdoors in LLMs mean for trust in building software, considering some secure AI architectures to minimize prompt injection impact, developer reactions to Rust, and more! Show Notes: https://securityweekly.com/asw-319
Minimizing latency, increasing performance, and reducing compile times are just a part of what makes a development environment better. Throw in useful tests and some useful security tools and you have an even better environment. Dan Moore talks about what motivates some developers to prefer a "local first" approach as we walk through what all of this means for security. Show Notes: https://securityweekly.com/asw-319
We're getting close to two full decades of celebrating web hacking techniques. James Kettle shares which was his favorite, why the list is important to the web hacking community, and what inspires the kind of research that makes it onto the list. We discuss why we keep seeing eternal flaws like XSS and SQL injection making these lists year after year and how clever research is still finding new attack surfaces in old technologies. But there's a lot of new web technology still to be examined, from HTTP/2 and HTTP/3 to WebAssembly. Segment Resources: Top 10, 2024: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024 Full nomination list: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024-nominations-open Project overview: https://portswigger.net/research/top-10-web-hacking-techniques Show Notes: https://securityweekly.com/asw-318
Identifying and eradicating unforgivable vulns, an unforgivable flaw (and a few others) in DeepSeek's iOS app, academics and industry looking to standardize principles and practices for memory safety, and more! Show Notes: https://securityweekly.com/asw-317
Code scanning is one of the oldest appsec practices. In many cases, simple grep patterns and some fancy regular expressions are enough to find many of the obvious software mistakes. Scott Norberg shares his experience with encountering code scanners that didn't find the .NET vuln classes he needed to find and why that led him to creating a scanner from scratch. We talk about some challenges in testing tools, making smart investments in engineering time, and why working with .NET's compiler made his decisions easier. Segment Resources: -https://github.com/ScottNorberg-NCG/CodeSheriff.NET Show Notes: https://securityweekly.com/asw-317
Speculative data flow attacks demonstrated against Apple chips with SLAP and FLOP, the design and implementation choices that led to OCSP's demise, an appsec angle on AI, updating the threat model and recommendations for implementing OAuth 2.0, and more! Show Notes: https://securityweekly.com/asw-316
Threat modeling has been in the appsec toolbox for decades. But it hasn't always been used and it hasn't always been useful. Sandy Carielli shares what she's learned from talking to orgs about what's been successful, and what's failed, when they've approached this practice. Akira Brand joins to talk about her direct experience with building threat models with developers. Show Notes: https://securityweekly.com/asw-316
An open source security project forks in response to license changes (and an echo of how we've been here before), car hacking via spectacularly insecure web apps, hacking a synth via spectacularly cool MIDI messages, cookie parsing problems, the RANsacked paper of 100+ LTE/5G vulns found from fuzzing, and more! Show Notes: https://securityweekly.com/asw-315
A lot of AI security boils down to the boring, but important, software security topics that appsec teams have been dealing with for decades. Niv Braun explains the distinctions between AI-related and AI-specific security as we avoid the FUD and hype of genAI to figure out where appsec teams can invest their time. He notes that data scientists have been working with ML and sensitive data sets for a long time, and it's good to have more scrutiny on what controls should be present to protect that data. This segment is sponsored by Noma Security. Visit https://securityweekly.com/noma to learn more about them! Show Notes: https://securityweekly.com/asw-315
What's in store for appsec in 2025? Sure, there'll be some XSS and SQL injection, but what about trends that might influence how appsec teams plan? Cody Scott shares five cybersecurity and privacy predictions and we take a deep dive into three of them. We talk about finding value to appsec from AI, why IoT and OT need both programmatic and technical changes, and what the implications of the next XZ Utils attack might be. Segment resources: https://www.forrester.com/blogs/predictions-2025-cybersecurity-risk-privacy/ Show Notes: https://securityweekly.com/asw-314
Design lessons from PyPI's Quarantine capability, effective ways for appsec to approach phishing, why fishshell is moving to Rust component by component (and why that's a good thing!), what behaviors the Cyber Trust Mark might influence, and more! Show Notes: https://securityweekly.com/asw-313
There's a pernicious myth that developers don't care about security. In practice, they care about code quality. What developers don't care for is ambiguous requirements. Ixchel Ruiz shares her experience is discussing software designs, the challenges in prioritizing dev efforts, and how to help open source project maintainers with their issue backlog. Segment resources: https://github.com/ossf/scorecard https://www.commonhaus.org/ https://www.hackergarten.net/ Show Notes: https://securityweekly.com/asw-313