POPULARITY
Segurança em aplicações não é coisa de outro mundo. Neste episódio do Kubicast, recebemos André Esteves e Matheus Farias, duas feras do iFood que vivem o dia a dia da Application Security (AppSec) na veia! Com muito bom humor e bastante casca de produção, eles compartilham a rotina, os desafios e os aprendizados de quem realmente coloca a mão na massa para proteger sistemas em larga escala.A conversa vai de OWASP Top 10 à política de travamento de PRs, passando por burp suite, cultura dev, roles de segurança, hardening de imagens base com zero CVEs e o papel crucial dos soft skills para quem quer entrar na área. Se você acha que segurança é só sobre hacker de hoodie e terminal verde piscando, esse papo vai te mostrar a real!Links Importantes:- Andre Esteves - https://www.linkedin.com/in/andreestevespaiva/- Matheus Farias - https://www.linkedin.com/in/eu-matheus-farias-devsecops/- João Brito - https://www.linkedin.com/in/juniorjbn- Assista ao FilmeTEArapia - https://youtu.be/M4QFmW_HZh0?si=HIXBDWZJ8yPbpflMParticipe de nosso programa de acesso antecipado e tenha um ambiente mais seguro em instantes!https://getup.io/zerocveO Kubicast é uma produção da Getup, empresa especialista em Kubernetes e projetos open source para Kubernetes. Os episódios do podcast estão nas principais plataformas de áudio digital e no YouTube.com/@getupcloud.
In this On Location episode during OWASP AppSec Global 2025 in Barcelona, Maria Mora, Staff Application Security Engineer and active OWASP lifetime member, shares how her experience at the OWASP AppSec Global conference in Barcelona has reaffirmed the power of community in security. While many attendees chase back-to-back talks and technical training, Maria highlights something often overlooked—connection. Whether at the member lounge ping-pong table, during late-night beach meetups, or over keynote reflections, it's the relationships and shared purpose that make this event resonate.Maria emphasizes how her own journey into OWASP began with uncertainty but evolved into a meaningful path of participation. Through volunteering, serving on the events committee, and mentoring others, she has expanded not only her technical toolkit but also her ability to collaborate and communicate—skills she notes are essential in InfoSec but rarely prioritized. By stepping into the OWASP community, she's learned that you don't need decades of experience to contribute—just a willingness to start.Keynotes and sessions this year reinforced a similar message: security isn't just about hard skills. It's about bridging academia and industry, engaging first-time attendees, and creating welcoming spaces where no one feels like an outsider. Talks like Sarah Jané's encouraged attendees to find their own ways to give back, whether by submitting to the call for papers, helping with logistics, or simply sparking hallway conversations.Maria also points to how OWASP structures participation to make it accessible. Through demo rooms, project hubs, and informal lounge chats, attendees find ways to contribute to global initiatives like the OWASP Top 10 or volunteer-led trainings. Whether it's your first conference or your tenth, there's always room to jump in.For Maria, OWASP no longer feels like a secret club—it's a growing, open collective focused on helping people bring their best selves to security. That's the power of community: not just lifting up software, but lifting up each other.And for those thinking of taking the next step, Maria reminds us that the call for papers for OWASP DC is open through June 24th. As she puts it, “We all have something valuable to share—sometimes you just need the nudge to start.”GUEST: Maria Mora | Staff Application Security Engineer and OWASP events committee member | https://www.linkedin.com/in/riamaria/HOST: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | https://www.seanmartin.comSPONSORSManicode Security: https://itspm.ag/manicode-security-7q8iRESOURCESLearn more and catch more stories from OWASP AppSec Global 2025 Barcelona coverage: https://www.itspmagazine.com/owasp-global-appsec-barcelona-2025-application-security-event-coverage-in-catalunya-spainCatch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to tell your Brand Story Briefing as part of our event coverage? Learn More
In Episode 3 of The Due Diligence Show (Season 2), Adam and Luke get candid about the AI skills shift that's already reshaping the workplace. From drop-in replacements for white-collar workers to agentic AI that codes entire apps solo, this episode unpacks how job roles are being augmented — or outright redefined. Then the pair dives into the growing concern around AI security and trust. Luke walks us through the OWASP Top 10 AI risks — from prompt injection to model theft — and Adam probes whether companies are moving too fast with too little oversight. If you're a founder, tech lead, CISO, or just wondering how secure your AI tools really are, this one's a must-listen.
In this On Location episode during OWASP AppSec Global 2025 in Barcelona, Starr Brown, Director of Open Source Projects and Programs at OWASP, unpacks the real engine behind the organization's impact: the projects and the people driving them forward.With over 130 active projects, OWASP continues to expand its open source contributions to improve software security across the board. While the OWASP Top 10 remains its most recognized initiative, Starr points out that it's just one among many. Other significant projects include the Application Security Verification Standard (ASVS), the Software Assurance Maturity Model (SAMM), and the increasingly popular security games like Cornucopia, which use gamification to bring security concepts into business conversations and development workflows.AI is playing an increasingly prominent role in OWASP's work. Starr highlights the GenAI Security Project as a focal point, encompassing tools and guidance for LLM use, agentic AI, red teaming, and more. The scale of community engagement is equally impressive: around 33,000 people are active on Slack, and hundreds contribute to individual initiatives, reflecting the organization's truly global and grassroots structure.Beyond tools and documentation, OWASP is influencing regulation and policy through initiatives like the AI Exchange and the Transparency Exchange. These projects connect with government entities and standards bodies such as the European Commission and CEN/CENELEC to help shape responsible governance frameworks around software, AI, and cybersecurity.Listeners also get a glimpse into what's ahead. From upcoming events in Washington, D.C., to the OWASP Community Room at DEF CON in Las Vegas, the goal is to keep fostering connections and hands-on engagement. These gatherings not only showcase flagship tools and frameworks but create space for open dialogue, prototyping, and collaboration—whether you're breaking things or building them.To get involved, Starr encourages exploring the OWASP Projects page and joining their Slack community. The conversation makes it clear: OWASP is not just a collection of tools—it's a living, breathing network of contributors shaping the future of secure software.GUEST: Starr Brown | Director of Open Source Projects and Programs at OWASP | https://www.linkedin.com/in/starr-brown-8837547/HOST: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | https://www.seanmartin.comSPONSORSManicode Security: https://itspm.ag/manicode-security-7q8iRESOURCESLearn more and catch more stories from OWASP AppSec Global 2025 Barcelona coverage: https://www.itspmagazine.com/owasp-global-appsec-barcelona-2025-application-security-event-coverage-in-catalunya-spainCatch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to tell your Brand Story Briefing as part of our event coverage? Learn More
Parce que… c'est l'épisode 0x586! Shameless plug 10 au 18 mai 2025 - NorthSec 12 au 17 octobre 2025 - Objective by the sea v8 10 au 12 novembre 2025 - IAQ - Le Rendez-vous IA Québec 17 au 20 novembre 2025 - European Cyber Week 25 et 26 février 2026 - SéQCure 2065 Description Notes À venir Collaborateurs Nicolas-Loïc Fortin Gabriel T. St-Hilaire Crédits Montage par Intrasecure inc Locaux réels par Cybereco
Please enjoy this encore episode of Word Notes. A broad OWASP Top 10 software development category representing missing, ineffective, or unforeseen security measures. CyberWire Glossary link: https://thecyberwire.com/glossary/owasp-insecure-design Audio reference link: “Oceans Eleven Problem Constraints Assumptions.” by Steve Jones, YouTube, 4 November 2015.
Please enjoy this encore episode of Word Notes. A broad OWASP Top 10 software development category representing missing, ineffective, or unforeseen security measures. CyberWire Glossary link: https://thecyberwire.com/glossary/owasp-insecure-design Audio reference link: “Oceans Eleven Problem Constraints Assumptions.” by Steve Jones, YouTube, 4 November 2015. Learn more about your ad choices. Visit megaphone.fm/adchoices
In a conversation that sets the tone for this year's RSA Conference, Steve Wilson, shares a candid look at how AI is intersecting with cybersecurity in real and measurable ways. Wilson, who also leads the OWASP Top 10 for Large Language Models project and recently authored a book published by O'Reilly on the topic, brings a multi-layered perspective to a discussion that blends strategy, technology, and organizational behavior.Wilson's session title at RSA Conference—“Are the Machines Learning, or Are We?”—asks a timely question. Security teams are inundated with data, but without meaningful visibility—defined not just as seeing, but understanding and acting on what you see—confidence in defense capabilities may be misplaced. Wilson references a study conducted with IDC that highlights this very disconnect: organizations feel secure, yet admit they can't see enough of their environment to justify that confidence.This episode tackles one of the core paradoxes of AI in cybersecurity: it offers the promise of enhanced detection, speed, and insight, but only if applied thoughtfully. Generative AI and large language models (LLMs) aren't magical fixes, and they struggle with large datasets. But when layered atop refined systems like user and entity behavior analytics (UEBA), they can help junior analysts punch above their weight—or even automate early-stage investigations.Wilson doesn't stop at the tools. He zooms out to the business implications, where visibility, talent shortages, and tech complexity converge. He challenges security leaders to rethink what visibility truly means and to recognize the mounting noise problem. The industry is chasing 40% more CVEs year over year—an unsustainable growth curve that demands better signal-to-noise filtering.At its heart, the episode raises important strategic questions: Are businesses merely offloading thinking to machines? Or are they learning how to apply these technologies to think more clearly, act more decisively, and structure teams differently?Whether you're building a SOC strategy, rethinking tooling, or just navigating the AI hype cycle, this conversation with Steve Wilson offers grounded insights with real implications for today—and tomorrow.
⬥GUEST⬥Ken Huang, Co-Chair, AI Safety Working Groups at Cloud Security Alliance | On LinkedIn: https://www.linkedin.com/in/kenhuang8/⬥HOST⬥Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com⬥EPISODE NOTES⬥In this episode of Redefining CyberSecurity, host Sean Martin speaks with Ken Huang, Co-Chair of the Cloud Security Alliance (CSA) AI Working Group and author of several books including Generative AI Security and the upcoming Agent AI: Theory and Practice. The conversation centers on what agentic AI is, how it is being implemented, and what security, development, and business leaders need to consider as adoption grows.Agentic AI refers to systems that can autonomously plan, execute, and adapt tasks using large language models (LLMs) and integrated tools. Unlike traditional chatbots, agentic systems handle multi-step workflows, delegate tasks to specialized agents, and dynamically respond to inputs using tools like vector databases or APIs. This creates new possibilities for business automation but also introduces complex security and governance challenges.Practical Applications and Emerging Use CasesKen outlines current use cases where agentic AI is being applied: startups using agentic models to support scientific research, enterprise tools like Salesforce's AgentForce automating workflows, and internal chatbots acting as co-workers by tapping into proprietary data. As agentic AI matures, these systems may manage travel bookings, orchestrate ticketing operations, or even assist in robotic engineering—all with minimal human intervention.Implications for Development and Security TeamsDevelopment teams adopting agentic AI frameworks—such as AutoGen or CrewAI—must recognize that most do not come with out-of-the-box security controls. Ken emphasizes the need for SDKs that add authentication, monitoring, and access controls. For IT and security operations, agentic systems challenge traditional boundaries; agents often span across cloud environments, demanding a zero-trust mindset and dynamic policy enforcement.Security leaders are urged to rethink their programs. Agentic systems must be validated for accuracy, reliability, and risk—especially when multiple agents operate together. Threat modeling and continuous risk assessment are no longer optional. Enterprises are encouraged to start small: deploy a single-agent system, understand the workflow, validate security controls, and scale as needed.The Call for Collaboration and Mindset ShiftAgentic AI isn't just a technological shift—it requires a cultural one. Huang recommends cross-functional engagement and alignment with working groups at CSA, OWASP, and other communities to build resilient frameworks and avoid duplicated effort. Zero Trust becomes more than an architecture—it becomes a guiding principle for how agentic AI is developed, deployed, and defended.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥BOOK | Generative AI Security: https://link.springer.com/book/10.1007/978-3-031-54252-7BOOK | Agentic AI: Theories and Practices, to be published August by Springer: https://link.springer.com/book/9783031900259BOOK | The Handbook of CAIO (with a business focus): https://www.amazon.com/Handbook-Chief-AI-Officers-Revolution/dp/B0DFYNXGMRMore books at Amazon, including books published by Cambridge University Press and John Wiley, etc.: https://www.amazon.com/stores/Ken-Huang/author/B0D3J7L7GNVideo Course Mentioned During this Episode: "Generative AI for Cybersecurity" video course by EC-Council with 255 people rated averaged 5 starts: https://codered.eccouncil.org/course/generative-ai-for-cybersecurity-course?logged=falsePodcast: The 2025 OWASP Top 10 for LLMs: What's Changed and Why It Matters | A Conversation with Sandy Dunn and Rock Lambros⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity Podcast:
⬥GUEST⬥Ken Huang, Co-Chair, AI Safety Working Groups at Cloud Security Alliance | On LinkedIn: https://www.linkedin.com/in/kenhuang8/⬥HOST⬥Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com⬥EPISODE NOTES⬥In this episode of Redefining CyberSecurity, host Sean Martin speaks with Ken Huang, Co-Chair of the Cloud Security Alliance (CSA) AI Working Group and author of several books including Generative AI Security and the upcoming Agent AI: Theory and Practice. The conversation centers on what agentic AI is, how it is being implemented, and what security, development, and business leaders need to consider as adoption grows.Agentic AI refers to systems that can autonomously plan, execute, and adapt tasks using large language models (LLMs) and integrated tools. Unlike traditional chatbots, agentic systems handle multi-step workflows, delegate tasks to specialized agents, and dynamically respond to inputs using tools like vector databases or APIs. This creates new possibilities for business automation but also introduces complex security and governance challenges.Practical Applications and Emerging Use CasesKen outlines current use cases where agentic AI is being applied: startups using agentic models to support scientific research, enterprise tools like Salesforce's AgentForce automating workflows, and internal chatbots acting as co-workers by tapping into proprietary data. As agentic AI matures, these systems may manage travel bookings, orchestrate ticketing operations, or even assist in robotic engineering—all with minimal human intervention.Implications for Development and Security TeamsDevelopment teams adopting agentic AI frameworks—such as AutoGen or CrewAI—must recognize that most do not come with out-of-the-box security controls. Ken emphasizes the need for SDKs that add authentication, monitoring, and access controls. For IT and security operations, agentic systems challenge traditional boundaries; agents often span across cloud environments, demanding a zero-trust mindset and dynamic policy enforcement.Security leaders are urged to rethink their programs. Agentic systems must be validated for accuracy, reliability, and risk—especially when multiple agents operate together. Threat modeling and continuous risk assessment are no longer optional. Enterprises are encouraged to start small: deploy a single-agent system, understand the workflow, validate security controls, and scale as needed.The Call for Collaboration and Mindset ShiftAgentic AI isn't just a technological shift—it requires a cultural one. Huang recommends cross-functional engagement and alignment with working groups at CSA, OWASP, and other communities to build resilient frameworks and avoid duplicated effort. Zero Trust becomes more than an architecture—it becomes a guiding principle for how agentic AI is developed, deployed, and defended.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥BOOK | Generative AI Security: https://link.springer.com/book/10.1007/978-3-031-54252-7BOOK | Agentic AI: Theories and Practices, to be published August by Springer: https://link.springer.com/book/9783031900259BOOK | The Handbook of CAIO (with a business focus): https://www.amazon.com/Handbook-Chief-AI-Officers-Revolution/dp/B0DFYNXGMRMore books at Amazon, including books published by Cambridge University Press and John Wiley, etc.: https://www.amazon.com/stores/Ken-Huang/author/B0D3J7L7GNVideo Course Mentioned During this Episode: "Generative AI for Cybersecurity" video course by EC-Council with 255 people rated averaged 5 starts: https://codered.eccouncil.org/course/generative-ai-for-cybersecurity-course?logged=falsePodcast: The 2025 OWASP Top 10 for LLMs: What's Changed and Why It Matters | A Conversation with Sandy Dunn and Rock Lambros⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity Podcast:
In this episode of CISO Tradecraft, G. Mark Hardy dives deep into the world of Agentic AI and its impact on cybersecurity. The discussion covers the definition and characteristics of Agentic AI, as well as expert insights on its feasibility. Learn about its primary functions—perception, cognition, and action—and explore practical cybersecurity applications. Discover the rapid advancements made by tech giants and potential risks involved. This episode is a comprehensive guide to understanding and securely implementing Agentic AI in your enterprise. Transcripts https://docs.google.com/document/d/1tIv2NKX0DL4NTnvqKV9rKrgrewa68m3W References Vladimir Putin - https://www.rt.com/news/401731-ai-rule-world-putin/ Minds and Machines - https://link.springer.com/article/10.1007/s44163-024-00216-2 Anthropic - https://www.cnbc.com/2024/10/22/anthropic-announces-ai-agents-for-complex-tasks-racing-openai.html Convergence AI - https://convergence.ai/training-web-agents-with-web-world-models-dec-2024/ OpenAI Operator - https://openai.com/index/introducing-operator/ ByteDance UITARS - https://venturebeat.com/ai/bytedances-ui-tars-can-take-over-your-computer-outperforms-gpt-4o-and-claude/ Zapier - https://www.linkedin.com/pulse/openai-bytedance-zapier-launch-ai-agents-getcoai-l6blf/ Microsoft OmniParser - https://www.microsoft.com/en-us/research/articles/omniparser-v2-turning-any-llm-into-a-computer-use-agent/ Google Project Mariner - https://deepmind.google/technologies/project-mariner/ Rajeev Sharma - Agentic AI Architecture - https://markovate.com/blog/agentic-ai-architecture/ NIST.AI.600-1 - https://doi.org/10.6028/NIST.AI.600-1 Mitre ATLAS - https://atlas.mitre.org/ OWASP Top 10 for LLMs - https://owasp.org/www-project-top-10-for-large-language-model-applications/ ISO 42001 - https://www.iso.org/standard/81230.html Chapters 00:00 Introduction and Intriguing Quote 01:10 Defining Agentic AI 02:01 Expert Insights on Agency 04:32 Agentic AI in Practice 06:54 Recent Developments in Agentic AI 08:20 Deep Dive into Agentic AI Infrastructure 15:35 Use Cases for Agentic AI 21:12 Challenges and Considerations 24:22 Conclusion and Recap
Organizations build and deploy applications at an unprecedented pace, but security is often an afterthought. This episode of ITSPmagazine's Brand Story features Jim Manico, founder of Manicode Security, in conversation with hosts Sean Martin and Marco Ciappelli. The discussion explores the current state of application security, the importance of developer training, and how organizations can integrate security from the ground up to drive better business outcomes.The Foundation of Secure DevelopmentJim Manico has spent decades helping engineers and architects understand and implement secure coding practices. His work with the Open Web Application Security Project (OWASP), including contributions to the OWASP Top 10 and the OWASP Cheat Sheet Series, has influenced how security is approached in software development. He emphasizes that security should not be an afterthought but a fundamental part of the development process.He highlights OWASP's role in providing documentation, security tools, and standards like the Application Security Verification Standard (ASVS), which is now in its 5.0 release. These resources help organizations build secure applications, but Manico points out that simply having the guidance available isn't enough—engineers need the right training to apply security principles effectively.Why Training MattersManico has trained thousands of engineers worldwide and sees firsthand the impact of hands-on education. He explains that developers often lack formal security training, which leads to common mistakes such as insecure authentication, improper data handling, and vulnerabilities in third-party dependencies. His training programs focus on practical, real-world applications, allowing developers to immediately integrate security into their work.Security training also helps businesses beyond just compliance. While some companies initially engage in training to meet regulatory requirements, many realize the long-term value of security in reducing risk, improving product quality, and building customer trust. Manico shares an example of a startup that embedded security from the beginning, investing heavily in training early on. That approach helped differentiate them in the market and contributed to their success as a multi-billion-dollar company.The Role of AI and Continuous LearningManico acknowledges that the speed of technological change presents challenges for security training. Frameworks, programming languages, and attack techniques evolve constantly, requiring continuous learning. He has integrated AI tools into his training workflow to help answer complex questions, identify knowledge gaps, and refine content. AI serves as an augmentation tool, not a replacement, and he encourages developers to use it as an assistant to strengthen their understanding of security concepts.Security as a Business EnablerThe conversation reinforces that secure coding is not just about avoiding breaches—it is about building better software. Organizations that prioritize security early can reduce costs, improve reliability, and increase customer confidence. Manico's approach to education is about empowering developers to think beyond compliance and see security as a critical component of software quality and business success.For organizations looking to enhance their security posture, developer training is an investment that pays off. Manicode Security offers customized training programs to meet the specific needs of teams, covering topics from secure coding fundamentals to advanced application security techniques. To learn more or schedule a session, Jim Manico can be reached at Jim@manicode.com.Tune in to the full episode to hear more insights from Jim Manico on how security training is shaping the future of application security.Learn more about Manicode: https://itspm.ag/manicode-security-7q8iNote: This story contains promotional content. Learn more.Guest: Jim Manico, Founder and Secure Coding Educator at Manicode Security | On Linkedin: https://www.linkedin.com/in/jmanico/ResourcesDownload the Course Catalog: https://itspm.ag/manicode-x684Learn more and catch more stories from Manicode Security: https://www.itspmagazine.com/directory/manicode-securityAre you interested in telling your story?https://www.itspmagazine.com/telling-your-story
Henrik Plate joins us to discuss the OWASP Top 10 Open Source Risks, a guide highlighting critical security and operational challenges in using open source dependencies. The list includes risks like known vulnerabilities, compromised legitimate packages, name confusion attacks, and unmaintained software, providing developers and organizations a framework to assess and mitigate potential threats. Henrik offers insights on how developers and AppSec professionals can implement the guidelines. Our discussion also includes the need for a dedicated open-source risk list, and the importance of addressing known vulnerabilities, unmaintained projects, immature software, and more. The OWASP Top 10 Open Source Risks FOLLOW OUR SOCIAL MEDIA: ➜Twitter: @AppSecPodcast➜LinkedIn: The Application Security Podcast➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast Thanks for Listening! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Forecast = Ransomware storms surge with an 87% spike in industrial attacks—brace for ICS strikes from GRAPHITE and BAUXITE! Infostealers hit healthcare and education, while VPN vulnerabilities pour in—grab your digital umbrella! It's report season and today the crew kicks things off with a breakdown of Veracode's State of Software Security 2025 Report, highlighting significant improvements in OWASP Top 10 pass rates but also noting concerning trends in high-severity flaws and security debt. Next, we take a peek at Dragos's 2025 OT/ICS Cybersecurity Report, which reveals an increase in ransomware attacks against industrial organizations and the emergence of new threat groups like GRAPHITE and BAUXITE. The report also details the evolution of malware targeting critical infrastructure, such as Fuxnet and FrostyGoop. The Huntress 2025 Cyber Threat Report is then discussed, showcasing the dominance of infostealers and malicious scripts in the threat landscape, with healthcare and education sectors being prime targets. The report also highlights the shift in ransomware tactics towards data theft and extortion. The team also quickly covers a recent and _massive_ $1.5 billion Ethereum heist. We *FINALLY* cover some recent findings from Censys, including their innovative approach to discovering non-standard port usage in Industrial Control System protocols. This segment also touches on the growing threat posed by vulnerabilities in edge security products. We also *FINALLY* get around to checking out VulnCheck's research, including an analysis of Black Basta ransomware group's tactics based on leaked chat logs, and their efforts to automate Stakeholder Specific Vulnerability Categorization (SSVC) for more effective vulnerability prioritization. The episode wraps up with mentions of GreyNoise's latest reports on mass internet exploitation and a newly discovered DDoS botnet, providing listeners with a well-rounded view of the current cybersecurity landscape. Storm Watch Homepage >> Learn more about GreyNoise >>
⬥GUESTS⬥Sandy Dunn, Consultant Artificial Intelligence & Cybersecurity, Adjunct Professor Institute for Pervasive Security Boise State University | On Linkedin: https://www.linkedin.com/in/sandydunnciso/Rock Lambros, CEO and founder of RockCyber | On LinkedIn | https://www.linkedin.com/in/rocklambros/Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martinView This Show's Sponsors⬥EPISODE NOTES⬥The rise of large language models (LLMs) has reshaped industries, bringing both opportunities and risks. The latest OWASP Top 10 for LLMs aims to help organizations understand and mitigate these risks. In a recent episode of Redefining Cybersecurity, host Sean Martin sat down with Sandy Dunn and Rock Lambros to discuss the latest updates to this essential security framework.The OWASP Top 10 for LLMs: What It Is and Why It MattersOWASP has long been a trusted source for security best practices, and its LLM-specific Top 10 is designed to guide organizations in identifying and addressing key vulnerabilities in AI-driven applications. This initiative has rapidly gained traction, becoming a reference point for AI security governance, testing, and implementation. Organizations developing or integrating AI solutions are now evaluating their security posture against this list, ensuring safer deployment of LLM technologies.Key Updates for 2025The 2025 iteration of the OWASP Top 10 for LLMs introduces refinements and new focus areas based on industry feedback. Some categories have been consolidated for clarity, while new risks have been added to reflect emerging threats.• System Prompt Leakage (New) – Attackers may manipulate LLMs to extract system prompts, potentially revealing sensitive operational instructions and security mechanisms.• Vector and Embedding Risks (New) – Security concerns around vector databases and embeddings, which can lead to unauthorized data exposure or manipulation.Other notable changes include reordering certain risks based on real-world impact. Prompt Injection remains the top concern, while Sensitive Information Disclosure and Supply Chain Vulnerabilities have been elevated in priority.The Challenge of AI SecurityUnlike traditional software vulnerabilities, LLMs introduce non-deterministic behavior, making security testing more complex. Jailbreaking attacks—where adversaries bypass system safeguards through manipulative prompts—remain a persistent issue. Prompt injection attacks, where unauthorized instructions are inserted to manipulate output, are also difficult to fully eliminate.As Dunn explains, “There's no absolute fix. It's an architecture issue. Until we fundamentally redesign how we build LLMs, there will always be risk.”Beyond Compliance: A Holistic Approach to AI SecurityBoth Dunn and Lambros emphasize that organizations need to integrate AI security into their overall IT and cybersecurity strategy, rather than treating it as a separate issue. AI governance, supply chain integrity, and operational resilience must all be considered.Lambros highlights the importance of risk management over rigid compliance: “Organizations have to balance innovation with security. You don't have to lock everything down, but you need to understand where your vulnerabilities are and how they impact your business.”Real-World Impact and AdoptionThe OWASP Top 10 for LLMs has already been widely adopted, with companies incorporating it into their security frameworks. It has been translated into multiple languages and is serving as a global benchmark for AI security best practices.Additionally, initiatives like HackerPrompt 2.0 are helping security professionals stress-test AI models in real-world scenarios. OWASP is also facilitating industry collaboration through working groups on AI governance, threat intelligence, and agentic AI security.How to Get InvolvedFor those interested in contributing, OWASP provides open-access resources and welcomes participants to its AI security initiatives. Anyone can join the discussion, whether as an observer or an active contributor.As AI becomes more ingrained in business and society, frameworks like the OWASP Top 10 for LLMs are essential for guiding responsible innovation. To learn more, listen to the full episode and explore OWASP's latest AI security resources.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥OWASP GenAI: https://genai.owasp.org/Link to the 2025 version of the Top 10 for LLM Applications: https://genai.owasp.org/llm-top-10/Getting Involved: https://genai.owasp.org/contribute/OWASP LLM & Gen AI Security Summit at RSAC 2025: https://genai.owasp.org/event/rsa-conference-2025/AI Threat Mind Map: https://github.com/subzer0girl2/AI-Threat-Mind-MapGuide for Preparing and Responding to Deepfake Events: https://genai.owasp.org/resource/guide-for-preparing-and-responding-to-deepfake-events/AI Security Solution Cheat Sheet Q1-2025:https://genai.owasp.org/resource/ai-security-solution-cheat-sheet-q1-2025/HackAPrompt 2.0: https://www.hackaprompt.com/⬥ADDITIONAL INFORMATION⬥✨ To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcastRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist on YouTube:
⬥GUESTS⬥Sandy Dunn, Consultant Artificial Intelligence & Cybersecurity, Adjunct Professor Institute for Pervasive Security Boise State University | On Linkedin: https://www.linkedin.com/in/sandydunnciso/Rock Lambros, CEO and founder of RockCyber | On LinkedIn | https://www.linkedin.com/in/rocklambros/Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martinView This Show's Sponsors⬥EPISODE NOTES⬥The rise of large language models (LLMs) has reshaped industries, bringing both opportunities and risks. The latest OWASP Top 10 for LLMs aims to help organizations understand and mitigate these risks. In a recent episode of Redefining Cybersecurity, host Sean Martin sat down with Sandy Dunn and Rock Lambros to discuss the latest updates to this essential security framework.The OWASP Top 10 for LLMs: What It Is and Why It MattersOWASP has long been a trusted source for security best practices, and its LLM-specific Top 10 is designed to guide organizations in identifying and addressing key vulnerabilities in AI-driven applications. This initiative has rapidly gained traction, becoming a reference point for AI security governance, testing, and implementation. Organizations developing or integrating AI solutions are now evaluating their security posture against this list, ensuring safer deployment of LLM technologies.Key Updates for 2025The 2025 iteration of the OWASP Top 10 for LLMs introduces refinements and new focus areas based on industry feedback. Some categories have been consolidated for clarity, while new risks have been added to reflect emerging threats.• System Prompt Leakage (New) – Attackers may manipulate LLMs to extract system prompts, potentially revealing sensitive operational instructions and security mechanisms.• Vector and Embedding Risks (New) – Security concerns around vector databases and embeddings, which can lead to unauthorized data exposure or manipulation.Other notable changes include reordering certain risks based on real-world impact. Prompt Injection remains the top concern, while Sensitive Information Disclosure and Supply Chain Vulnerabilities have been elevated in priority.The Challenge of AI SecurityUnlike traditional software vulnerabilities, LLMs introduce non-deterministic behavior, making security testing more complex. Jailbreaking attacks—where adversaries bypass system safeguards through manipulative prompts—remain a persistent issue. Prompt injection attacks, where unauthorized instructions are inserted to manipulate output, are also difficult to fully eliminate.As Dunn explains, “There's no absolute fix. It's an architecture issue. Until we fundamentally redesign how we build LLMs, there will always be risk.”Beyond Compliance: A Holistic Approach to AI SecurityBoth Dunn and Lambros emphasize that organizations need to integrate AI security into their overall IT and cybersecurity strategy, rather than treating it as a separate issue. AI governance, supply chain integrity, and operational resilience must all be considered.Lambros highlights the importance of risk management over rigid compliance: “Organizations have to balance innovation with security. You don't have to lock everything down, but you need to understand where your vulnerabilities are and how they impact your business.”Real-World Impact and AdoptionThe OWASP Top 10 for LLMs has already been widely adopted, with companies incorporating it into their security frameworks. It has been translated into multiple languages and is serving as a global benchmark for AI security best practices.Additionally, initiatives like HackerPrompt 2.0 are helping security professionals stress-test AI models in real-world scenarios. OWASP is also facilitating industry collaboration through working groups on AI governance, threat intelligence, and agentic AI security.How to Get InvolvedFor those interested in contributing, OWASP provides open-access resources and welcomes participants to its AI security initiatives. Anyone can join the discussion, whether as an observer or an active contributor.As AI becomes more ingrained in business and society, frameworks like the OWASP Top 10 for LLMs are essential for guiding responsible innovation. To learn more, listen to the full episode and explore OWASP's latest AI security resources.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥OWASP GenAI: https://genai.owasp.org/Link to the 2025 version of the Top 10 for LLM Applications: https://genai.owasp.org/llm-top-10/Getting Involved: https://genai.owasp.org/contribute/OWASP LLM & Gen AI Security Summit at RSAC 2025: https://genai.owasp.org/event/rsa-conference-2025/AI Threat Mind Map: https://github.com/subzer0girl2/AI-Threat-Mind-MapGuide for Preparing and Responding to Deepfake Events: https://genai.owasp.org/resource/guide-for-preparing-and-responding-to-deepfake-events/AI Security Solution Cheat Sheet Q1-2025:https://genai.owasp.org/resource/ai-security-solution-cheat-sheet-q1-2025/HackAPrompt 2.0: https://www.hackaprompt.com/⬥ADDITIONAL INFORMATION⬥✨ To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcastRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist on YouTube:
Ken and Seth return for 2025 to review the accuracy of their predictions from 2024 and make a few new ones for this new year. Some hits and misses for last year, but overall the generic predictions for both AI/LLM growth and software supply chain security were accurate. However, they were wrong in their assumptions around LLM creation and training. For 2025, predictions on AI billing models, software supply chain attacks, OWASP Top 10 2025, and more.
Authentication (validating who you claim to be) and Authorization (enforcing what you are allowed to do) are critical in modern software development. While authentication seems to be a solved problem, modern software development faces many challenges with secure, fast, and resilient authorization mechanisms. To learn more about those challenges, we invited Alex Olivier, Co-Founder and CPO at Cerbos, an Open Source Scalable Authorization Solution. Alex shared insights on attribute-based vs. role-based access Control, the difference between stateful and stateless authorization implementations, why Broken Access Control is in the OWASP Top 10 Security Vulnerabilities, and how to observe the authorization solution for performance, security, and auditing purposes.Links we discussed during the episode:Alex's LinkedIn: https://www.linkedin.com/in/alexolivier/Cerbos on GitHub: https://github.com/cerbos/cerbosOWASP Broken Access Control: https://owasp.org/www-community/Broken_Access_Control
This week, we are pleased to be joined by Mick Baccio, global security advisor for Splunk SURGe, sharing their research on "LLM Security: Splunk & OWASP Top 10 for LLM-based Applications." The research dives into the rapid rise of AI and Large Language Models (LLMs) that initially seem magical, but behind the scenes, they are sophisticated systems built by humans. Despite their impressive capabilities, these systems are vulnerable to numerous cyber threats. Splunk's research explores the OWASP Top 10 for LLM Applications, a framework that highlights key vulnerabilities such as prompt injection, training data poisoning, and sensitive information disclosure. The research can be found here: LLM Security: Splunk & OWASP Top 10 for LLM-based Applications Learn more about your ad choices. Visit megaphone.fm/adchoices
This week, we are pleased to be joined by Mick Baccio, global security advisor for Splunk SURGe, sharing their research on "LLM Security: Splunk & OWASP Top 10 for LLM-based Applications." The research dives into the rapid rise of AI and Large Language Models (LLMs) that initially seem magical, but behind the scenes, they are sophisticated systems built by humans. Despite their impressive capabilities, these systems are vulnerable to numerous cyber threats. Splunk's research explores the OWASP Top 10 for LLM Applications, a framework that highlights key vulnerabilities such as prompt injection, training data poisoning, and sensitive information disclosure. The research can be found here: LLM Security: Splunk & OWASP Top 10 for LLM-based Applications Learn more about your ad choices. Visit megaphone.fm/adchoices
The Haunted House of API'sThe Witch's Brew: Stirring Up OWASP Vulnerabilities and API TestingToday, we are kicking off an amazing series for Cybersecurity Awareness month, entitled the Haunted House of API's, sponsored by our friends at Traceable AI. In this series, we are building awareness around API's, their security risks – and what you can do about it. Traceable AI is building One Platform to secure every API, so you can discover, protect, and test all your API's with contextual API security, enabling organizations to minimize risk and maximize the value API's bring to their customers.In today's episode, we will be talking with Jayesh Ahire, an expert in API testing and OWASP, will guide us through the "brew" of common vulnerabilities that haunt API ecosystems, focusing on the OWASP Top 10 for APIs. He'll share how organizations can use API security testing to spot and neutralize these vulnerabilities before they become major exploits. By emphasizing proactive security measures, Jayesh will offer insights into creating a strong API testing framework that keeps malicious actors at bay.Discussion questions:What are some of the most common vulnerabilities in APIs that align with the OWASP Top 10, and why are they so dangerous?Why is API security testing crucial for detecting these vulnerabilities early, and how does it differ from traditional security testing?Can you share an example of how an overlooked API vulnerability led to a significant security breach?How can organizations create an effective API testing framework that addresses these vulnerabilities?What tools or methods do you recommend for continuously testing APIs and ensuring they remain secure as they evolve?SponsorsTraceableLinkshttps://www.traceable.ai/https://www.linkedin.com/in/jayesh-ahire/https://owasp.org/Support this podcast at — https://redcircle.com/code-story/donationsAdvertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy
Guest: Steve Wilson, Chief Product Officer, Exabeam [@exabeam] & Project Lead, OWASP Top 10 for Larage Language Model Applications [@owasp]On LinkedIn | https://www.linkedin.com/in/wilsonsd/On Twitter | https://x.com/virtualsteve____________________________Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinView This Show's Sponsors___________________________Episode NotesIn this episode of Redefining CyberSecurity, host Sean Martin sat down with Steve Wilson, chief product officer at Exabeam, to discuss the critical topic of secure AI development. The conversation revolved around the nuances of developing and deploying large language models (LLMs) in the field of cybersecurity.Steve Wilson's expertise lies at the intersection of AI and cybersecurity, a point he emphasized while sharing his journey from founding the Top 10 group for large language models to authoring his new book, "The Developer's Playbook for Large Language Model Security." In this insightful discussion, Wilson and Martin explore the roles of developers and product managers in ensuring the safety and security of AI systems.One of the key themes in the conversation is the categorization of AI applications into chatbots, co-pilots, and autonomous agents. Wilson explains that while chatbots are open-ended, interacting with users on various topics, co-pilots focus on enhancing productivity within specific domains by interacting with user data. Autonomous agents are more independent, executing tasks with minimal human intervention.Wilson brings attention to the concept of overreliance on AI models and the associated risks. Highlighting that large language models can hallucinate or produce unreliable outputs, he stresses the importance of designing systems that account for these limitations. Product managers play a crucial role here, ensuring that AI applications are built to mitigate risks and communicate their reliability to users effectively.The discussion also touches on the importance of security guardrails and continuous monitoring. Wilson introduces the idea of using tools akin to web app firewalls (WAF) or runtime application self-protection (RASP) to keep AI models within safe operational parameters. He mentions frameworks like Nvidia's open-source project, Nemo Guardrails, which aid developers in implementing these defenses.Moreover, the conversation highlights the significance of testing and evaluation in AI development. Wilson parallels the education and evaluation of LLMs to training and testing a human-like system, underscoring that traditional unit tests may not suffice. Instead, flexible test cases and advanced evaluation tools are necessary. Another critical aspect Wilson discusses is the need for red teaming in AI security. By rigorously testing AI systems and exploring their vulnerabilities, organizations can better prepare for real-world threats. This proactive approach is essential for maintaining robust AI applications.Finally, Wilson shares insights from his book, including the Responsible AI Software Engineering (RAISE) framework. This comprehensive guide offers developers and product managers practical steps to integrate secure AI practices into their workflows. With an emphasis on continuous improvement and risk management, the RAISE framework serves as a valuable resource for anyone involved in AI development.About the BookLarge language models (LLMs) are not just shaping the trajectory of AI, they're also unveiling a new era of security challenges. This practical book takes you straight to the heart of these threats. Author Steve Wilson, chief product officer at Exabeam, focuses exclusively on LLMs, eschewing generalized AI security to delve into the unique characteristics and vulnerabilities inherent in these models.Complete with collective wisdom gained from the creation of the OWASP Top 10 for LLMs list—a feat accomplished by more than 400 industry experts—this guide delivers real-world guidance and practical strategies to help developers and security teams grapple with the realities of LLM applications. Whether you're architecting a new application or adding AI features to an existing one, this book is your go-to resource for mastering the security landscape of the next frontier in AI.___________________________SponsorsImperva: https://itspm.ag/imperva277117988LevelBlue: https://itspm.ag/attcybersecurity-3jdk3___________________________Watch this and other videos on ITSPmagazine's YouTube ChannelRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist:
Guest: Steve Wilson, Chief Product Officer, Exabeam [@exabeam] & Project Lead, OWASP Top 10 for Larage Language Model Applications [@owasp]On LinkedIn | https://www.linkedin.com/in/wilsonsd/On Twitter | https://x.com/virtualsteve____________________________Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinView This Show's Sponsors___________________________Episode NotesIn this episode of Redefining CyberSecurity, host Sean Martin sat down with Steve Wilson, chief product officer at Exabeam, to discuss the critical topic of secure AI development. The conversation revolved around the nuances of developing and deploying large language models (LLMs) in the field of cybersecurity.Steve Wilson's expertise lies at the intersection of AI and cybersecurity, a point he emphasized while sharing his journey from founding the Top 10 group for large language models to authoring his new book, "The Developer's Playbook for Large Language Model Security." In this insightful discussion, Wilson and Martin explore the roles of developers and product managers in ensuring the safety and security of AI systems.One of the key themes in the conversation is the categorization of AI applications into chatbots, co-pilots, and autonomous agents. Wilson explains that while chatbots are open-ended, interacting with users on various topics, co-pilots focus on enhancing productivity within specific domains by interacting with user data. Autonomous agents are more independent, executing tasks with minimal human intervention.Wilson brings attention to the concept of overreliance on AI models and the associated risks. Highlighting that large language models can hallucinate or produce unreliable outputs, he stresses the importance of designing systems that account for these limitations. Product managers play a crucial role here, ensuring that AI applications are built to mitigate risks and communicate their reliability to users effectively.The discussion also touches on the importance of security guardrails and continuous monitoring. Wilson introduces the idea of using tools akin to web app firewalls (WAF) or runtime application self-protection (RASP) to keep AI models within safe operational parameters. He mentions frameworks like Nvidia's open-source project, Nemo Guardrails, which aid developers in implementing these defenses.Moreover, the conversation highlights the significance of testing and evaluation in AI development. Wilson parallels the education and evaluation of LLMs to training and testing a human-like system, underscoring that traditional unit tests may not suffice. Instead, flexible test cases and advanced evaluation tools are necessary. Another critical aspect Wilson discusses is the need for red teaming in AI security. By rigorously testing AI systems and exploring their vulnerabilities, organizations can better prepare for real-world threats. This proactive approach is essential for maintaining robust AI applications.Finally, Wilson shares insights from his book, including the Responsible AI Software Engineering (RAISE) framework. This comprehensive guide offers developers and product managers practical steps to integrate secure AI practices into their workflows. With an emphasis on continuous improvement and risk management, the RAISE framework serves as a valuable resource for anyone involved in AI development.About the BookLarge language models (LLMs) are not just shaping the trajectory of AI, they're also unveiling a new era of security challenges. This practical book takes you straight to the heart of these threats. Author Steve Wilson, chief product officer at Exabeam, focuses exclusively on LLMs, eschewing generalized AI security to delve into the unique characteristics and vulnerabilities inherent in these models.Complete with collective wisdom gained from the creation of the OWASP Top 10 for LLMs list—a feat accomplished by more than 400 industry experts—this guide delivers real-world guidance and practical strategies to help developers and security teams grapple with the realities of LLM applications. Whether you're architecting a new application or adding AI features to an existing one, this book is your go-to resource for mastering the security landscape of the next frontier in AI.___________________________SponsorsImperva: https://itspm.ag/imperva277117988LevelBlue: https://itspm.ag/attcybersecurity-3jdk3___________________________Watch this and other videos on ITSPmagazine's YouTube ChannelRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist:
In March 2024, the Australian Senate resolved that the Select Committee on Adopting Artificial Intelligence (AI) be established to inquire into and report on the opportunities and impacts for Australia arising out of the uptake of AI technologies in Australia. The committee intends to report to the Parliament on or before 19 September 2024.More than 40 Australian AI experts made a joint submission to the Inquiry. The submission from Australians for AI Safety calls for the creation of an AI Safety Institute. “Australia has yet to position itself to learn from and contribute to growing global efforts. To achieve the economic and social benefits that AI promises, we need to be active in global action to ensure the safety of AI systems that approach or surpass human-level capabilities.” “Too often, lessons are learned only after something goes wrong. With AI systems that might approach or surpass human-level capabilities, we cannot afford for that to be the case.”This session has gathered experts and specialists in their field to discuss best practice alignment of AI applications and utilisation to safety and cybersecurity requirements. This includes quantum computing which is set to revolutionise sustainability, cybersecurity, ML, AI and many optimisation problems that classic computers can never imagine. In addition, we will also get briefed on: OWASP Top 10 for Large Language Model Applications; shedding light on the specific vulnerabilities LLMs face, including real world examples and detailed exploration of five key threats addressed using prompts and responses from LLMs; Prompt injection, insecure output handling, model denial of service, sensitive information disclosure, and model theft; How traditional cybersecurity methodologies can be applied to defend LLMs effectively; and How organisations can stay ahead of potential risks and ensure the security of their LLM-based applications.PanelistsDr Mahendra SamarawickramaDirector | Centre for Sustainable AIDr Mahendra Samarawickrama (GAICD, MBA, SMIEEE, ACS(CP)) is a leader in driving the convergence of Metaverse, AI, and Blockchain to revolutionize the future of customer experience and brand identity. He is the Australian ICT Professional of the Year 2022 and a director of The Centre for Sustainable AI and Meta61. He is an Advisory Council Member of Harvard Business Review (HBR), a Committee Member of the IEEE AI Standards, an Expert in AI ethics and governance at the Global AI Ethics Institute (GAIEI), a member of the European AI Alliance, a senior member of IEEE (SMIEEE), an industry Mentor in the UNSW business school, an honorary visiting scholar at the University of Technology Sydney (UTS), and a graduate member of the Australian Institute of Company Directors (GAICD).Ser Yoong GohHead of Compliance | ADVANCE.AI | ISACA Emerging Trends Working GroupSer Yoong is a seasoned technology professional who has held various roles with multinational corporations, consulting and also SMEs from various industries. He is recognised as a subject matter expert in the areas of cybersecurity, audit, risk and compliance from his working experience, having held various certifications and was also recognised as one of the Top 30 CSOs in 2021 from IDG. Shannon DavisPrincipal Security Strategist | Splunk SURGeShannon hails from Melbourne, Australia. Originally from Seattle, Washington, he has worked in a number of roles: a video game tester at Nintendo (Yoshi's Island broke his spirit), a hardware tester at Microsoft (handhelds have come a long way since then), a Windows NT admin for an early security startup and one of the first Internet broadcast companies, along with security roles for companies including Juniper and Cisco. Shannon enjoys getting outdoors for hikes and traveling.Greg SadlerCEO | Good Ancestors PolicyGreg Sadler is also CEO of Good Ancestors Policy, a charity that develops and advocates for Australian-specific policies aimed at solving this century's most challenging problems. Greg coordinates Australians for AI Safety and focuses on how Australia can help make frontier AI systems safe. Greg is on the board of a range of charities, including the Alliance to Feed the Earth in Disasters and Effective Altruism Australia. Lana TikhomirovPhD Candidate, Australian Institute for Machine Learning, University of AdelaideLana is a PhD Candidate in AI safety for human decision-making, focussed on medical AI. She has a background in cognitive science and uses bioethics and knowledge about algorithms to understand how to approach AI for high-risk human decisionsChris CubbageDirector - MYSECURITY MEDIA | MODERATORFor more information and the full series visit https://mysecuritymarketplace.com/security-risk-professional-insight-series/
In this episode we sit down with GenAI and Security Leader Steve Wilson to discuss securing the explosive adoption of GenAI and LLM's. Steve is the leader of the OWASP Top 10 for LLM's and the upcoming book The Developer's Playbook for LLM Security: Building Secure AI Applications-- First off, for those not familiar with your background, can you tell us a bit about yourself and what brought you to focusing on AI Security as you have currently?- Many may not be familiar with the OWASP LLM Top 10, can you tell us how the project came about, and some of the value it provides the community?- I don't want to talk through the list item by item, but I wanted to ask, what are some of the key similarities and key differences when it comes to securing AI systems and applications compared to broader historical AppSec?- Where do you think organizations should look to get started to try and keep pace with the businesses adoption of GenAI and LLM's?- You've also been working on publishing the Developers Playbook to LLM Security which I've been working my way through an early preview edition of and it is great. What are some of the core topics you cover in the book?- One hot topic in GenAI and LLM is the two large paths of either closed and open source models, services and platforms. What are some key considerations from your perspective for those adopting one or the other?- I know software supply chain security is a key part of LLM and GenAI security, why is that, and what should folks keep in mind?- For those wanting to learn more, where can they find more resources, such as the LLM Top 10, your book, any upcoming talks etc?
Guest: Steve Wilson, Chief Product Officer, Exabeam [@exabeam] & Project Lead, OWASP Top 10 for Larage Language Model Applications [@owasp]On LinkedIn | https://www.linkedin.com/in/wilsonsd/On Twitter | https://x.com/virtualsteve____________________________Hosts: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinMarco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society PodcastOn ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli____________________________Episode NotesIn this episode of the Chat on the Road On Location series for OWASP AppSec Global in San Francisco, Sean Martin hosts a compelling conversation with Steve Wilson, Project Lead for the OWASP Top 10 for Large Language Model AI Applications. The discussion, as you might guess, centers on the OWASP Top 10 list for Large Language Models (LLMs) and the security challenges associated with these technologies. Wilson highlights the growing relevance of AppSec, particularly with the surge in interest in AI and LLMs.The conversation kicks off with an exploration of the LLM project that Wilson has been working on at OWASP, aimed at presenting an update on the OWASP Top 10 for LLMs. Wilson emphasizes the significance of prompt injection attacks, one of the key concerns on the OWASP list. He explains how attackers can craft prompts to manipulate LLMs into performing unintended actions, a tactic reminiscent of the SQL injection attacks that have plagued traditional software for years. This serves as a stark reminder of the need for vigilance in the development and deployment of LLMs.Supply chain risks are another critical issue discussed. Wilson draws parallels to the Log4j incident, stressing that the AI software supply chain is currently a weak link. With the rapid growth of platforms like Hugging Face, the provenance of AI models and training datasets becomes a significant concern. Ensuring the integrity and security of these components is paramount to building robust AI-driven systems.The notion of excessive agency is also explored—a concept that relates to the permissions and responsibilities assigned to LLMs. Wilson underscores the importance of limiting the scope of LLMs to prevent misuse or unauthorized actions. This point resonates with traditional security principles like least privilege but is recontextualized for the AI age. Overreliance on LLMs is another topic Martin and Wilson discuss.The conversation touches on how people can place undue trust in AI outputs, leading to potentially hazardous outcomes. Ensuring users understand the limitations and potential inaccuracies of LLM-generated content is essential for safe and effective AI utilization.Wilson also provides a preview of his upcoming session at the OWASP AppSec Global event, where he plans to share insights from the ongoing work on the 2.0 version of the OWASP Top 10 for LLMs. This next iteration will address how the field has matured and new security considerations that have emerged since the initial list.Be sure to follow our Coverage Journey and subscribe to our podcasts!____________________________This Episode's SponsorsAre you interested in sponsoring our event coverage with an ad placement in the podcast?Learn More
Hosts: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinMarco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society PodcastOn ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli____________________________Episode NotesIn this episode of "On Location With Sean Martin and Marco Ciappelli," our hosts dive into their time at Black Hat 2024 in Las Vegas, reflecting on key takeaways and sharing what's next on their journey. Whether you're deep into cybersecurity or just curious about the industry, this blog post offers a snapshot of what to expect from Sean and Marco.Recapping Black Hat 2024Marco CiappelliChoo, choo . . .Sean MartinIs that the sound of the fast train back from Vegas? Or just the rush of everything we experienced?Marco CiappelliI'm still wondering why there's no train from LA to Vegas. And don't get me started on LA to San Francisco—that's another conversation entirely.The conversation kicks off with a lighthearted nod to travel woes before shifting to the core of the episode: their reflections on Black Hat 2024. Sean and Marco bring unique perspectives, emphasizing the importance of thinking beyond cybersecurity's technical aspects to consider its broader impact on society and business.Sean's Operational InsightsSean MartinI like to look at things from an operational angle—how can we take what we learn and bring it back to the business to help leaders and practitioners do what they love?Sean's Black Hat 2024 Recap Newsletter explores the evolution from reactive data responses to strategic enablement, AI and automation, modular cybersecurity, and the invaluable role of human insights. His focus is clear: helping businesses become more resilient and adaptable through smarter cybersecurity practices.Marco's Societal ImpactMarco CiappelliCybersecurity isn't a destination—it's a journey. We're never going to be fully secure, and that's okay. Cultures change, technology evolves, and we have to keep adapting.Marco's take highlights the societal implications of cybersecurity. He talk about how different fields and nations are breaking down silos to collaborate more effectively. His newsletter often reflects on the need for digital literacy across business, society, and education, emphasizing the importance of broadening our understanding of technology's role.Upcoming Events and ConferencesThe duo is excited about their packed schedule for the rest of 2024 and beyond, including:CyberTech New York (September 2024): Focused on policy, innovation, SecOps, AppSec, and sustainability.OWASP AppSec San Francisco (September 2024): Covering the OWASP Top 10 for LLMs and more.Sector in Toronto (October 2024): Offering unique coverage ideas, closely tied to Black Hat.Did someone said that they will be back covering an APJ event, in Melbourne, before the end of the year??? Additional VenturesThey'll also be hosting innovation panels and keynotes at a company event in New Orleans, with CES in Las Vegas and VivaTech in Paris on the horizon for 2025, blending B2B startup insights with consumer tech, all with a cybersecurity twist.Subscribe and Stay TunedMarco and Sean invite you to subscribe to their newsletters and follow their podcast, "On Location," as they continue their journey around the globe—both physically and virtually—bringing fresh perspectives on business, technology, and cybersecurity. You'll also find unique "brand stories" that highlight innovations making our world safer and more sustainable.Stay connected, enjoy the ride, and don't forget to subscribe to both their newsletters and the "On Location" podcast on YouTube!Be sure to follow our Coverage Journey and subscribe to our podcasts!____________________________This Episode's SponsorsLevelBlue: https://itspm.ag/levelblue266f6cCoro: https://itspm.ag/coronet-30deSquareX: https://itspm.ag/sqrx-l91Britive: https://itspm.ag/britive-3fa6AppDome: https://itspm.ag/appdome-neuv____________________________Follow our Black Hat USA 2024 coverage: https://www.itspmagazine.com/black-hat-usa-2024-hacker-summer-camp-2024-event-coverage-in-las-vegasOn YouTube:
Hosts: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinMarco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society PodcastOn ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli____________________________Episode NotesIn this episode of "On Location With Sean Martin and Marco Ciappelli," our hosts dive into their time at Black Hat 2024 in Las Vegas, reflecting on key takeaways and sharing what's next on their journey. Whether you're deep into cybersecurity or just curious about the industry, this blog post offers a snapshot of what to expect from Sean and Marco.Recapping Black Hat 2024Marco CiappelliChoo, choo . . .Sean MartinIs that the sound of the fast train back from Vegas? Or just the rush of everything we experienced?Marco CiappelliI'm still wondering why there's no train from LA to Vegas. And don't get me started on LA to San Francisco—that's another conversation entirely.The conversation kicks off with a lighthearted nod to travel woes before shifting to the core of the episode: their reflections on Black Hat 2024. Sean and Marco bring unique perspectives, emphasizing the importance of thinking beyond cybersecurity's technical aspects to consider its broader impact on society and business.Sean's Operational InsightsSean MartinI like to look at things from an operational angle—how can we take what we learn and bring it back to the business to help leaders and practitioners do what they love?Sean's Black Hat 2024 Recap Newsletter explores the evolution from reactive data responses to strategic enablement, AI and automation, modular cybersecurity, and the invaluable role of human insights. His focus is clear: helping businesses become more resilient and adaptable through smarter cybersecurity practices.Marco's Societal ImpactMarco CiappelliCybersecurity isn't a destination—it's a journey. We're never going to be fully secure, and that's okay. Cultures change, technology evolves, and we have to keep adapting.Marco's take highlights the societal implications of cybersecurity. He talk about how different fields and nations are breaking down silos to collaborate more effectively. His newsletter often reflects on the need for digital literacy across business, society, and education, emphasizing the importance of broadening our understanding of technology's role.Upcoming Events and ConferencesThe duo is excited about their packed schedule for the rest of 2024 and beyond, including:CyberTech New York (September 2024): Focused on policy, innovation, SecOps, AppSec, and sustainability.OWASP AppSec San Francisco (September 2024): Covering the OWASP Top 10 for LLMs and more.Sector in Toronto (October 2024): Offering unique coverage ideas, closely tied to Black Hat.Did someone said that they will be back covering an APJ event, in Melbourne, before the end of the year??? Additional VenturesThey'll also be hosting innovation panels and keynotes at a company event in New Orleans, with CES in Las Vegas and VivaTech in Paris on the horizon for 2025, blending B2B startup insights with consumer tech, all with a cybersecurity twist.Subscribe and Stay TunedMarco and Sean invite you to subscribe to their newsletters and follow their podcast, "On Location," as they continue their journey around the globe—both physically and virtually—bringing fresh perspectives on business, technology, and cybersecurity. You'll also find unique "brand stories" that highlight innovations making our world safer and more sustainable.Stay connected, enjoy the ride, and don't forget to subscribe to both their newsletters and the "On Location" podcast on YouTube!Be sure to follow our Coverage Journey and subscribe to our podcasts!____________________________This Episode's SponsorsLevelBlue: https://itspm.ag/levelblue266f6cCoro: https://itspm.ag/coronet-30deSquareX: https://itspm.ag/sqrx-l91Britive: https://itspm.ag/britive-3fa6AppDome: https://itspm.ag/appdome-neuv____________________________Follow our Black Hat USA 2024 coverage: https://www.itspmagazine.com/black-hat-usa-2024-hacker-summer-camp-2024-event-coverage-in-las-vegasOn YouTube:
Guest: Steve Wilson, Chief Product Officer, Exabeam [@exabeam] & Project Lead, OWASP Top 10 for Larage Language Model Applications [@owasp]On LinkedIn | https://www.linkedin.com/in/wilsonsd/On Twitter | https://x.com/virtualsteve____________________________Hosts: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinMarco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society PodcastOn ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli____________________________Episode NotesIn this episode of the Chat on the Road On Location series for OWASP AppSec Global in San Francisco, Sean Martin hosts a compelling conversation with Steve Wilson, Project Lead for the OWASP Top 10 for Large Language Model AI Applications. The discussion, as you might guess, centers on the OWASP Top 10 list for Large Language Models (LLMs) and the security challenges associated with these technologies. Wilson highlights the growing relevance of AppSec, particularly with the surge in interest in AI and LLMs.The conversation kicks off with an exploration of the LLM project that Wilson has been working on at OWASP, aimed at presenting an update on the OWASP Top 10 for LLMs. Wilson emphasizes the significance of prompt injection attacks, one of the key concerns on the OWASP list. He explains how attackers can craft prompts to manipulate LLMs into performing unintended actions, a tactic reminiscent of the SQL injection attacks that have plagued traditional software for years. This serves as a stark reminder of the need for vigilance in the development and deployment of LLMs.Supply chain risks are another critical issue discussed. Wilson draws parallels to the Log4j incident, stressing that the AI software supply chain is currently a weak link. With the rapid growth of platforms like Hugging Face, the provenance of AI models and training datasets becomes a significant concern. Ensuring the integrity and security of these components is paramount to building robust AI-driven systems.The notion of excessive agency is also explored—a concept that relates to the permissions and responsibilities assigned to LLMs. Wilson underscores the importance of limiting the scope of LLMs to prevent misuse or unauthorized actions. This point resonates with traditional security principles like least privilege but is recontextualized for the AI age. Overreliance on LLMs is another topic Martin and Wilson discuss.The conversation touches on how people can place undue trust in AI outputs, leading to potentially hazardous outcomes. Ensuring users understand the limitations and potential inaccuracies of LLM-generated content is essential for safe and effective AI utilization.Wilson also provides a preview of his upcoming session at the OWASP AppSec Global event, where he plans to share insights from the ongoing work on the 2.0 version of the OWASP Top 10 for LLMs. This next iteration will address how the field has matured and new security considerations that have emerged since the initial list.Be sure to follow our Coverage Journey and subscribe to our podcasts!____________________________This Episode's SponsorsAre you interested in sponsoring our event coverage with an ad placement in the podcast?Learn More
Azure Open AI is widely used in industry but there are number of security aspects that must be taken into account when using the technology. Luckily for us, Audrey Long, a Software Engineer at Microsoft, security expert and renowned conference speaker, gives us insights into securing LLMs and provides various tips, tricks and tools to help developers use these models safely in their applications. Media file: https://azpodcast.blob.core.windows.net/episodes/Episode502.mp3 YouTube: https://youtu.be/64Achcz97PI Resources: AI Tooling: Azure AI Tooling Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog Prompt Shields to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, coming soon and now available in preview in Azure AI Content Safety. Groundedness detection to detect “hallucinations” in model outputs, coming soon. Safety system messagesto steer your model’s behavior toward safe, responsible outputs, coming soon. Safety evaluations to assess an application’s vulnerability to jailbreak attacks and to generating content risks, now available in preview. Risk and safety monitoring to understand what model inputs, outputs, and end users are triggering content filters to inform mitigations, coming soon, and now available in preview in Azure OpenAI Service. AI Defender for Cloud AI Security Posture Management AI security posture management (Preview) - Microsoft Defender for Cloud | Microsoft Learn AI Workloads Enable threat protection for AI workloads (preview) - Microsoft Defender for Cloud | Microsoft Learn AI Red Teaming Tool Announcing Microsoft’s open automation framework to red team generative AI Systems | Microsoft Security Blog AI Development Considerations: AI Assessment from Microsoft Conduct an AI assessment using Microsoft’s Responsible AI Impact Assessment Template Responsible AI Impact Assessment Guide for detailed instructions Microsoft Responsible AI Processes Follow Microsoft’s Responsible AI principles: fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountability Utilize tools like the Responsible AI Dashboard for continuous monitoring and improvement Define Use Case and Model Architecture Determine the specific use case for your LLM Design the model architecture, focusing on the Transformer architecture Content Filtering System How to use content filters (preview) with Azure OpenAI Service - Azure OpenAI | Microsoft Learn Azure OpenAI Service includes a content filtering system that works alongside core models, including DALL-E image generation models. This system uses an ensemble of classification models to detect and prevent harmful content in both input prompts and output completions The filtering system covers four main categories: hate, sexual, violence, and self-harm Each category is assessed at four severity levels: safe, low, medium, and high Additional classifiers are available for detecting jailbreak risks and known content for text and code. JailBreaking Content Filters Red Teaming the LLM Plan and conduct red teaming exercises to identify potential vulnerabilities Use diverse red teamers to simulate adversarial attacks and test the model’s robustness Microsoft AI Red Team building future of safer AI | Microsoft Security Blog Create a Threat Model with OWASP Top 10 owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-slides-v1_1.pdf Develop a threat model and implement mitigations based on identified risks Other updates: Los Angeles Azure Extended Zones Carbon Optimization App Config Ref GA OS SKU In-Place Migration for AKS Operator CRD Support with Azure Monitor Managed Service Azure API Center Visual Studio Code Extension Pre-release Azure API Management WordPress Plugin Announcing a New OpenAI Feature for Developers on Azure
Join Chris Romeo and Robert Hurlbut as they sit down with Andrew Van Der Stok, a leading web application security specialist and executive director at OWASP. In this episode, Andrew discusses the latest with the OWASP Top 10 Project, the importance of data collection, and the need for developer engagement. Learn about the methodology behind building the OWASP Top 10, the significance of framework security, and much more. Tune in to get vital insights that could shape the future of web application security. Don't miss this informative discussion!Previous episodes with Andrew Van Der StockAndrew van der Stock — Taking Application Security to the MassesAndrew van der Stock and Brian Glas -- The Future of the OWASP Top 10Books mentioned in the episode:The Crown Road by Iain BanksEdward Tufte FOLLOW OUR SOCIAL MEDIA: ➜Twitter: @AppSecPodcast➜LinkedIn: The Application Security Podcast➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast Thanks for Listening! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
News includes a neat trick we learned that setup-beam can do for GitHub actions by reading a project's .tool-versions file, Wojtek's insight on reducing SDK API surfaces, Ash's support for UUIDv7, the introduction of the highly customizable Backpex admin panel, a new LiveView component library called SaladUI and its unique ReactJS component conversion feature, Jose Valim's technique of using AI for testing function names, and more! Show Notes online - http://podcast.thinkingelixir.com/209 (http://podcast.thinkingelixir.com/209) Elixir Community News - https://x.com/flo_arens/status/1805255159460532602 (https://x.com/flo_arens/status/1805255159460532602?utm_source=thinkingelixir&utm_medium=shownotes) – TIL setup-beam GitHub action can read asdf's .tool-versions file and parse the OTP and Elixir version out of it. - https://github.com/erlef/setup-beam (https://github.com/erlef/setup-beam?utm_source=thinkingelixir&utm_medium=shownotes) – The setup-beam GitHub action project. - https://github.com/erlef/setup-beam?tab=readme-ov-file#version-file (https://github.com/erlef/setup-beam?tab=readme-ov-file#version-file?utm_source=thinkingelixir&utm_medium=shownotes) – Link to README section about the version file support in setup-beam. - https://dashbit.co/blog/sdks-with-req-stripe (https://dashbit.co/blog/sdks-with-req-stripe?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post by Wojtek on reducing the surface of SDK APIs by focusing on data, not functions. - https://x.com/ZachSDaniel1/status/1805002425738334372 (https://x.com/ZachSDaniel1/status/1805002425738334372?utm_source=thinkingelixir&utm_medium=shownotes) – Ash now supports UUIDv7, a Time-Sortable Identifier for modern databases. - https://github.com/ash-project/ash/pull/1253 (https://github.com/ash-project/ash/pull/1253?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub pull request for Ash's support of UUIDv7. - https://uuid7.com/ (https://uuid7.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Information about UUID7 as a Time-Sortable Identifier. - https://elixirforum.com/t/backpex-a-highly-customizable-admin-panel-for-phoenix-liveview-applications/64314 (https://elixirforum.com/t/backpex-a-highly-customizable-admin-panel-for-phoenix-liveview-applications/64314?utm_source=thinkingelixir&utm_medium=shownotes) – Introduction to Backpex, a new admin backend library for Phoenix LiveView applications. - https://github.com/naymspace/backpex (https://github.com/naymspace/backpex?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub repository for Backpex, a customizable administration panel for Phoenix LiveView applications. - https://github.com/bluzky/salad_ui (https://github.com/bluzky/salad_ui?utm_source=thinkingelixir&utm_medium=shownotes) – SaladUI, a Tailwind LiveView UI toolkit that includes a unique feature to convert ReactJS components. - https://salad-storybook.fly.dev/welcome (https://salad-storybook.fly.dev/welcome?utm_source=thinkingelixir&utm_medium=shownotes) – Storybook for SaladUI to explore components. - https://ui.shadcn.com/ (https://ui.shadcn.com/?utm_source=thinkingelixir&utm_medium=shownotes) – React Shad/cn UI component framework storybook page. - https://salad-storybook.fly.dev/examples/convert_shadui (https://salad-storybook.fly.dev/examples/convert_shadui?utm_source=thinkingelixir&utm_medium=shownotes) – Example of converting a ReactJS component to SaladUI. - https://github.com/codedge-llc/accessible (https://github.com/codedge-llc/accessible?utm_source=thinkingelixir&utm_medium=shownotes) – Accessible, a package to add Access behavior support to Elixir structs. - https://paraxial.io/blog/owasp-top-ten (https://paraxial.io/blog/owasp-top-ten?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post on how the OWASP Top 10 applies to Elixir and Phoenix applications. - https://owasp.org/www-project-top-ten/ (https://owasp.org/www-project-top-ten/?utm_source=thinkingelixir&utm_medium=shownotes) – The OWASP Top 10, a standard awareness document for developers and web application security. - https://x.com/josevalim/status/1804117870764339546 (https://x.com/josevalim/status/1804117870764339546?utm_source=thinkingelixir&utm_medium=shownotes) – José Valim's technique of using AI to help review or determine function names in APIs. - https://fly.io/phoenix-files/using-ai-to-boost-accessibility-and-seo/ (https://fly.io/phoenix-files/using-ai-to-boost-accessibility-and-seo/?utm_source=thinkingelixir&utm_medium=shownotes) – Article on using AI to boost image accessibility and SEO, demonstrating working with OpenAI and Anthropic using Elixir. - https://2024.elixirconf.com/ (https://2024.elixirconf.com/?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf 2024 details, taking place from August 28-30 with various speakers and talks focused on Elixir. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)
We will discuss LLM security in general and some of the issues covered in the OWASP Top 10 for LLMs! Segment Resources: https://genai.owasp.org/ Skyrocketing IoT vulnerabilities, bricked computers?, MACBORG!, raw dogging source code, PHP strikes again and again, if you have a Netgear WNR614 replace it now, Arm Mali, new OpenSSH feature, weird headphones, decrypting firmware, and VPNs are still being hacked! Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw-832
We will discuss LLM security in general and some of the issues covered in the OWASP Top 10 for LLMs! Segment Resources: https://genai.owasp.org/ Show Notes: https://securityweekly.com/psw-832
We will discuss LLM security in general and some of the issues covered in the OWASP Top 10 for LLMs! Segment Resources: https://genai.owasp.org/ Show Notes: https://securityweekly.com/psw-832
We will discuss LLM security in general and some of the issues covered in the OWASP Top 10 for LLMs! Segment Resources: https://genai.owasp.org/ Skyrocketing IoT vulnerabilities, bricked computers?, MACBORG!, raw dogging source code, PHP strikes again and again, if you have a Netgear WNR614 replace it now, Arm Mali, new OpenSSH feature, weird headphones, decrypting firmware, and VPNs are still being hacked! Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw-832
Send us a Text Message.Todays episode contains basics of API Security. The podcast covers the following topics:00:00 Introduction00:51 What is API?04:53 Common Terms Associated with API09:23 API Landscape10:58 What is API Security?12:32 API Security Risks15:02 OWASP Top 10 for API Security15:39 API Attack Workflow18:29 Story of Real API related Security Incident19:17 API Security Best Practises20:21 API Security Market Landscape24:41 Appsec vs API SecurityLink to my blog on "API Security Checklist" https://thecyberman.substack.com/p/api-security-checklistSupport the Show.Google Drive link for Podcast content:https://drive.google.com/drive/folders/10vmcQ-oqqFDPojywrfYousPcqhvisnkoMy Profile on LinkedIn: https://www.linkedin.com/in/prashantmishra11/Youtube Channnel : https://www.youtube.com/@TheCybermanShow Twitter handle https://twitter.com/prashant_cyber PS: The views are my own and dont reflect any views from my employer.
Everyone is interested in generative AIs and LLMs, and everyone is looking for use cases and apps to apply them to. Just as the early days of the web inspired the original OWASP Top 10 over 20 years ago, the experimentation and adoption of LLMs has inspired a Top 10 list of their own. Sandy Dunn talks about why the list looks so familiar in many ways -- after all, LLMs are still software. But the list captures some new concepts that anyone looking to use LLMs or generative AIs should be aware of. https://llmtop10.com/ https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Educational-Resources https://owasp.org/www-project-ai-security-and-privacy-guide/ https://gandalf.lakera.ai/ https://quarkiq.com/blog How companies are benefiting from the enterprise browser. It's not just security when talking about the enterprise browser. It's the marriage between security AND productivity. In this interview, Mike will provide real live case studies on how different enterprises are benefitting. Segment Resources: https://www.island.io/resources https://www.island.io/press This segment is sponsored by Island. Visit https://www.securityweekly.com/islandrsac to learn more about them! The cybersecurity landscape continues to transform, with a growing focus on mitigating supply chain vulnerabilities, enforcing data governance, and incorporating AI into security measures. This transformation promises to steer DevSecOps teams toward software development processes with efficiency and security at the forefront. Josh Lemos, Chief Information Security Officer at GitLab will discuss the role of AI in securing software and data supply chains and helping developers work more efficiently while creating more secure code. This segment is sponsored by GitLab. Visit https://securityweekly.com/gitlabrsac to learn more about them! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-285
Everyone is interested in generative AIs and LLMs, and everyone is looking for use cases and apps to apply them to. Just as the early days of the web inspired the original OWASP Top 10 over 20 years ago, the experimentation and adoption of LLMs has inspired a Top 10 list of their own. Sandy Dunn talks about why the list looks so familiar in many ways -- after all, LLMs are still software. But the list captures some new concepts that anyone looking to use LLMs or generative AIs should be aware of. https://llmtop10.com/ https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Educational-Resources https://owasp.org/www-project-ai-security-and-privacy-guide/ https://gandalf.lakera.ai/ https://quarkiq.com/blog Show Notes: https://securityweekly.com/asw-285
Everyone is interested in generative AIs and LLMs, and everyone is looking for use cases and apps to apply them to. Just as the early days of the web inspired the original OWASP Top 10 over 20 years ago, the experimentation and adoption of LLMs has inspired a Top 10 list of their own. Sandy Dunn talks about why the list looks so familiar in many ways -- after all, LLMs are still software. But the list captures some new concepts that anyone looking to use LLMs or generative AIs should be aware of. https://llmtop10.com/ https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Educational-Resources https://owasp.org/www-project-ai-security-and-privacy-guide/ https://gandalf.lakera.ai/ https://quarkiq.com/blog How companies are benefiting from the enterprise browser. It's not just security when talking about the enterprise browser. It's the marriage between security AND productivity. In this interview, Mike will provide real live case studies on how different enterprises are benefitting. Segment Resources: https://www.island.io/resources https://www.island.io/press This segment is sponsored by Island. Visit https://www.securityweekly.com/islandrsac to learn more about them! The cybersecurity landscape continues to transform, with a growing focus on mitigating supply chain vulnerabilities, enforcing data governance, and incorporating AI into security measures. This transformation promises to steer DevSecOps teams toward software development processes with efficiency and security at the forefront. Josh Lemos, Chief Information Security Officer at GitLab will discuss the role of AI in securing software and data supply chains and helping developers work more efficiently while creating more secure code. This segment is sponsored by GitLab. Visit https://securityweekly.com/gitlabrsac to learn more about them! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-285
Software Engineering Radio - The Podcast for Professional Software Developers
Shachar Binyamin, CEO and co-founder of Inigo, joins host Priyanka Raghavan to discuss GraphQL security. They begin with a look at the state of adoption of GraphQL and why it's so popular. From there, they consider why GraphQL security is important as they take a deep dive into a range of known security issues that have been exploited in GraphQL, including authentication, authorization, and denial of service attacks with references from the OWASP Top 10 API Security Risks. They discuss some mitigation strategies and methodologies for solving GraphQL security problems, and the show ends with discussion of Inigo and Shachar's top three recommendations for building safe GraphQL applications. Brought to you by IEEE Software and IEEE Computer Society.
In this episode, I sat down with Aaron Weller, the Leader of HP's Privacy Engineering Center of Excellence (CoE), focused on providing technical solutions for privacy engineering across HP's global operations. Throughout our conversation, we discuss: what motivated HP's leadership to stand up a CoE for Privacy Engineering; Aaron's approach to staffing the CoE; how a CoE's can shift privacy left in a large, matrixed organization like HP's; and, how to leverage the CoE to proactively manage privacy risk.Aaron emphasizes the importance of understanding an organization's strategy when creating a CoE and shares his methods for gathering data to inform the center's roadmap and team building. He also highlights the great impact that a Center of Excellence can offer and gives advice for implementing one in your organization. We touch on the main challenges in privacy engineering today and the value of designing user-friendly privacy experiences. In addition, Aaron provides his perspective on selecting the right combination of Privacy Enhancing Technologies (PETs) for anonymity, how to go about implementing PETs, and the role that AI governance plays in his work. Topics Covered: Aaron's deep privacy and consulting background and how he ended up leading HP's Privacy Engineering Center of Excellence The definition of a "Center of Excellence" (CoE) and how a Privacy Engineering CoE can drive value for an organization and shift privacy leftWhat motivates a company like HP to launch a CoE for Privacy Engineering and what it's reporting line should beAaron's approach to creating a Privacy Engineering CoE roadmap; his strategy for staffing this CoE; and the skills & abilities that he soughtHow HP's Privacy Engineering CoE works with the business to advise on, and select, the right PETs for each business use caseWhy it's essential to know the privacy guarantees that your organization wants to assert before selecting the right PETs to get you thereLessons Learned from setting up a Privacy Engineering CoE and how to get executive sponsorshipThe amount of time that Privacy teams have had to work on AI issues over the past year, and advice on preventing burnoutAaron's hypothesis about the value of getting an early handle on governance over the adoption of innovative technologiesThe importance of being open to continuous learning in the field of privacy engineering Guest Info: Connect with Aaron on LinkedInLearn about HP's Privacy Engineering Center of ExcellenceReview the OWASP Machine Learning Security Top 10Review the OWASP Top 10 for LLM Applications Privado.ai Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.TRU Staffing Partners Top privacy talent - when you need it, where you need it.Shifting Privacy Left Media Where privacy engineers gather, share, & learnDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Copyright © 2022 - 2024 Principled LLC. All rights reserved.
The OWASP Top 10 gets its first update after a year, Metasploit gets its first rewrite (but it's still in Perl), PHP adds support for prepared statements, RSA Conference puts passwords on notice while patching remains hard, and more! Show Notes: https://securityweekly.com/asw-279
Sometimes infosec problems can be summarized succinctly, like "patching is hard". Sometimes a succinct summary sounds convincing, but is based on old data, irrelevant data, or made up data. Adrian Sanabria walks through some of the archeological work he's done to dig up the source of some myths. We talk about some of our favorite (as in most disliked) myths to point out how oversimplified slogans and oversimplified threat models lead to bad advice -- and why bad advice can make users less secure. Segment resources: https://www.oreilly.com/library/view/cybersecurity-myths-and/9780137929214/ The OWASP Top 10 gets its first update after a year, Metasploit gets its first rewrite (but it's still in Perl), PHP adds support for prepared statements, RSA Conference puts passwords on notice while patching remains hard, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-279
Sometimes infosec problems can be summarized succinctly, like "patching is hard". Sometimes a succinct summary sounds convincing, but is based on old data, irrelevant data, or made up data. Adrian Sanabria walks through some of the archeological work he's done to dig up the source of some myths. We talk about some of our favorite (as in most disliked) myths to point out how oversimplified slogans and oversimplified threat models lead to bad advice -- and why bad advice can make users less secure. Segment resources: https://www.oreilly.com/library/view/cybersecurity-myths-and/9780137929214/ The OWASP Top 10 gets its first update after a year, Metasploit gets its first rewrite (but it's still in Perl), PHP adds support for prepared statements, RSA Conference puts passwords on notice while patching remains hard, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-279
In this episode of CISO Tradecraft, host G. Mark Hardy delves into the crucial topic of the OWASP Top 10 Web Application Security Risks, offering insights on how attackers exploit vulnerabilities and practical advice on securing web applications. He introduces OWASP and its significant contributions to software security, then progresses to explain each of the OWASP Top 10 risks in detail, such as broken access control, injection flaws, and security misconfigurations. Through examples and recommendations, listeners are equipped with the knowledge to better protect their web applications and ultimately improve their cybersecurity posture. OWASP Cheat Sheets: https://cheatsheetseries.owasp.org/ OWASP Top 10: https://owasp.org/www-project-top-ten/ Transcripts: https://docs.google.com/document/d/17Tzyd6i6qRqNfMJ8OOEOOGpGGW0S8w32 Chapters 00:00 Introduction 01:11 Introducing OWASP: A Pillar in Cybersecurity 02:28 The Evolution of Web Vulnerabilities 05:01 Exploring Web Application Security Risks 07:46 Diving Deep into OWASP Top 10 Risks 09:28 1) Broken Access Control 14:09 2) Cryptographic Failures 18:40 3) Injection Attacks 23:57 4) Insecure Design 25:15 5) Security Misconfiguration 29:27 6) Vulnerable and Outdated Software Components 32:31 7) Identification and Authentication Failures 36:49 8) Software and Data Integrity Failures 38:46 9) Security Logging and Monitoring Practices 40:32 10) Server Side Request Forgery (SSRF) 42:15 Recap and Conclusion: Mastering Web Application Security
Guest: Sandy Dunn, Consultant Artificial Intelligence & Cybersecurity, Adjunct Professor Institute for Pervasive Security Boise State university [@BoiseState]On Linkedin | https://www.linkedin.com/in/sandydunnciso/____________________________Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin____________________________This Episode's SponsorsImperva | https://itspm.ag/imperva277117988Pentera | https://itspm.ag/penteri67a___________________________Episode NotesIn this episode of Redefining CyberSecurity, host Sean Martin and cybersecurity expert, Sandy Dunn, navigate the intricate landscape of AI applications and large language models (LLMs). They explore the potential benefits and pitfalls, emphasizing the need for strategic balance and caution in implementation.Sandy shares insights from her extensive experience, including her role in creating a comprehensive checklist to help organizations effectively integrate AI without expanding their attack surface. This checklist, a product of her involvement with the OWASP TOP 10 LLM project, serves as a valuable resource for cybersecurity teams and developers alike.The conversation also explores the legal implications of AI, underscoring the recent surge in privacy laws across several states and countries. Sandy and Sean highlight the importance of understanding these laws and the potential repercussions of non-compliance.Ethics also play a central role in their discussion, with both agreeing on the necessity of ethical considerations when implementing AI. They caution against the hasty integration of large language models without adequate preparation and understanding of the business case.The duo also examine the potential for AI to be manipulated and the importance of maintaining good cybersecurity hygiene. They encourage listeners to use AI as an opportunity to improve their entire environment, while also being mindful of the potential risks.While the use of AI and large language models presents a host of benefits to organizations, it is crucial to consider the potential security risks. By understanding the business case, recognizing legal implications, considering ethical aspects, utilizing comprehensive checklists, and maintaining robust cybersecurity, organizations can safely navigate the complex landscape of AI.___________________________Watch this and other videos on ITSPmagazine's YouTube ChannelRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist:
In this episode of CISO Tradecraft, host G Mark Hardy invites Scott Russo, a cybersecurity and engineering expert for a deep dive into the creation and maintenance of secure developer training programs. Scott discusses the importance of hands-on engaging training and the intersection of cybersecurity with teaching and mentorship. Scott shares his experiences building a secure developer training program, emphasizing the importance of gamification, tiered training, showmanship, and real-world examples to foster engagement and efficient learning. Note this episode will continue in with a part two in the next episode ISACA Event (10 Jan 2024) With G Mark Hardy - https://www.cisotradecraft.com/isaca Scott Russo - https://www.linkedin.com/in/scott-russo/ HBR Balanced Scorecard - https://hbr.org/1992/01/the-balanced-scorecard-measures-that-drive-performance-2 Transcripts - https://docs.google.com/document/d/124IqIzBnG3tPj64O2mZeO-IDTx9wIIxJ Youtube - https://youtu.be/NkrtTncAuBA Chapters 00:00 Introduction 03:00 Overview of Secure Developer Training Program 04:46 Motivation Behind Creating the Training Program 06:03 Objectives of the Secure Developer Training Program 07:45 Defining the Term 'Secure Developer' 14:49 Keeping the Training Program Current and Engaging 21:10 Real World Impact of the Training Program 21:46 Understanding the Cybersecurity Budget Argument 21:58 Incorporating Real World Examples into Training 22:26 Personal Experiences and Stories in Training 24:06 Industry Best Practices and Standards 24:18 Aligning with OWASP Top 10 25:53 Balancing OWASP Top 10 with Other Standards 26:12 The Importance of Good Stories in Training 26:32 Duration of the Training Program 28:37 Resources Required for the Training Program 32:23 Measuring the Effectiveness of the Training Program 36:07 Gamification and Certifications in Training 38:56 Tailoring Training to Different Levels of Experience 41:03 Conclusion and Final Thoughts
In this episode, host Ron Eddings is joined by Sr. Director of Red Team Operations at Coalfire, Pete Deros, to discuss the hottest topic around; adversarial AI. Ron and Pete discuss how AI is used and how the adversary is using AI so everyone can stay one step ahead of them as well. Impactful Moments 00:00 - Welcome 01:35 - Introducing Pete Deros 03:30 - More Easily Phished 05:09 - 11 Labs Video 06:42 - Is this AI or LLM? 9:18 - AI or LLMs: Who has the Speed? 10:36 - Fine Tuning LLMs 14:37 - WormGPT & Hallucinations 17:01 - LLMs Changing Second to Second 18:38 - A Word From Our Sponsor 20:19 - ‘Write me Ransomware!' 23:24 - Working Around AI Roadblocks 28:00 - “Undetectable for A Human” 31:58 - Pete Can Help You Floss! 34:56 - OWASP Top 10 & Resources 37:00 - Check out Coalfire Links: Connect with our guest Pete Deros: https://www.linkedin.com/in/pete-deros-94524b9a/ Coalfire's Website: https://www.coalfire.com/ Coalfire Securialities Report: https://www.coalfire.com/insights/resources/reports/securealities-report-2023-compliance OWASP Top 10 LLM: https://owasp.org/www-project-top-10-for-large-language-model-applications/ Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/ Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com Continue the conversation by joining our Discord: https://hackervalley.com/discord
Talk Python To Me - Python conversations for passionate developers
Do you worry about your developer / data science supply chain safety? All the packages for the Python ecosystem are much of what makes Python awesome. But the are also a bit of an open door to your code and machine. Luckily the PSF is taking this seriously and hired Mike Fiedler as the full time PyPI Safety & Security Engineer (not to be confused with the Security Developer in Residence staffed by Seth Michael Larson). Mike is here to give us the state of the PyPI security and plans for the future. Links from the show Mike on Twitter: @mikefiedler Mike on Mastodon: @miketheman@hachyderm.io Supply Chain examples SolarWinds: csoonline.com XcodeGhost: wikipedia.org Google Ad Malware: medium.com PyPI: pypi.org OWASP Top 10: owasp.org Trusted Publishers: docs.pypi.org libraries.io: libraries.io GitHub Full 2FA: github.blog Mike's Latest Blog Post: blog.pypi.org pprintpp package: github.com ICDiff: github.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to us on YouTube: youtube.com Follow Talk Python on Mastodon: talkpython Follow Michael on Mastodon: mkennedy Sponsors Sentry Error Monitoring, Code TALKPYTHON Talk Python Training