Podcasts about owasp top

  • 138PODCASTS
  • 311EPISODES
  • 38mAVG DURATION
  • 1EPISODE EVERY OTHER WEEK
  • Apr 22, 2025LATEST

POPULARITY

20172018201920202021202220232024


Best podcasts about owasp top

Latest podcast episodes about owasp top

Hacking Humans
OWASP insecure design (noun) [Word Notes]

Hacking Humans

Play Episode Listen Later Apr 22, 2025 8:19


Please enjoy this encore episode of Word Notes. A broad OWASP Top 10 software development category representing missing, ineffective, or unforeseen security measures. CyberWire Glossary link: https://thecyberwire.com/glossary/owasp-insecure-design Audio reference link: “Oceans Eleven Problem Constraints Assumptions.” by Steve Jones, YouTube, 4 November 2015.

Word Notes
OWASP insecure design (noun)

Word Notes

Play Episode Listen Later Apr 22, 2025 8:19


Please enjoy this encore episode of Word Notes. A broad OWASP Top 10 software development category representing missing, ineffective, or unforeseen security measures. CyberWire Glossary link: https://thecyberwire.com/glossary/owasp-insecure-design Audio reference link: “Oceans Eleven Problem Constraints Assumptions.” by Steve Jones, YouTube, 4 November 2015. Learn more about your ad choices. Visit megaphone.fm/adchoices

ITSPmagazine | Technology. Cybersecurity. Society
From Overload to Insight: Are We Getting Smarter, or Just Letting AI Think for Us? | A RSA Conference 2025 Conversation with Steve Wilson | On Location Coverage with Sean Martin and Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 4, 2025 27:26


In a conversation that sets the tone for this year's RSA Conference, Steve Wilson, shares a candid look at how AI is intersecting with cybersecurity in real and measurable ways. Wilson, who also leads the OWASP Top 10 for Large Language Models project and recently authored a book published by O'Reilly on the topic, brings a multi-layered perspective to a discussion that blends strategy, technology, and organizational behavior.Wilson's session title at RSA Conference—“Are the Machines Learning, or Are We?”—asks a timely question. Security teams are inundated with data, but without meaningful visibility—defined not just as seeing, but understanding and acting on what you see—confidence in defense capabilities may be misplaced. Wilson references a study conducted with IDC that highlights this very disconnect: organizations feel secure, yet admit they can't see enough of their environment to justify that confidence.This episode tackles one of the core paradoxes of AI in cybersecurity: it offers the promise of enhanced detection, speed, and insight, but only if applied thoughtfully. Generative AI and large language models (LLMs) aren't magical fixes, and they struggle with large datasets. But when layered atop refined systems like user and entity behavior analytics (UEBA), they can help junior analysts punch above their weight—or even automate early-stage investigations.Wilson doesn't stop at the tools. He zooms out to the business implications, where visibility, talent shortages, and tech complexity converge. He challenges security leaders to rethink what visibility truly means and to recognize the mounting noise problem. The industry is chasing 40% more CVEs year over year—an unsustainable growth curve that demands better signal-to-noise filtering.At its heart, the episode raises important strategic questions: Are businesses merely offloading thinking to machines? Or are they learning how to apply these technologies to think more clearly, act more decisively, and structure teams differently?Whether you're building a SOC strategy, rethinking tooling, or just navigating the AI hype cycle, this conversation with Steve Wilson offers grounded insights with real implications for today—and tomorrow.

ITSPmagazine | Technology. Cybersecurity. Society
Building and Securing Intelligent Workflows: Why Your AI Strategy Needs Agentic AI Threat Modeling and a Zero Trust Mindset | A Conversation with Ken Huang | Redefining CyberSecurity with Sean Martin

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Mar 25, 2025 43:10


⬥GUEST⬥Ken Huang, Co-Chair, AI Safety Working Groups at Cloud Security Alliance | On LinkedIn: https://www.linkedin.com/in/kenhuang8/⬥HOST⬥Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com⬥EPISODE NOTES⬥In this episode of Redefining CyberSecurity, host Sean Martin speaks with Ken Huang, Co-Chair of the Cloud Security Alliance (CSA) AI Working Group and author of several books including Generative AI Security and the upcoming Agent AI: Theory and Practice. The conversation centers on what agentic AI is, how it is being implemented, and what security, development, and business leaders need to consider as adoption grows.Agentic AI refers to systems that can autonomously plan, execute, and adapt tasks using large language models (LLMs) and integrated tools. Unlike traditional chatbots, agentic systems handle multi-step workflows, delegate tasks to specialized agents, and dynamically respond to inputs using tools like vector databases or APIs. This creates new possibilities for business automation but also introduces complex security and governance challenges.Practical Applications and Emerging Use CasesKen outlines current use cases where agentic AI is being applied: startups using agentic models to support scientific research, enterprise tools like Salesforce's AgentForce automating workflows, and internal chatbots acting as co-workers by tapping into proprietary data. As agentic AI matures, these systems may manage travel bookings, orchestrate ticketing operations, or even assist in robotic engineering—all with minimal human intervention.Implications for Development and Security TeamsDevelopment teams adopting agentic AI frameworks—such as AutoGen or CrewAI—must recognize that most do not come with out-of-the-box security controls. Ken emphasizes the need for SDKs that add authentication, monitoring, and access controls. For IT and security operations, agentic systems challenge traditional boundaries; agents often span across cloud environments, demanding a zero-trust mindset and dynamic policy enforcement.Security leaders are urged to rethink their programs. Agentic systems must be validated for accuracy, reliability, and risk—especially when multiple agents operate together. Threat modeling and continuous risk assessment are no longer optional. Enterprises are encouraged to start small: deploy a single-agent system, understand the workflow, validate security controls, and scale as needed.The Call for Collaboration and Mindset ShiftAgentic AI isn't just a technological shift—it requires a cultural one. Huang recommends cross-functional engagement and alignment with working groups at CSA, OWASP, and other communities to build resilient frameworks and avoid duplicated effort. Zero Trust becomes more than an architecture—it becomes a guiding principle for how agentic AI is developed, deployed, and defended.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥BOOK | Generative AI Security: https://link.springer.com/book/10.1007/978-3-031-54252-7BOOK | Agentic AI: Theories and Practices, to be published August by Springer: https://link.springer.com/book/9783031900259BOOK | The Handbook of CAIO (with a business focus): https://www.amazon.com/Handbook-Chief-AI-Officers-Revolution/dp/B0DFYNXGMRMore books at Amazon, including books published by Cambridge University Press and John Wiley, etc.: https://www.amazon.com/stores/Ken-Huang/author/B0D3J7L7GNVideo Course Mentioned During this Episode: "Generative AI for Cybersecurity" video course by EC-Council with 255 people rated averaged 5 starts: https://codered.eccouncil.org/course/generative-ai-for-cybersecurity-course?logged=falsePodcast: The 2025 OWASP Top 10 for LLMs: What's Changed and Why It Matters | A Conversation with Sandy Dunn and Rock Lambros⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity Podcast: 

Redefining CyberSecurity
Building and Securing Intelligent Workflows: Why Your AI Strategy Needs Agentic AI Threat Modeling and a Zero Trust Mindset | A Conversation with Ken Huang | Redefining CyberSecurity with Sean Martin

Redefining CyberSecurity

Play Episode Listen Later Mar 25, 2025 43:10


⬥GUEST⬥Ken Huang, Co-Chair, AI Safety Working Groups at Cloud Security Alliance | On LinkedIn: https://www.linkedin.com/in/kenhuang8/⬥HOST⬥Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com⬥EPISODE NOTES⬥In this episode of Redefining CyberSecurity, host Sean Martin speaks with Ken Huang, Co-Chair of the Cloud Security Alliance (CSA) AI Working Group and author of several books including Generative AI Security and the upcoming Agent AI: Theory and Practice. The conversation centers on what agentic AI is, how it is being implemented, and what security, development, and business leaders need to consider as adoption grows.Agentic AI refers to systems that can autonomously plan, execute, and adapt tasks using large language models (LLMs) and integrated tools. Unlike traditional chatbots, agentic systems handle multi-step workflows, delegate tasks to specialized agents, and dynamically respond to inputs using tools like vector databases or APIs. This creates new possibilities for business automation but also introduces complex security and governance challenges.Practical Applications and Emerging Use CasesKen outlines current use cases where agentic AI is being applied: startups using agentic models to support scientific research, enterprise tools like Salesforce's AgentForce automating workflows, and internal chatbots acting as co-workers by tapping into proprietary data. As agentic AI matures, these systems may manage travel bookings, orchestrate ticketing operations, or even assist in robotic engineering—all with minimal human intervention.Implications for Development and Security TeamsDevelopment teams adopting agentic AI frameworks—such as AutoGen or CrewAI—must recognize that most do not come with out-of-the-box security controls. Ken emphasizes the need for SDKs that add authentication, monitoring, and access controls. For IT and security operations, agentic systems challenge traditional boundaries; agents often span across cloud environments, demanding a zero-trust mindset and dynamic policy enforcement.Security leaders are urged to rethink their programs. Agentic systems must be validated for accuracy, reliability, and risk—especially when multiple agents operate together. Threat modeling and continuous risk assessment are no longer optional. Enterprises are encouraged to start small: deploy a single-agent system, understand the workflow, validate security controls, and scale as needed.The Call for Collaboration and Mindset ShiftAgentic AI isn't just a technological shift—it requires a cultural one. Huang recommends cross-functional engagement and alignment with working groups at CSA, OWASP, and other communities to build resilient frameworks and avoid duplicated effort. Zero Trust becomes more than an architecture—it becomes a guiding principle for how agentic AI is developed, deployed, and defended.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥BOOK | Generative AI Security: https://link.springer.com/book/10.1007/978-3-031-54252-7BOOK | Agentic AI: Theories and Practices, to be published August by Springer: https://link.springer.com/book/9783031900259BOOK | The Handbook of CAIO (with a business focus): https://www.amazon.com/Handbook-Chief-AI-Officers-Revolution/dp/B0DFYNXGMRMore books at Amazon, including books published by Cambridge University Press and John Wiley, etc.: https://www.amazon.com/stores/Ken-Huang/author/B0D3J7L7GNVideo Course Mentioned During this Episode: "Generative AI for Cybersecurity" video course by EC-Council with 255 people rated averaged 5 starts: https://codered.eccouncil.org/course/generative-ai-for-cybersecurity-course?logged=falsePodcast: The 2025 OWASP Top 10 for LLMs: What's Changed and Why It Matters | A Conversation with Sandy Dunn and Rock Lambros⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity Podcast: 

CISO Tradecraft
#223 - A CISO Primer on Agentic AI

CISO Tradecraft

Play Episode Listen Later Mar 10, 2025 25:43 Transcription Available


In this episode of CISO Tradecraft, G. Mark Hardy dives deep into the world of Agentic AI and its impact on cybersecurity. The discussion covers the definition and characteristics of Agentic AI, as well as expert insights on its feasibility. Learn about its primary functions—perception, cognition, and action—and explore practical cybersecurity applications. Discover the rapid advancements made by tech giants and potential risks involved. This episode is a comprehensive guide to understanding and securely implementing Agentic AI in your enterprise. Transcripts https://docs.google.com/document/d/1tIv2NKX0DL4NTnvqKV9rKrgrewa68m3W References Vladimir Putin - https://www.rt.com/news/401731-ai-rule-world-putin/ Minds and Machines - https://link.springer.com/article/10.1007/s44163-024-00216-2 Anthropic - https://www.cnbc.com/2024/10/22/anthropic-announces-ai-agents-for-complex-tasks-racing-openai.html Convergence AI - https://convergence.ai/training-web-agents-with-web-world-models-dec-2024/ OpenAI Operator - https://openai.com/index/introducing-operator/ ByteDance UITARS - https://venturebeat.com/ai/bytedances-ui-tars-can-take-over-your-computer-outperforms-gpt-4o-and-claude/ Zapier - https://www.linkedin.com/pulse/openai-bytedance-zapier-launch-ai-agents-getcoai-l6blf/ Microsoft OmniParser - https://www.microsoft.com/en-us/research/articles/omniparser-v2-turning-any-llm-into-a-computer-use-agent/ Google Project Mariner - https://deepmind.google/technologies/project-mariner/ Rajeev Sharma - Agentic AI Architecture - https://markovate.com/blog/agentic-ai-architecture/ NIST.AI.600-1 - https://doi.org/10.6028/NIST.AI.600-1 Mitre ATLAS - https://atlas.mitre.org/ OWASP Top 10 for LLMs - https://owasp.org/www-project-top-10-for-large-language-model-applications/ ISO 42001 - https://www.iso.org/standard/81230.html Chapters  00:00 Introduction and Intriguing Quote 01:10 Defining Agentic AI 02:01 Expert Insights on Agency 04:32 Agentic AI in Practice 06:54 Recent Developments in Agentic AI 08:20 Deep Dive into Agentic AI Infrastructure 15:35 Use Cases for Agentic AI 21:12 Challenges and Considerations 24:22 Conclusion and Recap

ITSPmagazine | Technology. Cybersecurity. Society
Turning Developers into Security Champions: The Business Case for Secure Development | A Manicode Brand Story with Jim Manico

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Mar 6, 2025 42:25


Organizations build and deploy applications at an unprecedented pace, but security is often an afterthought. This episode of ITSPmagazine's Brand Story features Jim Manico, founder of Manicode Security, in conversation with hosts Sean Martin and Marco Ciappelli. The discussion explores the current state of application security, the importance of developer training, and how organizations can integrate security from the ground up to drive better business outcomes.The Foundation of Secure DevelopmentJim Manico has spent decades helping engineers and architects understand and implement secure coding practices. His work with the Open Web Application Security Project (OWASP), including contributions to the OWASP Top 10 and the OWASP Cheat Sheet Series, has influenced how security is approached in software development. He emphasizes that security should not be an afterthought but a fundamental part of the development process.He highlights OWASP's role in providing documentation, security tools, and standards like the Application Security Verification Standard (ASVS), which is now in its 5.0 release. These resources help organizations build secure applications, but Manico points out that simply having the guidance available isn't enough—engineers need the right training to apply security principles effectively.Why Training MattersManico has trained thousands of engineers worldwide and sees firsthand the impact of hands-on education. He explains that developers often lack formal security training, which leads to common mistakes such as insecure authentication, improper data handling, and vulnerabilities in third-party dependencies. His training programs focus on practical, real-world applications, allowing developers to immediately integrate security into their work.Security training also helps businesses beyond just compliance. While some companies initially engage in training to meet regulatory requirements, many realize the long-term value of security in reducing risk, improving product quality, and building customer trust. Manico shares an example of a startup that embedded security from the beginning, investing heavily in training early on. That approach helped differentiate them in the market and contributed to their success as a multi-billion-dollar company.The Role of AI and Continuous LearningManico acknowledges that the speed of technological change presents challenges for security training. Frameworks, programming languages, and attack techniques evolve constantly, requiring continuous learning. He has integrated AI tools into his training workflow to help answer complex questions, identify knowledge gaps, and refine content. AI serves as an augmentation tool, not a replacement, and he encourages developers to use it as an assistant to strengthen their understanding of security concepts.Security as a Business EnablerThe conversation reinforces that secure coding is not just about avoiding breaches—it is about building better software. Organizations that prioritize security early can reduce costs, improve reliability, and increase customer confidence. Manico's approach to education is about empowering developers to think beyond compliance and see security as a critical component of software quality and business success.For organizations looking to enhance their security posture, developer training is an investment that pays off. Manicode Security offers customized training programs to meet the specific needs of teams, covering topics from secure coding fundamentals to advanced application security techniques. To learn more or schedule a session, Jim Manico can be reached at Jim@manicode.com.Tune in to the full episode to hear more insights from Jim Manico on how security training is shaping the future of application security.Learn more about Manicode: https://itspm.ag/manicode-security-7q8iNote: This story contains promotional content. Learn more.Guest: Jim Manico, Founder and Secure Coding Educator at Manicode Security | On Linkedin: https://www.linkedin.com/in/jmanico/ResourcesDownload the Course Catalog: https://itspm.ag/manicode-x684Learn more and catch more stories from Manicode Security: https://www.itspmagazine.com/directory/manicode-securityAre you interested in telling your story?https://www.itspmagazine.com/telling-your-story

Application Security PodCast
Henrik Plate -- OWASP Top 10 Open Source Risks

Application Security PodCast

Play Episode Listen Later Mar 4, 2025 38:26


Henrik Plate joins us to discuss the OWASP Top 10 Open Source Risks, a guide highlighting critical security and operational challenges in using open source dependencies. The list includes risks like known vulnerabilities, compromised legitimate packages, name confusion attacks, and unmaintained software, providing developers and organizations a framework to assess and mitigate potential threats. Henrik offers insights on how developers and AppSec professionals can implement the guidelines. Our discussion also includes the need for a dedicated open-source risk list, and the importance of addressing known vulnerabilities, unmaintained projects, immature software, and more. The OWASP Top 10 Open Source Risks FOLLOW OUR SOCIAL MEDIA: ➜Twitter: @AppSecPodcast➜LinkedIn: The Application Security Podcast➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast Thanks for Listening! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Storm⚡️Watch by GreyNoise Intelligence
Cyber Apocalypse 2025: Ransomware Rampage, ICS Mayhem, & Vulnerability Avalanche Exposed

Storm⚡️Watch by GreyNoise Intelligence

Play Episode Listen Later Mar 4, 2025 60:38


Forecast = Ransomware storms surge with an 87% spike in industrial attacks—brace for ICS strikes from GRAPHITE and BAUXITE! Infostealers hit healthcare and education, while VPN vulnerabilities pour in—grab your digital umbrella! ‍ It's report season and today the crew kicks things off with a breakdown of Veracode's State of Software Security 2025 Report, highlighting significant improvements in OWASP Top 10 pass rates but also noting concerning trends in high-severity flaws and security debt. Next, we take a peek at Dragos's 2025 OT/ICS Cybersecurity Report, which reveals an increase in ransomware attacks against industrial organizations and the emergence of new threat groups like GRAPHITE and BAUXITE. The report also details the evolution of malware targeting critical infrastructure, such as Fuxnet and FrostyGoop. The Huntress 2025 Cyber Threat Report is then discussed, showcasing the dominance of infostealers and malicious scripts in the threat landscape, with healthcare and education sectors being prime targets. The report also highlights the shift in ransomware tactics towards data theft and extortion. The team also quickly covers a recent and _massive_ $1.5 billion Ethereum heist. We *FINALLY* cover some recent findings from Censys, including their innovative approach to discovering non-standard port usage in Industrial Control System protocols. This segment also touches on the growing threat posed by vulnerabilities in edge security products. We also *FINALLY* get around to checking out VulnCheck's research, including an analysis of Black Basta ransomware group's tactics based on leaked chat logs, and their efforts to automate Stakeholder Specific Vulnerability Categorization (SSVC) for more effective vulnerability prioritization. The episode wraps up with mentions of GreyNoise's latest reports on mass internet exploitation and a newly discovered DDoS botnet, providing listeners with a well-rounded view of the current cybersecurity landscape. Storm Watch Homepage >> Learn more about GreyNoise >>  

ITSPmagazine | Technology. Cybersecurity. Society
The 2025 OWASP Top 10 for LLMs: What's Changed and Why It Matters | A Conversation with Sandy Dunn and Rock Lambros | Redefining CyberSecurity with Sean Martin

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Feb 13, 2025 47:58


⬥GUESTS⬥Sandy Dunn, Consultant Artificial Intelligence & Cybersecurity, Adjunct Professor Institute for Pervasive Security Boise State University | On Linkedin: https://www.linkedin.com/in/sandydunnciso/Rock Lambros, CEO and founder of RockCyber | On LinkedIn | https://www.linkedin.com/in/rocklambros/Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martinView This Show's Sponsors⬥EPISODE NOTES⬥The rise of large language models (LLMs) has reshaped industries, bringing both opportunities and risks. The latest OWASP Top 10 for LLMs aims to help organizations understand and mitigate these risks. In a recent episode of Redefining Cybersecurity, host Sean Martin sat down with Sandy Dunn and Rock Lambros to discuss the latest updates to this essential security framework.The OWASP Top 10 for LLMs: What It Is and Why It MattersOWASP has long been a trusted source for security best practices, and its LLM-specific Top 10 is designed to guide organizations in identifying and addressing key vulnerabilities in AI-driven applications. This initiative has rapidly gained traction, becoming a reference point for AI security governance, testing, and implementation. Organizations developing or integrating AI solutions are now evaluating their security posture against this list, ensuring safer deployment of LLM technologies.Key Updates for 2025The 2025 iteration of the OWASP Top 10 for LLMs introduces refinements and new focus areas based on industry feedback. Some categories have been consolidated for clarity, while new risks have been added to reflect emerging threats.• System Prompt Leakage (New) – Attackers may manipulate LLMs to extract system prompts, potentially revealing sensitive operational instructions and security mechanisms.• Vector and Embedding Risks (New) – Security concerns around vector databases and embeddings, which can lead to unauthorized data exposure or manipulation.Other notable changes include reordering certain risks based on real-world impact. Prompt Injection remains the top concern, while Sensitive Information Disclosure and Supply Chain Vulnerabilities have been elevated in priority.The Challenge of AI SecurityUnlike traditional software vulnerabilities, LLMs introduce non-deterministic behavior, making security testing more complex. Jailbreaking attacks—where adversaries bypass system safeguards through manipulative prompts—remain a persistent issue. Prompt injection attacks, where unauthorized instructions are inserted to manipulate output, are also difficult to fully eliminate.As Dunn explains, “There's no absolute fix. It's an architecture issue. Until we fundamentally redesign how we build LLMs, there will always be risk.”Beyond Compliance: A Holistic Approach to AI SecurityBoth Dunn and Lambros emphasize that organizations need to integrate AI security into their overall IT and cybersecurity strategy, rather than treating it as a separate issue. AI governance, supply chain integrity, and operational resilience must all be considered.Lambros highlights the importance of risk management over rigid compliance: “Organizations have to balance innovation with security. You don't have to lock everything down, but you need to understand where your vulnerabilities are and how they impact your business.”Real-World Impact and AdoptionThe OWASP Top 10 for LLMs has already been widely adopted, with companies incorporating it into their security frameworks. It has been translated into multiple languages and is serving as a global benchmark for AI security best practices.Additionally, initiatives like HackerPrompt 2.0 are helping security professionals stress-test AI models in real-world scenarios. OWASP is also facilitating industry collaboration through working groups on AI governance, threat intelligence, and agentic AI security.How to Get InvolvedFor those interested in contributing, OWASP provides open-access resources and welcomes participants to its AI security initiatives. Anyone can join the discussion, whether as an observer or an active contributor.As AI becomes more ingrained in business and society, frameworks like the OWASP Top 10 for LLMs are essential for guiding responsible innovation. To learn more, listen to the full episode and explore OWASP's latest AI security resources.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥OWASP GenAI: https://genai.owasp.org/Link to the 2025 version of the Top 10 for LLM Applications: https://genai.owasp.org/llm-top-10/Getting Involved: https://genai.owasp.org/contribute/OWASP LLM & Gen AI Security Summit at RSAC 2025: https://genai.owasp.org/event/rsa-conference-2025/AI Threat Mind Map: https://github.com/subzer0girl2/AI-Threat-Mind-MapGuide for Preparing and Responding to Deepfake Events: https://genai.owasp.org/resource/guide-for-preparing-and-responding-to-deepfake-events/AI Security Solution Cheat Sheet Q1-2025:https://genai.owasp.org/resource/ai-security-solution-cheat-sheet-q1-2025/HackAPrompt 2.0: https://www.hackaprompt.com/⬥ADDITIONAL INFORMATION⬥✨ To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcastRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist on YouTube:

Redefining CyberSecurity
The 2025 OWASP Top 10 for LLMs: What's Changed and Why It Matters | A Conversation with Sandy Dunn and Rock Lambros | Redefining CyberSecurity with Sean Martin

Redefining CyberSecurity

Play Episode Listen Later Feb 13, 2025 46:45


⬥GUESTS⬥Sandy Dunn, Consultant Artificial Intelligence & Cybersecurity, Adjunct Professor Institute for Pervasive Security Boise State University | On Linkedin: https://www.linkedin.com/in/sandydunnciso/Rock Lambros, CEO and founder of RockCyber | On LinkedIn | https://www.linkedin.com/in/rocklambros/Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martinView This Show's Sponsors⬥EPISODE NOTES⬥The rise of large language models (LLMs) has reshaped industries, bringing both opportunities and risks. The latest OWASP Top 10 for LLMs aims to help organizations understand and mitigate these risks. In a recent episode of Redefining Cybersecurity, host Sean Martin sat down with Sandy Dunn and Rock Lambros to discuss the latest updates to this essential security framework.The OWASP Top 10 for LLMs: What It Is and Why It MattersOWASP has long been a trusted source for security best practices, and its LLM-specific Top 10 is designed to guide organizations in identifying and addressing key vulnerabilities in AI-driven applications. This initiative has rapidly gained traction, becoming a reference point for AI security governance, testing, and implementation. Organizations developing or integrating AI solutions are now evaluating their security posture against this list, ensuring safer deployment of LLM technologies.Key Updates for 2025The 2025 iteration of the OWASP Top 10 for LLMs introduces refinements and new focus areas based on industry feedback. Some categories have been consolidated for clarity, while new risks have been added to reflect emerging threats.• System Prompt Leakage (New) – Attackers may manipulate LLMs to extract system prompts, potentially revealing sensitive operational instructions and security mechanisms.• Vector and Embedding Risks (New) – Security concerns around vector databases and embeddings, which can lead to unauthorized data exposure or manipulation.Other notable changes include reordering certain risks based on real-world impact. Prompt Injection remains the top concern, while Sensitive Information Disclosure and Supply Chain Vulnerabilities have been elevated in priority.The Challenge of AI SecurityUnlike traditional software vulnerabilities, LLMs introduce non-deterministic behavior, making security testing more complex. Jailbreaking attacks—where adversaries bypass system safeguards through manipulative prompts—remain a persistent issue. Prompt injection attacks, where unauthorized instructions are inserted to manipulate output, are also difficult to fully eliminate.As Dunn explains, “There's no absolute fix. It's an architecture issue. Until we fundamentally redesign how we build LLMs, there will always be risk.”Beyond Compliance: A Holistic Approach to AI SecurityBoth Dunn and Lambros emphasize that organizations need to integrate AI security into their overall IT and cybersecurity strategy, rather than treating it as a separate issue. AI governance, supply chain integrity, and operational resilience must all be considered.Lambros highlights the importance of risk management over rigid compliance: “Organizations have to balance innovation with security. You don't have to lock everything down, but you need to understand where your vulnerabilities are and how they impact your business.”Real-World Impact and AdoptionThe OWASP Top 10 for LLMs has already been widely adopted, with companies incorporating it into their security frameworks. It has been translated into multiple languages and is serving as a global benchmark for AI security best practices.Additionally, initiatives like HackerPrompt 2.0 are helping security professionals stress-test AI models in real-world scenarios. OWASP is also facilitating industry collaboration through working groups on AI governance, threat intelligence, and agentic AI security.How to Get InvolvedFor those interested in contributing, OWASP provides open-access resources and welcomes participants to its AI security initiatives. Anyone can join the discussion, whether as an observer or an active contributor.As AI becomes more ingrained in business and society, frameworks like the OWASP Top 10 for LLMs are essential for guiding responsible innovation. To learn more, listen to the full episode and explore OWASP's latest AI security resources.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥OWASP GenAI: https://genai.owasp.org/Link to the 2025 version of the Top 10 for LLM Applications: https://genai.owasp.org/llm-top-10/Getting Involved: https://genai.owasp.org/contribute/OWASP LLM & Gen AI Security Summit at RSAC 2025: https://genai.owasp.org/event/rsa-conference-2025/AI Threat Mind Map: https://github.com/subzer0girl2/AI-Threat-Mind-MapGuide for Preparing and Responding to Deepfake Events: https://genai.owasp.org/resource/guide-for-preparing-and-responding-to-deepfake-events/AI Security Solution Cheat Sheet Q1-2025:https://genai.owasp.org/resource/ai-security-solution-cheat-sheet-q1-2025/HackAPrompt 2.0: https://www.hackaprompt.com/⬥ADDITIONAL INFORMATION⬥✨ To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcastRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist on YouTube:

Absolute AppSec
Episode 270 - 2025 AppSec Predictions

Absolute AppSec

Play Episode Listen Later Jan 7, 2025


Ken and Seth return for 2025 to review the accuracy of their predictions from 2024 and make a few new ones for this new year. Some hits and misses for last year, but overall the generic predictions for both AI/LLM growth and software supply chain security were accurate. However, they were wrong in their assumptions around LLM creation and training. For 2025, predictions on AI billing models, software supply chain attacks, OWASP Top 10 2025, and more.

PurePerformance
The Security and Resiliency Challenges of Cloud Native Authorization with Alex Olivier

PurePerformance

Play Episode Listen Later Nov 11, 2024 52:35


Authentication (validating who you claim to be) and Authorization (enforcing what you are allowed to do) are critical in modern software development. While authentication seems to be a solved problem, modern software development faces many challenges with secure, fast, and resilient authorization mechanisms. To learn more about those challenges, we invited Alex Olivier, Co-Founder and CPO at Cerbos, an Open Source Scalable Authorization Solution. Alex shared insights on attribute-based vs. role-based access Control, the difference between stateful and stateless authorization implementations, why Broken Access Control is in the OWASP Top 10 Security Vulnerabilities, and how to observe the authorization solution for performance, security, and auditing purposes.Links we discussed during the episode:Alex's LinkedIn: https://www.linkedin.com/in/alexolivier/Cerbos on GitHub: https://github.com/cerbos/cerbosOWASP Broken Access Control: https://owasp.org/www-community/Broken_Access_Control

The CyberWire
LLM security 101. [Research Saturday]

The CyberWire

Play Episode Listen Later Oct 26, 2024 20:53


This week, we are pleased to be joined by Mick Baccio, global security advisor for Splunk SURGe, sharing their research on "LLM Security: Splunk & OWASP Top 10 for LLM-based Applications." The research dives into the rapid rise of AI and Large Language Models (LLMs) that initially seem magical, but behind the scenes, they are sophisticated systems built by humans. Despite their impressive capabilities, these systems are vulnerable to numerous cyber threats. Splunk's research explores the OWASP Top 10 for LLM Applications, a framework that highlights key vulnerabilities such as prompt injection, training data poisoning, and sensitive information disclosure. The research can be found here: LLM Security: Splunk & OWASP Top 10 for LLM-based Applications Learn more about your ad choices. Visit megaphone.fm/adchoices

Research Saturday
LLM security 101.

Research Saturday

Play Episode Listen Later Oct 26, 2024 20:53


This week, we are pleased to be joined by Mick Baccio, global security advisor for Splunk SURGe, sharing their research on "LLM Security: Splunk & OWASP Top 10 for LLM-based Applications." The research dives into the rapid rise of AI and Large Language Models (LLMs) that initially seem magical, but behind the scenes, they are sophisticated systems built by humans. Despite their impressive capabilities, these systems are vulnerable to numerous cyber threats. Splunk's research explores the OWASP Top 10 for LLM Applications, a framework that highlights key vulnerabilities such as prompt injection, training data poisoning, and sensitive information disclosure. The research can be found here: LLM Security: Splunk & OWASP Top 10 for LLM-based Applications Learn more about your ad choices. Visit megaphone.fm/adchoices

Code Story
The Haunted House of APIs - The Witch's Brew with Jayesh Ahire

Code Story

Play Episode Listen Later Oct 22, 2024 20:54


The Haunted House of API'sThe Witch's Brew: Stirring Up OWASP Vulnerabilities and API TestingToday, we are kicking off an amazing series for Cybersecurity Awareness month, entitled the Haunted House of API's, sponsored by our friends at Traceable AI. In this series, we are building awareness around API's, their security risks – and what you can do about it. Traceable AI is building One Platform to secure every API, so you can discover, protect, and test all your API's with contextual API security, enabling organizations to minimize risk and maximize the value API's bring to their customers.In today's episode, we will be talking with Jayesh Ahire, an expert in API testing and OWASP, will guide us through the "brew" of common vulnerabilities that haunt API ecosystems, focusing on the OWASP Top 10 for APIs. He'll share how organizations can use API security testing to spot and neutralize these vulnerabilities before they become major exploits. By emphasizing proactive security measures, Jayesh will offer insights into creating a strong API testing framework that keeps malicious actors at bay.Discussion questions:What are some of the most common vulnerabilities in APIs that align with the OWASP Top 10, and why are they so dangerous?Why is API security testing crucial for detecting these vulnerabilities early, and how does it differ from traditional security testing?Can you share an example of how an overlooked API vulnerability led to a significant security breach?How can organizations create an effective API testing framework that addresses these vulnerabilities?What tools or methods do you recommend for continuously testing APIs and ensuring they remain secure as they evolve?SponsorsTraceableLinkshttps://www.traceable.ai/https://www.linkedin.com/in/jayesh-ahire/https://owasp.org/Support this podcast at — https://redcircle.com/code-story/donationsAdvertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy

ITSPmagazine | Technology. Cybersecurity. Society
Book | The Developer's Playbook for Large Language Model Security: Building Secure AI Applications | A Conversation with Steve Wilson | Redefining CyberSecurity with Sean Martin

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Sep 24, 2024 34:35


Guest: Steve Wilson, Chief Product Officer, Exabeam [@exabeam] & Project Lead,  OWASP Top 10 for Larage Language Model Applications [@owasp]On LinkedIn | https://www.linkedin.com/in/wilsonsd/On Twitter | https://x.com/virtualsteve____________________________Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinView This Show's Sponsors___________________________Episode NotesIn this episode of Redefining CyberSecurity, host Sean Martin sat down with Steve Wilson, chief product officer at Exabeam, to discuss the critical topic of secure AI development. The conversation revolved around the nuances of developing and deploying large language models (LLMs) in the field of cybersecurity.Steve Wilson's expertise lies at the intersection of AI and cybersecurity, a point he emphasized while sharing his journey from founding the Top 10 group for large language models to authoring his new book, "The Developer's Playbook for Large Language Model Security." In this insightful discussion, Wilson and Martin explore the roles of developers and product managers in ensuring the safety and security of AI systems.One of the key themes in the conversation is the categorization of AI applications into chatbots, co-pilots, and autonomous agents. Wilson explains that while chatbots are open-ended, interacting with users on various topics, co-pilots focus on enhancing productivity within specific domains by interacting with user data. Autonomous agents are more independent, executing tasks with minimal human intervention.Wilson brings attention to the concept of overreliance on AI models and the associated risks. Highlighting that large language models can hallucinate or produce unreliable outputs, he stresses the importance of designing systems that account for these limitations. Product managers play a crucial role here, ensuring that AI applications are built to mitigate risks and communicate their reliability to users effectively.The discussion also touches on the importance of security guardrails and continuous monitoring. Wilson introduces the idea of using tools akin to web app firewalls (WAF) or runtime application self-protection (RASP) to keep AI models within safe operational parameters. He mentions frameworks like Nvidia's open-source project, Nemo Guardrails, which aid developers in implementing these defenses.Moreover, the conversation highlights the significance of testing and evaluation in AI development. Wilson parallels the education and evaluation of LLMs to training and testing a human-like system, underscoring that traditional unit tests may not suffice. Instead, flexible test cases and advanced evaluation tools are necessary. Another critical aspect Wilson discusses is the need for red teaming in AI security. By rigorously testing AI systems and exploring their vulnerabilities, organizations can better prepare for real-world threats. This proactive approach is essential for maintaining robust AI applications.Finally, Wilson shares insights from his book, including the Responsible AI Software Engineering (RAISE) framework. This comprehensive guide offers developers and product managers practical steps to integrate secure AI practices into their workflows. With an emphasis on continuous improvement and risk management, the RAISE framework serves as a valuable resource for anyone involved in AI development.About the BookLarge language models (LLMs) are not just shaping the trajectory of AI, they're also unveiling a new era of security challenges. This practical book takes you straight to the heart of these threats. Author Steve Wilson, chief product officer at Exabeam, focuses exclusively on LLMs, eschewing generalized AI security to delve into the unique characteristics and vulnerabilities inherent in these models.Complete with collective wisdom gained from the creation of the OWASP Top 10 for LLMs list—a feat accomplished by more than 400 industry experts—this guide delivers real-world guidance and practical strategies to help developers and security teams grapple with the realities of LLM applications. Whether you're architecting a new application or adding AI features to an existing one, this book is your go-to resource for mastering the security landscape of the next frontier in AI.___________________________SponsorsImperva: https://itspm.ag/imperva277117988LevelBlue: https://itspm.ag/attcybersecurity-3jdk3___________________________Watch this and other videos on ITSPmagazine's YouTube ChannelRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

Redefining CyberSecurity
Book | The Developer's Playbook for Large Language Model Security: Building Secure AI Applications | A Conversation with Steve Wilson | Redefining CyberSecurity with Sean Martin

Redefining CyberSecurity

Play Episode Listen Later Sep 24, 2024 34:35


Guest: Steve Wilson, Chief Product Officer, Exabeam [@exabeam] & Project Lead,  OWASP Top 10 for Larage Language Model Applications [@owasp]On LinkedIn | https://www.linkedin.com/in/wilsonsd/On Twitter | https://x.com/virtualsteve____________________________Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinView This Show's Sponsors___________________________Episode NotesIn this episode of Redefining CyberSecurity, host Sean Martin sat down with Steve Wilson, chief product officer at Exabeam, to discuss the critical topic of secure AI development. The conversation revolved around the nuances of developing and deploying large language models (LLMs) in the field of cybersecurity.Steve Wilson's expertise lies at the intersection of AI and cybersecurity, a point he emphasized while sharing his journey from founding the Top 10 group for large language models to authoring his new book, "The Developer's Playbook for Large Language Model Security." In this insightful discussion, Wilson and Martin explore the roles of developers and product managers in ensuring the safety and security of AI systems.One of the key themes in the conversation is the categorization of AI applications into chatbots, co-pilots, and autonomous agents. Wilson explains that while chatbots are open-ended, interacting with users on various topics, co-pilots focus on enhancing productivity within specific domains by interacting with user data. Autonomous agents are more independent, executing tasks with minimal human intervention.Wilson brings attention to the concept of overreliance on AI models and the associated risks. Highlighting that large language models can hallucinate or produce unreliable outputs, he stresses the importance of designing systems that account for these limitations. Product managers play a crucial role here, ensuring that AI applications are built to mitigate risks and communicate their reliability to users effectively.The discussion also touches on the importance of security guardrails and continuous monitoring. Wilson introduces the idea of using tools akin to web app firewalls (WAF) or runtime application self-protection (RASP) to keep AI models within safe operational parameters. He mentions frameworks like Nvidia's open-source project, Nemo Guardrails, which aid developers in implementing these defenses.Moreover, the conversation highlights the significance of testing and evaluation in AI development. Wilson parallels the education and evaluation of LLMs to training and testing a human-like system, underscoring that traditional unit tests may not suffice. Instead, flexible test cases and advanced evaluation tools are necessary. Another critical aspect Wilson discusses is the need for red teaming in AI security. By rigorously testing AI systems and exploring their vulnerabilities, organizations can better prepare for real-world threats. This proactive approach is essential for maintaining robust AI applications.Finally, Wilson shares insights from his book, including the Responsible AI Software Engineering (RAISE) framework. This comprehensive guide offers developers and product managers practical steps to integrate secure AI practices into their workflows. With an emphasis on continuous improvement and risk management, the RAISE framework serves as a valuable resource for anyone involved in AI development.About the BookLarge language models (LLMs) are not just shaping the trajectory of AI, they're also unveiling a new era of security challenges. This practical book takes you straight to the heart of these threats. Author Steve Wilson, chief product officer at Exabeam, focuses exclusively on LLMs, eschewing generalized AI security to delve into the unique characteristics and vulnerabilities inherent in these models.Complete with collective wisdom gained from the creation of the OWASP Top 10 for LLMs list—a feat accomplished by more than 400 industry experts—this guide delivers real-world guidance and practical strategies to help developers and security teams grapple with the realities of LLM applications. Whether you're architecting a new application or adding AI features to an existing one, this book is your go-to resource for mastering the security landscape of the next frontier in AI.___________________________SponsorsImperva: https://itspm.ag/imperva277117988LevelBlue: https://itspm.ag/attcybersecurity-3jdk3___________________________Watch this and other videos on ITSPmagazine's YouTube ChannelRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

Cyber Security Weekly Podcast
Episode 412 - AI, ML & Automation | Aligning Safety & Cybersecurity - Episode 6

Cyber Security Weekly Podcast

Play Episode Listen Later Sep 8, 2024 62:41


In March 2024, the Australian Senate resolved that the Select Committee on Adopting Artificial Intelligence (AI) be established to inquire into and report on the opportunities and impacts for Australia arising out of the uptake of AI technologies in Australia. The committee intends to report to the Parliament on or before 19 September 2024.More than 40 Australian AI experts made a joint submission to the Inquiry. The submission from Australians for AI Safety calls for the creation of an AI Safety Institute. “Australia has yet to position itself to learn from and contribute to growing global efforts. To achieve the economic and social benefits that AI promises, we need to be active in global action to ensure the safety of AI systems that approach or surpass human-level capabilities.” “Too often, lessons are learned only after something goes wrong. With AI systems that might approach or surpass human-level capabilities, we cannot afford for that to be the case.”This session has gathered experts and specialists in their field to discuss best practice alignment of AI applications and utilisation to safety and cybersecurity requirements. This includes quantum computing which is set to revolutionise sustainability, cybersecurity, ML, AI and many optimisation problems that classic computers can never imagine. In addition, we will also get briefed on: OWASP Top 10 for Large Language Model Applications; shedding light on the specific vulnerabilities LLMs face, including real world examples and detailed exploration of five key threats addressed using prompts and responses from LLMs; Prompt injection, insecure output handling, model denial of service, sensitive information disclosure, and model theft; How traditional cybersecurity methodologies can be applied to defend LLMs effectively; and How organisations can stay ahead of potential risks and ensure the security of their LLM-based applications.PanelistsDr Mahendra SamarawickramaDirector | Centre for Sustainable AIDr Mahendra Samarawickrama (GAICD, MBA, SMIEEE, ACS(CP)) is a leader in driving the convergence of Metaverse, AI, and Blockchain to revolutionize the future of customer experience and brand identity. He is the Australian ICT Professional of the Year 2022 and a director of The Centre for Sustainable AI and Meta61. He is an Advisory Council Member of Harvard Business Review (HBR), a Committee Member of the IEEE AI Standards, an Expert in AI ethics and governance at the Global AI Ethics Institute (GAIEI), a member of the European AI Alliance, a senior member of IEEE (SMIEEE), an industry Mentor in the UNSW business school, an honorary visiting scholar at the University of Technology Sydney (UTS), and a graduate member of the Australian Institute of Company Directors (GAICD).Ser Yoong GohHead of Compliance | ADVANCE.AI | ISACA Emerging Trends Working GroupSer Yoong is a seasoned technology professional who has held various roles with multinational corporations, consulting and also SMEs from various industries. He is recognised as a subject matter expert in the areas of cybersecurity, audit, risk and compliance from his working experience, having held various certifications and was also recognised as one of the Top 30 CSOs in 2021 from IDG. Shannon DavisPrincipal Security Strategist | Splunk SURGeShannon hails from Melbourne, Australia. Originally from Seattle, Washington, he has worked in a number of roles: a video game tester at Nintendo (Yoshi's Island broke his spirit), a hardware tester at Microsoft (handhelds have come a long way since then), a Windows NT admin for an early security startup and one of the first Internet broadcast companies, along with security roles for companies including Juniper and Cisco. Shannon enjoys getting outdoors for hikes and traveling.Greg SadlerCEO | Good Ancestors PolicyGreg Sadler is also CEO of Good Ancestors Policy, a charity that develops and advocates for Australian-specific policies aimed at solving this century's most challenging problems. Greg coordinates Australians for AI Safety and focuses on how Australia can help make frontier AI systems safe. Greg is on the board of a range of charities, including the Alliance to Feed the Earth in Disasters and Effective Altruism Australia. Lana TikhomirovPhD Candidate, Australian Institute for Machine Learning, University of AdelaideLana is a PhD Candidate in AI safety for human decision-making, focussed on medical AI. She has a background in cognitive science and uses bioethics and knowledge about algorithms to understand how to approach AI for high-risk human decisionsChris CubbageDirector - MYSECURITY MEDIA | MODERATORFor more information and the full series visit https://mysecuritymarketplace.com/security-risk-professional-insight-series/

Resilient Cyber
Resilient Cyber w/ Steve Wilson - Securing the Adoption of GenAI & LLM's

Resilient Cyber

Play Episode Listen Later Aug 28, 2024 28:40


In this episode we sit down with GenAI and Security Leader Steve Wilson to discuss securing the explosive adoption of GenAI and LLM's. Steve is the leader of the OWASP Top 10 for LLM's and the upcoming book The Developer's Playbook for LLM Security: Building Secure AI Applications-- First off, for those not familiar with your background, can you tell us a bit about yourself and what brought you to focusing on AI Security as you have currently?- Many may not be familiar with the OWASP LLM Top 10, can you tell us how the project came about, and some of the value it provides the community?- I don't want to talk through the list item by item, but I wanted to ask, what are some of the key similarities and key differences when it comes to securing AI systems and applications compared to broader historical AppSec?- Where do you think organizations should look to get started to try and keep pace with the businesses adoption of GenAI and LLM's?- You've also been working on publishing the Developers Playbook to LLM Security which I've been working my way through an early preview edition of and it is great. What are some of the core topics you cover in the book?- One hot topic in GenAI and LLM is the two large paths of either closed and open source models, services and platforms. What are some key considerations from your perspective for those adopting one or the other?- I know software supply chain security is a key part of LLM and GenAI security, why is that, and what should folks keep in mind?- For those wanting to learn more, where can they find more resources, such as the LLM Top 10, your book, any upcoming talks etc?

ITSPmagazine | Technology. Cybersecurity. Society
Recapping Black Hat 2024 and What's Next | On Location Coverage with Sean Martin and Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Aug 20, 2024 20:30


Hosts: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinMarco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society PodcastOn ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli____________________________Episode NotesIn this episode of "On Location With Sean Martin and Marco Ciappelli," our hosts dive into their time at Black Hat 2024 in Las Vegas, reflecting on key takeaways and sharing what's next on their journey. Whether you're deep into cybersecurity or just curious about the industry, this blog post offers a snapshot of what to expect from Sean and Marco.Recapping Black Hat 2024Marco CiappelliChoo, choo . . .Sean MartinIs that the sound of the fast train back from Vegas? Or just the rush of everything we experienced?Marco CiappelliI'm still wondering why there's no train from LA to Vegas. And don't get me started on LA to San Francisco—that's another conversation entirely.The conversation kicks off with a lighthearted nod to travel woes before shifting to the core of the episode: their reflections on Black Hat 2024. Sean and Marco bring unique perspectives, emphasizing the importance of thinking beyond cybersecurity's technical aspects to consider its broader impact on society and business.Sean's Operational InsightsSean MartinI like to look at things from an operational angle—how can we take what we learn and bring it back to the business to help leaders and practitioners do what they love?Sean's Black Hat 2024 Recap Newsletter explores the evolution from reactive data responses to strategic enablement, AI and automation, modular cybersecurity, and the invaluable role of human insights. His focus is clear: helping businesses become more resilient and adaptable through smarter cybersecurity practices.Marco's Societal ImpactMarco CiappelliCybersecurity isn't a destination—it's a journey. We're never going to be fully secure, and that's okay. Cultures change, technology evolves, and we have to keep adapting.Marco's take highlights the societal implications of cybersecurity. He talk about how different fields and nations are breaking down silos to collaborate more effectively. His newsletter often reflects on the need for digital literacy across business, society, and education, emphasizing the importance of broadening our understanding of technology's role.Upcoming Events and ConferencesThe duo is excited about their packed schedule for the rest of 2024 and beyond, including:CyberTech New York (September 2024): Focused on policy, innovation, SecOps, AppSec, and sustainability.OWASP AppSec San Francisco (September 2024): Covering the OWASP Top 10 for LLMs and more.Sector in Toronto (October 2024): Offering unique coverage ideas, closely tied to Black Hat.Did someone said that they will be back covering an APJ event, in Melbourne, before the end of the year???  Additional VenturesThey'll also be hosting innovation panels and keynotes at a company event in New Orleans, with CES in Las Vegas and VivaTech in Paris on the horizon for 2025, blending B2B startup insights with consumer tech, all with a cybersecurity twist.Subscribe and Stay TunedMarco and Sean invite you to subscribe to their newsletters and follow their podcast, "On Location," as they continue their journey around the globe—both physically and virtually—bringing fresh perspectives on business, technology, and cybersecurity. You'll also find unique "brand stories" that highlight innovations making our world safer and more sustainable.Stay connected, enjoy the ride, and don't forget to subscribe to both their newsletters and the "On Location" podcast on YouTube!Be sure to follow our Coverage Journey and subscribe to our podcasts!____________________________This Episode's SponsorsLevelBlue: https://itspm.ag/levelblue266f6cCoro: https://itspm.ag/coronet-30deSquareX: https://itspm.ag/sqrx-l91Britive: https://itspm.ag/britive-3fa6AppDome: https://itspm.ag/appdome-neuv____________________________Follow our Black Hat USA  2024 coverage: https://www.itspmagazine.com/black-hat-usa-2024-hacker-summer-camp-2024-event-coverage-in-las-vegasOn YouTube:

ITSPmagazine | Technology. Cybersecurity. Society
OWASP Top 10 For Large Language Models: Project Update | An OWASP 2024 Global AppSec San Francisco Conversation with Steve Wilson | On Location Coverage with Sean Martin and Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Aug 20, 2024 23:31


Guest: Steve Wilson, Chief Product Officer, Exabeam [@exabeam] & Project Lead,  OWASP Top 10 for Larage Language Model Applications [@owasp]On LinkedIn | https://www.linkedin.com/in/wilsonsd/On Twitter | https://x.com/virtualsteve____________________________Hosts: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinMarco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society PodcastOn ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli____________________________Episode NotesIn this episode of the Chat on the Road On Location series for OWASP AppSec Global in San Francisco, Sean Martin hosts a compelling conversation with Steve Wilson, Project Lead for the OWASP Top 10 for Large Language Model AI Applications. The discussion, as you might guess, centers on the OWASP Top 10 list for Large Language Models (LLMs) and the security challenges associated with these technologies. Wilson highlights the growing relevance of AppSec, particularly with the surge in interest in AI and LLMs.The conversation kicks off with an exploration of the LLM project that Wilson has been working on at OWASP, aimed at presenting an update on the OWASP Top 10 for LLMs. Wilson emphasizes the significance of prompt injection attacks, one of the key concerns on the OWASP list. He explains how attackers can craft prompts to manipulate LLMs into performing unintended actions, a tactic reminiscent of the SQL injection attacks that have plagued traditional software for years. This serves as a stark reminder of the need for vigilance in the development and deployment of LLMs.Supply chain risks are another critical issue discussed. Wilson draws parallels to the Log4j incident, stressing that the AI software supply chain is currently a weak link. With the rapid growth of platforms like Hugging Face, the provenance of AI models and training datasets becomes a significant concern. Ensuring the integrity and security of these components is paramount to building robust AI-driven systems.The notion of excessive agency is also explored—a concept that relates to the permissions and responsibilities assigned to LLMs. Wilson underscores the importance of limiting the scope of LLMs to prevent misuse or unauthorized actions. This point resonates with traditional security principles like least privilege but is recontextualized for the AI age. Overreliance on LLMs is another topic Martin and Wilson discuss.The conversation touches on how people can place undue trust in AI outputs, leading to potentially hazardous outcomes. Ensuring users understand the limitations and potential inaccuracies of LLM-generated content is essential for safe and effective AI utilization.Wilson also provides a preview of his upcoming session at the OWASP AppSec Global event, where he plans to share insights from the ongoing work on the 2.0 version of the OWASP Top 10 for LLMs. This next iteration will address how the field has matured and new security considerations that have emerged since the initial list.Be sure to follow our Coverage Journey and subscribe to our podcasts!____________________________This Episode's SponsorsAre you interested in sponsoring our event coverage with an ad placement in the podcast?Learn More

Redefining CyberSecurity
OWASP Top 10 For Large Language Models: Project Update | An OWASP 2024 Global AppSec San Francisco Conversation with Steve Wilson | On Location Coverage with Sean Martin and Marco Ciappelli

Redefining CyberSecurity

Play Episode Listen Later Aug 20, 2024 23:31


Guest: Steve Wilson, Chief Product Officer, Exabeam [@exabeam] & Project Lead,  OWASP Top 10 for Larage Language Model Applications [@owasp]On LinkedIn | https://www.linkedin.com/in/wilsonsd/On Twitter | https://x.com/virtualsteve____________________________Hosts: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinMarco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society PodcastOn ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli____________________________Episode NotesIn this episode of the Chat on the Road On Location series for OWASP AppSec Global in San Francisco, Sean Martin hosts a compelling conversation with Steve Wilson, Project Lead for the OWASP Top 10 for Large Language Model AI Applications. The discussion, as you might guess, centers on the OWASP Top 10 list for Large Language Models (LLMs) and the security challenges associated with these technologies. Wilson highlights the growing relevance of AppSec, particularly with the surge in interest in AI and LLMs.The conversation kicks off with an exploration of the LLM project that Wilson has been working on at OWASP, aimed at presenting an update on the OWASP Top 10 for LLMs. Wilson emphasizes the significance of prompt injection attacks, one of the key concerns on the OWASP list. He explains how attackers can craft prompts to manipulate LLMs into performing unintended actions, a tactic reminiscent of the SQL injection attacks that have plagued traditional software for years. This serves as a stark reminder of the need for vigilance in the development and deployment of LLMs.Supply chain risks are another critical issue discussed. Wilson draws parallels to the Log4j incident, stressing that the AI software supply chain is currently a weak link. With the rapid growth of platforms like Hugging Face, the provenance of AI models and training datasets becomes a significant concern. Ensuring the integrity and security of these components is paramount to building robust AI-driven systems.The notion of excessive agency is also explored—a concept that relates to the permissions and responsibilities assigned to LLMs. Wilson underscores the importance of limiting the scope of LLMs to prevent misuse or unauthorized actions. This point resonates with traditional security principles like least privilege but is recontextualized for the AI age. Overreliance on LLMs is another topic Martin and Wilson discuss.The conversation touches on how people can place undue trust in AI outputs, leading to potentially hazardous outcomes. Ensuring users understand the limitations and potential inaccuracies of LLM-generated content is essential for safe and effective AI utilization.Wilson also provides a preview of his upcoming session at the OWASP AppSec Global event, where he plans to share insights from the ongoing work on the 2.0 version of the OWASP Top 10 for LLMs. This next iteration will address how the field has matured and new security considerations that have emerged since the initial list.Be sure to follow our Coverage Journey and subscribe to our podcasts!____________________________This Episode's SponsorsAre you interested in sponsoring our event coverage with an ad placement in the podcast?Learn More

Redefining CyberSecurity
Recapping Black Hat 2024 and What's Next | On Location Coverage with Sean Martin and Marco Ciappelli

Redefining CyberSecurity

Play Episode Listen Later Aug 20, 2024 20:30


Hosts: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinMarco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society PodcastOn ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli____________________________Episode NotesIn this episode of "On Location With Sean Martin and Marco Ciappelli," our hosts dive into their time at Black Hat 2024 in Las Vegas, reflecting on key takeaways and sharing what's next on their journey. Whether you're deep into cybersecurity or just curious about the industry, this blog post offers a snapshot of what to expect from Sean and Marco.Recapping Black Hat 2024Marco CiappelliChoo, choo . . .Sean MartinIs that the sound of the fast train back from Vegas? Or just the rush of everything we experienced?Marco CiappelliI'm still wondering why there's no train from LA to Vegas. And don't get me started on LA to San Francisco—that's another conversation entirely.The conversation kicks off with a lighthearted nod to travel woes before shifting to the core of the episode: their reflections on Black Hat 2024. Sean and Marco bring unique perspectives, emphasizing the importance of thinking beyond cybersecurity's technical aspects to consider its broader impact on society and business.Sean's Operational InsightsSean MartinI like to look at things from an operational angle—how can we take what we learn and bring it back to the business to help leaders and practitioners do what they love?Sean's Black Hat 2024 Recap Newsletter explores the evolution from reactive data responses to strategic enablement, AI and automation, modular cybersecurity, and the invaluable role of human insights. His focus is clear: helping businesses become more resilient and adaptable through smarter cybersecurity practices.Marco's Societal ImpactMarco CiappelliCybersecurity isn't a destination—it's a journey. We're never going to be fully secure, and that's okay. Cultures change, technology evolves, and we have to keep adapting.Marco's take highlights the societal implications of cybersecurity. He talk about how different fields and nations are breaking down silos to collaborate more effectively. His newsletter often reflects on the need for digital literacy across business, society, and education, emphasizing the importance of broadening our understanding of technology's role.Upcoming Events and ConferencesThe duo is excited about their packed schedule for the rest of 2024 and beyond, including:CyberTech New York (September 2024): Focused on policy, innovation, SecOps, AppSec, and sustainability.OWASP AppSec San Francisco (September 2024): Covering the OWASP Top 10 for LLMs and more.Sector in Toronto (October 2024): Offering unique coverage ideas, closely tied to Black Hat.Did someone said that they will be back covering an APJ event, in Melbourne, before the end of the year???  Additional VenturesThey'll also be hosting innovation panels and keynotes at a company event in New Orleans, with CES in Las Vegas and VivaTech in Paris on the horizon for 2025, blending B2B startup insights with consumer tech, all with a cybersecurity twist.Subscribe and Stay TunedMarco and Sean invite you to subscribe to their newsletters and follow their podcast, "On Location," as they continue their journey around the globe—both physically and virtually—bringing fresh perspectives on business, technology, and cybersecurity. You'll also find unique "brand stories" that highlight innovations making our world safer and more sustainable.Stay connected, enjoy the ride, and don't forget to subscribe to both their newsletters and the "On Location" podcast on YouTube!Be sure to follow our Coverage Journey and subscribe to our podcasts!____________________________This Episode's SponsorsLevelBlue: https://itspm.ag/levelblue266f6cCoro: https://itspm.ag/coronet-30deSquareX: https://itspm.ag/sqrx-l91Britive: https://itspm.ag/britive-3fa6AppDome: https://itspm.ag/appdome-neuv____________________________Follow our Black Hat USA  2024 coverage: https://www.itspmagazine.com/black-hat-usa-2024-hacker-summer-camp-2024-event-coverage-in-las-vegasOn YouTube:

The Azure Podcast
Episode 502 - Azure Open AI and Security

The Azure Podcast

Play Episode Listen Later Aug 15, 2024


Azure Open AI is widely used in industry but there are number of security aspects that must be taken into account when using the technology. Luckily for us, Audrey Long, a Software Engineer at Microsoft, security expert and renowned conference speaker, gives us insights into securing LLMs and provides various tips, tricks and tools to help developers use these models safely in their applications.   Media file: https://azpodcast.blob.core.windows.net/episodes/Episode502.mp3 YouTube: https://youtu.be/64Achcz97PI Resources: AI Tooling: Azure AI Tooling Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog Prompt Shields to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, coming soon and now available in preview in Azure AI Content Safety.  Groundedness detection to detect “hallucinations” in model outputs, coming soon.  Safety system messagesto steer your model’s behavior toward safe, responsible outputs, coming soon. Safety evaluations to assess an application’s vulnerability to jailbreak attacks and to generating content risks, now available in preview.   Risk and safety monitoring to understand what model inputs, outputs, and end users are triggering content filters to inform mitigations, coming soon, and now available in preview in Azure OpenAI Service. AI Defender for Cloud AI Security Posture Management AI security posture management (Preview) - Microsoft Defender for Cloud | Microsoft Learn AI Workloads Enable threat protection for AI workloads (preview) - Microsoft Defender for Cloud | Microsoft Learn        AI Red Teaming Tool Announcing Microsoft’s open automation framework to red team generative AI Systems | Microsoft Security Blog AI Development Considerations:   AI Assessment from Microsoft Conduct an AI assessment using Microsoft’s Responsible AI Impact Assessment Template Responsible AI Impact Assessment Guide for detailed instructions Microsoft Responsible AI Processes Follow Microsoft’s Responsible AI principles: fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountability Utilize tools like the Responsible AI Dashboard for continuous monitoring and improvement Define Use Case and Model Architecture Determine the specific use case for your LLM Design the model architecture, focusing on the Transformer architecture   Content Filtering System How to use content filters (preview) with Azure OpenAI Service - Azure OpenAI | Microsoft Learn Azure OpenAI Service includes a content filtering system that works alongside core models, including DALL-E image generation models. This system uses an ensemble of classification models to detect and prevent harmful content in both input prompts and output completions The filtering system covers four main categories: hate, sexual, violence, and self-harm Each category is assessed at four severity levels: safe, low, medium, and high Additional classifiers are available for detecting jailbreak risks and known content for text and code. JailBreaking Content Filters Red Teaming the LLM Plan and conduct red teaming exercises to identify potential vulnerabilities Use diverse red teamers to simulate adversarial attacks and test the model’s robustness Microsoft AI Red Team building future of safer AI | Microsoft Security Blog Create a Threat Model with OWASP Top 10 owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-slides-v1_1.pdf Develop a threat model and implement mitigations based on identified risks   Other updates: Los Angeles Azure Extended Zones Carbon Optimization App Config Ref GA OS SKU In-Place Migration for AKS Operator CRD Support with Azure Monitor Managed Service Azure API Center Visual Studio Code Extension Pre-release Azure API Management WordPress Plugin Announcing a New OpenAI Feature for Developers on Azure

Application Security PodCast
Andrew Van Der Stock -- The New OWASP Top Ten

Application Security PodCast

Play Episode Listen Later Jul 23, 2024 51:51


Join Chris Romeo and Robert Hurlbut as they sit down with Andrew Van Der Stok, a leading web application security specialist and executive director at OWASP. In this episode, Andrew discusses the latest with the OWASP Top 10 Project, the importance of data collection, and the need for developer engagement. Learn about the methodology behind building the OWASP Top 10, the significance of framework security, and much more. Tune in to get vital insights that could shape the future of web application security. Don't miss this informative discussion!Previous episodes with Andrew Van Der StockAndrew van der Stock — Taking Application Security to the MassesAndrew van der Stock and Brian Glas -- The Future of the OWASP Top 10Books mentioned in the episode:The Crown Road by Iain BanksEdward Tufte FOLLOW OUR SOCIAL MEDIA: ➜Twitter: @AppSecPodcast➜LinkedIn: The Application Security Podcast➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast Thanks for Listening! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Thinking Elixir Podcast
209: New Admin Panel, LiveView Component Kit, and more!

Thinking Elixir Podcast

Play Episode Listen Later Jul 2, 2024 36:40


News includes a neat trick we learned that setup-beam can do for GitHub actions by reading a project's .tool-versions file, Wojtek's insight on reducing SDK API surfaces, Ash's support for UUIDv7, the introduction of the highly customizable Backpex admin panel, a new LiveView component library called SaladUI and its unique ReactJS component conversion feature, Jose Valim's technique of using AI for testing function names, and more! Show Notes online - http://podcast.thinkingelixir.com/209 (http://podcast.thinkingelixir.com/209) Elixir Community News - https://x.com/flo_arens/status/1805255159460532602 (https://x.com/flo_arens/status/1805255159460532602?utm_source=thinkingelixir&utm_medium=shownotes) – TIL setup-beam GitHub action can read asdf's .tool-versions file and parse the OTP and Elixir version out of it. - https://github.com/erlef/setup-beam (https://github.com/erlef/setup-beam?utm_source=thinkingelixir&utm_medium=shownotes) – The setup-beam GitHub action project. - https://github.com/erlef/setup-beam?tab=readme-ov-file#version-file (https://github.com/erlef/setup-beam?tab=readme-ov-file#version-file?utm_source=thinkingelixir&utm_medium=shownotes) – Link to README section about the version file support in setup-beam. - https://dashbit.co/blog/sdks-with-req-stripe (https://dashbit.co/blog/sdks-with-req-stripe?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post by Wojtek on reducing the surface of SDK APIs by focusing on data, not functions. - https://x.com/ZachSDaniel1/status/1805002425738334372 (https://x.com/ZachSDaniel1/status/1805002425738334372?utm_source=thinkingelixir&utm_medium=shownotes) – Ash now supports UUIDv7, a Time-Sortable Identifier for modern databases. - https://github.com/ash-project/ash/pull/1253 (https://github.com/ash-project/ash/pull/1253?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub pull request for Ash's support of UUIDv7. - https://uuid7.com/ (https://uuid7.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Information about UUID7 as a Time-Sortable Identifier. - https://elixirforum.com/t/backpex-a-highly-customizable-admin-panel-for-phoenix-liveview-applications/64314 (https://elixirforum.com/t/backpex-a-highly-customizable-admin-panel-for-phoenix-liveview-applications/64314?utm_source=thinkingelixir&utm_medium=shownotes) – Introduction to Backpex, a new admin backend library for Phoenix LiveView applications. - https://github.com/naymspace/backpex (https://github.com/naymspace/backpex?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub repository for Backpex, a customizable administration panel for Phoenix LiveView applications. - https://github.com/bluzky/salad_ui (https://github.com/bluzky/salad_ui?utm_source=thinkingelixir&utm_medium=shownotes) – SaladUI, a Tailwind LiveView UI toolkit that includes a unique feature to convert ReactJS components. - https://salad-storybook.fly.dev/welcome (https://salad-storybook.fly.dev/welcome?utm_source=thinkingelixir&utm_medium=shownotes) – Storybook for SaladUI to explore components. - https://ui.shadcn.com/ (https://ui.shadcn.com/?utm_source=thinkingelixir&utm_medium=shownotes) – React Shad/cn UI component framework storybook page. - https://salad-storybook.fly.dev/examples/convert_shadui (https://salad-storybook.fly.dev/examples/convert_shadui?utm_source=thinkingelixir&utm_medium=shownotes) – Example of converting a ReactJS component to SaladUI. - https://github.com/codedge-llc/accessible (https://github.com/codedge-llc/accessible?utm_source=thinkingelixir&utm_medium=shownotes) – Accessible, a package to add Access behavior support to Elixir structs. - https://paraxial.io/blog/owasp-top-ten (https://paraxial.io/blog/owasp-top-ten?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post on how the OWASP Top 10 applies to Elixir and Phoenix applications. - https://owasp.org/www-project-top-ten/ (https://owasp.org/www-project-top-ten/?utm_source=thinkingelixir&utm_medium=shownotes) – The OWASP Top 10, a standard awareness document for developers and web application security. - https://x.com/josevalim/status/1804117870764339546 (https://x.com/josevalim/status/1804117870764339546?utm_source=thinkingelixir&utm_medium=shownotes) – José Valim's technique of using AI to help review or determine function names in APIs. - https://fly.io/phoenix-files/using-ai-to-boost-accessibility-and-seo/ (https://fly.io/phoenix-files/using-ai-to-boost-accessibility-and-seo/?utm_source=thinkingelixir&utm_medium=shownotes) – Article on using AI to boost image accessibility and SEO, demonstrating working with OpenAI and Anthropic using Elixir. - https://2024.elixirconf.com/ (https://2024.elixirconf.com/?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf 2024 details, taking place from August 28-30 with various speakers and talks focused on Elixir. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)

Paul's Security Weekly
GenAI, Security, and More Lies - Aubrey King - PSW #832

Paul's Security Weekly

Play Episode Listen Later Jun 14, 2024 174:18


We will discuss LLM security in general and some of the issues covered in the OWASP Top 10 for LLMs! Segment Resources: https://genai.owasp.org/ Skyrocketing IoT vulnerabilities, bricked computers?, MACBORG!, raw dogging source code, PHP strikes again and again, if you have a Netgear WNR614 replace it now, Arm Mali, new OpenSSH feature, weird headphones, decrypting firmware, and VPNs are still being hacked! Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw-832

Paul's Security Weekly TV
GenAI, Security, and More Lies - Aubrey King - PSW #832

Paul's Security Weekly TV

Play Episode Listen Later Jun 13, 2024 63:50


We will discuss LLM security in general and some of the issues covered in the OWASP Top 10 for LLMs! Segment Resources: https://genai.owasp.org/ Show Notes: https://securityweekly.com/psw-832

Paul's Security Weekly (Video-Only)
GenAI, Security, and More Lies - Aubrey King - PSW #832

Paul's Security Weekly (Video-Only)

Play Episode Listen Later Jun 13, 2024 63:50


We will discuss LLM security in general and some of the issues covered in the OWASP Top 10 for LLMs! Segment Resources: https://genai.owasp.org/ Show Notes: https://securityweekly.com/psw-832

Paul's Security Weekly (Podcast-Only)
GenAI, Security, and More Lies - Aubrey King - PSW #832

Paul's Security Weekly (Podcast-Only)

Play Episode Listen Later Jun 12, 2024 174:18


We will discuss LLM security in general and some of the issues covered in the OWASP Top 10 for LLMs! Segment Resources: https://genai.owasp.org/ Skyrocketing IoT vulnerabilities, bricked computers?, MACBORG!, raw dogging source code, PHP strikes again and again, if you have a Netgear WNR614 replace it now, Arm Mali, new OpenSSH feature, weird headphones, decrypting firmware, and VPNs are still being hacked! Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw-832

The Cyberman Show
What is API Security? API Security Risks, API Market Landscape #83

The Cyberman Show

Play Episode Listen Later May 28, 2024 28:05


Send us a Text Message.Todays episode contains basics of API Security. The podcast covers the following topics:00:00 Introduction00:51 What is API?04:53 Common Terms Associated with API09:23 API Landscape10:58 What is API Security?12:32 API Security Risks15:02 OWASP Top 10 for API Security15:39 API Attack Workflow18:29 Story of Real API related Security Incident19:17 API Security Best Practises20:21 API Security Market Landscape24:41 Appsec vs API SecurityLink to my blog on "API Security Checklist" https://thecyberman.substack.com/p/api-security-checklistSupport the Show.Google Drive link for Podcast content:https://drive.google.com/drive/folders/10vmcQ-oqqFDPojywrfYousPcqhvisnkoMy Profile on LinkedIn: https://www.linkedin.com/in/prashantmishra11/Youtube Channnel : https://www.youtube.com/@TheCybermanShow Twitter handle https://twitter.com/prashant_cyber PS: The views are my own and dont reflect any views from my employer.

Paul's Security Weekly
Inside the OWASP Top 10 for LLM Applications - Sandy Dunn, Mike Fey, Josh Lemos - ASW #285

Paul's Security Weekly

Play Episode Listen Later May 14, 2024 66:40


Everyone is interested in generative AIs and LLMs, and everyone is looking for use cases and apps to apply them to. Just as the early days of the web inspired the original OWASP Top 10 over 20 years ago, the experimentation and adoption of LLMs has inspired a Top 10 list of their own. Sandy Dunn talks about why the list looks so familiar in many ways -- after all, LLMs are still software. But the list captures some new concepts that anyone looking to use LLMs or generative AIs should be aware of. https://llmtop10.com/ https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Educational-Resources https://owasp.org/www-project-ai-security-and-privacy-guide/ https://gandalf.lakera.ai/ https://quarkiq.com/blog How companies are benefiting from the enterprise browser. It's not just security when talking about the enterprise browser. It's the marriage between security AND productivity. In this interview, Mike will provide real live case studies on how different enterprises are benefitting. Segment Resources: https://www.island.io/resources https://www.island.io/press This segment is sponsored by Island. Visit https://www.securityweekly.com/islandrsac to learn more about them! The cybersecurity landscape continues to transform, with a growing focus on mitigating supply chain vulnerabilities, enforcing data governance, and incorporating AI into security measures. This transformation promises to steer DevSecOps teams toward software development processes with efficiency and security at the forefront. Josh Lemos, Chief Information Security Officer at GitLab will discuss the role of AI in securing software and data supply chains and helping developers work more efficiently while creating more secure code. This segment is sponsored by GitLab. Visit https://securityweekly.com/gitlabrsac to learn more about them! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-285

Paul's Security Weekly TV
Inside the OWASP Top 10 for LLM Applications - Sandy Dunn - ASW #285

Paul's Security Weekly TV

Play Episode Listen Later May 14, 2024 37:33


Everyone is interested in generative AIs and LLMs, and everyone is looking for use cases and apps to apply them to. Just as the early days of the web inspired the original OWASP Top 10 over 20 years ago, the experimentation and adoption of LLMs has inspired a Top 10 list of their own. Sandy Dunn talks about why the list looks so familiar in many ways -- after all, LLMs are still software. But the list captures some new concepts that anyone looking to use LLMs or generative AIs should be aware of. https://llmtop10.com/ https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Educational-Resources https://owasp.org/www-project-ai-security-and-privacy-guide/ https://gandalf.lakera.ai/ https://quarkiq.com/blog Show Notes: https://securityweekly.com/asw-285

Application Security Weekly (Audio)
Inside the OWASP Top 10 for LLM Applications - Sandy Dunn, Mike Fey, Josh Lemos - ASW #285

Application Security Weekly (Audio)

Play Episode Listen Later May 14, 2024 66:40


Everyone is interested in generative AIs and LLMs, and everyone is looking for use cases and apps to apply them to. Just as the early days of the web inspired the original OWASP Top 10 over 20 years ago, the experimentation and adoption of LLMs has inspired a Top 10 list of their own. Sandy Dunn talks about why the list looks so familiar in many ways -- after all, LLMs are still software. But the list captures some new concepts that anyone looking to use LLMs or generative AIs should be aware of. https://llmtop10.com/ https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Educational-Resources https://owasp.org/www-project-ai-security-and-privacy-guide/ https://gandalf.lakera.ai/ https://quarkiq.com/blog How companies are benefiting from the enterprise browser. It's not just security when talking about the enterprise browser. It's the marriage between security AND productivity. In this interview, Mike will provide real live case studies on how different enterprises are benefitting. Segment Resources: https://www.island.io/resources https://www.island.io/press This segment is sponsored by Island. Visit https://www.securityweekly.com/islandrsac to learn more about them! The cybersecurity landscape continues to transform, with a growing focus on mitigating supply chain vulnerabilities, enforcing data governance, and incorporating AI into security measures. This transformation promises to steer DevSecOps teams toward software development processes with efficiency and security at the forefront. Josh Lemos, Chief Information Security Officer at GitLab will discuss the role of AI in securing software and data supply chains and helping developers work more efficiently while creating more secure code. This segment is sponsored by GitLab. Visit https://securityweekly.com/gitlabrsac to learn more about them! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-285

Application Security Weekly (Video)
Inside the OWASP Top 10 for LLM Applications - Sandy Dunn - ASW #285

Application Security Weekly (Video)

Play Episode Listen Later May 14, 2024 37:33


Everyone is interested in generative AIs and LLMs, and everyone is looking for use cases and apps to apply them to. Just as the early days of the web inspired the original OWASP Top 10 over 20 years ago, the experimentation and adoption of LLMs has inspired a Top 10 list of their own. Sandy Dunn talks about why the list looks so familiar in many ways -- after all, LLMs are still software. But the list captures some new concepts that anyone looking to use LLMs or generative AIs should be aware of. https://llmtop10.com/ https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Educational-Resources https://owasp.org/www-project-ai-security-and-privacy-guide/ https://gandalf.lakera.ai/ https://quarkiq.com/blog Show Notes: https://securityweekly.com/asw-285

Software Engineering Radio - The Podcast for Professional Software Developers
SE Radio 613: Shachar Binyamin on GraphQL Security

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Apr 24, 2024 56:17


Shachar Binyamin, CEO and co-founder of Inigo, joins host Priyanka Raghavan to discuss GraphQL security. They begin with a look at the state of adoption of GraphQL and why it's so popular. From there, they consider why GraphQL security is important as they take a deep dive into a range of known security issues that have been exploited in GraphQL, including authentication, authorization, and denial of service attacks with references from the OWASP Top 10 API Security Risks. They discuss some mitigation strategies and methodologies for solving GraphQL security problems, and the show ends with discussion of Inigo and Shachar's top three recommendations for building safe GraphQL applications. Brought to you by IEEE Software and IEEE Computer Society.

The Shifting Privacy Left Podcast
S3E10: 'How a Privacy Engineering Center of Excellence Shifts Privacy Left' with Aaron Weller (HP)

The Shifting Privacy Left Podcast

Play Episode Listen Later Apr 9, 2024 40:13 Transcription Available


In this episode, I sat down with Aaron Weller, the Leader of HP's Privacy Engineering Center of Excellence (CoE), focused on providing technical solutions for privacy engineering across HP's global operations. Throughout our conversation, we discuss: what motivated HP's leadership to stand up a CoE for Privacy Engineering; Aaron's approach to staffing the CoE; how a CoE's can shift privacy left in a large, matrixed organization like HP's; and, how to leverage the CoE to proactively manage privacy risk.Aaron emphasizes the importance of understanding an organization's strategy when creating a CoE and shares his methods for gathering data to inform the center's roadmap and team building. He also highlights the great impact that a Center of Excellence can offer and gives advice for implementing one in your organization. We touch on the main challenges in privacy engineering today and the value of designing user-friendly privacy experiences. In addition, Aaron provides his perspective on selecting the right combination of Privacy Enhancing Technologies (PETs) for anonymity, how to go about implementing PETs, and the role that AI governance plays in his work. Topics Covered: Aaron's deep privacy and consulting background and how he ended up leading HP's Privacy Engineering Center of Excellence The definition of a "Center of Excellence" (CoE) and how a Privacy Engineering CoE can drive value for an organization and shift privacy leftWhat motivates a company like HP to launch a CoE for Privacy Engineering and what it's reporting line should beAaron's approach to creating a Privacy Engineering CoE roadmap; his strategy for staffing this CoE; and the skills & abilities that he soughtHow HP's Privacy Engineering CoE works with the business to advise on, and select, the right PETs for each business use caseWhy it's essential to know the privacy guarantees that your organization wants to assert before selecting the right PETs to get you thereLessons Learned from setting up a Privacy Engineering CoE and how to get executive sponsorshipThe amount of time that Privacy teams have had to work on AI issues over the past year, and advice on preventing burnoutAaron's hypothesis about the value of getting an early handle on governance over the adoption of innovative technologiesThe importance of being open to continuous learning in the field of privacy engineering Guest Info: Connect with Aaron on LinkedInLearn about HP's Privacy Engineering Center of ExcellenceReview the OWASP Machine Learning Security Top 10Review the OWASP Top 10 for LLM Applications Privado.ai Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.TRU Staffing Partners Top privacy talent - when you need it, where you need it.Shifting Privacy Left Media Where privacy engineers gather, share, & learnDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Copyright © 2022 - 2024 Principled LLC. All rights reserved.

Paul's Security Weekly TV
Top 10's First Update, Metasploit's Second Update, PHP Prepares Statements, RSA & MS - ASW #279

Paul's Security Weekly TV

Play Episode Listen Later Apr 3, 2024 26:34


The OWASP Top 10 gets its first update after a year, Metasploit gets its first rewrite (but it's still in Perl), PHP adds support for prepared statements, RSA Conference puts passwords on notice while patching remains hard, and more! Show Notes: https://securityweekly.com/asw-279

Paul's Security Weekly
Infosec Myths, Mistakes, and Misconceptions - Adrian Sanabria - ASW #279

Paul's Security Weekly

Play Episode Listen Later Apr 2, 2024 60:57


Sometimes infosec problems can be summarized succinctly, like "patching is hard". Sometimes a succinct summary sounds convincing, but is based on old data, irrelevant data, or made up data. Adrian Sanabria walks through some of the archeological work he's done to dig up the source of some myths. We talk about some of our favorite (as in most disliked) myths to point out how oversimplified slogans and oversimplified threat models lead to bad advice -- and why bad advice can make users less secure. Segment resources: https://www.oreilly.com/library/view/cybersecurity-myths-and/9780137929214/ The OWASP Top 10 gets its first update after a year, Metasploit gets its first rewrite (but it's still in Perl), PHP adds support for prepared statements, RSA Conference puts passwords on notice while patching remains hard, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-279

Application Security Weekly (Audio)
Infosec Myths, Mistakes, and Misconceptions - Adrian Sanabria - ASW #279

Application Security Weekly (Audio)

Play Episode Listen Later Apr 2, 2024 60:57


Sometimes infosec problems can be summarized succinctly, like "patching is hard". Sometimes a succinct summary sounds convincing, but is based on old data, irrelevant data, or made up data. Adrian Sanabria walks through some of the archeological work he's done to dig up the source of some myths. We talk about some of our favorite (as in most disliked) myths to point out how oversimplified slogans and oversimplified threat models lead to bad advice -- and why bad advice can make users less secure. Segment resources: https://www.oreilly.com/library/view/cybersecurity-myths-and/9780137929214/ The OWASP Top 10 gets its first update after a year, Metasploit gets its first rewrite (but it's still in Perl), PHP adds support for prepared statements, RSA Conference puts passwords on notice while patching remains hard, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-279

Application Security Weekly (Video)
Top 10's First Update, Metasploit's Second Update, PHP Prepares Statements, RSA & MS - ASW #279

Application Security Weekly (Video)

Play Episode Listen Later Apr 2, 2024 26:34


The OWASP Top 10 gets its first update after a year, Metasploit gets its first rewrite (but it's still in Perl), PHP adds support for prepared statements, RSA Conference puts passwords on notice while patching remains hard, and more! Show Notes: https://securityweekly.com/asw-279

CISO Tradecraft
#174 - OWASP Top 10 Web Application Attacks

CISO Tradecraft

Play Episode Listen Later Mar 25, 2024 44:23 Transcription Available


In this episode of CISO Tradecraft, host G. Mark Hardy delves into the crucial topic of the OWASP Top 10 Web Application Security Risks, offering insights on how attackers exploit vulnerabilities and practical advice on securing web applications. He introduces OWASP and its significant contributions to software security, then progresses to explain each of the OWASP Top 10 risks in detail, such as broken access control, injection flaws, and security misconfigurations. Through examples and recommendations, listeners are equipped with the knowledge to better protect their web applications and ultimately improve their cybersecurity posture. OWASP Cheat Sheets: https://cheatsheetseries.owasp.org/ OWASP Top 10: https://owasp.org/www-project-top-ten/ Transcripts: https://docs.google.com/document/d/17Tzyd6i6qRqNfMJ8OOEOOGpGGW0S8w32 Chapters 00:00 Introduction 01:11 Introducing OWASP: A Pillar in Cybersecurity 02:28 The Evolution of Web Vulnerabilities 05:01 Exploring Web Application Security Risks 07:46 Diving Deep into OWASP Top 10 Risks 09:28 1) Broken Access Control 14:09 2) Cryptographic Failures 18:40 3) Injection Attacks 23:57 4) Insecure Design 25:15 5) Security Misconfiguration 29:27 6) Vulnerable and Outdated Software Components 32:31 7) Identification and Authentication Failures 36:49 8) Software and Data Integrity Failures 38:46 9) Security Logging and Monitoring Practices 40:32 10) Server Side Request Forgery (SSRF) 42:15 Recap and Conclusion: Mastering Web Application Security

CYBER LIFE
Cyber Life Podcast Ep. 28 - Mobile Application Security with Nabeela Bukhari

CYBER LIFE

Play Episode Listen Later Jan 24, 2024 25:59


In this episode, I speak with Nabeela Bukhari about mobile application security. Be sure to check out the resources linked below.Nabeela is a senior security engineer primarily focused on app security and mobile app security. She holds a degree in Electronics Engineering and several certifications. Nabeela is also a volunteer with BBWIC and helps mentor women in their cybersecurity careers around the world.Resources shared on the podcast: https://mas.owasp.org/MASTG/ - MSTG Guide https://owasp.org/www-project-mobile-top-10/ - OWASP TOP 10 Mobilehttps://github.com/MobSF/Mobile-Security-Framework-MobSF- MOBSFTools: Frida- https://frida.re/ Objection- https://github.com/sensepost/objection/wiki/componentsDrozer- https://github.com/WithSecureLabs/drozerJADX-Gui- https://github.com/skylot/jadxVulnerable Android apps for learning:InjuredAndroid https://github.com/B3nac/InjuredAndroidWalkthrough Video: https://www.youtube.com/watch?v=PMKnPaGWxtgGoogle Play Link: https://play.google.com/store/apps/details?id=b3nac.injuredandroidAndroid AppSecCTF site: ctf.hpandro.raviramesh.infoWalkthrough Video:https://www.youtube.com/c/AndroidAppSecGoogle Play Link: https://play.google.com/store/apps/details?id=com.hpandro.androidsecurityDamn Vulnerable BankLink: https://github.com/rewanthtammana/Damn-Vulnerable-BankWalkthrough Video: https://rewanthtammana.com/damn-vulnerable-bank/Insecure ShopLink: https://github.com/optiv/InsecureShop/releases/download/v1.0/InsecureShop.apkGitHub: https://github.com/optiv/InsecureShopWalkthrough Video: https://docs.insecureshopapp.com/AndroGoat Link: https://github.com/satishpatnayak/MyTest/blob/master/AndroGoat.apkGitHub: https://github.com/satishpatnayak/AndroGoatWalkthrough Video: https://medium.com/androgoatCrackmesLink: https://github.com/satishpatnayak/MyTest/blob/master/AndroGoat.apkGitHub: https://github.com/OWASP/owasp-mstg/tree/master/Crackmes/AndroidWalkthrough: https://github.com/OWASP/owasp-mstg/tree/master/CrackmesInsecureBank Link: https://github.com/dineshshetty/Android-InsecureBankv2/raw/master/InsecureBankv2.apkGitHub: https://github.com/dineshshetty/Android-InsecureBankv2Oversecured Vulnerable Android AppGitHub: https://github.com/oversecured/ovaaBlog: https://blog.oversecured.com/DIVA AndroidGitHub: https://github.com/payatu/diva-androidWalkthrough: http://www.payatu.com/damn-insecure-and-vulnerable-app/MSTG Hacking PlaygroundGitHub links: https://github.com/OWASP/MSTG-Hacking-Playground https://github.com/OWASP/MSTG-Hacking-Playground/tree/master/Android/MSTG-Android-Java-Apphttps://github.com/OWASP/MSTG-Hacking-Playground/tree/master/Android/MSTG-Android-Kotlin-AppAsk me a Question Here: https://topmate.io/ken_underhill Get better at job interviews and build your confidence with this short course.https://cyberken23.gumroad.com/l/jbilol/youtube20 If you need cybersecurity training, here are some good resources. Please note that I earn a small affiliate commission if you sign up through these links for the training. Learn Ethical Hacking skills https://get.haikuinc.io/crk0rg6li6qd Get Ethical Hacking skills, SOC Analyst skills, and more through StationX. https://www.stationx.net/cyberlife Support this podcast at — https://redcircle.com/cyber-life/donations

ITSPmagazine | Technology. Cybersecurity. Society
OWASP LLM AI Security & Governance Checklist: Practical Steps To Harness the Benefits of Large Language Models While Minimizing Potential Security Risks | A Conversation with Sandy Dunn | Redefining CyberSecurity Podcast with Sean Martin

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jan 15, 2024 48:15


Guest: Sandy Dunn, Consultant Artificial Intelligence & Cybersecurity, Adjunct Professor Institute for Pervasive Security Boise State university [@BoiseState]On Linkedin | https://www.linkedin.com/in/sandydunnciso/____________________________Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin____________________________This Episode's SponsorsImperva | https://itspm.ag/imperva277117988Pentera | https://itspm.ag/penteri67a___________________________Episode NotesIn this episode of Redefining CyberSecurity, host Sean Martin and cybersecurity expert, Sandy Dunn, navigate the intricate landscape of AI applications and large language models (LLMs). They explore the potential benefits and pitfalls, emphasizing the need for strategic balance and caution in implementation.Sandy shares insights from her extensive experience, including her role in creating a comprehensive checklist to help organizations effectively integrate AI without expanding their attack surface. This checklist, a product of her involvement with the OWASP TOP 10 LLM project, serves as a valuable resource for cybersecurity teams and developers alike.The conversation also explores the legal implications of AI, underscoring the recent surge in privacy laws across several states and countries. Sandy and Sean highlight the importance of understanding these laws and the potential repercussions of non-compliance.Ethics also play a central role in their discussion, with both agreeing on the necessity of ethical considerations when implementing AI. They caution against the hasty integration of large language models without adequate preparation and understanding of the business case.The duo also examine the potential for AI to be manipulated and the importance of maintaining good cybersecurity hygiene. They encourage listeners to use AI as an opportunity to improve their entire environment, while also being mindful of the potential risks.While the use of AI and large language models presents a host of benefits to organizations, it is crucial to consider the potential security risks. By understanding the business case, recognizing legal implications, considering ethical aspects, utilizing comprehensive checklists, and maintaining robust cybersecurity, organizations can safely navigate the complex landscape of AI.___________________________Watch this and other videos on ITSPmagazine's YouTube ChannelRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

CISO Tradecraft
#160 - Secure Developer Training Programs (with Scott Russo) Part 1

CISO Tradecraft

Play Episode Listen Later Dec 18, 2023 42:21


In this episode of CISO Tradecraft, host G Mark Hardy invites Scott Russo, a cybersecurity and engineering expert for a deep dive into the creation and maintenance of secure developer training programs. Scott discusses the importance of hands-on engaging training and the intersection of cybersecurity with teaching and mentorship. Scott shares his experiences building a secure developer training program, emphasizing the importance of gamification, tiered training, showmanship, and real-world examples to foster engagement and efficient learning. Note this episode will continue in with a part two in the next episode ISACA Event (10 Jan 2024) With G Mark Hardy - https://www.cisotradecraft.com/isaca Scott Russo - https://www.linkedin.com/in/scott-russo/ HBR Balanced Scorecard - https://hbr.org/1992/01/the-balanced-scorecard-measures-that-drive-performance-2 Transcripts - https://docs.google.com/document/d/124IqIzBnG3tPj64O2mZeO-IDTx9wIIxJ Youtube - https://youtu.be/NkrtTncAuBA  Chapters 00:00 Introduction 03:00 Overview of Secure Developer Training Program 04:46 Motivation Behind Creating the Training Program 06:03 Objectives of the Secure Developer Training Program 07:45 Defining the Term 'Secure Developer' 14:49 Keeping the Training Program Current and Engaging 21:10 Real World Impact of the Training Program 21:46 Understanding the Cybersecurity Budget Argument 21:58 Incorporating Real World Examples into Training 22:26 Personal Experiences and Stories in Training 24:06 Industry Best Practices and Standards 24:18 Aligning with OWASP Top 10 25:53 Balancing OWASP Top 10 with Other Standards 26:12 The Importance of Good Stories in Training 26:32 Duration of the Training Program 28:37 Resources Required for the Training Program 32:23 Measuring the Effectiveness of the Training Program 36:07 Gamification and Certifications in Training 38:56 Tailoring Training to Different Levels of Experience 41:03 Conclusion and Final Thoughts  

Hacker Valley Studio
Adversarial AI: Navigating the Cybersecurity Landscape

Hacker Valley Studio

Play Episode Listen Later Nov 7, 2023 39:37


In this episode, host Ron Eddings is joined by Sr. Director of Red Team Operations at Coalfire, Pete Deros, to discuss the hottest topic around; adversarial AI. Ron and Pete discuss how AI is used and how the adversary is using AI so everyone can stay one step ahead of them as well. Impactful Moments 00:00 - Welcome 01:35 - Introducing Pete Deros 03:30 - More Easily Phished 05:09 - 11 Labs Video 06:42 - Is this AI or LLM? 9:18 - AI or LLMs: Who has the Speed? 10:36 - Fine Tuning LLMs 14:37 - WormGPT & Hallucinations 17:01 - LLMs Changing Second to Second 18:38 - A Word From Our Sponsor 20:19 - ‘Write me Ransomware!' 23:24 - Working Around AI Roadblocks 28:00 - “Undetectable for A Human” 31:58 - Pete Can Help You Floss! 34:56 - OWASP Top 10 & Resources 37:00 - Check out Coalfire Links: Connect with our guest Pete Deros: https://www.linkedin.com/in/pete-deros-94524b9a/ Coalfire's Website: https://www.coalfire.com/ Coalfire Securialities Report: https://www.coalfire.com/insights/resources/reports/securealities-report-2023-compliance OWASP Top 10 LLM: https://owasp.org/www-project-top-10-for-large-language-model-applications/ Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/ Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com Continue the conversation by joining our Discord: https://hackervalley.com/discord

Application Security PodCast
Steve Wilson and Gavin Klondike -- OWASP Top Ten for LLM Release

Application Security PodCast

Play Episode Listen Later Oct 31, 2023 51:43 Transcription Available


Steve Wilson and Gavin Klondike are part of the core team for the OWASP Top 10 for Large Language Model Applications project. They join Robert and Chris to discuss the implementation and potential challenges of AI, and present the OWASP Top Ten for LLM version 1.0. Steve and Gavin provide insights into the issues of prompt injection, insecure output handling, training data poisoning, and others. Specifically, they emphasize the significance of understanding the risk of allowing excessive agency to LLMs and the role of secure plugin designs in mitigating vulnerabilities.The conversation dives deep into the importance of secure supply chains in AI development, looking at the potential risks associated with downloading anonymous models from community-sharing platforms like Huggingface. The discussion also highlights the potential threat implications of hallucinations, where AI produces results based on what it thinks it's expected to produce and tends to please people, rather than generating factually accurate results. Wilson and Klondike also discuss how certain standard programming principles, such as 'least privilege', can be applied to AI development. They encourage developers to conscientiously manage the extent of privileges they give to their models to avert discrepancies and miscommunications from excessive agency. They conclude the discussion with a forward-looking perspective on how the OWASP Top Ten for LLM Applications will develop in the future.Links:OWASP Top Ten for LLM Applications project homepage:https://owasp.org/www-project-top-10-for-large-language-model-applications/OWASP Top Ten for LLM Applications summary PDF: https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-slides-v1_1.pdfFOLLOW OUR SOCIAL MEDIA: ➜Twitter: @AppSecPodcast➜LinkedIn: The Application Security Podcast➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast Thanks for Listening! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Talk Python To Me - Python conversations for passionate developers

Do you worry about your developer / data science supply chain safety? All the packages for the Python ecosystem are much of what makes Python awesome. But the are also a bit of an open door to your code and machine. Luckily the PSF is taking this seriously and hired Mike Fiedler as the full time PyPI Safety & Security Engineer (not to be confused with the Security Developer in Residence staffed by Seth Michael Larson). Mike is here to give us the state of the PyPI security and plans for the future. Links from the show Mike on Twitter: @mikefiedler Mike on Mastodon: @miketheman@hachyderm.io Supply Chain examples SolarWinds: csoonline.com XcodeGhost: wikipedia.org Google Ad Malware: medium.com PyPI: pypi.org OWASP Top 10: owasp.org Trusted Publishers: docs.pypi.org libraries.io: libraries.io GitHub Full 2FA: github.blog Mike's Latest Blog Post: blog.pypi.org pprintpp package: github.com ICDiff: github.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to us on YouTube: youtube.com Follow Talk Python on Mastodon: talkpython Follow Michael on Mastodon: mkennedy Sponsors Sentry Error Monitoring, Code TALKPYTHON Talk Python Training

Giant Robots Smashing Into Other Giant Robots
492: Backstop.it and Varo Bank with Rishi Malik

Giant Robots Smashing Into Other Giant Robots

Play Episode Listen Later Sep 14, 2023 40:17


Victoria and Will interview Rishi Malik, the Founder of Backstop.it and VP of Engineering at Varo Bank. They talk about Rishi's recent adventure at DEF CON, the renowned annual security conference that he's attended for six years, and describes how it has transformed from a mere learning experience into a thrilling competition for him and his team. The conference = their playground for tackling an array of security challenges and brain-teasing puzzles, with a primary focus on cloud security competitions. They talk about the significance of community in such events and how problem-solving through interaction adds value. Rishi shares his background, tracing his path from firmware development through various tech companies to his current roles in security and engineering management. The vital topic of security in the fintech and banking sector highlights the initial concerns people had when online banking emerged. Rishi navigates through the technical intricacies of security measures, liability protection, and the regulatory framework that safeguards online banking for consumers. He also highlights the evolving landscape, where technological advancements and convenience have bolstered consumer confidence in online banking. Rishi shares his unique approach to leadership and decision-making, and pearls of wisdom for budding engineers starting their careers. His advice revolves around nurturing curiosity and relentlessly seeking to understand the "why" behind systems and processes. __ Backstop.it (https://backstop.it/) Follow Backstop.it on X (https://twitter.com/wearebackstop). Varo Bank (https://www.varomoney.com/) Follow Varo Bank on Instagram (https://www.instagram.com/varobank/), Facebook (https://www.facebook.com/varomoney/), X (https://twitter.com/varobank), YouTube (https://www.youtube.com/varomoney), or LinkedIn (https://www.linkedin.com/company/varobank/). Follow Rishi Malik on LinkedIn (https://www.linkedin.com/in/rishilmalik/). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots podcast, where we explore the design, development, and business of great products. I'm your host, Victoria Guido. WILL: And I'm your other host, Will Larry. And with us today is Rishi Malik, Founder of Backstop.it and VP of Engineering at Varo Bank. Rishi, thank you for joining us. RISHI: Thanks for having me. I'm excited to be here. VICTORIA: Yes, Rishi. I'm so excited to talk with you today about your security background and get into your role at Varo and Backstop IT. But first, I wanted to hear a little bit more about your recent experience attending DEF CON. How was that? RISHI: It was awesome. I do have quite the background in security at this point. And one of the things I started doing early on, as I was getting up to speed and learning more about the security-specific side of things, was beginning to attend DEF CON itself. So, I've now gone six years straight. And it started out as just kind of experiencing the conference and security and meeting folks. But it's progressed to where I now bring a team of people where we go and we compete. We have a good time. But we do get to kind of bring the security side of things into the software engineering and engineering leadership stuff that we all do on a day-to-day basis. VICTORIA: Yeah. And what kind of puzzles do you solve with your team when you attend DEF CON? RISHI: There's definitely a lot of variety there, which I think is part of the fun. So, DEF CON frequently has electronic badges, you know, with random puzzles on there that you have to solve. Some of it are cryptographic. Some of them are kind of random cultural things. Sometimes there's music challenges based around it. Sometimes, it's social and interactive. And you have to go find the right type of badge or the right person behind it to unlock something. So, all of those, you know, typically exist and are a ton of fun. Primarily, in the last few years, we've been focusing more on the cloud CTF. So, in this case, it's our team competing against other teams and really focused on cloud security. So, it's, you know, figuring out vulnerabilities in, you know, specially designed puzzles around AWS and GCP, the application side of things as well, and competing to see how well you can do. Three years ago, the last couple of years, we've not won it, but we've been pretty competitive. And the great thing is the field is expanding as more and more people get into CTF themselves but, more importantly, into cloud infrastructure and cloud knowledge there. So, it's just great to see that expansion and see what people are into, what people are learning, and how challenging some of these things can be. VICTORIA: I love the idea of having a puzzle at a conference where you have to find a specific person to solve it. And yeah, I'm always interested in ways where we can have these events where you're getting together and building community and growing expertise in a field but in a way that makes it fun [laughs] and isn't just life-draining long, like, talks about random stuff. RISHI: [laughs] I think what you're touching on there is crucial. And you said the word community, and, to me, that is, you know, a big part of what DEF CON and, you know, hacking and security culture is. But it is, I think, one of the things that kind of outside of this, we tend to miss it more, you know, specifically, like, focused conferences. It is more about kind of the content, you know, the hallway track is always a thing. But it's less intentional than I personally, at this stage, really prefer, you know. So, I do like those things where it is encouraging interaction. For me, I'd rather go to happy hour with some people who are really well versed in the subject that they're in rather than even necessarily listening to a talk from them on what they're doing. Simply because I think the community aspect, the social aspect, actually gets you more of the information that is more relevant to what you're doing on a day-to-day basis than just consuming it passively. VICTORIA: I agree because consuming it passively or even intentionally remotely, there are things that you didn't even think to think about [laughs] that aren't going to come up just on your own. You have to have another person there who's...Actually, I have a good friend who's co-working with me this week who's at Ticketmaster. And so, just hearing about some of the problems they have and issues there has been entertaining for me. So yeah, I love that about DEF CON, and I love hearing about community stories and fun ways that companies can get a benefit out of coming together and just putting good content out there. RISHI: Absolutely. I think problem-solving is where you get the most value out of it as a company and as a business. VICTORIA: Yeah, maybe that's a good segue to tell me a little bit more about your background and how you came to be where you are today. RISHI: Yeah. For me growing up, I was always that problem-solver type of person. So, I think that's what kind of naturally gravitated me towards tech and, you know, hardware and software engineering. You know, so, for me, I go back quite a while. I'd been doing a lot of development, you know, in the early days of my career. I started out doing firmware development back in the days of large tape libraries, right? So, if you think about, like, big businesses back before cloud was a big thing and even back before SSDs were a thing, you know, it was all spinning disks. It was all tape. And that's kind of the area that I started in. So, I was working on robots that actually move tapes around these giant tape libraries that are, you know, taller than I am that you can walk inside of because they're so big, for big corporations to be able to backup their data on an overnight basis. You have to do that kind of stuff. Then I started going into smaller and smaller companies, into web tech, into startups, then into venture-backed startups. And then, eventually, I started my own company and did that for a while. All of this is really just kind of, you know, software engineering in a nutshell, lots of different languages, lots of different technologies. But really, from the standpoint of, here's a whole bunch of hard problems that need to be solved. Let's figure out how we can do that and how we can make some money by solving some of these problems. That eventually kind of led me down the security path as well and the engineering management side of things, which is what I do now, both at Backstop...is a security consulting business and being VP of Engineering at Varo Bank. WILL: How was your journey? Because you started as an intern in 2003. RISHI: [laughs] WILL: And then, you know, 20 years later. So, how was your journey through all of that? [laughs] RISHI: [laughs] You know, I hadn't actually put it together that it has been 20 years this year until you said that. So, that's awesome. It's been a blast, you know. I can honestly say it's been wildly different than what I imagined 20 years ago and interesting in different ways. I think I'm very fortunate to be able to say that. When I started out as an intern in 2003, technologies were very different. I was doing some intern shifts with the federal government, you know, so the pace was wildly different. And when I think of where technology has come now, and where the industry has gone, and what I get to do on a day-to-day basis, I'm kind of just almost speechless at just how far we've come in 20 years, how easy some things are, how remarkably hard some other things are that should honestly be easy at this point, but just the things that we can do. I'm old enough that I remember cell phones being a thing and then smartphones coming out and playing with them and being like, yeah, this is kind of mediocre. I don't really know why people would want this. And the iPhone coming out and just changing the game and being like, okay, now I get it. You know, to the experience of the internet and, you know, mobile data and everywhere. It's just phenomenal the advances that we've had in the last 20 years. And it makes me excited for the next 20 years to see what we can do as we go forward. VICTORIA: I'm going to take personal offense to someone knowing that technology being too old [laughs], but, yeah, because it really wasn't that long ago. And I think one thing I always think about having a background in civic tech and in financial tech as well is that the future is here; it's just not evenly distributed. So, now, if you're building a new company, of course, the default is to go straight to the cloud. But many companies and organizations that have been around for 60-80 years and using the internet right when it first came out are still in really old technologies that just simply work. And maybe they're not totally sure why, and change is difficult and slow. So, I wonder if you have any experience that you can take from the banking or fintech industry on how to make the most out of modern security and compliance platforms. RISHI: Yeah, you know, I think most people in tech especially...and the gray hairs on me are saying the younger folks in tech especially don't realize just how much older technologies still exist and will exist for quite some time. When you think of banking itself, you know, most of the major companies that you can think of, you know, in the U.S. especially but kind of across the world that are the top tier names of banks, and networks, and stuff like that, still run mainframes. When you swipe your credit card, there's a very good chance that is processed on a mainframe. And that's not a bad thing. But it's just, you know when you talk to younger engineers, it's not something that kind of crosses their mind. They feel like it is old-tech. The bulk of businesses don't actually run on the cloud. Having been through it, I've racked and stacked servers and had to figure out how to physically take hardware across, you know, country borders and things like those lines. And now, when I do want to spin up a server somewhere else, it's just a different AWS region. So, it's remarkably easy, at this point, to solve a lot of those problems. But once you're up and live and you have customers, you know, where downtime is impactful or, you know, the cost of moving to the cloud or modernizing your technology is substantial, things tend to move a lot slower. And I think you see that, especially when it comes to security, because we have more modern movements like DevOps bringing security into it. And with a lot of the, you know, the modern security and compliance platforms that exist, they work very, very well for what they do, especially when you're a startup or your whole tech stack is modernized. The biggest challenges, I think, seem to come in when you have that hybrid aspect of it. You do have some cloud infrastructure you have to secure. You do have some physical data centers you have to secure. You have something that is, you know, on-premise in your office. You have something that is co [inaudible 10:01] somewhere else. Or you also have to deal with stuff like, you know, much less modern tech, you know, when it comes to mainframes and security and kind of being responsible for all of that. And I think that is a big challenge because security is one of those things where it's, you know, if you think of your house, you can have the strongest locks on your door and everything else like that. But if you have one weak point, you have a window that's left open, that's all it takes. And so, it has to be all-inclusive and holistic. And I think that is remarkably hard to do well, even despite where technology has come to these days. WILL: Speaking of securities, I remember when the Internet banking started a couple of years ago. And some of the biggest, I guess, fears were, like, the security around it, the safety. Because, you know, your money, you're putting your money in it, and you can't go to a physical location to talk to anyone or anything. And the more and more you learn about it...at first, I was terrified of it because you couldn't go talk to someone. But the more and more I learned about it, I was like, oh, there's so much security around it. In your role, what does that look like for you? Because you have such a huge impact with people's money. So, how do you overcome that fear that people have? RISHI: There's, I think, a number of steps that kind of go into it. And, you know, in 2023, it's certainly a little bit easier than it used to be. But, you know, very similar, I've had the same questions, you know, and concerns that you're describing. And I remember using one of the first banks that was essentially all digital and kind of wondering, you know, where is my money going? What happens if something goes wrong? And all of those types of things. And so, I think there is kind of a number of different aspects that go into it. One is, you know, obviously, the technical aspects of security, you know, when you put your credit card number in on the internet, you know, is it encrypted? You know, is it over, you know, TLS? What's happening there? You know, how safe and secure is all that kind of thing? You know, at this point, pretty much everyone, at least in the U.S., has been affected by credit card breaches, huge companies like Home Depot and Target that got cards accessed or, you know, just even the smaller companies when you're buying something random from maybe something...a smaller website on the internet. You know, that's all a little bit better now. So, I think what you have there was just kind of a little bit of becoming comfortable with what exists now. The other aspect, though, I think, then comes into, well, what happens when something goes wrong? And I think there's a number of aspects that are super helpful for that. I think the liability aspect of credit card, you know, companies saying, you know, and the banks "You're not liable for a fraudulent transaction," I think that was a very big and important step that really helps with that. And on top of that, then I think when you have stuff like the FDIC, you know, and insurance in the U.S., you know, that is government-backed that says, you know what? Even if this is an online-only digital bank, you're safe. You're protected. The government's got your back in that regard. And we're going to make sure that's covered. At Varo, that's one of the key things that we think about a lot because we are a bank. Now, most FinTechs, actually, aren't banks, right? They partner with other third-party banks to provide their financial services. Whereas at Varo, we are federally regulated. And so, we have the full FDIC protection. We get the benefits of that. But it also means that we deal with the regulation aspects and being able to prove that we are safe and secure and show the regulators that we're doing the right things for our customers. And I think that's huge and important because, obviously, it's safety for customers. But then it changes how you begin to think about how you're designing products, and how you're [inaudible 13:34] them, and, you know, how you're marketing them. Are we making a mobile app that shows that we're safe, and secure, and stable? Or are we doing this [inaudible 13:42] thing of moving too fast and breaking things? When it's people's money, you have to be very, very dialed into that. You still have to be able to move fast, but you have to show the protection and the safety that people have because it is impactful to their lives. And so, I think from the FinTech perspective, that's a shift that's been happening over the last couple of years to continue that. The last thing I'll say, too, is that part of it has just come from technology itself and the comfort there. It used to be that people who were buying, you know, items on the internet were more the exception rather than the rule. And now with Amazon, with Shopify, with all the other stuff that's out there, like, it's much more than a norm. And so, all of that just adds that level of comfort that says, I know I'm doing the right things as a consumer, that I'm protected. If I, you know, do have problems, my bank's got my back. The government is watching out for what's happening and trying to do what they can do to regulate all of that. So, I think all of that has combined to get to that point where we can do much more of our banking online and safely. And I think that's a pretty fantastic thing when it comes to what customers get from that. I am old enough that I remember having to figure out times to get to the bank because they're open nine to five, and, you know, I have to deposit my paycheck. And, you know, I work nine to five, and maybe more hours pass, and I had no idea when I can go get that submitted. And now, when I have to deposit something, I can just take a picture with my phone, and it safely makes it to my account. So, I think the convenience that we have now is really amazing, but it has certainly taken some time. And I think a number of different industry and commercial players kind of come together and make that happen. MID-ROLL AD: Now that you have funding, it's time to design, build, and ship the most impactful MVP that wows customers now and can scale in the future. thoughtbot Liftoff brings you the most reliable cross-functional team of product experts to mitigate risk and set you up for long-term success. As your trusted, experienced technical partner, we'll help launch your new product and guide you into a future-forward business that takes advantage of today's new technologies and agile best practices. Make the right decisions for tomorrow today. Get in touch at thoughtbot.com/liftoff. VICTORIA: I appreciate that perspective on approaching security from the user experience of wanting safety. And I'm curious if we can talk in contrast from that experience to the developer experience with security. And how do you, as a new leader in this financial product company, prioritize security and introduce it from a, like, building a safety culture perspective? RISHI: I think you just said that very eloquently. It is a safety culture. And cultural changes are hard. And I think for quite some time in the developer industry, security was either an afterthought or somebody else's problem. You know, it's the security team that has to think about it. It's, you know, and even these days, it's the red team that's going to go, you know, find these answers or whatever I'm shipping as a developer. My only thing to focus on is how fast I can ship, or, you know, what I'm shipping, rather than how secure is what I'm shipping. And so, I think to really be effective at that, it is a cultural shift. You have to think and talk about security from the outset. And you have to bake those processes into how you build product. Those security conversations really do need to start at the design phase. And, you know, thinking about a mobile app for a bank as an example, you know, it starts when you're just thinking about the different screens on a mobile app that people are going to go through. How are people interpreting this? You know, what is the [inaudible 17:23], and the feeling, and the emotions, that we're building towards? You know, is that safe and secure or, you know, is it not? But then it starts getting to the architecture and the design of the systems themselves to say, well, here's how they're going to enter information, here's how we're passing this back and forth. And especially in a world where a lot of software isn't just 100% in-house, but we're calling other partners for that, you know, be it, you know, infrastructure or risk, you know, or compliance, or whatever else it may be, how are we protecting people's data? How are we making sure our third parties are protecting people's data? You know, how are we encrypting it? How are we thinking about their safety all the way through? Again, even all the way down to the individual developer that's writing code, how are we verifying they're writing good, high-quality, secure code? Part of it is training, part of it is culture, part of it is using good tooling around that to be able to make sure and say, when humans make mistakes because we are all human and we all will make mistakes, how are we catching that? What are the layers do we have to make sure that if a mistake does happen, we either catch it before it happens or, you know, we have defense in depth such that that mistake in and of itself isn't enough to cause a, you know, compromise or a problem for our customers? So, I think it starts right from the start. And then, every kind of step along the way for delivering value for customers, also let's add that security and privacy and compliance perspective in there as well. VICTORIA: Yes, I agree. And I don't want to work for a company where if I make a small human mistake, I'm going to potentially cost someone tens or however many thousands of dollars. [laughs] WILL: I have a question around that. How, as a leader, how does that affect you day to day? Because I feel like there's some companies, maybe thoughtbot, maybe other companies, that a decision is not as critical as working as a bank. So, you, as a leader, how do you handle that? RISHI: There's a couple of things I try and consider in any given big or important decision I have to make, the aspects around, like, you know, the context, what the decision is, and that type of stuff. But from a higher level, there's kind of two things I try and keep in mind. And when I say keep in mind, like, when it's a big, impactful decision, I will actually go through the steps of, you know, writing it down or talking this out loud, sometimes by myself, sometimes with others, just, again, to make sure we are actually getting to the meat of it. But the first thing I'm trying to think of is kind of the Amazon idea of one-way versus two-way doors. If we make this decision and this is the wrong decision, what are the ramifications of that? You know, is it super easy to undo and there's very little risk with it? Or is it once we've made this decision or the negative outcome of this decision has happened, is it unfixable to a certain degree? You know, and that is a good reminder in my head to make sure that, you know, A, I am considering it deeply. And that, B, if it is something where the ramifications, you know, are super huge, that you do take the time, and you do the legwork necessary to make sure you're making a good, valid decision, you know, based on the data, based on the risks involved and that there's a deep understanding of the problem there. The second thing I try to think of is our customers. So, at Varo, our customers aren't who most banks target. A lot of banks want you to take all your money, put it in there, and they're going to loan that money out to make their money. And Varo is not that type of bank, and we focus on a pretty different segment of the market. What that means is our customers need their money. They need it safely and reliably, and it needs to be accurate when they have it. And what I mean by that is, you know, frequently, our customers may not have, you know, hundreds or a thousand dollars worth of float in their bank accounts. So, if they're going and they're buying groceries and they can't because there's an error on our side because we're down, and because the transactions haven't settled, then that is very, very impactful to them, you know, as an individual. And I think about that with most of these decisions because being in software and being in engineering I am fortunate enough that I'm not necessarily experiencing the same economic struggles that our customers may have. And so, that reminder helps me to think about it from their perspective. In addition, I also like to try and think of it from the perspective...from my mom, actually, who, you know, she is retired age. She's a teacher. She's non-technical. And so, I think about her because I'd say, okay, when we're making a product or a design decision, how easy is it for her to understand? And my biases when I think about that, really kind of come into focus when I think about how she would interpret things. Because, you know, again, for me, I'm in tech. I think about things, you know, very analytically. And I just have a ton of experience across the industry, which she doesn't have. So, even something as simple as a little bit of copy for a page that makes a ton of sense to me, when I think about how she would interpret it, it's frequently wildly different. And so, all of those things, I think, kind of come together to help make a very strong and informed decision in these types of situations where the negative outcomes really do matter. But you are, you know, as Varo is, you're a startup. And you do need to be able to build more products quickly because our customers have needs that aren't being met by the existing banking industry. And so, we need to provide value to them so that their lives are a bit better. VICTORIA: I love that focus on a specific market segment and their needs and solving for that problem. And we know that if you're at a certain income level, it's more expensive [laughs] because of the overdraft fees and other things that can cause you problems. So, I really appreciate that that's the mission at Varo, and that's who you're focusing on to create a better banking product that makes more sense. I'm curious if there were any surprises and challenges that you could share from that discovery process and finding out, you know, exactly what were those things where your mom was, like, uh, actually, I need something completely different. [laughs] RISHI: Yeah, so, [chuckles] I'm chuckling because, you know, it's not, like, a single kind of time or event. It's, you know, definitely an ongoing process. But, you know, as actually, we were talking, you know, about earlier in terms of being kind of comfortable with doing things digital and online, that in and of itself is something that even in 2023, my mom isn't as comfortable or as confident as, you know, say, maybe the three of us are. As an example, when sending money, you know, kind of like a peer-to-peer basis, like, if I'm sending my mom a little bit of money, or she's sending me something, you're kind of within the family. Things that I would think would be kind of very easy and straightforward actually do cause her a little bit more concern. Okay, I'm entering my debit card number into this so that it can get, you know, the cash transferred into my bank account. You know, again, for me, it didn't even cross my mind, actually, that that would be something uncomfortable. But for my mom, that was something where she actually had some concerns about it and was messaging me. Her kind of personal point of view on that was, I would rather use a credit card for this and get the money on a credit card instead of a debit card because the debit card is linked to a bank account, and the security around that needs to be, you know, much tighter. And so, it made her more uncomfortable entering that on her phone. Whereas even a credit card it would have given her a little bit more peace of mind simply because it wasn't directly tied to her bank account. So, that's just, you know, the most recent example. I mean, honestly, that was earlier today, but it's something I hadn't thought of. And, again, for most of our customers, maybe that's not the case and how they think. But for folks that are at that retirement age, you know, in a world where there are constant barrages of scam, you know, emails, and phone calls, and text messages going around, the concern was definitely there. VICTORIA: That happened to me. Last week, I was on vacation with my family, and we needed to pay my mom for the house we'd rented. And I had to teach her how to use Zelle and set up Zelle. [laughter] It was a week-long process. But we got there, and it works [laughs] now. But yeah, it's interesting what concerns they have. And the funny part about it was that my sister-in-law happens to be, like, a lawyer who prevents class action lawsuits at a major bank. And she reassured us that it was, in fact, secure. [laughs] I think it's interesting thinking about that user experience for security. And I'm curious, again, like, compare again with the developer experience and using security toolings. And I wonder if you had any top recommendations on tools that make the developer experience a little more comfortable and feeling like you're deploying with security in mind. RISHI: That, in particular, is a bit of a hard question to answer. I try and stay away from specific vendors when it comes to that because I think a lot of it is contextual. But I could definitely talk through, like, some of the tools that I use and the way I like to think about it, especially from the developer perspective. I think, first off, consider what aspect of the software development, you know, lifecycle you're in. If you are an engineer writing, you know, mostly application code and dealing with building product and features and stuff like that, start from that angle. I could even take a step back and say security as an industry is very, very wide at this point. There is somebody trying to sell you a tool for basically every step in the SDLC process, and honestly, before and after to [inaudible 26:23]. I would even almost say it's, to some extent, kind of information and vendor overload in a lot of ways. So, I think what's important is to think about what your particular aspect of that is. Again, as an application engineer, or if you're building cloud infrastructure, or if you're an SRE, you know, or a platform team, kind of depending on what you are, your tooling will be different. The concepts are all kind of similar ideas, but how you go about what you build will be different. In general, I like to say, from the app side of things, A, start with considering the code you're writing. And that's a little bit cultural, but it's also kind of more training. Are you writing code with a security mindset? are you designing systems with a security mindset? These aren't things that are typically taught, you know, in school if you go get a CS degree, or even in a lot of companies in terms of the things that you should be thinking about. So, A, start from there. And if you don't feel like you think about, you know, is this design secure? Have we done, you know, threat modeling on it? Are we considering all of the error paths or the negative ways people can break the system? Then, start from that and start going through some of the security training that exists out there. And there's a lot of different aspects or avenues by which you can get that to be able to say, like, okay, I know I'm at least thinking about the code I write with a security mindset, even if you haven't actually changed anything about the code you're writing yet. What I actually think is really helpful for a lot of engineers is to have them try and break things. It's why I like to compete in CTFs, but it's also why I like to have my engineers do the same types of things. Trying to break software is both really insightful from the aspect that you don't get when you're just writing code and shipping it because it's not something you have time to do, but it's also a great way to build up some of the skills that you need to then protect against. And there's a lot of good, you know, cyber ranges out there. There's lots of good, just intentionally vulnerable applications that you can find on GitHub but that you can just run, you know, locally even on your machine and say, okay, now I have a little web app stood up. I know this is vulnerable. What do I do? How do I go and break it? Because then all of a sudden, the code that you're writing you start to think about a little bit differently. It's not just about how am I solving this product problem or this development problem? But it's, how am I doing this in a way that is safe and secure? Again, as an application side of things, you know, just make sure you know the OWASP Top 10 inside and out. Those are the most basic things a lot of engineers miss. And it only takes, again, one miss for it to be critical. So, start reviewing it. And then, you start to think about the tooling aspect of it. People are human. We're going to make mistakes. So, how do we use the power of technology to be able to stop this? You know, and there is static scanning tools. Like, there's a whole bunch of different ones out there. You know, Semgrep is a great one that's open source just to get started with that can help you find the vulnerable code that may exist there. Consider the SQL queries that you're writing, and most importantly, how you're writing them. You know, are you taking user input and just chucking it in there, or are you sanitizing it? When I ask these questions, for a lot of engineers, it's not usually yes or no. It's much more of an, well, I don't know. Because in software, we do a really good job of writing abstraction layers. But that also means, you know, to some extent, there may be a little bit of magic in there, or a lack thereof of magic that you don't necessarily know about. And so, you have to be able to dive into the libraries. You have to know what you're doing to even be able to say something like, oh no, this SQL query is safe from this user input because we have sanitized it. We have, you know, done a prepared statement, whatever it may be. Or, no, actually, we are just doing something here that's been vulnerable, and we didn't realize we were, and so now that's something we have to address. So, I think, like, that aspect in and of itself, which isn't, you know, a crazy ton of things. It's not spending a ton of money on different tools. But it's just internalizing the fact that you start to think a little bit differently. It provides a ton of value. The last thing on that, too, is to be able to say, especially if you're coming from a development side, or even just from a founder or a startup side of things, what are my big risks? What do I need to take care of first? What are the giant holes or flaws? You know, and what is my threat model around that? Obviously, as a bank, you have to care very deeply right from the start. You know, if you're not a bank, if you're not dealing with financial transactions, or PII, or anything like that, there are some things that you can deal with a little bit later. So, you have to know your industry, and you have to know what people are trying to do and the threat models and the threat vectors that can exist based on where you are. WILL: That's amazing. You know, earlier, we talked about you being an engineer for 20 years, different areas, and stuff like that. Do you have any advice for engineers that are starting out right now? And, you know, from probably year one to year, you know, anything under ten years of experience, do you have any advice that you usually give engineers when you're chatting with them? RISHI: The advice I tend to give people who are just starting out is be the type of person that asks, "How does this work?" Or "Why does this work?" And then do the work to figure out the answer. Maybe it is talking to someone; maybe it's diving into the details; maybe it's reading a book in some aspect that you haven't had much exposure to. When I look at my career and when I look at the careers of folks around me and the people that I've seen be most successful, both in engineering but also on the business side, that desire to know why something is the case is I think, one of the biggest things that determines success. And then the ability to answer that question by putting in the right types of work, the right types of scientific method and processes and such, are the other factor. So, to me, that's what I try and get across to people. I say that mostly to junior folks because I think when you're getting started, it's really difficult. There's a ton out there. And we've, again, as software engineers, and hardware engineers, and cloud, and all this kind of stuff, done a pretty good job of building a ton of abstraction layers. All of our abstraction layers [inaudible 32:28] to some degree. You know, so as you start, you know, writing a bunch of code, you start finding a bunch of bugs that you don't necessarily know how to solve and that don't make any sense in the avenue that you've been exposed to. But as soon as you get into the next layer, you understand how that works begin to make a lot more sense. So, I think being comfortable with saying, "I have no idea why this is the case, but I'm going to go find out," makes the biggest difference for people just starting out their career. WILL: I love that advice. Not too long ago, my manager encouraged me to write a blog post on something that I thought that I really knew. And when I started writing that blog post, I was like, oh boy, I have no idea. I know how to do it, but I don't know the why behind it. And so, I was very thankful that he encouraged me to write a blog post on it. Because once you start explaining it to other people, I feel you really have to know the whys. And so, I love that advice. That's really good advice. VICTORIA: Me too. And it makes sense with what we see statistically as well in the DORA research. The DevOps Research Association publishes a survey every year, the State of DevOps Report. And one of the biggest findings I remember from last year's was that the most secure and reliable systems have the most open communication and high trust among the teams. And so, being able to have that curiosity as a junior developer, you need to be in an environment where you can feel comfortable asking questions [laughs], and you can approach different people, and you're encouraged to make those connections and write blog posts like Will was saying. RISHI: Absolutely, absolutely. I think you touched on something very important there as well. The psychological safety really makes a big difference. And I think that's critical for, again, like, folks especially earlier in their career or have recently transitioned to tech, or whatever the case may be. Because asking "Why?" should be something that excites people, and there are companies where that's not necessarily the case, right? Where you asking why, it seems to be viewed as a sign that you don't know something, and therefore, you're not as good as what you should be, you know, the level you should be at or for whatever they expect. But I do think that's the wrong attitude. I think the more people ask why, the more people are able and comfortable to be able to say, "I don't know, but I'm going to go find out," and then being able to be successful with that makes way better systems. It makes way safer and more secure systems. And, honestly, I think it makes humans, in general, better humans because we can do that. VICTORIA: I think that's a great note to start to wrap up on. Is there any questions that you have for me or Will? RISHI: Yeah. I would love to hear from both of you as to what you see; with the experiences that you have and what you do, the biggest impediments or speed bumps are when it comes to developers being able to write and ship secure code. VICTORIA: When we're talking with new clients, it depends on where they are in really the adoption of their product and the maturity of their organization. Some early founders really have no technology experience. They have never managed an IT organization. You know, setting up basic employee account access and IDs is some of the initial steps you have to take to really get to where you can do identity management, and permissions management, and all the things that are really table stakes for security. And then others have some progress, and they have a fair amount of data. And maybe it's in that situation, like you said before, where it's really a trade-off between the cost and benefit of making those changes to a more secure, more best practice in the cloud or in their CI/CD pipeline or wherever it may be. And then, when you're a larger organization, and you have to make the trade-offs between all of that, and how it's impacting your developer experience, and how long are those deployed times now. And you might get fewer rates of errors and fewer rates of security vulnerabilities. But if it's taking three hours for your deployments to go out [laughs] because there's so many people, and there's so many checks to go through, then you have to consider where you can make some cuts and where there might be more efficiencies to be gained. So, it's really interesting. Everyone's on a different point in their journey. And starting with the basics, like you said, I love that you brought up the OWASP Top 10. We've been adopting the CIS Controls and just doing a basic internal security audit ourselves to get more ready and to be in a position where... What I'm familiar with as well from working in federal agencies, consulting, maintaining some of the older security frameworks can be a really high cost, not only in terms of auditing fees but what it impacts to your organization to, like, maintain those things [laughs] and the documentation required. And how do you do that in an agile way, in a way that really focuses on addressing the actual purpose of the requirements over needing to check a box? And how do we replicate that for our clients as well? RISHI: That is super helpful. And I think the checkbox aspect that you just discussed I think is key. It's a difficult position to be in when there are boxes that you have to check and don't necessarily actually add value when it comes to security or compliance or, you know, a decrease in risk for the company. And I think that one of the challenges industry-wide has always been that security and compliance in and of itself tends to move a little bit slower from a blue team or a protection perspective than the rest of the industry. And so, I mean, I can think of, you know, audits that I've been in where, you know, just even the fact that things were cloud-hosted just didn't make sense to the auditors. And it was a struggle to get them to understand that, you know, there is shared responsibility, and this kind of stuff exists, and AWS is taking care of some things, and we're taking care of some other things when they've just been developed with this on-premise kind of mentality. That is one of the big challenges that still exists kind of across the board is making sure that the security work that you're doing adds security value, adds business value. It isn't just checking the box for the sake of checking the box, even when that's sometimes necessary. VICTORIA: I am a pro box checker. RISHI: [laughs] VICTORIA: Like, I'll get the box checked. I'll use Trello and Confluence and any other tool besides Excel to do it, too. We'll make it happen with less pain, but I'd rather not do it [laughs] if we don't have to. RISHI: [laughs] VICTORIA: Let's make it easy. No, I love it. Is there anything else that you want to promote? RISHI: No, I don't think there's anything else I want to promote other than I'm going to go back to what I said just earlier, like, that culture. And if, you know, folks are out there and you have junior engineers, you have engineers that are asking "Why?", you have people that just want to do the right thing and get better, lean into that. Double down on those types of folks. Those are the ones that are going to make big differences in what you do as a business, and do what you can to help them out. I think that is something we don't see enough of in the industry still. And I would love for that to change. VICTORIA: I love that. Thank you so much, Rishi, for joining us. RISHI: Thanks for having me. This was a great conversation. I appreciate the time. VICTORIA: You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter @victori_ousg. WILL: And you could find me on Twitter @will23larry. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time. ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com. Special Guest: Rishi Malik.