POPULARITY
Unveiling the Double-Edged Sword of AI in Cybersecurity with Brian Black In this episode of Cybersecurity Today, host Jim Love interviews Brian Black, the head of security engineering at Deep Instinct and a former black hat hacker. Brian shares his journey into hacking from a young age, his transition to ethical hacking, and his experiences working with major companies. The discussion delves into the effectiveness of cybersecurity defenses against modern AI-driven attacks, the importance of understanding organizational data, and the challenges of maintaining robust security in the age of AI. Brian emphasizes the need for preemptive security measures and shares insights on the evolving threats posed by AI as well as the need for continuous education and adaptation in the cybersecurity field. 00:00 Introduction and Sponsor Message 00:21 Meet Brian Black: From Black Hat to Good Guy 00:55 Brian's Early Hacking Days 02:46 Transition to Ethical Hacking 04:11 Life in the Hacking Community 08:54 Advice for Aspiring Hackers and Parents 11:05 Corporate Career and Red Teaming 13:12 The Importance of Basics in Cybersecurity 21:41 Multifactor Authentication: The Good and the Bad 24:19 Challenges in Vendor Security Testing 27:41 Weaknesses in Cyber Defense 28:22 AI Speed vs Human Speed 28:37 AI in Cybersecurity Attacks 30:08 Dark AI Tools and Their Capabilities 32:54 AI Agents and Offensive Strategies 35:43 Challenges in Cybersecurity Defense 41:48 The Role of Red Teaming 42:46 Hiring the Right Red Team 46:59 Burnout in Cybersecurity 48:17 AI as a Double-Edged Sword 52:43 Deep Instinct's Approach to Security 53:58 Conclusion and Final Thoughts
Matthew Devost is a cybersecurity, risk management, and national security expert with over 25 years of experience. He is the CEO and Co-Founder of OODA LLC and Devsec previously founded the Terrorism Research Center and cybersecurity consultancy FusionX, which was acquired by Accenture. At Accenture, he led the Global Cyber Defense practice. Matthew has held key leadership roles at iDefense, iSIGHT Partners, Total Intel, SDI, Tulco Holdings, and Technical Defense, making him a trusted voice in cyber threat intelligence and critical infrastructure protection. 00:00 Introduction02:03 The Evolution of Cybersecurity and National Security Risks06:16 Understanding Cyber Threats and Strategies for Defense11:19 The Role of Private Sector in Cybersecurity14:40 Addressing Cybersecurity Challenges and Failures of Imagination17:16 Overcoming Inertia in Cybersecurity Leadership20:42 The Importance of Red Teaming and Realistic Simulations24:44 The Impact of AI on Cybersecurity29:31 Future of Cybersecurity and Emerging Technologies36:56 Overview of OODA and DevSec Ventures
#SecurityConfidential #DarkRhiinoSecurityMatthew Devost is a cybersecurity, risk management, and national security expert with over 25 years of experience. He is the CEO and Co-Founder of OODA LLC and Devsec previously founded the Terrorism Research Center and cybersecurity consultancy FusionX, which was acquired by Accenture. At Accenture, he led the Global Cyber Defense practice. Matthew has held key leadership roles at iDefense, iSIGHT Partners, Total Intel, SDI, Tulco Holdings, and Technical Defense, making him a trusted voice in cyber threat intelligence and critical infrastructure protection. 00:00 Introduction02:03 The Evolution of Cybersecurity and National Security Risks06:16 Understanding Cyber Threats and Strategies for Defense11:19 The Role of Private Sector in Cybersecurity14:40 Addressing Cybersecurity Challenges and Failures of Imagination17:16 Overcoming Inertia in Cybersecurity Leadership20:42 The Importance of Red Teaming and Realistic Simulations24:44 The Impact of AI on Cybersecurity29:31 Future of Cybersecurity and Emerging Technologies36:56 Overview of OODA and DevSec Ventures----------------------------------------------------------------------To learn more about Matthew visit https://www.devost.net/To learn more about Dark Rhiino Security visit https://www.darkrhiinosecurity.com
Wie realistisch sind deine Security-Massnahmen wirklich? In dieser Folge sprechen Andreas und Sandro über simulierte Angriffe – vom gezielten Red-Team-Einsatz bis zum kollaborativen Purple Teaming. Sie erklären, wie strukturierte Security-Simulationen klassische Pentests und Bug-Bounties ergänzen, welche Rollen Red, Blue und Purple wirklich spielen – und warum die wahren Erkenntnisse oft erst nach dem Angriff kommen. Wer verstehen will, wie man Security im Ernstfall testet, sollte hier reinhören.
Red Teaming 101: understand your target before you attack. On this episode, we invited two heavy hitters, Principal Security Consultants Hans Lakhan and Oddvar Moe on the show to talk about Red Team operations. We discuss footprinting and reconnaissance techniques including identifying a target's online presence, the tools and methods used for reconnaissance, and social engineering. Listen as we walk through how we map the digital terrain before a red team engagement! About this podcast: Security Noise, a TrustedSec Podcast hosted by Geoff Walton and Producer/Contributor Skyler Tuter, features our cybersecurity experts in conversation about the infosec topics that interest them the most. Find more cybersecurity resources on our website at https://trustedsec.com/resources. Red teaming services: https://trustedsec.com/services/red-teaming
☁️ Amazon Web Services (AWS)-Ausfall & die Cloud-Nabelschnur EuropasAm Ende ist es wohl immer DNS. Warum wir so viele Ragebait-Posts gesehen haben, wie Souveränität reflexartig diskutiert wird und was das über unsere Abhängigkeiten aussagt.
All links and images can be found on CISO Series. This week's episode is hosted by David Spark, producer of CISO Series and Andy Ellis, principal of Duha. Joining us is our sponsored guest, Khush Kashyap, senior director, GRC, Vanta. In this episode: Skip the Sermon When to coach versus command Making risk quantification useful Recognizing a distinct discipline Huge thanks to our sponsor, Vanta Vanta automates key areas of your GRC program—including compliance, risk, and customer trust—and streamlines the way you manage information. A recent IDC analysis found that compliance teams using Vanta are 129% more productive. Get back time to focus on strengthening security and scaling your business at https://www.vanta.com/landing/demo-grc?utm_campaign=new-way-grc&utm_source=ciso-series-podcast&utm_medium=podcast&utm_content=banner
DeMarcus Williams, a senior security engineer at Starbucks, has built a career defined by creativity, intuition, and persistence. With roles at the U.S. Department of Defense, AWS/Amazon, and now Starbucks, he specializes in offensive security, red teaming, and adversary emulation. In this episode, DeMarcus joins Jack Clabby of Carlton Fields and Cyber Florida's Sarina Gandy […]
Talk Python To Me - Python conversations for passionate developers
English is now an API. Our apps read untrusted text; they follow instructions hidden in plain sight, and sometimes they turn that text into action. If you connect a model to tools or let it read documents from the wild, you have created a brand new attack surface. In this episode, we will make that concrete. We will talk about the attacks teams are seeing in 2025, the defenses that actually work, and how to test those defenses the same way we test code. Our guides are Tori Westerhoff and Roman Lutz from Microsoft. They help lead AI red teaming and build PyRIT, a Python framework the Microsoft AI Red Team uses to pressure test real products. By the end of this hour you will know where the biggest risks live, what you can ship this quarter to reduce them, and how PyRIT can turn security from a one time audit into an everyday engineering practice. Episode sponsors Sentry AI Monitoring, Code TALKPYTHON Agntcy Talk Python Courses Links from the show Tori Westerhoff: linkedin.com Roman Lutz: linkedin.com PyRIT: aka.ms/pyrit Microsoft AI Red Team page: learn.microsoft.com 2025 Top 10 Risk & Mitigations for LLMs and Gen AI Apps: genai.owasp.org AI Red Teaming Agent: learn.microsoft.com 3 takeaways from red teaming 100 generative AI products: microsoft.com MIT report: 95% of generative AI pilots at companies are failing: fortune.com A couple of "Little Bobby AI" cartoons Give me candy: talkpython.fm Tell me a joke: talkpython.fm Watch this episode on YouTube: youtube.com Episode #521 deep-dive: talkpython.fm/521 Episode transcripts: talkpython.fm Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
After a long career as a CTO with companies like NASA, Fannie Mae and Raytheon for the last 18 years, Julian Zottl was really looking forward to his retirement. Hold on – Not so fast! After a short respite, he started getting calls for help from different organizations. It did not take too long for Julian and his wife to recognize that they needed to incorporate and turn this into an engineering and consulting company. Julian discusses the company's future including: Bidding on federal contracts and Partnering with other countries International consulting work Julian Zottl Julian also touched on the future of cybersecurity noting that it is complex, evolving and filled with ongoing challenges. With the rapid evolution of cyber threats Julian noted that the decreasing cost and time required to develop advanced cyber capabilities has led to a significant acceleration in cyber-attacks. He explained how artificial intelligence and machine learning are being used to create vulnerabilities and execute tasks. Julian also touched on the use of AI to predict and exploit complex multi layered efforts in cyber operations highlighting the challenges posed by those advanced threats. What We Do at Azgard Tek! Systems Engineering: Nation-scale secure systems engineered using our aZgard Engineering Process (ZEP). Precision Intelligence: Ubiquitous surveillance, HTIO, SIGINT, and full-spectrum intelligence support—including cultural and geopolitical analysis. Cybersecurity Solutions: Zero Trust with Resiliency, Red Teaming, threat analytics, IR/Mitigation, and robust device testing. Data & AI/ML: Generative and Agentic AI solutions that automate and empower data fusion, threat detection, and mission intelligence at speed. For more information, go to: https://www.azgardtek.com
Can your AI systems be tricked into leaking data? Learn how red teaming can expose hidden vulnerabilities and what you can do to build better defenses.
Join us for an insightful episode of 'Breaking into Cybersecurity' as we sit down with Sinan Eren. With a rich background in red teaming and pen testing, Sinan shares his journey from the late '90s curiosity-driven entry into cybersecurity to founding several companies. Discover the challenges and triumphs of growing in the cybersecurity industry, the evolution from signature-based to heuristic-based security, and the importance of understanding business processes for effective risk management. Ideal for beginners and seasoned professionals alike, learn about emerging opportunities in AI and the nuances of entrepreneurship in cybersecurity.00:00 Introduction to the Guest and Episode Overview01:08 Sinan's Early Career and Entry into Cybersecurity02:40 The Evolution of Cybersecurity Practices04:00 Bug Track and Early Vulnerability Discoveries05:59 Transition to the US and Career Growth07:23 Signature-Based vs. Heuristic-Based Security11:45 Starting a Business in Cybersecurity19:10 Lessons from the First Startup21:31 Modernizing Remote Access Solutions25:08 Revolutionizing Credit and Next-Gen VPN Solutions25:48 Introduction to the Third Startup26:32 Challenges Faced by Managed Service Providers28:15 Automation Solutions for Mundane Tasks29:44 Ideation and Development of Automation Tools33:32 Evolution and Application of Automation Tools41:06 Business Process Modeling and Risk Management45:35 Final Advice for Aspiring ProfessionalsSponsored by CPF Coaching LLC - http://cpf-coaching.comThe Breaking into Cybersecurity: It's a conversation about what they did before, why did they pivot into cyber, what the process was they went through Breaking Into Cybersecurity, how they keep up, and advice/tips/tricks along the way.The Breaking into Cybersecurity Leadership Series is an additional series focused on cybersecurity leadership and hearing directly from different leaders in cybersecurity (high and low) on what it takes to be a successful leader. We focus on the skills and competencies associated with cybersecurity leadership and tips/tricks/advice from cybersecurity leaders.Develop Your Cybersecurity Career Path: How to Break into Cybersecurity at Any Level https://www.amazon.com/dp/1955976007/Hack the Cybersecurity Interview: A complete interview preparation guide for jumpstarting your cybersecurity career https://www.amazon.com/Hack-Cybersecurity-Interview-Interviews-Entry-level/dp/1835461298/
What is Red Teaming, and what does it have to do with cybersecurity? In this episode, we look at how Red Teamers are hired to attack company security using all manner of tactics, from tossing malware-infested USB sticks into parking lots to posing as an HVAC technician. We also take a look at one of the most notorious Red Team exercises in history, when two Coalfire employees were arrested and fought a long legal battle, just for doing their jobs. ResourcesInside the Courthouse Break-In Spree That Landed Two White-Hat Hackers in JailDarknet Diaries Episode 59: The CourthouseCoalfire Systems websiteDEF CON 22 - Eric Smith and Josh Perrymon - Advanced Red Teaming: All Your Badges Are Belong To UsHow RFID Technology Works: Revolutionizing the Supply ChainNolaCon 2019 D 07 Breaking Into Your Building A Hackers Guide to Unauthorized Physical AccessSend us a textSupport the showJoin our Patreon to listen ad-free!
Nachhaltige Führung - Der Leadership Podcast mit Niels Brabandt / NB Networks
Künstliche Intelligenz beschleunigt Entscheidungen — und ohne Governance beschleunigt sie Fehler. In dieser Episode zeigt Niels Brabandt, wie Führende KI produktiv nutzen, ohne Urteilsfähigkeit, Vertrauen oder Compliance zu gefährden. Von „plausibel, aber falsch“ (optimistische Wegzeiten, fehlerhaft gelesene PDFs) bis zu Hochrisiko-Fällen (erfundene Gerichtsurteile, Mythen in HR-Prozessen): Diese Folge macht klar, warum kritisches Denken das Herzstück jeder KI-gestützten Entscheidungsfindung sein muss. Das lernst du in dieser Episode Der einfache Prüf-Loop gegen KI-Fehler: Analyse → Evaluation → Urteil Primärquellen vor Sekundärquellen: wie „zu perfekte“ Zitate und Aktenzeichen verifiziert werden Quality Gates für Teams: Faktencheck, Bias-Screening, Halluzinations-Heuristiken Datenhygiene & Zugriff: interne KIs so konfigurieren, dass keine sensiblen Informationen „leaken“ Der 30/60/90-Tage-Plan: Policy, Red-Teaming, KPIs — pragmatisch implementiert Wie Vorstände und Geschäftsführungen Tempo mit Verlässlichkeit verbinden — und die Falle „billig & schnell“ vermeiden Für wen ist die Folge? Entscheidende in Vorstand, Geschäftsführung und Aufsicht, Gründernde sowie Leitungen von Business-Units, die KI-Geschwindigkeit und Entscheidungsqualität wollen. Über den Host Niels Brabandt ist Leadership-Experte und Host des „Leadership Podcast“. Er berät Organisationen zu evidenzbasierter, situativer Führung und AI-Governance — mit klarem Fokus auf messbarer Qualität und verantwortungsvollen KI-Prozessen. Gastgeber: Niels Brabandt / NB@NB-Networks.com Kontakt mit Niels Brabandt: https://www.linkedin.com/in/nielsbrabandt/ Niels Brabandts Leadership Letter: http://expert.nb-networks.com/ Niels Brabandts Webseite: https://www.nb-networks.biz/
This episode is brought to you by https://www.ElevateOS.com —the only all-in-one community operating system.Ever wonder how vulnerable your multifamily business really is?In this episode of the Multifamily Collective, I share the concept of red teaming—a bold, eye-opening practice born in the cyber world but packed with power for every corner of your organization.I walk through how placing someone inside your team to think like a competitor or bad actor helps uncover weak spots in your systems, your leadership, your marketing—and yes, even your people strategy.This isn't theory. It's practical, tactical leadership.I first experienced this through Vistage, surrounded by sharp minds from every industry—pest control to bakeries. And trust me, when nine people try to put your business out of business in real time, you learn fast what really matters.Here's my challenge to you:Form a red team.Pressure test your vulnerabilities.And emerge sharper, smarter, and more secure.Like if you're ready to think like a disruptor.Subscribe if you're committed to leveling up your leadership in Multifamily.For more engaging content, explore our offerings at the[https://www.multifamilycollective.com](https://www.multifamilycollective.com/) and the [https://www.multifamilymedianetwork.com](https://www.multifamilymedianetwork.com/)Join us to stay informed and inspired in the multifamily industry!
Get your FREE Cybersecurity Salary Guide:https://www.infosecinstitute.com/form/cybersecurity-salary-guide-podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcastJim Broome of Direct Defense has been doing red teaming since before it became a term — back when a "pentest" meant $25,000, no questions asked and walking out with a server under your arm. In this episode, Jim shares wild stories from decades of ethical hacking, including breaking into major tech companies, causing a cardiac event during a physical penetration test, and why he believes soft skills trump technical knowledge for aspiring red teamers. Learn why most companies aren't ready for red teaming, how to transition into cybersecurity from unexpected fields like education or event planning, and what it really takes to succeed in offensive security.0:00 - Intro to legendary red teamer Jim Broome1:00 - Cybersecurity Salary Guide2:58 - From BBS and ham radio to cybersecurity7:07 - Evolution from network admin to red teaming12:02 - GPS hacking and testing inflight entertainment systems15:31 - Hiring teachers and event planners as ethical hackers23:36 - Breaking into Symantec and stealing servers in the 90s28:33 - Physical pentest causes cardiac event34:06 - When companies should (and shouldn't) hire red teams39:44 - Why red teaming is "a punch in the mouth"44:09 - How AI is changing offensive and defensive security48:12 - Essential skills for aspiring red teamers50:39 - The groundskeeper who got domain admin52:18 - Best career advice: Be humbleView Cyber Work Podcast transcripts and additional episodes:https://www.infosecinstitute.com/podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcastAbout InfosecInfosec's mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ's security awareness training. Learn more at infosecinstitute.com.
Assaf Kipnis, AI safety (intel and investigation) at ElevenLabs, discusses the evolving landscape of online safety, the sophisticated tactics of threat actors, and the role of regulation in shaping tech company responses. He also discusses the need for accountability in both tech companies and regulatory bodies to enhance safety and security in the digital space. Key Takeaways: New tactics and scams threat actors are using, and the effectiveness of measures like age verification and red teaming Limitations faced by tech companies in combating online safety issues, and the challenges of maintaining online safety at scale The role of law enforcement and regulation in pressuring companies, platforms, and teams to improve online safety Guest Bio: Assaf Kipnis is an AI safety investigator with over a decade of experience at companies like LinkedIn, Facebook, and Google. Now at ElevenLabs, he builds systems to uncover and respond to emerging threats in generative AI, focusing on the intersection of security, abuse prevention, and human impact. Assaf is known for making sense of complex, messy problems, combining deep investigation with storytelling to drive action. He's guided by values like curiosity, care, and doing the right thing, and is passionate about reclaiming technology as a force for good. He strives to create environments where people feel safe, seen, and valued. Outside of work, he's a parent, systems thinker, and mentor who believes the best solutions start with asking the right questions—and remembering to stay human. ---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte
In this episode of Durable Value, we talk about the science of failure—why even great companies and properties can drift off course, and how to recognize and prevent the subtle missteps that lead to bigger problems. We discuss the difference between luck and skill in investing, the dangers of narrative reinforcement, and practical strategies for building resilience in your business. Whether you're a real estate investor, entrepreneur, or leader, you'll find actionable insights to help you avoid common pitfalls and turn failures into stepping stones for long-term success.Timestamps:00:00 - Introduction: The Science of Failure01:26 - Luck vs. Skill in Investing02:20 - Information Machines & Signal vs. Reality02:57 - Luck as Skill: The Genius-Idiot Cycle03:15 - Real Estate Market Cycles as Levelers03:38 - Execution Engine: Buying the Right Assets06:20 - Navigating Seller and Broker Dynamics07:03 - Macro Understanding from Multi-Market Experience09:05 - Short-Term vs. Long-Term Thinking10:33 - Capital Pressure and Market Cycles11:25 - Institutional Capital and Volatility12:07 - Raising Capital in Down Markets13:31 - John Boyd's OODA Loop: Orienting to Reality13:50 - Failure as a Path to Success14:32 - Red Teaming & Pre-Mortems15:12 - Building a Culture of Openness15:39 - Rebuilding Systems for the Long Term16:02 - From IRR to NOI: Adapting to a New Decade16:22 - Building for Stability and Optionality19:58 - Closing
Welcome back to the "To The Point Cybersecurity" podcast! After a short hiatus, hosts Rachel Lyon and Jonathan Knepher return with an exciting new episode featuring Greg Hatcher, co-founder of White Knight Labs—dubbed the "Ocean's Eleven of cybersecurity." Greg brings a unique perspective from his days in Army Special Forces and his deep expertise in offensive cybersecurity operations. In this episode, the conversation dives into the world of red teaming, how it differs from traditional penetration testing, the realities of social engineering and physical access exploits, supply chain and AI security threats, and the ever-evolving role of CISOs in defending their organizations. Whether you're curious about insider threats, the challenges of shadow AI, or just want a glimpse into some of the most compelling stories from the front lines of cyber offense, this episode delivers insights, cautionary tales, and actionable advice for organizations looking to stay one step ahead. So sit back, tune in, and get ready to go "to the point" on everything cybersecurity! For links and resources discussed in this episode, please visit our show notes at https://www.forcepoint.com/govpodcast/e344
Parce que… c'est l'épisode 0x615! Shameless plug 12 au 17 octobre 2025 - Objective by the sea v8 10 au 12 novembre 2025 - IAQ - Le Rendez-vous IA Québec 17 au 20 novembre 2025 - European Cyber Week 25 et 26 février 2026 - SéQCure 2065 Description Dans ce podcast approfondi, Charles Hamilton partage sa vision du red teaming moderne et de l'évolution de l'écosystème cybersécurité. L'échange révèle les complexités d'un marché en constante mutation et les défis qui touchent tant les professionnels que les organisations. Le paradoxe du red teaming moderne Hamilton souligne un phénomène fascinant : les red teamers ciblent principalement des entreprises matures en sécurité, créant un écart croissant avec la réalité des attaques criminelles. Cette sophistication forcée des équipes rouges s'explique par la nécessité de contourner des solutions de sécurité avancées pour accomplir leurs missions d'évaluation. Paradoxalement, cette expertise finit par être publique et influence les techniques des vrais attaquants, créant un cycle où les défenseurs doivent constamment s'adapter. Les véritables cybercriminels, quant à eux, privilégient l'opportunisme au détriment de la sophistication. Ils concentrent leurs efforts sur des cibles plus vulnérables, rendant leurs techniques souvent moins raffinées mais plus pragmatiques. Cette approche business-oriented explique pourquoi on retrouve encore des outils anciens comme Mimikatz dans les incidents réels, alors que les red teamers développent des techniques d'évasion complexes. L'écart entre recherche et réalité opérationnelle L'expérience d'Hamilton illustre comment les innovations du red teaming finissent par être récupérées par les attaquants réels. Il raconte l'anecdote d'un code qu'il avait publié il y a plus de dix ans et qui fut récemment réutilisé par un groupe d'attaquants, devenant soudainement une “nouvelle backdoor” aux yeux des analystes. Cette récupération démontre que les criminels puisent largement dans les ressources publiques plutôt que de développer leurs propres innovations. Cette dynamique soulève des questions importantes sur l'équilibre entre le partage de connaissances défensives et les risques d'armement involontaire des attaquants. Hamilton défend néanmoins la publication de recherches, arguant que ces techniques finiraient par être découvertes de toute façon, et que leur divulgation permet aux défenseurs de mieux se préparer. La sophistication technique face à l'efficacité pratique Un point central de la discussion concerne l'appréciation des outils techniques. Hamilton insiste sur l'importance de comprendre la complexité sous-jacente d'outils comme Mimikatz, développé par Benjamin Delpy. Cet outil, souvent perçu comme “simple” par les utilisateurs, représente en réalité des centaines d'heures de recherche sur les structures internes de Windows. Cette incompréhension de la sophistication technique conduit à une sous-estimation de la valeur des outils et des compétences nécessaires pour les développer. Il établit un parallèle avec Metasploit, framework qui a démocratisé l'exploitation de vulnérabilités. Beaucoup d'utilisateurs peuvent lancer un exploit sans comprendre sa mécanique interne, comme l'exemple historique de MS08-067, exploitation particulièrement complexe impliquant des services RPC, des buffer overflows et des techniques de contournement de protections mémoire. La collaboration entre équipes rouges et bleues Hamilton prône une approche collaborative à travers les “Detection Capability Assessment”, exercices où red teamers et blue teamers travaillent ensemble. Ces sessions permettent aux défenseurs de voir les techniques en action et de développer des règles de détection appropriées. Cette collaboration bidirectionnelle enrichit les deux parties : les red teamers comprennent mieux les traces qu'ils laissent, tandis que les blue teamers apprennent à identifier des indicateurs subtils. Cette approche collaborative reste malheureusement rare, particulièrement au Québec où les budgets cybersécurité sont plus limités. Le recours massif aux services managés crée également une opacité problématique, où l'intelligence de détection développée reste propriété du fournisseur plutôt que de l'organisation cliente. Les défis de la détection moderne La conversation aborde la transition des signatures antivirales vers la télémétrie moderne. Cette évolution, bien que techniquement supérieure, reste mal comprise par de nombreux professionnels. La télémétrie génère d'importants volumes de données qui nécessitent une analyse contextuelle sophistiquée pour identifier les activités malicieuses. Hamilton illustre ce défi avec l'exemple d'un utilisateur non-technique exécutant soudainement PowerShell et effectuant des requêtes LDAP. Individuellement, ces actions peuvent sembler bénignes, mais leur combinaison et le contexte utilisateur révèlent une activité suspecte typique d'outils comme BloodHound. Cette contextualisation reste difficile à automatiser et nécessite une compréhension fine de l'environnement organisationnel. Critique des métriques de vulnérabilité L'expert critique vivement l'utilisation systématique du système CVSS pour évaluer les risques. Dans le contexte du red teaming, une vulnérabilité “low” selon CVSS peut devenir critique si elle constitue le maillon manquant d'une chaîne d'attaque vers des actifs sensibles. Cette approche contextuelle du risque contraste avec les évaluations standardisées des tests d'intrusion traditionnels. L'exemple de Log4J illustre parfaitement cette problématique. Plutôt que de paniquer et patcher massivement, une compréhension du vecteur d'attaque aurait permis de mitiger le risque par des mesures réseau, évitant le stress des équipes pendant les vacances de Noël. L'industrie de la cybersécurité et ses travers Hamilton observe une tendance préoccupante vers la sur-médiatisation et le marketing dans la cybersécurité. Les vulnérabilités reçoivent des noms accrocheurs et des logos, les groupes d'attaquants sont “glorifiés” avec des noms évocateurs et des représentations heroïques. Cette approche marketing dilue les vrais messages techniques et crée une confusion entre communication et substance. Il dénonce également la prolifération de contenu généré par IA sur les plateformes professionnelles, particulièrement LinkedIn, qui noie les discussions techniques pertinentes sous un flot de contenu vide mais bien formaté. Cette tendance marginalise les voix techniques expertes au profit de “cyber-influenceurs” qui recyclent des concepts obsolètes. Formation et transmission des connaissances Malgré ces défis, Hamilton continue de former la prochaine génération de professionnels. Il insiste sur l'importance de comprendre les fondamentaux plutôt que d'utiliser aveuglément des outils. Cette philosophie éducative vise à créer des professionnels capables d'adaptation et d'innovation plutôt que de simples utilisateurs d'outils. Il encourage également la publication de blogs techniques, même sur des sujets déjà connus, comme moyen de développer les compétences de communication essentielles dans le domaine. La capacité à documenter et expliquer son travail s'avère aussi importante que l'expertise technique elle-même. Vers une industrie plus collaborative La conversation se conclut sur un appel à plus de collaboration et moins de compétition stérile dans l'industrie. Hamilton plaide pour des échanges constructifs entre professionnels techniques et dirigeants, entre red teamers et blue teamers, entre chercheurs et praticiens. Cette vision d'une communauté unie contraste avec la réalité actuelle d'écosystèmes cloisonnés qui peinent à communiquer efficacement. Il partage son expérience personnelle des critiques et de la toxicité parfois présente dans la communauté cybersécurité, tout en réaffirmant son engagement à partager ses connaissances et à contribuer à l'évolution positive du domaine. Son parcours, depuis les débuts dans les années 2000 jusqu'à aujourd'hui, témoigne de l'évolution rapide du secteur et de l'importance de l'adaptation continue. Cette riche discussion révèle les multiples facettes d'un domaine en constante évolution, où l'équilibre entre technique et communication, entre offensive et défensive, entre innovation et pragmatisme, définit l'efficacité des approches sécuritaires modernes. Collaborateurs Charles F. Hamilton Crédits Montage par Intrasecure inc Locaux réels par Northsec
In this episode of Campus Technology Insider Podcast Shorts, host Rhea Kelly discusses the latest stories in education technology. Highlights include the launch of LawZero by Yoshua Bengio to develop transparent 'scientist AI' systems, a new Cloud Security Alliance guide on red teaming for agentic AI, and OpenAI's report on the malicious use of AI in cybercrime. For more detailed coverage, visit campustechnology.com. 00:00 Introduction and Host Welcome 00:15 LawZero: Ensuring Safe AI Development 00:52 Cloud Security Alliance's New Guide 01:27 OpenAI Report on AI in Cybercrime 02:06 Conclusion and Further Resources Source links: New Nonprofit to Work Toward Safer, Truthful AI Cloud Security Alliance Offers Playbook for Red Teaming Agentic AI Systems OpenAI Report Identifies Malicious Use of AI in Cloud-Based Cyber Threats Campus Technology Insider Podcast Shorts are curated by humans and narrated by AI.
Podcast: PrOTect It All (LS 26 · TOP 10% what is this?)Episode: Inside OT Penetration Testing: Red Teaming, Risks, and Real-World Lessons for Critical Infrastructure with Justin SearlePub date: 2025-06-16Get Podcast Transcript →powered by Listen411 - fast audio-to-text and summarizationIn this episode, host Aaron Crow sits down with OT security expert Justin Searle, Director of ICS Security at InGuardians, for a deep dive into the ever-evolving world of OT and IT cybersecurity. With over 25 years of experience, ranging from hands-on engineering and water treatment facilities to red-team penetration testing on critical infrastructures such as airports and power plants, Justin brings a wealth of insight and real-world anecdotes. This episode unpacks what it really takes to assess and secure operational technology environments. Whether you're a C-suite executive, a seasoned cyber pro, or brand new to OT security, you'll hear why network expertise, cross-team trust, and careful, collaborative engagement with engineers are so crucial when testing high-stakes environments. Aaron and Justin also discuss how the industry has matured, the importance of dedicated OT cybersecurity teams, and why practical, people-first approaches make all the difference, especially when lives, reliability, and national infrastructure are on the line. Get ready for actionable advice, hard-earned lessons from the field, and a candid look at both the progress and the ongoing challenges in protecting our most critical systems. Key Moments: 05:55 Breaking Into Cybersecurity Without Classes 09:26 Production Environment Security Testing 13:28 Credential Evaluation and Light Probing 14:33 Firewall Misconfiguration Comedy 19:14 Dedicated OT Cybersecurity Professionals 20:50 "Prioritize Reliability Over Latest Features" 24:18 "IT-OT Convergence Challenges" 29:04 Patching Program and OT Security 32:08 Complexity of OT Environments 35:45 Dress-Code Trust in Industry 38:23 Legacy System Security Challenges 42:15 OT Cybersecurity for IT Professionals 43:40 "Building Rapport with Food" 47:59 Future OT Cyber Risks and Readiness 51:30 Skill Building for Tech Professionals About the Guest : Justin Searle is the Director of ICS Security at InGuardians, specializing in ICS security architecture design and penetration testing. He led the Smart Grid Security Architecture group in the creation of NIST Interagency Report 7628 and played critical roles in the Advanced Security Acceleration Project for the Smart Grid (ASAP-SG), National Electric Sector Cybersecurity Organization Resources (NESCOR), and Smart Grid Interoperability Panel (SGIP). Justin has taught hacking techniques, forensics, networking, and intrusion detection courses for multiple universities, corporations, and security conferences. His current courses at SANS and Black Hat are among the world's most attended ICS cybersecurity courses. Justin is currently a Senior Instructor for the SANS Institute and a faculty member at IANS. In addition to electric power industry conferences, he frequently presents at top international security conferences such as Black Hat, DEFCON, OWASP, HITBSecConf, Brucon, Shmoocon, Toorcon, Nullcon, Hardware.io, and AusCERT. Justin leads prominent open-source projects, including The Control Thing Platform, Samurai Web Testing Framework (SamuraiWTF), and Samurai Security Testing Framework for Utilities (SamuraiSTFU). He has an MBA in International Technology and is a CISSP and SANS GIAC certified Incident Handler (GCIH), Intrusion Analyst (GCIA), Web Application Penetration Tester (GWAPT), and GIAC Industrial Control Security Professional (GICSP) How to connect Justin: https://www.controlthings.io https://www.linkedin.com/in/meeas/ Email: justin@controlthings.io Connect With Aaron Crow: Website: www.corvosec.com LinkedIn: https://www.linkedin.com/in/aaronccrow Learn more about PrOTect IT All: Email: info@protectitall.co Website: https://protectitall.co/ X: https://twitter.com/protectitall YouTube: https://www.youtube.com/@PrOTectITAll FaceBook: https://facebook.com/protectitallpodcast To be a guest or suggest a guest/episode, please email us at info@protectitall.co Please leave us a review on Apple/Spotify Podcasts: Apple - https://podcasts.apple.com/us/podcast/protect-it-all/id1727211124 Spotify - https://open.spotify.com/show/1Vvi0euj3rE8xObK0yvYi4The podcast and artwork embedded on this page are from Aaron Crow, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.
Guest: Daniel Fabian, Principal Digital Arsonist, Google Topic: Your RSA talk highlights lessons learned from two years of AI red teaming at Google. Could you share one or two of the most surprising or counterintuitive findings you encountered during this process? What are some of the key differences or unique challenges you've observed when testing AI-powered applications compared to traditional software systems? Can you provide an example of a specific TTP that has proven effective against AI systems and discuss the implications for security teams looking to detect it? What practical advice would you give to organizations that are starting to incorporate AI red teaming into their security development lifecycle? What are some initial steps or resources you would recommend they explore to deepen their understanding of this evolving field? Resources: Video (LinkedIn, YouTube) Google's AI Red Team: the ethical hackers making AI safer EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes? EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw EP198 GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons Lessons from AI Red Teaming – And How to Apply Them Proactively [RSA 2025]
Get your FREE Cybersecurity Salary Guide: https://www.infosecinstitute.com/form/cybersecurity-salary-guide-podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcastEd Williams, Vice President of EMEA Consulting and Professional Services (CPS) at TrustWave, shares his two decades of pentesting and red teaming experience with Cyber Work listeners. From building his first programs on a BBC Micro (an early PC underwritten by the BBC network in England to promote computer literacy) to co-authoring award-winning red team security tools, Ed discusses his favorite red team social engineering trick (hint: it involves fire extinguishers!), and the ways that pentesting and red team methodologies have (and have not) changed in 20 years. As a bonus, Ed explains how he created a red team tool that gained accolades from the community in 2013, and how building your own tools can help you create your personal calling card in the Cybersecurity industry! Whether you're breaking into cybersecurity or looking to level up your pentesting skills, Ed's practical advice and red team “war stories,” as well as his philosophy of continuous learning that he calls “Stacking Days,” bring practical and powerful techniques to your study of Cybersecurity.0:00 - Intro to today's episode2:17 - Meet Ed Williams and his BBC Micro origins5:16 - Evolution of pentesting since 200812:50 - Creating the RedSnarf tool in 201317:18 - Advice for aspiring pentesters in 202519:59 - Building community and finding collaborators 22:28 - Red teaming vs pentesting strategies24:19 - Red teaming, social engineering, and fire extinguishers27:07 - Early career obsession and focus29:41 - Essential skills: Python and command-line mastery31:30 - Best career advice: "Stacking Days"32:12 - About TrustWave and connecting with EdAbout InfosecInfosec's mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ's security awareness training. Learn more at infosecinstitute.com.
In this OODAcast episode, host Matt Devost sits down with Maxie Reynolds, author of The Art of the Attack, to explore the evolution of her unique career from offshore oil rigs to elite red teaming and cybersecurity innovation. Maxie shares how her unconventional path, working a decade in oil and gas, earning degrees while on remote rigs, and eventually breaking into cybersecurity at PwC, shaped her approach to physical and digital security. Her journey led to the creation of a company that builds underwater data centers, a novel fusion of her industrial and red teaming experiences. She discusses the rising interest in submerged infrastructure, particularly after China's moves in the space and the demands of modern AI-driven cooling systems. The conversation dives deep into what it means to adopt an "attacker mindset", seeing opportunities where others see obstacles and using architecture, human psychology, and environment as vectors for access. Maxi outlines how her social engineering engagements hinge on understanding perception, psychology, and pretext creation rather than just technical exploits. She offers real-world stories of infiltrating secure facilities and engaging high-stakes targets using layered personas and misdirection. Through it all, she emphasizes the role of self-awareness, stress management, and emotional discipline in high-pressure operations, often drawing parallels between red teaming and stoicism. Maxie and Matt also examine how to responsibly deliver red team results to leadership, balancing candor with empathy to ensure organizations grow stronger without shame or defensiveness. They reflect on the future of AI in security, the persistence of physical threats, and the irreplaceable value of human judgment. The episode wraps with a powerful reading list and a shared love of books, highlighting titles that explore geopolitics, materials science, and the ungoverned world of the open ocean. This episode is packed with insight, storytelling, and practical wisdom for cybersecurity professionals, technologists, and leaders looking to understand how adversaries think—and how to outsmart them. Additional Links: The Art of Attack: Attacker Mindset for Security Professionals by Maxie Reynolds Maxie on Twitter/X Book Recommendations: How the World Really Works: The Science Behind How We Got Here and Where We're Going by Vaclav Smil The Outlaw Ocean: Journeys Across the Last Untamed Frontier by Ian Urbina Prisoners of Geography: Ten Maps That Explain Everything About the World by Tim Marshall Chip War: The Fight for the World's Most Critical Technology by Chris Miller Stuff Matters: Exploring the Marvelous Materials That Shape Our Man-Made World by Mark Miodownik
Charles Henderson, who leads the cybersecurity services division at Coalfire, shares how the company is reimagining offensive and defensive operations through a programmatic lens that prioritizes outcomes over checkboxes. His team, made up of practitioners with deep experience and creative drive, brings offensive testing and exposure management together with defensive services and managed offerings to address full-spectrum cybersecurity needs. The focus isn't on commoditized services—it's on what actually makes a difference.At the heart of the conversation is the idea that cybersecurity is a team sport. Henderson draws parallels between the improvisation of music and the tactics of both attackers and defenders. Both require rhythm, creativity, and cohesion. The myth of the lone hero doesn't hold up anymore—effective cybersecurity programs are driven by collaboration across specialties and by combining services in ways that amplify their value.Coalfire's evolution reflects this shift. It's not just about running a penetration test or red team operation in isolation. It's about integrating those efforts into a broader mission-focused program, tailored to real threats and measured against what matters most. Henderson emphasizes that CISOs are no longer content with piecemeal assessments; they're seeking simplified, strategic programs with measurable outcomes.The conversation also touches on the importance of storytelling in cybersecurity reporting. Henderson underscores the need for findings to be communicated in ways that resonate with technical teams, security leaders, and the board. It's about enabling CISOs to own the narrative, armed with context, clarity, and confidence.Henderson's reflections on the early days of hacker culture—when gatherings like HoCon and early Def Cons were more about curiosity and camaraderie than business—bring a human dimension to the discussion. That same passion still fuels many practitioners today, and Coalfire is committed to nurturing it through talent development and internships, helping the next generation find their voice, their challenge, and yes, even their hacker handle.This episode offers a look at how to build programs, teams, and mindsets that are ready to lead—not follow—on the cybersecurity front.Learn more about Coalfire: https://itspm.ag/coalfire-yj4wNote: This story contains promotional content. Learn more.Guest: Charles Henderson, Executive Vice President of Cyber Security Services, Coalfire | https://www.linkedin.com/in/angustx/ResourcesLearn more and catch more stories from Coalfire: https://www.itspmagazine.com/directory/coalfireLearn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsac25______________________Keywords:charles henderson, sean martin, coalfire, red teaming, penetration testing, cybersecurity services, exposure management, ciso, threat intelligence, hacker culture, brand story, brand marketing, marketing podcast, brand story podcast______________________Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to tell your Brand Story Briefing as part of our event coverage? Learn More
Snehal Antani is an entrepreneur, technologist, and investor. He is the CEO and Co-founder of Horizon3, a cybersecurity company using AI to deliver Red Teaming and Penetration Testing as a Service. He also serves as a Highly Qualified Expert for the U.S. Department of Defense, supporting digital transformation and data initiatives for Special Operations. Previously, he was CTO and SVP at Splunk, held CIO roles at GE Capital, and began his career as a software engineer at IBM. Snehal holds a master's in computer science from Rensselaer Polytechnic Institute and a bachelor's from Purdue University, and he is the inventor on 16 patents.In this conversation, we discuss:Snehal Antani's path from software engineer to CEO, and how his father's quiet example of grit and passion continues to shape his leadership style.How a “LEGO blocks” approach to building skills prepared Snehal to lead, and why he believes leadership must be earned through experience.Why Horizon3 identifies as a data company, and how running more pen tests than the Big Four creates a powerful AI advantage.What “cyber-enabled economic warfare” looks like in practice, and how a small disruption in a supply chain can create massive global impact.How Horizon3 built an AI engine that hacked a bank in under 60 seconds, showing what's possible when algorithms replace manual testing.What the future of work looks like in the AI era, with a growing divide between those with specialized expertise and trade skills and those without.Resources:Subscribe to the AI & The Future of Work Newsletter: https://aiandwork.beehiiv.com/subscribe Connect with Snehal on LinkedIn: https://www.linkedin.com/in/snehalantani/ AI fun fact article: https://venturebeat.com/security/ai-vs-endpoint-attacks-what-security-leaders-must-know-to-stay-ahead/ On the New Definition of Work: https://podcasts.apple.com/us/podcast/dr-john-boudreau-future-of-work-pioneer-and/id1476885647?i=1000633854079
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM FULL SHOW NOTES https://www.microsoftinnovationpodcast.com/681 The team explores the ethical implications of teaching AI jailbreaking techniques and conducting red team testing on large language models, balancing educational value against potential misuse. They dive into personal experiments with bypassing AI safeguards, revealing both creative workarounds and robust protections in modern systems. TAKEAWAYS • Debate on whether demonstrating AI vulnerabilities is responsible education or potentially dangerous knowledge sharing • Psychological impact on security professionals who regularly simulate malicious behaviors to test AI safety • Real examples of attempts to "jailbreak" AI systems through fantasy storytelling and other creative prompts • Legal gray areas in AI security testing that require dedicated legal support for organizations • Personal experiences with testing AI guardrails on different models and their varying levels of protection • Future prediction that Microsoft's per-user licensing model may shift to consumption-based as AI agents replace human tasks • Growth observations about Microsoft's Business Applications division reaching approximately $8 billion • Discussion of how M365 Copilot is transforming productivity, particularly for analyzing sales calls and customer interactions Check out this episode for more deep dives into AI safety, security, and the future of technology in business.This year we're adding a new show to our line up - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world. DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot.Early bird tickets are on sale now and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff Accelerate your Microsoft career with the 90 Day Mentoring Challenge We've helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, get started today!Support the showIf you want to get in touch with me, you can message me here on Linkedin.Thanks for listening
Bugged boardrooms. Insider moles. Social engineers posing as safety inspectors!? In this Talking Lead episode, Lefty assembles a veteran intel crew—Bryan Seaver U.S. Army Military Police vet and owner of SAPS Squadron Augmented Protection Services, LLC, a Nashville outfit running dignitary protection, K9 ops, and intelligence training. A *Talking Lead* mainstay! He's got firsthand scoop on "Red Teaming"; Mitch Davis U.S. Marine, private investigator, interrogator, Phoenix Consulting Group (now DynCorp) contractor, with a nose for sniffing out moles and lies; Brad Duley U.S. Marine, embassy guard, Phoenix/DynCorp contractor, Iraq vet, deputy sheriff, and precision shooter, bringing tactical grit to the table —to expose the high-stakes world of corporate espionage. They pull back the curtain on real-world spy tactics that were used during the the "Cold War" era and are still used in today's business battles: Red Team operations, honeypots, pretexting, data theft, and the growing threat of AI-driven deception. From cyber breaches to physical infiltrations, the tools of Cold War espionage are now aimed at American companies, defense tech, and even firearms innovation. State-backed actors, insider threats, and corporate sabotage—it's not just overseas anymore. Tune-in and get "Leaducated"!!
Bugged boardrooms. Insider moles. Social engineers posing as safety inspectors!? In this Talking Lead episode, Lefty assembles a veteran intel crew—Bryan Seaver U.S. Army Military Police vet and owner of SAPS Squadron Augmented Protection Services, LLC, a Nashville outfit running dignitary protection, K9 ops, and intelligence training. A *Talking Lead* mainstay! He's got firsthand scoop on "Red Teaming"; Mitch Davis U.S. Marine, private investigator, interrogator, Phoenix Consulting Group (now DynCorp) contractor, with a nose for sniffing out moles and lies; Brad Duley U.S. Marine, embassy guard, Phoenix/DynCorp contractor, Iraq vet, deputy sheriff, and precision shooter, bringing tactical grit to the table —to expose the high-stakes world of corporate espionage. They pull back the curtain on real-world spy tactics that were used during the the "Cold War" era and are still used in today's business battles: Red Team operations, honeypots, pretexting, data theft, and the growing threat of AI-driven deception. From cyber breaches to physical infiltrations, the tools of Cold War espionage are now aimed at American companies, defense tech, and even firearms innovation. State-backed actors, insider threats, and corporate sabotage—it's not just overseas anymore. Tune-in and get "Leaducated"!!
Send us a textJayson Coil is Assistant Fire Chief and Battalion Chief at Sedona Fire District in Arizona. With over 25 years of operational and leadership experience particularly in wildland firefighting and major disaster response, Jayson shares powerful insights on decision-making in complex environments. We dive into topics like adaptive leadership, red teaming, decentralizing command, and improving decision quality during crisis. Jayson also reflects on organizational change, trust, and morale, offering valuable lessons for current and future fire service leaders. From strategy to tactics, military crossovers to systemic failures, this conversation is packed with wisdom to help first responders lead more effectively in today's uncertain world. Connect with Jayon:-LINKEDINWEBSITEACCESS THE PODCAST LIBRARY & EVERY EPISODE, DEBRIEF & DOCUMENT CLICK HEREPODCAST GIFT - Get your FREE subscription to essential Firefighting publications HERE A big thanks to our partners for supporting this episode.GORE-TEX Professional ClothingMSA The Safety CompanyIDEXHAIX Footwear - Get offical podcast discount on HAIX HEREXendurance - to hunt performance & endurance 20% off HERE with code ffp20Lyfe Linez - Get Functional Hydration FUEL for FIREFIGHTERS, Clean no sugar for daily hydration. 80% of people live dehydratedSupport the show***The views expressed in this episode are those of the individual speakers. Our partners are not responsible for the content of this episode and does not warrant its accuracy or completeness.*** Please support the podcast and its future by clicking HERE and joining our Patreon Crew
Building Trust Through Technology: Responsible AI in Practice // MLOps Podcast #301 with Rafael Sandroni, Founder and CEO of GardionAI.Join the Community: https://go.mlops.community/YTJoinIn Get the newsletter: https://go.mlops.community/YTNewsletter // AbstractRafael Sandroni shares key insights on securing AI systems, tackling fraud, and implementing robust guardrails. From prompt injection attacks to AI-driven fraud detection, we explore the challenges and best practices for building safer AI.// BioEntrepreneur and problem solver. // Related LinksGardionAI LinkedIn: https://www.linkedin.com/company/guardionai/~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkmConnect with Rafael on LinkedIn: /rafaelsandroniTimestamps:[00:00] Rafael's preferred coffee[00:16] Takeaways[01:03] AI Assistant Best Practices[03:48] Siri vs In-App AI[08:44] AI Security Exploration[11:55] Zero Trust for LLMS[18:02] Indirect Prompt Injection Risks[22:42] WhatsApp Banking Risks[26:27] Traditional vs New Age Fraud[29:12] AI Fraud Mitigation Patterns[32:50] Agent Access Control Risks[34:31] Red Teaming and Pentesting[39:40] Data Security Paradox[40:48] Wrap up
STANDARD EDITION: Signal OPSEC, White-box Red-teaming LLMs, Unified Company Context (UCC), New Book Recommendations, Single Apple Note Technique, and much more... You are currently listening to the Standard version of the podcast, consider upgrading and becoming a member to unlock the full version and many other exclusive benefits here: https://newsletter.danielmiessler.com/upgrade Subscribe to the newsletter at:https://danielmiessler.com/subscribe Join the UL community at:https://danielmiessler.com/upgrade Follow on X:https://x.com/danielmiessler Follow on LinkedIn:https://www.linkedin.com/in/danielmiesslerBecome a Member: https://danielmiessler.com/upgradeSee omnystudio.com/listener for privacy information.
In this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Naman Mishra, CTO of Repello AI, to unpack the real-world security risks behind deploying large language models. We talk about layered vulnerabilities—from the model, infrastructure, and application layers—to attack vectors like prompt injection, indirect prompt injection through agents, and even how a simple email summarizer could be exploited to trigger a reverse shell. Naman shares stories like the accidental leak of a Windows activation key via an LLM and explains why red teaming isn't just a checkbox, but a continuous mindset. If you want to learn more about his work, check out Repello's website at repello.ai.Check out this GPT we trained on the conversation!Timestamps00:00 - Stewart Alsop introduces Naman Mishra, CTO of Repel AI. They frame the episode around AI security, contrasting prompt injection risks with traditional cybersecurity in ML apps.05:00 - Naman explains the layered security model: model, infrastructure, and application layers. He distinguishes safety (bias, hallucination) from security (unauthorized access, data leaks).10:00 - Focus on the application layer, especially in finance, healthcare, and legal. Naman shares how ChatGPT leaked a Windows activation key and stresses data minimization and security-by-design.15:00 - They discuss red teaming, how Repel AI simulates attacks, and Anthropic's HackerOne challenge. Naman shares how adversarial testing strengthens LLM guardrails.20:00 - Conversation shifts to AI agents and autonomy. Naman explains indirect prompt injection via email or calendar, leading to real exploits like reverse shells—all triggered by summarizing an email.25:00 - Stewart compares the Internet to a castle without doors. Naman explains the cat-and-mouse game of security—attackers need one flaw; defenders must lock every door. LLM insecurity lowers the barrier for attackers.30:00 - They explore input/output filtering, role-based access control, and clean fine-tuning. Naman admits most guardrails can be broken and only block low-hanging fruit.35:00 - They cover denial-of-wallet attacks—LLMs exploited to run up massive token costs. Naman critiques DeepSeek's weak alignment and state bias, noting training data risks.40:00 - Naman breaks down India's AI scene: Bangalore as a hub, US-India GTM, and the debate between sovereignty vs. pragmatism. He leans toward India building foundational models.45:00 - Closing thoughts on India's AI future. Naman mentions Sarvam AI, Krutrim, and Paris Chopra's Loss Funk. He urges devs to red team before shipping—"close the doors before enemies walk in."Key InsightsAI security requires a layered approach. Naman emphasizes that GenAI applications have vulnerabilities across three primary layers: the model layer, infrastructure layer, and application layer. It's not enough to patch up just one—true security-by-design means thinking holistically about how these layers interact and where they can be exploited.Prompt injection is more dangerous than it sounds. Direct prompt injection is already risky, but indirect prompt injection—where an attacker hides malicious instructions in content that the model will process later, like an email or webpage—poses an even more insidious threat. Naman compares it to smuggling weapons past the castle gates by hiding them in the food.Red teaming should be continuous, not a one-off. One of the critical mistakes teams make is treating red teaming like a compliance checkbox. Naman argues that red teaming should be embedded into the development lifecycle, constantly testing edge cases and probing for failure modes, especially as models evolve or interact with new data sources.LLMs can unintentionally leak sensitive data. In one real-world case, a language model fine-tuned on internal documentation ended up leaking a Windows activation key when asked a completely unrelated question. This illustrates how even seemingly benign outputs can compromise system integrity when training data isn't properly scoped or sanitized.Denial-of-wallet is an emerging threat vector. Unlike traditional denial-of-service attacks, LLMs are vulnerable to economic attacks where a bad actor can force the system to perform expensive computations, draining API credits or infrastructure budgets. This kind of vulnerability is particularly dangerous in scalable GenAI deployments with limited cost monitoring.Agents amplify security risks. While autonomous agents offer exciting capabilities, they also open the door to complex, compounded vulnerabilities. When agents start reading web content or calling tools on their own, indirect prompt injection can escalate into real-world consequences—like issuing financial transactions or triggering scripts—without human review.The Indian AI ecosystem needs to balance speed with sovereignty. Naman reflects on the Indian and global context, warning against simply importing models and infrastructure from abroad without understanding the security implications. There's a need for sovereign control over critical layers of AI systems—not just for innovation's sake, but for national resilience in an increasingly AI-mediated world.
Guest: Alex Polyakov, CEO at Adversa AI Topics: Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client? Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now? What trips most clients, classic security mistakes in AI systems or AI-specific mistakes? Are there truly new mistakes in AI systems or are they old mistakes in new clothing? I know it is not your job to fix it, but much of this is unfixable, right? Is it a good idea to use AI to secure AI? Resources: EP84 How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far AI Red Teaming Reasoning LLM US vs China: Jailbreak Deepseek, Qwen, O1, O3, Claude, Kimi Adversa AI blog Oops! 5 serious gen AI security mistakes to avoid Generative AI Fast Followership: Avoid These First Adopter Security Missteps
This week, Ads Dawson, Staff AI Security Researcher at Dreadnode, joins the show to talk all things AI Red Teaming!George K and George A talk to Ads about: The reality of securing #AI model development pipelines Why cross-functional expertise is critical when securing AI systems How to approach continuous red teaming for AI applications (hint: annual pen tests won't cut it anymore) Practical advice for #cybersecurity pros looking to skill up in AI securityWhether you're a CISO trying to navigate securing AI implementations or an infosec professional looking to expand your skill set, this conversation is all signal.Course mentioned: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-DS-03+V1————
When I first experienced the Cynefin Framework in an HBR article many years ago, I never tried to adapt it to my work until I interviewed Bryce Hoffman, author of American Icon and Red Teaming, a few years ago. While Bryce made the Cynefin Framework seem more understandable and accessible, Kevin Eikenberry has gone further to show leaders how to act when surrounded by varying problems they are trying to navigate with this sensemaking framework.Kevin has written nearly 20 books, and his newest title is Flexible Leadership which includes a better approach to holistic thinking, the Cynefin Framework and the use of flexors.
Today's guest is Tomer Poran, Chief Evangelist and VP of Strategy at ActiveFence. ActiveFence is a technology company specializing in trust and safety solutions, helping platforms detect and prevent harmful content, malicious activity, and emerging threats online. Tomer joins today's podcast to explore the critical role of red teaming in AI safety and security. He breaks down the challenges enterprises face in deploying AI responsibly, the evolving nature of adversarial risks, and why organizations must adopt a proactive approach to testing AI systems. This episode is sponsored by ActiveFence. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
ABOUT JIM PALMERJim Palmer is the Chief AI Officer at Dialpad. Previously he was CTO and Co-Founder of TalkIQ, a conversational intelligence start-up with expertise in real-time speech recognition and natural language processing, acquired by Dialpad in May of 2018. Prior to TalkIQ, he was the founding engineer on the eBay Now local delivery service.SHOW NOTES:Tips and cheat codes for navigating AI governance (3:30)Breaking down red teaming & adversarial testing in AI governance (8:02)Launching and scaling adversarial testing efforts (11:27)Unexpected benefits unlocked with adversarial testing (13:43)Understanding data governance and strategic AI investments (15:38)Building resilient AI from concept to customer validation (19:28)Exploring early feature validation and pattern recognition in AI (22:38)Adaptability in data management and ensuring safe, ethical data use while adapting to evolving legal and governance requirements (26:51)How to prepare data for safe and sustainable long-term use (30:02)Strategies for compliant data practices in a regulated world (32:43)Building data deletion systems with model training in mind (35:14)Current events and trends shaping adaptability and durability in the AI ecosystem (38:38)The role of a Chief AI Officer (41:20)Rapid fire questions (44:35)LINKS AND RESOURCESGenius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World - With deep and exclusive reporting, across hundreds of interviews, New York Times Silicon Valley journalist Cade Metz brings you into the rooms where these questions are being answered. Where an extraordinarily powerful new artificial intelligence has been built into our biggest companies, our social discourse, and our daily lives, with few of us even noticing.This episode wouldn't have been possible without the help of our incredible production team:Patrick Gallagher - Producer & Co-HostJerry Li - Co-HostNoah Olberding - Associate Producer, Audio & Video Editor https://www.linkedin.com/in/noah-olberding/Dan Overheim - Audio Engineer, Dan's also an avid 3D printer - https://www.bnd3d.com/Ellie Coggins Angus - Copywriter, Check out her other work at https://elliecoggins.com/about/
What exactly is generative AI (genAI) red-teaming? What strategies and standards should guide its implementation? And how can it protect the public interest? In this conversation, Lama Ahmad, Camille François, Tarleton Gillespie, Briana Vecchione, and Borhane Blili-Hamelin examined red-teaming's place in the evolving landscape of genAI evaluation and governance.Our discussion drew on a new report by Data & Society (D&S) and AI Risk and Vulnerability Alliance (ARVA), a nonprofit that aims to empower communities to recognize, diagnose, and manage harmful flaws in AI. The report, Red-Teaming in the Public Interest, investigates how red-teaming methods are being adapted to confront uncertainty about flaws in systems and to encourage public engagement with the evaluation and oversight of genAI systems. Red-teaming offers a flexible approach to uncovering a wide range of problems with genAI models. It also offers new opportunities for incorporating diverse communities into AI governance practices.Ultimately, we hope this report and discussion present a vision of red-teaming as an area of public interest sociotechnical experimentation.Download the report and learn more about the speakers and references at datasociety.net.--00:00 Opening00:12 Welcome and Framing04:48 Panel Introductions09:34 Discussion Overview10:23 Lama Ahmad on The Value of Human Red-Teaming17:37 Tarleton Gillespie on Labor and Content Moderation Antecedents25:03 Briana Vecchione on Participation & Accountability28:25 Camille François on Global Policy and Open-source Infrastructure35:09 Questions and Answers56:39 Final Takeaways
Our 199th episode with a summary and discussion of last week's big AI news! Recorded on 02/09/2025 Join our brand new Discord here! https://discord.gg/nTyezGSKwP Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Read out our text newsletter and comment on the podcast at https://lastweekin.ai/. In this episode: - OpenAI's deep research feature capability launched, allowing models to generate detailed reports after prolonged inference periods, competing directly with Google's Gemini 2.0 reasoning models. - France and UAE jointly announce plans to build a massive AI data center in France, aiming to become a competitive player within the AI infrastructure landscape. - Mistral introduces a mobile app, broadening its consumer AI lineup amidst market skepticism about its ability to compete against larger firms like OpenAI and Google. - Anthropic unveils 'Constitutional Classifiers,' a method showing strong defenses against universal jailbreaks; they also launched a $20K challenge to find weaknesses. Timestamps + Links: (00:00:00) Intro / Banter (00:02:27) News Preview (00:03:28) Response to listener comments Tools & Apps (00:08:01) OpenAI now reveals more of its o3-mini model's thought process (00:16:03) Google's Gemini app adds access to ‘thinking' AI models (00:21:04) OpenAI Unveils A.I. Tool That Can Do Research Online (00:31:09) Mistral releases its AI assistant on iOS and Android (00:36:17) AI music startup Riffusion launches its service in public beta (00:39:11) Pikadditions by Pika Labs lets users seamlessly insert objects into videos Applications & Business (00:41:19) Softbank set to invest $40 billion in OpenAI at $260 billion valuation, sources say (00:47:36) UAE to invest billions in France AI data centre (00:50:34) Report: Ilya Sutskever's startup in talks to fundraise at roughly $20B valuation (00:52:03) ASML to Ship First Second-Gen High-NA EUV Machine in the Coming Months, Aiming for 2026 Production (00:54:38) NVIDIA's GB200 NVL 72 Shipments Not Under Threat From DeepSeek As Hyperscalers Maintain CapEx; Meanwhile, Trump Tariffs Play Havoc With TSMC's Pricing Strategy Projects & Open Source (00:56:49) The Allen Institute for AI (AI2) Releases Tülu 3 405B: Scaling Open-Weight... (01:00:06) SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model (01:03:56) PhD Knowledge Not Required: A Reasoning Challenge for Large Language Models (01:08:26) OpenEuroLLM: Europe's New Initiative for Open-Source AI Development Research & Advancements (01:10:34) LIMO: Less is More for Reasoning (01:16:39) s1: Simple test-time scaling (01:19:17) ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning (01:23:55) Streaming DiLoCo with overlapping communication: Towards a Distributed Free Lunch Policy & Safety (01:26:50) US sets AI safety aside in favor of 'AI dominance' (01:29:39) Almost Surely Safe Alignment of Large Language Models at Inference-Time (01:32:02) Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming (01:33:16) Anthropic offers $20,000 to whoever can jailbreak its new AI safety system
HackerOne's co-founder, Michiel Prins walks us through the latest new offensive security service: AI red teaming. At the same time enterprises are globally trying to figure out how to QA and red team generative AI models like LLMs, early adopters are challenged to scale these tests. Crowdsourced bug bounty platforms are a natural place to turn for assistance with scaling this work, though, as we'll discuss on this episode, it is unlike anything bug hunters have ever tackled before. Segment Resources: https://www.hackerone.com/ai/snap-ai-red-teaming https://www.hackerone.com/thought-leadership/ai-safety-red-teaming This interview is a bit different from our norm. We talk to the founder and CEO of OpenVPN about what it is like to operate a business based on open source, particularly through trying times like the recent pandemic. How do you compete when your competitors are free to build products using your software and IP? It seems like an oxymoron, but an open source-based business actually has some significant advantages over the closed source commercial approach. In this week's enterprise security news, the first cybersecurity IPO in 3.5 years! new companies new tools the fate of CISA and the cyber safety review board things we learned about AI in 2024 is the humanless SOC possible? NGFWs have some surprising vulnerabilities what did generative music sound like in 1996? All that and more, on this episode of Enterprise Security Weekly. Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-391
HackerOne's co-founder, Michiel Prins walks us through the latest new offensive security service: AI red teaming. At the same time enterprises are globally trying to figure out how to QA and red team generative AI models like LLMs, early adopters are challenged to scale these tests. Crowdsourced bug bounty platforms are a natural place to turn for assistance with scaling this work, though, as we'll discuss on this episode, it is unlike anything bug hunters have ever tackled before. Segment Resources: https://www.hackerone.com/ai/snap-ai-red-teaming https://www.hackerone.com/thought-leadership/ai-safety-red-teaming This interview is a bit different from our norm. We talk to the founder and CEO of OpenVPN about what it is like to operate a business based on open source, particularly through trying times like the recent pandemic. How do you compete when your competitors are free to build products using your software and IP? It seems like an oxymoron, but an open source-based business actually has some significant advantages over the closed source commercial approach. In this week's enterprise security news, the first cybersecurity IPO in 3.5 years! new companies new tools the fate of CISA and the cyber safety review board things we learned about AI in 2024 is the humanless SOC possible? NGFWs have some surprising vulnerabilities what did generative music sound like in 1996? All that and more, on this episode of Enterprise Security Weekly. Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-391
HackerOne's co-founder, Michiel Prins walks us through the latest new offensive security service: AI red teaming. At the same time enterprises are globally trying to figure out how to QA and red team generative AI models like LLMs, early adopters are challenged to scale these tests. Crowdsourced bug bounty platforms are a natural place to turn for assistance with scaling this work, though, as we'll discuss on this episode, it is unlike anything bug hunters have ever tackled before. Segment Resources: https://www.hackerone.com/ai/snap-ai-red-teaming https://www.hackerone.com/thought-leadership/ai-safety-red-teaming Show Notes: https://securityweekly.com/esw-391
Streamline Your Cybersecurity with Flare Here: https://try.flare.io/unsupervised-learning/ In this conversation, I speak with Jason Haddix, founder of Arcanum Security and CISO at Flare. We talk about: Flare's Unique Approach to Threat Intelligence:How Flare's capability to uncover compromised credentials and cookies from the dark web and private forums has been crucial in red team engagements. Challenges of Credential Theft and Advanced Malware Techniques:How adversaries utilize tools like the RedLine Stealer malware to gather credentials, cookies, and other sensitive information, and this stolen data enables attackers to bypass authentication protocols, emphasizing the need for comprehensive exposure management. Jason's Journey To Founding Arcanum & Arcanum's Security Training Programs:How Jason now advises on product development and threat intelligence as Flare's CISO and his journey to fund Arcanum, a company focused on red teaming and cybersecurity, and Arcanum's specialized training programs focusing on offensive security and using AI in security roles. And more Introduction to the Podcast (00:00:00)Guest Excitement on Podcast (00:00:20)Jason's New Business and Flare Role (00:00:24)Career Shift from Ubisoft to Red Teaming (00:01:02)Evolution of Adversary Tactics (00:02:04)Flare's Credential Exposure Management (00:02:58)Synergy Between Arcanum and Flare(00:03:55)Dark Web Credential Compromise (00:04:45)Challenges with Two-Factor Authentication (00:06:25)Cookie Theft and Unauthorized Access (00:07:39)Redline Malware and Its Impact (00:08:12)Flare's Research Capabilities (00:09:50)Potential for Advanced Malware Detection (00:11:40)Expansion of Threat Intelligence Services (00:12:15)Vision for a Unified Security Dashboard (00:13:25)Integrating Threat Intelligence with Identity Management (00:14:00)Credential Update Notifications via API (00:15:54)Automated Credential Management Potential (00:17:28)AI Features in Security Platforms (00:17:32)Exploration of Automated Security Responses (00:18:38)Introduction to Arcanum Security (00:19:25)Overview of Arcanum Training Courses (00:20:25)Necessity for Up-to-Date Training (00:22:15)Guest Experts in Training Sessions (00:23:08)Upcoming Features for Flare (00:25:11)Integrating Vulnerability Management (00:28:08)Accessing Flare's Free Trial (00:28:25)Learning More About Arcanum (00:29:09)Become a Member: https://danielmiessler.com/upgradeSee omnystudio.com/listener for privacy information.
Enjoy this encore episode. The practice of emulating known adversary behavior against an organization's actual defensive posture.
Ben is founder and CEO of watchTowr, building an external attack surface management tool (EASM) that performs automated penetration testing and red teaming activities. Before founding watchTowr in 2021, Ben worked as a security consultant for a decade focused largely on penetration testing. And as Ben describes in the episode, what started as a combination of cobbled together scripts from his previous experience has since grown into a comprehensive automation platform. Website: https://watchtowr.com/ Sponsor: VulnCheck
Enjoy this encore episode. The practice of emulating known adversary behavior against an organization's actual defensive posture. Learn more about your ad choices. Visit megaphone.fm/adchoices