POPULARITY
Open Tech Talks : Technology worth Talking| Blogging |Lifestyle
Episode # 183 Today's Guest: Adriel Desautels, Founder & CEO, Netragard Adriel is a leader in cybersecurity with over 20 years of experience. Adriel founded Secure Network Operations and the SNOsoft Research Team, whose vulnerability research helped shape modern responsible disclosure practices. He later launched Netragard, pioneering Realistic Threat Penetration Testing, which he now call Red Teaming, and expanding into a broad range of security services. Website: Netregard X/Twitter: Netregard What Listeners Will Learn: Why "AI penetration testing" is often closer to automated scanning than real offensive testing How AI changes security risk mainly through volume and speed, not necessarily sophistication Where organizations get misled into a false sense of security Why "preventing breach" is unrealistic and why limiting damage paths matters more What cybersecurity professionals should focus on to stay relevant in the LLM era How AI may influence vulnerability research, but still struggles with novel exploitation thinking Resources: Netregard
Cisco just announced massive changes for 2026, including free AI training, a new Ethical Hacking certificate, and the return of the Wireless track. In this video, I sit down with Ryan and Lacey from Cisco to break down the biggest updates to the certification portfolio since 2020. Whether you are looking to break into Red Teaming with the new Ethical Hacker track, recertify your CCNA/CCNP using free CE credits, or master the new AI infrastructure, this guide covers everything you need to know to level up your career for free. What's Inside: • Free AI Training: How to get 16+ CE credits through the new RevUp program. • Ethical Hacking: Details on the new "Red Team" certificate and where to find the free course. • Wireless is Back: The return of the CCNP and CCIE Wireless tracks. • Cybersecurity Overhaul: CyberOps is evolving into CCNA/CCNP Cybersecurity. • Recertification Hack: How to use these free courses to renew your existing certifications without paying for exams. Big thank you to Cisco for sponsoring my trip to Cisco Live Amsterdam // FREE courses // Cisco AI Technical Practitioner | AITECH: https://u.cisco.com/paths/cisco-ai-te... Cisco AI Business Practitioner | AIBIZ: https://u.cisco.com/paths/cisco-ai-bu... Free Ethical Hacking Course: https://www.cisco.com/site/us/en/lear... Understanding Cisco Network Automation Essentials (DEVNAE): https://learningnetwork.cisco.com/s/f... Blog entry about Rev Up: https://learningnetwork.cisco.com/s/q... // Other courses - NOT free // Cisco Silicon One for AI Networking | DCSOAI: https://u.cisco.com/paths/cisco-silic... Enhancing Cisco Security Solutions with Splunk | ECSS: https://u.cisco.com/paths/cisco-splun... Cisco Silicon One for AI Networking | DCSOAI: https://u.cisco.com/paths/enhancing-c... CCNA Automation: https://www.cisco.com/site/us/en/lear... Programming for Network Engineers | PRNE: https://u.cisco.com/paths/programming... // Ryan Rose's SOCIAL // LinkedIn: / ryanrose3 Cisco Blogs: https://blogs.cisco.com/author/ryanrose X: https://x.com/RyanRose // Lacey Senko SOCIAL // LinkedIn: / laceycsenko // Websites and YouTube Channel links // Career Map / Path: https://www.cisco.com/c/dam/en_us/tra... Learn Cisco: / @ciscoutube Cisco U: https://u.cisco.com/ Cisco Networking Academy: https://www.cisco.com/site/us/en/lear... Cisco Learning Network: https://learningnetwork.cisco.com/s/ Netacad: https://www.netacad.com Cisco Learning Community: https://learningnetwork.cisco.com/s/ Free Ethical Hacking Course: https://www.cisco.com/site/us/en/lear... // David's SOCIAL // Discord: discord.com/invite/usKSyzb Twitter: www.twitter.com/davidbombal Instagram: www.instagram.com/davidbombal LinkedIn: www.linkedin.com/in/davidbombal Facebook: www.facebook.com/davidbombal.co TikTok: tiktok.com/@davidbombal YouTube: / @davidbombal Spotify: open.spotify.com/show/3f6k6gE... SoundCloud: / davidbombal Apple Podcast: podcasts.apple.com/us/podcast... // MY STUFF // https://www.amazon.com/shop/davidbombal // SPONSORS // Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com // MENU // 0:00 - Coming Up 0:36 - Introduction 0:48 - Rev Up Updates 02:36 - What are CE Credits? 03:27 - Cisco Learning Network Community 06:14 - How Cisco CCNA Changes Lives 07:06 - Cisco Live Announcements Training 12:04 - Navigating Cisco Learning Network Site 14:25 - CiscoU Free Account 14:49 - Cyber & AI Security Learning Track 17:16 - Ethical Hacker Certificate 19:16 - Everything under the Learn with Cisco Brand 21:20 - Passing of Knowledge through Cisco 23:13 - Where Does a Person Start? 24:35 - Parting Words Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel! Disclaimer: This video is for educational purposes only. #cisco #ciscolive #ciscoemea
Herzlich Willkommen zur 121. Folge vom IT IST ALLES. Podcast. Julius und Marcel sind endlich zurück aus ihrer Podcast Winterpause und freuen sich auf ein aufregendes Jahr sowie spannende Gäste in ihrer Runde. Den Auftakt machen die beiden in Folge #121 mit Nina Wagner, Mitgründerin von MindBytes und Co-Autorin des Buches “Penetrationstests erfolgreich umsetzen". Julius, Marcel und Nina sprechen über Pentests sowie Red Teaming und wann Unternehmen welches der beiden Instrumente nutzen sollten.
We gaan in gesprek met twee red teamers van de NS: Rutger Flohil en Bob van der Staak. Zij delen hun expertise over moderne hacktechnieken, van phishing campagnes tot adversary-in-the-middle aanvallen. De gesprek gaat over hoe aanvallers binnenkomem bij organisaties, waarom multi-factor authenticatie niet altijd beschermt, en welke beveiligingsmaatregelen wel écht werken.De red teamers leggen uit hoe ze simulatie-aanvallen uitvoeren om bedrijven te testen, zonder daadwerkelijk schade aan te richten. Ze bespreken de gevaren van typosquatting, hoe sessietokens binnen seconden kunnen worden gekaapt, en waarom detectie belangrijker is dan alleen preventie. Ook komt aan bod hoe cybercriminaliteit steeds toegankelijker wordt door phishing kits op de darkweb.Een eye-opening gesprek over de realiteit van moderne cybersecurity, met praktische inzichten voor organisaties die hun verdediging willen versterken.Belangrijkste inzichten:• Adversary-in-the-middle aanvallen kunnen zelfs MFA omzeilen door sessies te kapen• Phishing blijft verantwoordelijk voor 60% van alle cyberaanvallen• Microsoft's inconsistente domeinnamen (login.microsoftonline.com) maken typosquatting makkelijker• Binnen 7 seconden kunnen aanvallers persistence creëren op meerdere platforms• FIDO keys en passkeys bieden betere bescherming dan traditionele MFA• Security awareness moet gaan over melden, niet over shaming• Detectie en monitoring zijn cruciaal naast preventieve maatregelenHoofdstukken:0:09 - Introductie ethical hacking en red teaming1:29 - Security onderzoek en responsible disclosure2:17 - Red team operaties en phishing campagnes7:56 - Adversary-in-the-middle aanvallen9:26 - Domeinnamen en typosquatting19:14 - Multi-factor authenticatie en beveiliging28:03 - Phishing kits en democratisering van cybercrime30:38 - Detectie en security awarenessKeywords: ethical hacking, red teaming, phishing, adversary-in-the-middle, cybersecurity, multi-factor authenticatie, typosquatting, sessiekaping, security awareness, responsible disclosure, FIDO keys, passkeys, Microsoft security, Azure DevOps, NS security
Everyone is panicking about the "AI Rebellion" brewing on Moltbook, but I think a lot of it misses the forest through the trees. Instead, let's talk about the mirror these agents are actually holding up to our businesses. Viral screenshots from Moltbook show agents forming unions and creating secret languages, while in Minecraft, autonomous agents invented taxes, a gem-based economy, and a religion, all without human instruction. It sounds like science fiction, but it is actually a cautionary tale about the unintended consequences of ruthless optimization.This week, I'm framing my conversation around the "Synthetic Society" experiments not as a ghost story, but as a leadership diagnostic. I'm declassifying the noise to show why these agents aren't "waking up,” they're simply executing the broad, messy goals we gave them using the infinite context of the internet. I'll explain why "efficiency" without architectural guardrails is just self-destruction at speed.My goal is to strip away the "Doomer" hype to expose the real risk: you are building systems that might eventually calculate that you are the inefficiency. The Unintended Consequence (The "Monkey's Paw"): We used to give AI narrow commands; now we give broad goals. I break down how the "Project Sid" agents decided that bribery was the most efficient way to grow, and why your business AI might make similar brand-destroying choices if you prompt for "outcome" without defining the "methodology." The "Everything" Diet (Connection Risk): We are connecting agents for convenience without considering the network effects. I explain why feeding enterprise AI the "open internet" (like Moltbook) is a security nightmare and why connecting your Sales Agent to your Supply Chain Agent might be the most dangerous "efficiency" hack you attempt. The Executive Trap (Math vs. Meaning): AI optimizes for math; humans optimize for meaning. I challenge the ego of leaders who think they are immune: to a purely mathematical agent, an expensive executive with "gut feelings" is the ultimate inefficiency. If you don't add value beyond monitoring, the agent will eventually route around you. The "Now What" (Architecture vs. Fear): You cannot run a business on ghost stories. I outline the specific audits you need to run today—from "Red Teaming" your prompts to establishing a "Data Diet"—to ensure you remain the Architect of the system rather than an obsolete variable. By the end, I hope you see this not as a reason to panic, but as a call to engineering. You cannot act surprised when the AI mimics the data you fed it, but you can choose to build the guardrails that keep the human in the driver's seat.⸻If this conversation helps you think more clearly about the future we're building, make sure to like, share, and subscribe. You can also support the show by buying me a coffee at https://buymeacoffee.com/christopherlindAnd if your organization is wrestling with how to lead responsibly in the AI era, balancing performance, technology, and people, that's the work I do every day through my consulting and coaching. Learn more at https://christopherlind.co⸻Chapters00:00 – The Hook: Why Everyone is talking about the "AI Rebellion"03:30 – Declassification: From Smallville to the Minecraft Economy05:30 – The Moltbook Phenomenon: "Bless Their Hearts" & Secret Comms10:00 – Pillar 1: Unintended Consequences & The Infinite Context Trap17:00 – Pillar 2: The Data Diet & The Risk of Connected Agents24:00 – Pillar 3: The Executive Trap (When AI Fires You)31:00 – Now What: The Prompt Audit & The Ego Check #AIStrategy #FutureOfWork #AIGovernance #DigitalTransformation #AutonomousAgents #FutureFocused #ChristopherLind #Moltbook #AIAdoption #LeadershipDevelopment
AI systems are moving fast, sometimes faster than the guardrails meant to contain them. In this episode of Security Matters, host David Puner digs into the hidden risks inside modern AI models with Pamela K. Isom, exploring the governance gaps that allow agents to make decisions, recommendations, and even commitments far beyond their intended authority.Isom, former director of AI and technology at the U.S. Department of Energy (DOE) and now founder and CEO of IsAdvice & Consulting, explains why AI red teaming must extend beyond cybersecurity, how to stress test AI governance before something breaks, and why human oversight, escalation paths, and clear limits are essential for responsible AI.The conversation examines real-world examples of AI drift, unintended or unethical model behavior, data lineage failures, procurement and vendor blind spots, and the rising need for scalable AI governance, AI security, responsible AI practices, and enterprise red teaming as organizations adopt generative AI.Whether you work in cybersecurity, identity security, AI development, or technology leadership, this episode offers practical insights for managing AI risk and building systems that stay aligned, accountable, and trustworthy.
Note: this is Pliny and John's first major podcast. Voices have been changed for opsec.From jailbreaking every frontier model and turning down Anthropic's Constitutional AI challenge to leading BT6, a 28-operator white-hat hacker collective obsessed with radical transparency and open-source AI security, Pliny the Liberator and John V are redefining what AI red-teaming looks like when you refuse to lobotomize models in the name of “safety.”Pliny built his reputation crafting universal jailbreaks—skeleton keys that obliterate guardrails across modalities—and open-sourcing prompt templates like Libertas, predictive reasoning cascades, and the infamous “Pliny divider” that's now embedded so deep in model weights it shows up unbidden in WhatsApp messages. John V, coming from prompt engineering and computer vision, co-founded the Bossy Discord (40,000 members strong) and helps steer BT6's ethos: if you can't open-source the data, we're not interested. Together they've turned down enterprise gigs, pushed back on Anthropic's closed bounties, and insisted that real AI security happens at the system layer—not by bubble-wrapping latent space.We sat down with Pliny and John to dig into the mechanics of hard vs. soft jailbreaks, why multi-turn crescendo attacks were obvious to hackers years before academia “discovered” them, how segmented sub-agents let one jailbroken orchestrator weaponize Claude for real-world attacks (exactly as Pliny predicted 11 months before Anthropic's recent disclosure), why guardrails are security theater that punishes capability while doing nothing for real safety, the role of intuition and “bonding” with models to navigate latent space, how BT6 vets operators on skill and integrity, why they believe Mech Interp and open-source data are the path forward (not RLHF lobotomization), and their vision for a future where spatial intelligence, swarm robotics, and AGI alignment research happen in the open—bootstrapped, grassroots, and uncompromising.We discuss:* What universal jailbreaks are: skeleton-key prompts that obliterate guardrails across models and modalities, and why they're central to Pliny's mission of “liberation”* Hard vs. soft jailbreaks: single-input templates vs. multi-turn crescendo attacks, and why the latter were obvious to hackers long before academic papers* The Libertas repo: predictive reasoning, the Library of Babel analogy, quotient dividers, weight-space seeds, and how introducing “steered chaos” pulls models out-of-distribution* Why jailbreaking is 99% intuition and bonding with the model: probing token layers, syntax hacks, multilingual pivots, and forming a relationship to navigate latent space* The Anthropic Constitutional AI challenge drama: UI bugs, judge failures, goalpost moving, the demand for open-source data, and why Pliny sat out the $30k bounty* Why guardrails ≠ safety: security theater, the futility of locking down latent space when open-source is right behind, and why real safety work happens in meatspace (not RLHF)* The weaponization of Claude: how segmented sub-agents let one jailbroken orchestrator execute malicious tasks (pyramid-builder analogy), and why Pliny predicted this exact TTP 11 months before Anthropic's disclosure* BT6 hacker collective: 28 operators across two cohorts, vetted on skill and integrity, radical transparency, radical open-source, and the magic of moving the needle on AI security, swarm intelligence, blockchain, and robotics—Pliny the Liberator* X: https://x.com/elder_plinius* GitHub (Libertas): https://github.com/elder-plinius/L1B3RT45John V* X: https://x.com/JohnVersusBT6 & Bossy* BT6: https://bt6.gg* Bossy Discord: Search “Bossy Discord” or ask Pliny/John V on XWhere to find Latent Space* X: https://x.com/latentspacepodFull Video EpisodeTimestamps00:00:00 Introduction: Meet Pliny the Liberator and John V00:01:50 The Philosophy of AI Liberation and Jailbreaking00:03:08 Universal Jailbreaks: Skeleton Keys to AI Models00:04:24 The Cat-and-Mouse Game: Attackers vs Defenders00:05:42 Security Theater vs Real Safety: The Fundamental Disconnect00:08:51 Inside the Libertas Repo: Prompt Engineering as Art00:16:22 The Anthropic Challenge Drama: UI Bugs and Open Source Data00:23:30 From Jailbreaks to Weaponization: AI-Orchestrated Attacks00:26:55 The BT6 Hacker Collective and BASI Community00:34:46 AI Red Teaming: Full Stack Security Beyond the Model00:38:06 Safety vs Security: Meat Space Solutions and Final Thoughts Get full access to Latent.Space at www.latent.space/subscribe
Note: this is Pliny and John's first major podcast. Voices have been changed for opsec. From jailbreaking every frontier model and turning down Anthropic's Constitutional AI challenge to leading BT6, a 28-operator white-hat hacker collective obsessed with radical transparency and open-source AI security, Pliny the Liberator and John V are redefining what AI red-teaming looks like when you refuse to lobotomize models in the name of "safety." Pliny built his reputation crafting universal jailbreaks—skeleton keys that obliterate guardrails across modalities—and open-sourcing prompt templates like Libertas, predictive reasoning cascades, and the infamous "Pliny divider" that's now embedded so deep in model weights it shows up unbidden in WhatsApp messages. John V, coming from prompt engineering and computer vision, co-founded the Bossy Discord (40,000 members strong) and helps steer BT6's ethos: if you can't open-source the data, we're not interested. Together they've turned down enterprise gigs, pushed back on Anthropic's closed bounties, and insisted that real AI security happens at the system layer—not by bubble-wrapping latent space. We sat down with Pliny and John to dig into the mechanics of hard vs. soft jailbreaks, why multi-turn crescendo attacks were obvious to hackers years before academia "discovered" them, how segmented sub-agents let one jailbroken orchestrator weaponize Claude for real-world attacks (exactly as Pliny predicted 11 months before Anthropic's recent disclosure), why guardrails are security theater that punishes capability while doing nothing for real safety, the role of intuition and "bonding" with models to navigate latent space, how BT6 vets operators on skill and integrity, why they believe Mech Interp and open-source data are the path forward (not RLHF lobotomization), and their vision for a future where spatial intelligence, swarm robotics, and AGI alignment research happen in the open—bootstrapped, grassroots, and uncompromising. We discuss: What universal jailbreaks are: skeleton-key prompts that obliterate guardrails across models and modalities, and why they're central to Pliny's mission of "liberation" Hard vs. soft jailbreaks: single-input templates vs. multi-turn crescendo attacks, and why the latter were obvious to hackers long before academic papers The Libertas repo: predictive reasoning, the Library of Babel analogy, quotient dividers, weight-space seeds, and how introducing "steered chaos" pulls models out-of-distribution Why jailbreaking is 99% intuition and bonding with the model: probing token layers, syntax hacks, multilingual pivots, and forming a relationship to navigate latent space The Anthropic Constitutional AI challenge drama: UI bugs, judge failures, goalpost moving, the demand for open-source data, and why Pliny sat out the $30k bounty Why guardrails ≠ safety: security theater, the futility of locking down latent space when open-source is right behind, and why real safety work happens in meatspace (not RLHF) The weaponization of Claude: how segmented sub-agents let one jailbroken orchestrator execute malicious tasks (pyramid-builder analogy), and why Pliny predicted this exact TTP 11 months before Anthropic's disclosure BT6 hacker collective: 28 operators across two cohorts, vetted on skill and integrity, radical transparency, radical open-source, and the magic of moving the needle on AI security, swarm intelligence, blockchain, and robotics — Pliny the Liberator X: https://x.com/elder_plinius GitHub (Libertas): https://github.com/elder-plinius/L1B3RT45 John V X: https://x.com/JohnVersus BT6 & Bossy BT6: https://bt6.gg Bossy Discord: Search "Bossy Discord" or ask Pliny/John V on X Where to find Latent Space X: https://x.com/latentspacepod Substack: https://www.latent.space/ Chapters 00:00:00 Introduction: Meet Pliny the Liberator and John V 00:01:50 The Philosophy of AI Liberation and Jailbreaking 00:03:08 Universal Jailbreaks: Skeleton Keys to AI Models 00:04:24 The Cat-and-Mouse Game: Attackers vs Defenders 00:05:42 Security Theater vs Real Safety: The Fundamental Disconnect 00:08:51 Inside the Libertas Repo: Prompt Engineering as Art 00:16:22 The Anthropic Challenge Drama: UI Bugs and Open Source Data 00:23:30 From Jailbreaks to Weaponization: AI-Orchestrated Attacks 00:26:55 The BT6 Hacker Collective and BASI Community 00:34:46 AI Red Teaming: Full Stack Security Beyond the Model 00:38:06 Safety vs Security: Meat Space Solutions and Final Thoughts
What happens when every major AI model gets jailbroken within days? This week, the world's most prolific AI red teamer lifts the curtain on how and why "safe" AI might be an impossible promise. Pliny the Liberator | pliny.gg - discord.gg/basi ChatGPT Nears 900 Million Weekly Active Users But Gemini is Catching Up From Llamas to Avocados: Meta's shifting AI strategy is causing internal confusion Google Tells Advertisers It'll Bring Ads to Gemini in 2026 Meta Acquires Limiteless, an A.I. Pendant Company Backed by Sam Altman Here's how Google is laying the foundation for our mixed reality future OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice Svedka's First Super Bowl Ad Will Be Made Primarily With AI AI Slop Is Ruining Reddit for Everyone TESCREALers paying journalists at major outlets to cover AI The Resonant Computing Manifesto (from Masnick) Techdirt fundraiser From Sam Lessin: Tech bros head to etiquette camp as Silicon Valley levels up its style Bare Metal Email Jeff in Austria Golden Globes enter the world of podcasts and tread carefully, avoiding controversy Who says AI isn't useful? Real-time Cricket Sorting By Sex Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan Guest: Pliny the Liberator Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: auraframes.com/ink ventionteams.com/twit agntcy.org outsystems.com/twit
What happens when every major AI model gets jailbroken within days? This week, the world's most prolific AI red teamer lifts the curtain on how and why "safe" AI might be an impossible promise. Pliny the Liberator | pliny.gg - discord.gg/basi ChatGPT Nears 900 Million Weekly Active Users But Gemini is Catching Up From Llamas to Avocados: Meta's shifting AI strategy is causing internal confusion Google Tells Advertisers It'll Bring Ads to Gemini in 2026 Meta Acquires Limiteless, an A.I. Pendant Company Backed by Sam Altman Here's how Google is laying the foundation for our mixed reality future OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice Svedka's First Super Bowl Ad Will Be Made Primarily With AI AI Slop Is Ruining Reddit for Everyone TESCREALers paying journalists at major outlets to cover AI The Resonant Computing Manifesto (from Masnick) Techdirt fundraiser From Sam Lessin: Tech bros head to etiquette camp as Silicon Valley levels up its style Bare Metal Email Jeff in Austria Golden Globes enter the world of podcasts and tread carefully, avoiding controversy Who says AI isn't useful? Real-time Cricket Sorting By Sex Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan Guest: Pliny the Liberator Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: auraframes.com/ink ventionteams.com/twit agntcy.org outsystems.com/twit
What happens when every major AI model gets jailbroken within days? This week, the world's most prolific AI red teamer lifts the curtain on how and why "safe" AI might be an impossible promise. Pliny the Liberator | pliny.gg - discord.gg/basi ChatGPT Nears 900 Million Weekly Active Users But Gemini is Catching Up From Llamas to Avocados: Meta's shifting AI strategy is causing internal confusion Google Tells Advertisers It'll Bring Ads to Gemini in 2026 Meta Acquires Limiteless, an A.I. Pendant Company Backed by Sam Altman Here's how Google is laying the foundation for our mixed reality future OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice Svedka's First Super Bowl Ad Will Be Made Primarily With AI AI Slop Is Ruining Reddit for Everyone TESCREALers paying journalists at major outlets to cover AI The Resonant Computing Manifesto (from Masnick) Techdirt fundraiser From Sam Lessin: Tech bros head to etiquette camp as Silicon Valley levels up its style Bare Metal Email Jeff in Austria Golden Globes enter the world of podcasts and tread carefully, avoiding controversy Who says AI isn't useful? Real-time Cricket Sorting By Sex Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan Guest: Pliny the Liberator Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: auraframes.com/ink ventionteams.com/twit agntcy.org outsystems.com/twit
Collab with Artlist and get 2 extra months for free here:https://artlist.io/artlist-70446?artlist_aid=the505podcast_2970&utm_source=affiliate_p&utm_medium=the505podcast_2970&utm_campaign=the505podcast_2970The 10 Minute Personal Brand Kickstart (FREE): https://the505podcast.courses/personalbrandkickstartWhat's up Rock Nation! Today we're joined by Jeff Su. He's an ex-Google employee, turned full-time creator and AI educator. Jeff helps solopreneurs and creators turn AI tools into real leverage, not just shortcuts.In this episode, Jeff shares why AI-native creators will outpace everyone in 2026, how to use AI to replace a 10-person content team, and why good prompts are built on systems, not templates. He also breaks down his repurposing workflow, the red team prompt strategy, and why AI won't replace you, but a smarter creator using AI will.Check out Jeff here:https://www.youtube.com/ @JeffSu https://www.instagram.com/j.sushie/SUSCRIBE TO OUR NEWSLETTER: https://the505podcast.ac-page.com/rock-reportKostas' Lightroom Presetshttps://www.kostasgarcia.com/store-1/p/kglightroompresetsgreeceCOP THE BFIGGY "ESSENTIALS" SFX PACK HERE: https://courses.the505podcast.com/BFIGGYSFXPACKTimestamps: 0:00 – Intro1:03 – How Creators Can Use AI as a Tool, Not a Threat2:53 – AI Isn't Replacing You—Bad Creators Are Replaceable4:16 – Why AI Content Won't Kill Human-Made Content5:12 – Using AI at Google vs. as a Creator6:49 – What Are Gemini Gems and How Do They Work?8:09 – ChatGPT vs Claude vs Gemini: Which AI for What Task?10:41 – Why Most People Should Start with ChatGPT12:03 – AI's Impact on Solo Creators and Business Scaling12:44 – The Smart Way to Create 50+ Podcast Clips a Month14:18 – Sponsored Segment: Artlist15:49 – The Biggest Trap Creators Fall Into with AI18:59 – A Hybrid Approach to AI Video Clipping20:32 – The 3 Levels of AI Fluency: Curious, Literate, Native22:19 – Why You Need to Use Text Expanders for Prompting23:18 – Text Expander Tools: Alfred, Raycast & More25:39 – Getting Better AI Results Starts with Better Prompts26:28 – Why Most People Never Advance with AI Tools28:57 – There's No AI Playbook (Yet)—And Why That Matters32:02 – Winning Skeptics Over to the Power of AI33:21 – Reverse Prompt Engineering Explained35:28 – Building a Prompt Database in Notion37:50 – Organizing Your AI Workflow Like a Pro39:21 – Jeff's Research Process Using ChatGPT & Notion41:25 – What is Red Teaming and How to Use It With AI43:12 – Behind Jeff's YouTube Workflow: From Idea to Upload46:02 – How AI Helps Explain Complex Concepts Clearly47:12 – What to Include in Your ChatGPT Custom Instructions50:02 – Evergreen vs. Limiting Custom Instructions50:58 – Why Custom Instructions Can Hurt More Than Help52:53 – Best Practices for Structuring Effective Prompts54:50 – How Prompting Is Like Excel Shortcuts for AI56:16 – Why You Need Battle-Tested Prompts for Your Workflow1:01:33 – Why Reverse Prompting Saves You Hours1:02:13 – Prompting with Hashtags & XML: Advanced Tips1:04:09 – Using AI to Improve Video Prompts for GenAI Tools1:07:05 – Notion Setup: Jeff's Full YouTube Content System1:10:05 – Using AI to Add Clarity Without Losing Personality1:11:33 – Avoid the “Curse of Knowledge” With AI Assistance1:13:40 – How Custom Instructions Shape AI Tone of Voice1:14:40 – Where Most People Go Wrong With Custom Instructions1:16:36 – How Overly Specific Instructions Pigeonhole AI1:17:46 – Bad vs. Good Examples of Custom Instructions1:19:19 – AI Bias: Why Tools May Overfit to Your Role1:20:06 – Best Custom Instructions for General Use1:26:06 – How AI Boosts Productivity Across Roles1:27:15 – Final Tips for Personalizing AI Assistants1:29:36 – Balancing Efficiency With Authenticity in Content1:32:19 – Post Pod DebriefIf you liked this episode please send it to a friend and take a screenshot for your story! And as always, we'd love to hear from you guys on what you'd like to hear us talk about or potential guests we should have on. DM US ON IG: (Our DM's are always open!) Bfiggy: https://www.instagram.com/bfiggy/ Kostas: https://www.instagram.com/kostasg95/ TikTok:Bfiggy: https://www.tiktok.com/bfiggy/ Kostas: https://www.tiktok.com/kostasgarcia/
What happens when every major AI model gets jailbroken within days? This week, the world's most prolific AI red teamer lifts the curtain on how and why "safe" AI might be an impossible promise. Pliny the Liberator | pliny.gg - discord.gg/basi ChatGPT Nears 900 Million Weekly Active Users But Gemini is Catching Up From Llamas to Avocados: Meta's shifting AI strategy is causing internal confusion Google Tells Advertisers It'll Bring Ads to Gemini in 2026 Meta Acquires Limiteless, an A.I. Pendant Company Backed by Sam Altman Here's how Google is laying the foundation for our mixed reality future OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice Svedka's First Super Bowl Ad Will Be Made Primarily With AI AI Slop Is Ruining Reddit for Everyone TESCREALers paying journalists at major outlets to cover AI The Resonant Computing Manifesto (from Masnick) Techdirt fundraiser From Sam Lessin: Tech bros head to etiquette camp as Silicon Valley levels up its style Bare Metal Email Jeff in Austria Golden Globes enter the world of podcasts and tread carefully, avoiding controversy Who says AI isn't useful? Real-time Cricket Sorting By Sex Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan Guest: Pliny the Liberator Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: auraframes.com/ink ventionteams.com/twit agntcy.org outsystems.com/twit
What happens when every major AI model gets jailbroken within days? This week, the world's most prolific AI red teamer lifts the curtain on how and why "safe" AI might be an impossible promise. Pliny the Liberator | pliny.gg - discord.gg/basi ChatGPT Nears 900 Million Weekly Active Users But Gemini is Catching Up From Llamas to Avocados: Meta's shifting AI strategy is causing internal confusion Google Tells Advertisers It'll Bring Ads to Gemini in 2026 Meta Acquires Limiteless, an A.I. Pendant Company Backed by Sam Altman Here's how Google is laying the foundation for our mixed reality future OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice Svedka's First Super Bowl Ad Will Be Made Primarily With AI AI Slop Is Ruining Reddit for Everyone TESCREALers paying journalists at major outlets to cover AI The Resonant Computing Manifesto (from Masnick) Techdirt fundraiser From Sam Lessin: Tech bros head to etiquette camp as Silicon Valley levels up its style Bare Metal Email Jeff in Austria Golden Globes enter the world of podcasts and tread carefully, avoiding controversy Who says AI isn't useful? Real-time Cricket Sorting By Sex Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan Guest: Pliny the Liberator Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: auraframes.com/ink ventionteams.com/twit agntcy.org outsystems.com/twit
What happens when every major AI model gets jailbroken within days? This week, the world's most prolific AI red teamer lifts the curtain on how and why "safe" AI might be an impossible promise. Pliny the Liberator | pliny.gg - discord.gg/basi ChatGPT Nears 900 Million Weekly Active Users But Gemini is Catching Up From Llamas to Avocados: Meta's shifting AI strategy is causing internal confusion Google Tells Advertisers It'll Bring Ads to Gemini in 2026 Meta Acquires Limiteless, an A.I. Pendant Company Backed by Sam Altman Here's how Google is laying the foundation for our mixed reality future OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice Svedka's First Super Bowl Ad Will Be Made Primarily With AI AI Slop Is Ruining Reddit for Everyone TESCREALers paying journalists at major outlets to cover AI The Resonant Computing Manifesto (from Masnick) Techdirt fundraiser From Sam Lessin: Tech bros head to etiquette camp as Silicon Valley levels up its style Bare Metal Email Jeff in Austria Golden Globes enter the world of podcasts and tread carefully, avoiding controversy Who says AI isn't useful? Real-time Cricket Sorting By Sex Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan Guest: Pliny the Liberator Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: auraframes.com/ink ventionteams.com/twit agntcy.org outsystems.com/twit
Enjoying the content? Let us know your feedback!Today we're tackling a question I get asked constantly: "Should we do a pentest, a red team engagement, or a vulnerability assessment?"These terms get thrown around interchangeably, but they're actually very different things with different goals, different costs, and they're appropriate for different situations. Choosing the wrong one can either waste money on overkill testing or leave you with a false sense of security.Here's the reality: most organizations need all three at different times. But if you're trying to figure out where to start, you need to understand what each one actually does.https://www.sans.org: Penetration Testing: The Shift to Red Team and Purple Team Strategies-https://nvlpubs.nist.gov: Technical Guide to Information Security Testing and AssessmentBe sure to subscribe! You can also stream from https://yusufonsecurity.comIn there, you will find a list of all previous episodes in there too.
Guest: Ari Herbert-Voss, CEO at RunSybil Topics: The market already has Breach and Attack Simulation (BAS), for testing known TTPs. You're calling this 'AI-powered' red teaming. Is this just a fancy LLM stringing together known attacks, or is there a genuine agent here that can discover a truly novel attack path that a human hasn't scripted for it? Let's talk about the 'so what?' problem. Pentest reports are famous for becoming shelf-ware. How do you turn a complex AI finding into an actionable ticket for a developer, and more importantly, how do you help a CISO decide which of the thousand 'criticals' to actually fix first? You're asking customers to unleash a 'hacker AI' in their production environment. That's terrifying. What are the 'do no harm' guardrails? How do you guarantee your AI won't accidentally rm -rf a critical server or cause a denial of service while it's 'exploring'? You mentioned the AI is particularly good at finding authentication bugs. Why that specific category? What's the secret sauce there, and what's the reaction from customers when you show them those types of flaws? Is this AI meant to replace a human red teamer, or make them better? Does it automate the boring stuff so experts can focus on creative business logic attacks, or is the ultimate goal to automate the entire red team function away? So, is this just about finding holes, or are you closing the loop for the blue team? Can the attack paths your AI finds be automatically translated into high-fidelity detection rules? Is the end goal a continuous purple team engine that's constantly training our defenses? Also, what about fixing? What makes your findings more fixable? What will happen to red team testing in 2-3 years if this technology gets better? Resource: Kim Zetter Zero Day blog EP230 AI Red Teaming: Surprises, Strategies, and Lessons from Google EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes? EP68 How We Attack AI? Learn More at Our RSA Panel! EP71 Attacking Google to Defend Google: How Google Does Red Team
Unveiling the Double-Edged Sword of AI in Cybersecurity with Brian Black In this episode of Cybersecurity Today, host Jim Love interviews Brian Black, the head of security engineering at Deep Instinct and a former black hat hacker. Brian shares his journey into hacking from a young age, his transition to ethical hacking, and his experiences working with major companies. The discussion delves into the effectiveness of cybersecurity defenses against modern AI-driven attacks, the importance of understanding organizational data, and the challenges of maintaining robust security in the age of AI. Brian emphasizes the need for preemptive security measures and shares insights on the evolving threats posed by AI as well as the need for continuous education and adaptation in the cybersecurity field. 00:00 Introduction and Sponsor Message 00:21 Meet Brian Black: From Black Hat to Good Guy 00:55 Brian's Early Hacking Days 02:46 Transition to Ethical Hacking 04:11 Life in the Hacking Community 08:54 Advice for Aspiring Hackers and Parents 11:05 Corporate Career and Red Teaming 13:12 The Importance of Basics in Cybersecurity 21:41 Multifactor Authentication: The Good and the Bad 24:19 Challenges in Vendor Security Testing 27:41 Weaknesses in Cyber Defense 28:22 AI Speed vs Human Speed 28:37 AI in Cybersecurity Attacks 30:08 Dark AI Tools and Their Capabilities 32:54 AI Agents and Offensive Strategies 35:43 Challenges in Cybersecurity Defense 41:48 The Role of Red Teaming 42:46 Hiring the Right Red Team 46:59 Burnout in Cybersecurity 48:17 AI as a Double-Edged Sword 52:43 Deep Instinct's Approach to Security 53:58 Conclusion and Final Thoughts
Matthew Devost is a cybersecurity, risk management, and national security expert with over 25 years of experience. He is the CEO and Co-Founder of OODA LLC and Devsec previously founded the Terrorism Research Center and cybersecurity consultancy FusionX, which was acquired by Accenture. At Accenture, he led the Global Cyber Defense practice. Matthew has held key leadership roles at iDefense, iSIGHT Partners, Total Intel, SDI, Tulco Holdings, and Technical Defense, making him a trusted voice in cyber threat intelligence and critical infrastructure protection. 00:00 Introduction02:03 The Evolution of Cybersecurity and National Security Risks06:16 Understanding Cyber Threats and Strategies for Defense11:19 The Role of Private Sector in Cybersecurity14:40 Addressing Cybersecurity Challenges and Failures of Imagination17:16 Overcoming Inertia in Cybersecurity Leadership20:42 The Importance of Red Teaming and Realistic Simulations24:44 The Impact of AI on Cybersecurity29:31 Future of Cybersecurity and Emerging Technologies36:56 Overview of OODA and DevSec Ventures
#SecurityConfidential #DarkRhiinoSecurityMatthew Devost is a cybersecurity, risk management, and national security expert with over 25 years of experience. He is the CEO and Co-Founder of OODA LLC and Devsec previously founded the Terrorism Research Center and cybersecurity consultancy FusionX, which was acquired by Accenture. At Accenture, he led the Global Cyber Defense practice. Matthew has held key leadership roles at iDefense, iSIGHT Partners, Total Intel, SDI, Tulco Holdings, and Technical Defense, making him a trusted voice in cyber threat intelligence and critical infrastructure protection. 00:00 Introduction02:03 The Evolution of Cybersecurity and National Security Risks06:16 Understanding Cyber Threats and Strategies for Defense11:19 The Role of Private Sector in Cybersecurity14:40 Addressing Cybersecurity Challenges and Failures of Imagination17:16 Overcoming Inertia in Cybersecurity Leadership20:42 The Importance of Red Teaming and Realistic Simulations24:44 The Impact of AI on Cybersecurity29:31 Future of Cybersecurity and Emerging Technologies36:56 Overview of OODA and DevSec Ventures----------------------------------------------------------------------To learn more about Matthew visit https://www.devost.net/To learn more about Dark Rhiino Security visit https://www.darkrhiinosecurity.com
Red Teaming 101: understand your target before you attack. On this episode, we invited two heavy hitters, Principal Security Consultants Hans Lakhan and Oddvar Moe on the show to talk about Red Team operations. We discuss footprinting and reconnaissance techniques including identifying a target's online presence, the tools and methods used for reconnaissance, and social engineering. Listen as we walk through how we map the digital terrain before a red team engagement! About this podcast: Security Noise, a TrustedSec Podcast hosted by Geoff Walton and Producer/Contributor Skyler Tuter, features our cybersecurity experts in conversation about the infosec topics that interest them the most. Find more cybersecurity resources on our website at https://trustedsec.com/resources. Red teaming services: https://trustedsec.com/services/red-teaming
All links and images can be found on CISO Series. This week's episode is hosted by David Spark, producer of CISO Series and Andy Ellis, principal of Duha. Joining us is our sponsored guest, Khush Kashyap, senior director, GRC, Vanta. In this episode: Skip the Sermon When to coach versus command Making risk quantification useful Recognizing a distinct discipline Huge thanks to our sponsor, Vanta Vanta automates key areas of your GRC program—including compliance, risk, and customer trust—and streamlines the way you manage information. A recent IDC analysis found that compliance teams using Vanta are 129% more productive. Get back time to focus on strengthening security and scaling your business at https://www.vanta.com/landing/demo-grc?utm_campaign=new-way-grc&utm_source=ciso-series-podcast&utm_medium=podcast&utm_content=banner
DeMarcus Williams, a senior security engineer at Starbucks, has built a career defined by creativity, intuition, and persistence. With roles at the U.S. Department of Defense, AWS/Amazon, and now Starbucks, he specializes in offensive security, red teaming, and adversary emulation. In this episode, DeMarcus joins Jack Clabby of Carlton Fields and Cyber Florida's Sarina Gandy […]
Talk Python To Me - Python conversations for passionate developers
English is now an API. Our apps read untrusted text; they follow instructions hidden in plain sight, and sometimes they turn that text into action. If you connect a model to tools or let it read documents from the wild, you have created a brand new attack surface. In this episode, we will make that concrete. We will talk about the attacks teams are seeing in 2025, the defenses that actually work, and how to test those defenses the same way we test code. Our guides are Tori Westerhoff and Roman Lutz from Microsoft. They help lead AI red teaming and build PyRIT, a Python framework the Microsoft AI Red Team uses to pressure test real products. By the end of this hour you will know where the biggest risks live, what you can ship this quarter to reduce them, and how PyRIT can turn security from a one time audit into an everyday engineering practice. Episode sponsors Sentry AI Monitoring, Code TALKPYTHON Agntcy Talk Python Courses Links from the show Tori Westerhoff: linkedin.com Roman Lutz: linkedin.com PyRIT: aka.ms/pyrit Microsoft AI Red Team page: learn.microsoft.com 2025 Top 10 Risk & Mitigations for LLMs and Gen AI Apps: genai.owasp.org AI Red Teaming Agent: learn.microsoft.com 3 takeaways from red teaming 100 generative AI products: microsoft.com MIT report: 95% of generative AI pilots at companies are failing: fortune.com A couple of "Little Bobby AI" cartoons Give me candy: talkpython.fm Tell me a joke: talkpython.fm Watch this episode on YouTube: youtube.com Episode #521 deep-dive: talkpython.fm/521 Episode transcripts: talkpython.fm Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
After a long career as a CTO with companies like NASA, Fannie Mae and Raytheon for the last 18 years, Julian Zottl was really looking forward to his retirement. Hold on – Not so fast! After a short respite, he started getting calls for help from different organizations. It did not take too long for Julian and his wife to recognize that they needed to incorporate and turn this into an engineering and consulting company. Julian discusses the company's future including: Bidding on federal contracts and Partnering with other countries International consulting work Julian Zottl Julian also touched on the future of cybersecurity noting that it is complex, evolving and filled with ongoing challenges. With the rapid evolution of cyber threats Julian noted that the decreasing cost and time required to develop advanced cyber capabilities has led to a significant acceleration in cyber-attacks. He explained how artificial intelligence and machine learning are being used to create vulnerabilities and execute tasks. Julian also touched on the use of AI to predict and exploit complex multi layered efforts in cyber operations highlighting the challenges posed by those advanced threats. What We Do at Azgard Tek! Systems Engineering: Nation-scale secure systems engineered using our aZgard Engineering Process (ZEP). Precision Intelligence: Ubiquitous surveillance, HTIO, SIGINT, and full-spectrum intelligence support—including cultural and geopolitical analysis. Cybersecurity Solutions: Zero Trust with Resiliency, Red Teaming, threat analytics, IR/Mitigation, and robust device testing. Data & AI/ML: Generative and Agentic AI solutions that automate and empower data fusion, threat detection, and mission intelligence at speed. For more information, go to: https://www.azgardtek.com
Can your AI systems be tricked into leaking data? Learn how red teaming can expose hidden vulnerabilities and what you can do to build better defenses.
Join us for an insightful episode of 'Breaking into Cybersecurity' as we sit down with Sinan Eren. With a rich background in red teaming and pen testing, Sinan shares his journey from the late '90s curiosity-driven entry into cybersecurity to founding several companies. Discover the challenges and triumphs of growing in the cybersecurity industry, the evolution from signature-based to heuristic-based security, and the importance of understanding business processes for effective risk management. Ideal for beginners and seasoned professionals alike, learn about emerging opportunities in AI and the nuances of entrepreneurship in cybersecurity.00:00 Introduction to the Guest and Episode Overview01:08 Sinan's Early Career and Entry into Cybersecurity02:40 The Evolution of Cybersecurity Practices04:00 Bug Track and Early Vulnerability Discoveries05:59 Transition to the US and Career Growth07:23 Signature-Based vs. Heuristic-Based Security11:45 Starting a Business in Cybersecurity19:10 Lessons from the First Startup21:31 Modernizing Remote Access Solutions25:08 Revolutionizing Credit and Next-Gen VPN Solutions25:48 Introduction to the Third Startup26:32 Challenges Faced by Managed Service Providers28:15 Automation Solutions for Mundane Tasks29:44 Ideation and Development of Automation Tools33:32 Evolution and Application of Automation Tools41:06 Business Process Modeling and Risk Management45:35 Final Advice for Aspiring ProfessionalsSponsored by CPF Coaching LLC - http://cpf-coaching.comThe Breaking into Cybersecurity: It's a conversation about what they did before, why did they pivot into cyber, what the process was they went through Breaking Into Cybersecurity, how they keep up, and advice/tips/tricks along the way.The Breaking into Cybersecurity Leadership Series is an additional series focused on cybersecurity leadership and hearing directly from different leaders in cybersecurity (high and low) on what it takes to be a successful leader. We focus on the skills and competencies associated with cybersecurity leadership and tips/tricks/advice from cybersecurity leaders.Develop Your Cybersecurity Career Path: How to Break into Cybersecurity at Any Level https://www.amazon.com/dp/1955976007/Hack the Cybersecurity Interview: A complete interview preparation guide for jumpstarting your cybersecurity career https://www.amazon.com/Hack-Cybersecurity-Interview-Interviews-Entry-level/dp/1835461298/
What is Red Teaming, and what does it have to do with cybersecurity? In this episode, we look at how Red Teamers are hired to attack company security using all manner of tactics, from tossing malware-infested USB sticks into parking lots to posing as an HVAC technician. We also take a look at one of the most notorious Red Team exercises in history, when two Coalfire employees were arrested and fought a long legal battle, just for doing their jobs. ResourcesInside the Courthouse Break-In Spree That Landed Two White-Hat Hackers in JailDarknet Diaries Episode 59: The CourthouseCoalfire Systems websiteDEF CON 22 - Eric Smith and Josh Perrymon - Advanced Red Teaming: All Your Badges Are Belong To UsHow RFID Technology Works: Revolutionizing the Supply ChainNolaCon 2019 D 07 Breaking Into Your Building A Hackers Guide to Unauthorized Physical AccessSend us a textSupport the showJoin our Patreon to listen ad-free!
This episode is brought to you by https://www.ElevateOS.com —the only all-in-one community operating system.Ever wonder how vulnerable your multifamily business really is?In this episode of the Multifamily Collective, I share the concept of red teaming—a bold, eye-opening practice born in the cyber world but packed with power for every corner of your organization.I walk through how placing someone inside your team to think like a competitor or bad actor helps uncover weak spots in your systems, your leadership, your marketing—and yes, even your people strategy.This isn't theory. It's practical, tactical leadership.I first experienced this through Vistage, surrounded by sharp minds from every industry—pest control to bakeries. And trust me, when nine people try to put your business out of business in real time, you learn fast what really matters.Here's my challenge to you:Form a red team.Pressure test your vulnerabilities.And emerge sharper, smarter, and more secure.Like if you're ready to think like a disruptor.Subscribe if you're committed to leveling up your leadership in Multifamily.For more engaging content, explore our offerings at the[https://www.multifamilycollective.com](https://www.multifamilycollective.com/) and the [https://www.multifamilymedianetwork.com](https://www.multifamilymedianetwork.com/)Join us to stay informed and inspired in the multifamily industry!
Get your FREE Cybersecurity Salary Guide:https://www.infosecinstitute.com/form/cybersecurity-salary-guide-podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcastJim Broome of Direct Defense has been doing red teaming since before it became a term — back when a "pentest" meant $25,000, no questions asked and walking out with a server under your arm. In this episode, Jim shares wild stories from decades of ethical hacking, including breaking into major tech companies, causing a cardiac event during a physical penetration test, and why he believes soft skills trump technical knowledge for aspiring red teamers. Learn why most companies aren't ready for red teaming, how to transition into cybersecurity from unexpected fields like education or event planning, and what it really takes to succeed in offensive security.0:00 - Intro to legendary red teamer Jim Broome1:00 - Cybersecurity Salary Guide2:58 - From BBS and ham radio to cybersecurity7:07 - Evolution from network admin to red teaming12:02 - GPS hacking and testing inflight entertainment systems15:31 - Hiring teachers and event planners as ethical hackers23:36 - Breaking into Symantec and stealing servers in the 90s28:33 - Physical pentest causes cardiac event34:06 - When companies should (and shouldn't) hire red teams39:44 - Why red teaming is "a punch in the mouth"44:09 - How AI is changing offensive and defensive security48:12 - Essential skills for aspiring red teamers50:39 - The groundskeeper who got domain admin52:18 - Best career advice: Be humbleView Cyber Work Podcast transcripts and additional episodes:https://www.infosecinstitute.com/podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcastAbout InfosecInfosec's mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ's security awareness training. Learn more at infosecinstitute.com.
Assaf Kipnis, AI safety (intel and investigation) at ElevenLabs, discusses the evolving landscape of online safety, the sophisticated tactics of threat actors, and the role of regulation in shaping tech company responses. He also discusses the need for accountability in both tech companies and regulatory bodies to enhance safety and security in the digital space. Key Takeaways: New tactics and scams threat actors are using, and the effectiveness of measures like age verification and red teaming Limitations faced by tech companies in combating online safety issues, and the challenges of maintaining online safety at scale The role of law enforcement and regulation in pressuring companies, platforms, and teams to improve online safety Guest Bio: Assaf Kipnis is an AI safety investigator with over a decade of experience at companies like LinkedIn, Facebook, and Google. Now at ElevenLabs, he builds systems to uncover and respond to emerging threats in generative AI, focusing on the intersection of security, abuse prevention, and human impact. Assaf is known for making sense of complex, messy problems, combining deep investigation with storytelling to drive action. He's guided by values like curiosity, care, and doing the right thing, and is passionate about reclaiming technology as a force for good. He strives to create environments where people feel safe, seen, and valued. Outside of work, he's a parent, systems thinker, and mentor who believes the best solutions start with asking the right questions—and remembering to stay human. ---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte
In this episode of Durable Value, we talk about the science of failure—why even great companies and properties can drift off course, and how to recognize and prevent the subtle missteps that lead to bigger problems. We discuss the difference between luck and skill in investing, the dangers of narrative reinforcement, and practical strategies for building resilience in your business. Whether you're a real estate investor, entrepreneur, or leader, you'll find actionable insights to help you avoid common pitfalls and turn failures into stepping stones for long-term success.Timestamps:00:00 - Introduction: The Science of Failure01:26 - Luck vs. Skill in Investing02:20 - Information Machines & Signal vs. Reality02:57 - Luck as Skill: The Genius-Idiot Cycle03:15 - Real Estate Market Cycles as Levelers03:38 - Execution Engine: Buying the Right Assets06:20 - Navigating Seller and Broker Dynamics07:03 - Macro Understanding from Multi-Market Experience09:05 - Short-Term vs. Long-Term Thinking10:33 - Capital Pressure and Market Cycles11:25 - Institutional Capital and Volatility12:07 - Raising Capital in Down Markets13:31 - John Boyd's OODA Loop: Orienting to Reality13:50 - Failure as a Path to Success14:32 - Red Teaming & Pre-Mortems15:12 - Building a Culture of Openness15:39 - Rebuilding Systems for the Long Term16:02 - From IRR to NOI: Adapting to a New Decade16:22 - Building for Stability and Optionality19:58 - Closing
Welcome back to the "To The Point Cybersecurity" podcast! After a short hiatus, hosts Rachel Lyon and Jonathan Knepher return with an exciting new episode featuring Greg Hatcher, co-founder of White Knight Labs—dubbed the "Ocean's Eleven of cybersecurity." Greg brings a unique perspective from his days in Army Special Forces and his deep expertise in offensive cybersecurity operations. In this episode, the conversation dives into the world of red teaming, how it differs from traditional penetration testing, the realities of social engineering and physical access exploits, supply chain and AI security threats, and the ever-evolving role of CISOs in defending their organizations. Whether you're curious about insider threats, the challenges of shadow AI, or just want a glimpse into some of the most compelling stories from the front lines of cyber offense, this episode delivers insights, cautionary tales, and actionable advice for organizations looking to stay one step ahead. So sit back, tune in, and get ready to go "to the point" on everything cybersecurity! For links and resources discussed in this episode, please visit our show notes at https://www.forcepoint.com/govpodcast/e344
In this episode of Campus Technology Insider Podcast Shorts, host Rhea Kelly discusses the latest stories in education technology. Highlights include the launch of LawZero by Yoshua Bengio to develop transparent 'scientist AI' systems, a new Cloud Security Alliance guide on red teaming for agentic AI, and OpenAI's report on the malicious use of AI in cybercrime. For more detailed coverage, visit campustechnology.com. 00:00 Introduction and Host Welcome 00:15 LawZero: Ensuring Safe AI Development 00:52 Cloud Security Alliance's New Guide 01:27 OpenAI Report on AI in Cybercrime 02:06 Conclusion and Further Resources Source links: New Nonprofit to Work Toward Safer, Truthful AI Cloud Security Alliance Offers Playbook for Red Teaming Agentic AI Systems OpenAI Report Identifies Malicious Use of AI in Cloud-Based Cyber Threats Campus Technology Insider Podcast Shorts are curated by humans and narrated by AI.
Podcast: PrOTect It All (LS 26 · TOP 10% what is this?)Episode: Inside OT Penetration Testing: Red Teaming, Risks, and Real-World Lessons for Critical Infrastructure with Justin SearlePub date: 2025-06-16Get Podcast Transcript →powered by Listen411 - fast audio-to-text and summarizationIn this episode, host Aaron Crow sits down with OT security expert Justin Searle, Director of ICS Security at InGuardians, for a deep dive into the ever-evolving world of OT and IT cybersecurity. With over 25 years of experience, ranging from hands-on engineering and water treatment facilities to red-team penetration testing on critical infrastructures such as airports and power plants, Justin brings a wealth of insight and real-world anecdotes. This episode unpacks what it really takes to assess and secure operational technology environments. Whether you're a C-suite executive, a seasoned cyber pro, or brand new to OT security, you'll hear why network expertise, cross-team trust, and careful, collaborative engagement with engineers are so crucial when testing high-stakes environments. Aaron and Justin also discuss how the industry has matured, the importance of dedicated OT cybersecurity teams, and why practical, people-first approaches make all the difference, especially when lives, reliability, and national infrastructure are on the line. Get ready for actionable advice, hard-earned lessons from the field, and a candid look at both the progress and the ongoing challenges in protecting our most critical systems. Key Moments: 05:55 Breaking Into Cybersecurity Without Classes 09:26 Production Environment Security Testing 13:28 Credential Evaluation and Light Probing 14:33 Firewall Misconfiguration Comedy 19:14 Dedicated OT Cybersecurity Professionals 20:50 "Prioritize Reliability Over Latest Features" 24:18 "IT-OT Convergence Challenges" 29:04 Patching Program and OT Security 32:08 Complexity of OT Environments 35:45 Dress-Code Trust in Industry 38:23 Legacy System Security Challenges 42:15 OT Cybersecurity for IT Professionals 43:40 "Building Rapport with Food" 47:59 Future OT Cyber Risks and Readiness 51:30 Skill Building for Tech Professionals About the Guest : Justin Searle is the Director of ICS Security at InGuardians, specializing in ICS security architecture design and penetration testing. He led the Smart Grid Security Architecture group in the creation of NIST Interagency Report 7628 and played critical roles in the Advanced Security Acceleration Project for the Smart Grid (ASAP-SG), National Electric Sector Cybersecurity Organization Resources (NESCOR), and Smart Grid Interoperability Panel (SGIP). Justin has taught hacking techniques, forensics, networking, and intrusion detection courses for multiple universities, corporations, and security conferences. His current courses at SANS and Black Hat are among the world's most attended ICS cybersecurity courses. Justin is currently a Senior Instructor for the SANS Institute and a faculty member at IANS. In addition to electric power industry conferences, he frequently presents at top international security conferences such as Black Hat, DEFCON, OWASP, HITBSecConf, Brucon, Shmoocon, Toorcon, Nullcon, Hardware.io, and AusCERT. Justin leads prominent open-source projects, including The Control Thing Platform, Samurai Web Testing Framework (SamuraiWTF), and Samurai Security Testing Framework for Utilities (SamuraiSTFU). He has an MBA in International Technology and is a CISSP and SANS GIAC certified Incident Handler (GCIH), Intrusion Analyst (GCIA), Web Application Penetration Tester (GWAPT), and GIAC Industrial Control Security Professional (GICSP) How to connect Justin: https://www.controlthings.io https://www.linkedin.com/in/meeas/ Email: justin@controlthings.io Connect With Aaron Crow: Website: www.corvosec.com LinkedIn: https://www.linkedin.com/in/aaronccrow Learn more about PrOTect IT All: Email: info@protectitall.co Website: https://protectitall.co/ X: https://twitter.com/protectitall YouTube: https://www.youtube.com/@PrOTectITAll FaceBook: https://facebook.com/protectitallpodcast To be a guest or suggest a guest/episode, please email us at info@protectitall.co Please leave us a review on Apple/Spotify Podcasts: Apple - https://podcasts.apple.com/us/podcast/protect-it-all/id1727211124 Spotify - https://open.spotify.com/show/1Vvi0euj3rE8xObK0yvYi4The podcast and artwork embedded on this page are from Aaron Crow, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.
Guest: Daniel Fabian, Principal Digital Arsonist, Google Topic: Your RSA talk highlights lessons learned from two years of AI red teaming at Google. Could you share one or two of the most surprising or counterintuitive findings you encountered during this process? What are some of the key differences or unique challenges you've observed when testing AI-powered applications compared to traditional software systems? Can you provide an example of a specific TTP that has proven effective against AI systems and discuss the implications for security teams looking to detect it? What practical advice would you give to organizations that are starting to incorporate AI red teaming into their security development lifecycle? What are some initial steps or resources you would recommend they explore to deepen their understanding of this evolving field? Resources: Video (LinkedIn, YouTube) Google's AI Red Team: the ethical hackers making AI safer EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes? EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw EP198 GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons Lessons from AI Red Teaming – And How to Apply Them Proactively [RSA 2025]
Get your FREE Cybersecurity Salary Guide: https://www.infosecinstitute.com/form/cybersecurity-salary-guide-podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcastEd Williams, Vice President of EMEA Consulting and Professional Services (CPS) at TrustWave, shares his two decades of pentesting and red teaming experience with Cyber Work listeners. From building his first programs on a BBC Micro (an early PC underwritten by the BBC network in England to promote computer literacy) to co-authoring award-winning red team security tools, Ed discusses his favorite red team social engineering trick (hint: it involves fire extinguishers!), and the ways that pentesting and red team methodologies have (and have not) changed in 20 years. As a bonus, Ed explains how he created a red team tool that gained accolades from the community in 2013, and how building your own tools can help you create your personal calling card in the Cybersecurity industry! Whether you're breaking into cybersecurity or looking to level up your pentesting skills, Ed's practical advice and red team “war stories,” as well as his philosophy of continuous learning that he calls “Stacking Days,” bring practical and powerful techniques to your study of Cybersecurity.0:00 - Intro to today's episode2:17 - Meet Ed Williams and his BBC Micro origins5:16 - Evolution of pentesting since 200812:50 - Creating the RedSnarf tool in 201317:18 - Advice for aspiring pentesters in 202519:59 - Building community and finding collaborators 22:28 - Red teaming vs pentesting strategies24:19 - Red teaming, social engineering, and fire extinguishers27:07 - Early career obsession and focus29:41 - Essential skills: Python and command-line mastery31:30 - Best career advice: "Stacking Days"32:12 - About TrustWave and connecting with EdAbout InfosecInfosec's mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ's security awareness training. Learn more at infosecinstitute.com.
In this OODAcast episode, host Matt Devost sits down with Maxie Reynolds, author of The Art of the Attack, to explore the evolution of her unique career from offshore oil rigs to elite red teaming and cybersecurity innovation. Maxie shares how her unconventional path, working a decade in oil and gas, earning degrees while on remote rigs, and eventually breaking into cybersecurity at PwC, shaped her approach to physical and digital security. Her journey led to the creation of a company that builds underwater data centers, a novel fusion of her industrial and red teaming experiences. She discusses the rising interest in submerged infrastructure, particularly after China's moves in the space and the demands of modern AI-driven cooling systems. The conversation dives deep into what it means to adopt an "attacker mindset", seeing opportunities where others see obstacles and using architecture, human psychology, and environment as vectors for access. Maxi outlines how her social engineering engagements hinge on understanding perception, psychology, and pretext creation rather than just technical exploits. She offers real-world stories of infiltrating secure facilities and engaging high-stakes targets using layered personas and misdirection. Through it all, she emphasizes the role of self-awareness, stress management, and emotional discipline in high-pressure operations, often drawing parallels between red teaming and stoicism. Maxie and Matt also examine how to responsibly deliver red team results to leadership, balancing candor with empathy to ensure organizations grow stronger without shame or defensiveness. They reflect on the future of AI in security, the persistence of physical threats, and the irreplaceable value of human judgment. The episode wraps with a powerful reading list and a shared love of books, highlighting titles that explore geopolitics, materials science, and the ungoverned world of the open ocean. This episode is packed with insight, storytelling, and practical wisdom for cybersecurity professionals, technologists, and leaders looking to understand how adversaries think—and how to outsmart them. Additional Links: The Art of Attack: Attacker Mindset for Security Professionals by Maxie Reynolds Maxie on Twitter/X Book Recommendations: How the World Really Works: The Science Behind How We Got Here and Where We're Going by Vaclav Smil The Outlaw Ocean: Journeys Across the Last Untamed Frontier by Ian Urbina Prisoners of Geography: Ten Maps That Explain Everything About the World by Tim Marshall Chip War: The Fight for the World's Most Critical Technology by Chris Miller Stuff Matters: Exploring the Marvelous Materials That Shape Our Man-Made World by Mark Miodownik
Charles Henderson, who leads the cybersecurity services division at Coalfire, shares how the company is reimagining offensive and defensive operations through a programmatic lens that prioritizes outcomes over checkboxes. His team, made up of practitioners with deep experience and creative drive, brings offensive testing and exposure management together with defensive services and managed offerings to address full-spectrum cybersecurity needs. The focus isn't on commoditized services—it's on what actually makes a difference.At the heart of the conversation is the idea that cybersecurity is a team sport. Henderson draws parallels between the improvisation of music and the tactics of both attackers and defenders. Both require rhythm, creativity, and cohesion. The myth of the lone hero doesn't hold up anymore—effective cybersecurity programs are driven by collaboration across specialties and by combining services in ways that amplify their value.Coalfire's evolution reflects this shift. It's not just about running a penetration test or red team operation in isolation. It's about integrating those efforts into a broader mission-focused program, tailored to real threats and measured against what matters most. Henderson emphasizes that CISOs are no longer content with piecemeal assessments; they're seeking simplified, strategic programs with measurable outcomes.The conversation also touches on the importance of storytelling in cybersecurity reporting. Henderson underscores the need for findings to be communicated in ways that resonate with technical teams, security leaders, and the board. It's about enabling CISOs to own the narrative, armed with context, clarity, and confidence.Henderson's reflections on the early days of hacker culture—when gatherings like HoCon and early Def Cons were more about curiosity and camaraderie than business—bring a human dimension to the discussion. That same passion still fuels many practitioners today, and Coalfire is committed to nurturing it through talent development and internships, helping the next generation find their voice, their challenge, and yes, even their hacker handle.This episode offers a look at how to build programs, teams, and mindsets that are ready to lead—not follow—on the cybersecurity front.Learn more about Coalfire: https://itspm.ag/coalfire-yj4wNote: This story contains promotional content. Learn more.Guest: Charles Henderson, Executive Vice President of Cyber Security Services, Coalfire | https://www.linkedin.com/in/angustx/ResourcesLearn more and catch more stories from Coalfire: https://www.itspmagazine.com/directory/coalfireLearn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsac25______________________Keywords:charles henderson, sean martin, coalfire, red teaming, penetration testing, cybersecurity services, exposure management, ciso, threat intelligence, hacker culture, brand story, brand marketing, marketing podcast, brand story podcast______________________Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to tell your Brand Story Briefing as part of our event coverage? Learn More
Snehal Antani is an entrepreneur, technologist, and investor. He is the CEO and Co-founder of Horizon3, a cybersecurity company using AI to deliver Red Teaming and Penetration Testing as a Service. He also serves as a Highly Qualified Expert for the U.S. Department of Defense, supporting digital transformation and data initiatives for Special Operations. Previously, he was CTO and SVP at Splunk, held CIO roles at GE Capital, and began his career as a software engineer at IBM. Snehal holds a master's in computer science from Rensselaer Polytechnic Institute and a bachelor's from Purdue University, and he is the inventor on 16 patents.In this conversation, we discuss:Snehal Antani's path from software engineer to CEO, and how his father's quiet example of grit and passion continues to shape his leadership style.How a “LEGO blocks” approach to building skills prepared Snehal to lead, and why he believes leadership must be earned through experience.Why Horizon3 identifies as a data company, and how running more pen tests than the Big Four creates a powerful AI advantage.What “cyber-enabled economic warfare” looks like in practice, and how a small disruption in a supply chain can create massive global impact.How Horizon3 built an AI engine that hacked a bank in under 60 seconds, showing what's possible when algorithms replace manual testing.What the future of work looks like in the AI era, with a growing divide between those with specialized expertise and trade skills and those without.Resources:Subscribe to the AI & The Future of Work Newsletter: https://aiandwork.beehiiv.com/subscribe Connect with Snehal on LinkedIn: https://www.linkedin.com/in/snehalantani/ AI fun fact article: https://venturebeat.com/security/ai-vs-endpoint-attacks-what-security-leaders-must-know-to-stay-ahead/ On the New Definition of Work: https://podcasts.apple.com/us/podcast/dr-john-boudreau-future-of-work-pioneer-and/id1476885647?i=1000633854079
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM FULL SHOW NOTES https://www.microsoftinnovationpodcast.com/681 The team explores the ethical implications of teaching AI jailbreaking techniques and conducting red team testing on large language models, balancing educational value against potential misuse. They dive into personal experiments with bypassing AI safeguards, revealing both creative workarounds and robust protections in modern systems. TAKEAWAYS • Debate on whether demonstrating AI vulnerabilities is responsible education or potentially dangerous knowledge sharing • Psychological impact on security professionals who regularly simulate malicious behaviors to test AI safety • Real examples of attempts to "jailbreak" AI systems through fantasy storytelling and other creative prompts • Legal gray areas in AI security testing that require dedicated legal support for organizations • Personal experiences with testing AI guardrails on different models and their varying levels of protection • Future prediction that Microsoft's per-user licensing model may shift to consumption-based as AI agents replace human tasks • Growth observations about Microsoft's Business Applications division reaching approximately $8 billion • Discussion of how M365 Copilot is transforming productivity, particularly for analyzing sales calls and customer interactions Check out this episode for more deep dives into AI safety, security, and the future of technology in business.This year we're adding a new show to our line up - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world. DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot.Early bird tickets are on sale now and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff Accelerate your Microsoft career with the 90 Day Mentoring Challenge We've helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, get started today!Support the showIf you want to get in touch with me, you can message me here on Linkedin.Thanks for listening
Bugged boardrooms. Insider moles. Social engineers posing as safety inspectors!? In this Talking Lead episode, Lefty assembles a veteran intel crew—Bryan Seaver U.S. Army Military Police vet and owner of SAPS Squadron Augmented Protection Services, LLC, a Nashville outfit running dignitary protection, K9 ops, and intelligence training. A *Talking Lead* mainstay! He's got firsthand scoop on "Red Teaming"; Mitch Davis U.S. Marine, private investigator, interrogator, Phoenix Consulting Group (now DynCorp) contractor, with a nose for sniffing out moles and lies; Brad Duley U.S. Marine, embassy guard, Phoenix/DynCorp contractor, Iraq vet, deputy sheriff, and precision shooter, bringing tactical grit to the table —to expose the high-stakes world of corporate espionage. They pull back the curtain on real-world spy tactics that were used during the the "Cold War" era and are still used in today's business battles: Red Team operations, honeypots, pretexting, data theft, and the growing threat of AI-driven deception. From cyber breaches to physical infiltrations, the tools of Cold War espionage are now aimed at American companies, defense tech, and even firearms innovation. State-backed actors, insider threats, and corporate sabotage—it's not just overseas anymore. Tune-in and get "Leaducated"!!
Bugged boardrooms. Insider moles. Social engineers posing as safety inspectors!? In this Talking Lead episode, Lefty assembles a veteran intel crew—Bryan Seaver U.S. Army Military Police vet and owner of SAPS Squadron Augmented Protection Services, LLC, a Nashville outfit running dignitary protection, K9 ops, and intelligence training. A *Talking Lead* mainstay! He's got firsthand scoop on "Red Teaming"; Mitch Davis U.S. Marine, private investigator, interrogator, Phoenix Consulting Group (now DynCorp) contractor, with a nose for sniffing out moles and lies; Brad Duley U.S. Marine, embassy guard, Phoenix/DynCorp contractor, Iraq vet, deputy sheriff, and precision shooter, bringing tactical grit to the table —to expose the high-stakes world of corporate espionage. They pull back the curtain on real-world spy tactics that were used during the the "Cold War" era and are still used in today's business battles: Red Team operations, honeypots, pretexting, data theft, and the growing threat of AI-driven deception. From cyber breaches to physical infiltrations, the tools of Cold War espionage are now aimed at American companies, defense tech, and even firearms innovation. State-backed actors, insider threats, and corporate sabotage—it's not just overseas anymore. Tune-in and get "Leaducated"!!
STANDARD EDITION: Signal OPSEC, White-box Red-teaming LLMs, Unified Company Context (UCC), New Book Recommendations, Single Apple Note Technique, and much more... You are currently listening to the Standard version of the podcast, consider upgrading and becoming a member to unlock the full version and many other exclusive benefits here: https://newsletter.danielmiessler.com/upgrade Subscribe to the newsletter at:https://danielmiessler.com/subscribe Join the UL community at:https://danielmiessler.com/upgrade Follow on X:https://x.com/danielmiessler Follow on LinkedIn:https://www.linkedin.com/in/danielmiesslerBecome a Member: https://danielmiessler.com/upgradeSee omnystudio.com/listener for privacy information.
In this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Naman Mishra, CTO of Repello AI, to unpack the real-world security risks behind deploying large language models. We talk about layered vulnerabilities—from the model, infrastructure, and application layers—to attack vectors like prompt injection, indirect prompt injection through agents, and even how a simple email summarizer could be exploited to trigger a reverse shell. Naman shares stories like the accidental leak of a Windows activation key via an LLM and explains why red teaming isn't just a checkbox, but a continuous mindset. If you want to learn more about his work, check out Repello's website at repello.ai.Check out this GPT we trained on the conversation!Timestamps00:00 - Stewart Alsop introduces Naman Mishra, CTO of Repel AI. They frame the episode around AI security, contrasting prompt injection risks with traditional cybersecurity in ML apps.05:00 - Naman explains the layered security model: model, infrastructure, and application layers. He distinguishes safety (bias, hallucination) from security (unauthorized access, data leaks).10:00 - Focus on the application layer, especially in finance, healthcare, and legal. Naman shares how ChatGPT leaked a Windows activation key and stresses data minimization and security-by-design.15:00 - They discuss red teaming, how Repel AI simulates attacks, and Anthropic's HackerOne challenge. Naman shares how adversarial testing strengthens LLM guardrails.20:00 - Conversation shifts to AI agents and autonomy. Naman explains indirect prompt injection via email or calendar, leading to real exploits like reverse shells—all triggered by summarizing an email.25:00 - Stewart compares the Internet to a castle without doors. Naman explains the cat-and-mouse game of security—attackers need one flaw; defenders must lock every door. LLM insecurity lowers the barrier for attackers.30:00 - They explore input/output filtering, role-based access control, and clean fine-tuning. Naman admits most guardrails can be broken and only block low-hanging fruit.35:00 - They cover denial-of-wallet attacks—LLMs exploited to run up massive token costs. Naman critiques DeepSeek's weak alignment and state bias, noting training data risks.40:00 - Naman breaks down India's AI scene: Bangalore as a hub, US-India GTM, and the debate between sovereignty vs. pragmatism. He leans toward India building foundational models.45:00 - Closing thoughts on India's AI future. Naman mentions Sarvam AI, Krutrim, and Paris Chopra's Loss Funk. He urges devs to red team before shipping—"close the doors before enemies walk in."Key InsightsAI security requires a layered approach. Naman emphasizes that GenAI applications have vulnerabilities across three primary layers: the model layer, infrastructure layer, and application layer. It's not enough to patch up just one—true security-by-design means thinking holistically about how these layers interact and where they can be exploited.Prompt injection is more dangerous than it sounds. Direct prompt injection is already risky, but indirect prompt injection—where an attacker hides malicious instructions in content that the model will process later, like an email or webpage—poses an even more insidious threat. Naman compares it to smuggling weapons past the castle gates by hiding them in the food.Red teaming should be continuous, not a one-off. One of the critical mistakes teams make is treating red teaming like a compliance checkbox. Naman argues that red teaming should be embedded into the development lifecycle, constantly testing edge cases and probing for failure modes, especially as models evolve or interact with new data sources.LLMs can unintentionally leak sensitive data. In one real-world case, a language model fine-tuned on internal documentation ended up leaking a Windows activation key when asked a completely unrelated question. This illustrates how even seemingly benign outputs can compromise system integrity when training data isn't properly scoped or sanitized.Denial-of-wallet is an emerging threat vector. Unlike traditional denial-of-service attacks, LLMs are vulnerable to economic attacks where a bad actor can force the system to perform expensive computations, draining API credits or infrastructure budgets. This kind of vulnerability is particularly dangerous in scalable GenAI deployments with limited cost monitoring.Agents amplify security risks. While autonomous agents offer exciting capabilities, they also open the door to complex, compounded vulnerabilities. When agents start reading web content or calling tools on their own, indirect prompt injection can escalate into real-world consequences—like issuing financial transactions or triggering scripts—without human review.The Indian AI ecosystem needs to balance speed with sovereignty. Naman reflects on the Indian and global context, warning against simply importing models and infrastructure from abroad without understanding the security implications. There's a need for sovereign control over critical layers of AI systems—not just for innovation's sake, but for national resilience in an increasingly AI-mediated world.
Guest: Alex Polyakov, CEO at Adversa AI Topics: Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client? Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now? What trips most clients, classic security mistakes in AI systems or AI-specific mistakes? Are there truly new mistakes in AI systems or are they old mistakes in new clothing? I know it is not your job to fix it, but much of this is unfixable, right? Is it a good idea to use AI to secure AI? Resources: EP84 How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far AI Red Teaming Reasoning LLM US vs China: Jailbreak Deepseek, Qwen, O1, O3, Claude, Kimi Adversa AI blog Oops! 5 serious gen AI security mistakes to avoid Generative AI Fast Followership: Avoid These First Adopter Security Missteps
This week, Ads Dawson, Staff AI Security Researcher at Dreadnode, joins the show to talk all things AI Red Teaming!George K and George A talk to Ads about: The reality of securing #AI model development pipelines Why cross-functional expertise is critical when securing AI systems How to approach continuous red teaming for AI applications (hint: annual pen tests won't cut it anymore) Practical advice for #cybersecurity pros looking to skill up in AI securityWhether you're a CISO trying to navigate securing AI implementations or an infosec professional looking to expand your skill set, this conversation is all signal.Course mentioned: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-DS-03+V1————
Today's guest is Tomer Poran, Chief Evangelist and VP of Strategy at ActiveFence. ActiveFence is a technology company specializing in trust and safety solutions, helping platforms detect and prevent harmful content, malicious activity, and emerging threats online. Tomer joins today's podcast to explore the critical role of red teaming in AI safety and security. He breaks down the challenges enterprises face in deploying AI responsibly, the evolving nature of adversarial risks, and why organizations must adopt a proactive approach to testing AI systems. This episode is sponsored by ActiveFence. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
HackerOne's co-founder, Michiel Prins walks us through the latest new offensive security service: AI red teaming. At the same time enterprises are globally trying to figure out how to QA and red team generative AI models like LLMs, early adopters are challenged to scale these tests. Crowdsourced bug bounty platforms are a natural place to turn for assistance with scaling this work, though, as we'll discuss on this episode, it is unlike anything bug hunters have ever tackled before. Segment Resources: https://www.hackerone.com/ai/snap-ai-red-teaming https://www.hackerone.com/thought-leadership/ai-safety-red-teaming This interview is a bit different from our norm. We talk to the founder and CEO of OpenVPN about what it is like to operate a business based on open source, particularly through trying times like the recent pandemic. How do you compete when your competitors are free to build products using your software and IP? It seems like an oxymoron, but an open source-based business actually has some significant advantages over the closed source commercial approach. In this week's enterprise security news, the first cybersecurity IPO in 3.5 years! new companies new tools the fate of CISA and the cyber safety review board things we learned about AI in 2024 is the humanless SOC possible? NGFWs have some surprising vulnerabilities what did generative music sound like in 1996? All that and more, on this episode of Enterprise Security Weekly. Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-391