⬥GUEST⬥
Eric O'Neill, Keynote Speaker, Cybersecurity Expert, Spy Hunter, Bestselling Author, Attorney | On LinkedIn: https://www.linkedin.com/in/eric-m-oneill/

⬥HOST⬥
Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

⬥EPISODE NOTES⬥
In this episode of the Redefining CyberSecurity Podcast, host Sean Martin reconnects with Eric O'Neill, National Security Strategist at NeXasure and former FBI counterintelligence operative. Together, they explore how cybercrime has matured into a global economy—and why organizations of every size must learn to compete, not just defend.

O'Neill draws from decades of undercover work and corporate investigation to reveal that cybercriminals now operate like modern businesses: they innovate, specialize, and scale. The difference? Their product is your data. He argues that resilience—not prevention—is the true marker of readiness. Companies can't assume they're too small or too obscure to be targeted. “It's just a matter of numbers,” he says. “At some point, you will get struck. You need to be able to take the punch and keep moving.”

The discussion covers the practical realities facing small and midsize businesses: limited budgets, fragmented tools, and misplaced confidence. O'Neill explains why so many organizations over-invest in overlapping technologies while under-investing in strategy. His firm helps clients identify these inefficiencies and replace tool sprawl with coordinated defense.

Preparation, O'Neill says, should follow his PAID methodology—Prepare, Assess, Investigate, Decide. The goal is to plan ahead, detect fast, and act decisively. Organizations that do not prepare spend ten times more responding after an incident than they would have spent preventing it.

Martin and O'Neill also examine how storytelling bridges the gap between security teams and executive boards. Using relatable analogies—like house fires and insurance—O'Neill makes cybersecurity human. His message is simple: security is not a technical decision; it's a business one.

Listen to hear how the business of cybercrime mirrors legitimate enterprise—and why understanding that truth might be your best defense.

⬥RESOURCES⬥
Book: Spies, Lies, and Cybercrime by Eric O'Neill – Book link
Book: Gray Day by Eric O'Neill – Book link
Free Weekly Newsletter: spies-lies-cybercrime.ericoneill.net
Podcast: Former FBI Spy Hunter Eric O'Neill Explains How Cybercriminals Use Espionage Techniques to Attack Us: https://redefiningsocietyandtechnologypodcast.com/episodes/new-book-spies-lies-and-cyber-crime-former-fbi-spy-hunter-eric-oneill-explains-how-cybercriminals-use-espionage-techniques-to-attack-us-redefining-society-and-technology-podcast-with-marco-ciappelli

⬥ADDITIONAL INFORMATION⬥
✨ More Redefining CyberSecurity Podcast:
Organizations pour millions into protecting running applications—yet attackers are targeting the delivery path itself.

This episode of AppSec Contradictions reveals why CI/CD and cloud pipelines are becoming the new frontline in cybersecurity.
Sean Martin from The Quarantined took some time out recently to catch up with HEAVY Mag's Ali Williams to chat about their new release Nemesis (Friend of Mine), overcoming toxic relationships and algorithms, and what the next chapter for the band entails.

Discussing the creative process of songwriting and their musical influences and intentions, Sean explains the origin of their song Nemesis (Friend of Mine), which started as a poem inspired by a sudden burst of creativity. The transformation from poem to song presented challenges, especially in conveying the depth of the lyrics in a musical format. He goes on to talk about blending different musical genres, such as rock and pop rhythms, saying their goal was to capture the emotional energy of the lyrics while drawing inspiration from various music styles, including Aaliyah's Tell Me You're That Somebody.

Martin details the meaning and influences behind the lyrics of their new track, noting the song addresses themes of paranoia, confrontation, and understanding narcissistic behaviours. It reflects on overcoming manipulation and the personal growth that comes from understanding difficult personalities.

These guys had the incredible opportunity to be recorded at Blackbird Studios in Nashville, which contributed a warm sonic quality to the music due to the use of historic equipment. Martin recalls the recording sessions were a calm experience, leading to a sound that improved upon previous versions. The production process involved collaboration with studio musicians who were given creative freedom within the framework of Sean's original composition. This approach ultimately led to a more refined and satisfying final product.

The Quarantined's music often addresses political and social issues, aiming to promote free thinking and cautioning against fascism. He touches on the current state of societal discourse in the U.S., emphasizing the importance of diverse perspectives, acknowledging that he feels artists and musicians, particularly in the US, now find that expressing political and social commentary through music is virtually impossible unless you want to be cancelled. With the band drawing heavily on a blend of punk, metal, and hip-hop influences, Martin describes this climate as a watering down of the essence of what those genres stand for, while highlighting the similarities between them in terms of their energy and message. The goal is to create music that resonates across different audience segments. Ideally, without enraging the public or facing an adverse reaction.

The Quarantined's new release Nemesis (Friend of Mine) is out now and available on all platforms.

Become a supporter of this podcast: https://www.spreaker.com/podcast/heavy-music-interviews--2687660/support.
"Just because you can, doesn't always mean you should."Episode SummaryIn this episode of The Gun Experiment, we're chopping it up in Studio with our good friend and firearms instructor, Sean Martin, aka Pink Shirt Tactical. Big Keith and I dive into gun news, hot takes, and personal stories about hunting, fitness, current political drama, and of course, plenty of Second Amendment talk. We touch on recent matches like the Hero Down Shootout, discuss firearm law updates (like Hawaii's Vampire Rule and the P320 issue in Chicago), and share some hilarious community stories—from kid obsessions with town councilmen to belt buckles for F-150 key fobs. We debate open carry “auditors,” government accountability, and even take a swipe at media soundbites. This episode's a mix of laughs, strong opinions, and actionable insights for anyone who carries or is passionate about gun rights and personal responsibility.Call to Action1. Join our mailing list: Thegunexperiment.com2. Subscribe and leave us a comment on Apple or Spotify3. Follow us on all of our social media: Instagram Twitter Youtube Facebook4. Be a part of our growing community, join our Discord page!5. Grab some cool TGE merch6. Ask us anything at AskMikeandKeith@gmail.com5. Be sure to support the sponsors of the show. They are a big part of making the show possible.Show SponsorsSwig – Protein, Creatine and meal replacement made in America by pro-2A owners. For 20% off, head to swig.com and enter code TGE20 at checkout.Key TakeawaysStaying fit and healthy is just as important as responsible gun ownership.The firearms community needs to use good judgment—just because open-carry activism is legal doesn't mean it's always smart.Court decisions (like Hawaii's Vampire Rule and the P320 recall in Chicago) are reshaping our rights—stay informed.Community involvement, whether with local elections or supporting pro-2A organizations, makes a difference.Don't trust everything mainstream media says—question, verify, and use your own judgment.Fun and function can go together—even if you're rocking a belt buckle for your F-150 keys.Guest InformationSean Martin (aka Pink Shirt Tactical)Firearms instructor, competitor, and regular contributor to The Gun Experiment. Connect with him on Instagram.Keywordsgun rights podcast, Second Amendment, firearms news, open carry debate, P320 recall, gun laws Hawaii, Hero Down Shootout, gun fitness,...
Guest and Host
Guest: Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com
Host: Sean Martin, Co-Founder at ITSPmagazine, Studio C60, and Host of Redefining CyberSecurity Podcast & Music Evolves Podcast | Website: https://www.seanmartin.com/

Show Notes
In this candid episode of Music Evolves, Sean Martin and Marco Ciappelli unpack the creative, ethical, and deeply personal tensions surrounding AI-generated music—where it fits, where it falters, and where it crosses the line.

Sean opens with a clear position: AI can support the creative process, but its outputs shouldn't be commercialized unless the ingredients—i.e., training data—are ethically sourced and properly licensed. His concern is grounded in authorship and consent. If a model learns from unlicensed tracks, even indirectly, is it sampling without credit?

Marco responds by acknowledging how deeply embedded influence is in all creative acts. As a writer and musician, he often discovers melodies or storylines in his own work that echo familiar structures—not out of theft, but because of lived experience. “We are made of what we absorb,” he says, drawing parallels between human memory and how AI models are trained.

But the critical difference? Humans feel. They reinterpret. They falter. They declare their intent. AI does none of that—at least, not yet.

The discussion isn't anti-technology. Instead, it's about boundaries. Both Sean and Marco agree that tools like neural networks can be fascinating collaborators. But when those tools start to blur authorship or generate perfect replicas of a human's imperfection—say, the crackle of a vinyl or the slide of a finger across a string—what are we really listening to? And who, if anyone, should profit from it?

They wrestle with questions of transparency (“Did you write that… or did AI?”), authorship (“If you like it but don't know it's AI, does it matter?”), and commercialization (“Is it still your art if someone else feeds it to a machine?”). And perhaps most importantly, they invite you to answer for yourself.
Show Notes
In this episode, we unpack the core ideas behind the Sonic Frontiers article “From Sampling to Scraping: AI Music, Rights, and the Return of Creative Control.” As AI-generated music floods streaming platforms, rights holders are deploying new tools like neural fingerprinting to detect derivative works — even when no direct sampling occurs. But what does it mean to “detect influence,” and can algorithms truly distinguish theft from inspiration?

We explore the implications for artists who want to experiment with AI without being replaced by it, and the shifting desires of listeners who may soon prefer human-made music the way some still seek out vinyl, film cameras, or wooden roller coasters — not for efficiency, but for the feel.

The article also touches on the burden of rights enforcement in this new age. While major labels can embed detection systems, who protects the independent artist? And if AI enables anyone to create, does it also require everyone to monitor?

This episode invites you to reflect on what we value in music: speed and volume, or craft and control?
⬥GUEST⬥
Walter Haydock, Founder, StackAware | On LinkedIn: https://www.linkedin.com/in/walter-haydock/

⬥HOST⬥
Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

⬥EPISODE NOTES⬥
No-Code Meets AI: Who's Really in Control?

As AI gets embedded deeper into business workflows, a new player has entered the security conversation: no-code automation tools. In this episode of Redefining CyberSecurity, host Sean Martin speaks with Walter Haydock, founder of StackAware, about the emerging risks when AI, automation, and business users collide—often without traditional IT or security oversight.

Haydock shares how organizations are increasingly using tools like Zapier and Microsoft Copilot Studio to connect systems, automate tasks, and boost productivity—all without writing a single line of code. While this democratization of development can accelerate innovation, it also introduces serious risks when systems are built and deployed without governance, testing, or visibility.

The conversation surfaces critical blind spots. Business users may be automating sensitive workflows involving customer data, proprietary systems, or third-party APIs—without realizing the implications. AI prompts gone wrong can trigger mass emails, delete databases, or unintentionally expose confidential records. Recursion loops, poor authentication, and ambiguous access rights are all too easy to introduce when development moves this fast and loose.

Haydock emphasizes that this isn't just a technology issue—it's an organizational one. Companies need to decide: who owns risk when anyone can build and deploy a business process? He encourages a layered approach, including lightweight approval processes, human-in-the-loop checkpoints for sensitive actions, and upfront evaluations of tools for legal compliance and data residency.

Security teams, he notes, must resist the urge to block no-code outright. Instead, they should enable safer adoption through clear guidelines, tool allowlists, training, and risk scoring systems. Meanwhile, business leaders must engage early with compliance and risk stakeholders to ensure their productivity gains don't come at the expense of long-term exposure.

For organizations embracing AI-powered automation, this episode offers a clear takeaway: treat no-code like production code—because that's exactly what it is.

⬥ADDITIONAL INFORMATION⬥
✨ More Redefining CyberSecurity Podcast:
What does it really take to be a CISO the business can rely on? In this episode, Sean Martin shares insights from a recent conversation with Tim Brown, CISO at SolarWinds, following his keynote at AISA CyberCon and his role in leading a CISO Bootcamp for current and future security leaders. The article at the heart of this episode focuses not on technical skills or frameworks, but on the leadership qualities that matter most: context, perspective, communication, and trust.

Tim's candid reflections — including the personal toll of leading through a crisis — remind us that clarity doesn't come from control. It comes from connection. CISOs must communicate risk in ways that resonate across teams and business leaders. They need to build trusted relationships before they're tested and create space for themselves and their teams to process pressure in healthy, sustainable ways.

Whether you're already in the seat or working toward it, this conversation invites you to rethink what preparation really looks like. It also leaves you with two key questions: Where do you get your clarity, and who are you learning from? Tune in, reflect, and join the conversation.
First CISO Charged by SEC: Tim Brown on Trust, Context, and Leading Through Crisis - Interview with Tim Brown | AISA CyberCon Melbourne 2025 Coverage | On Location with Sean Martin and Marco Ciappelli

AISA CyberCon Melbourne | October 15-17, 2025

Tim Brown's job changed overnight. December 11th, he was the CISO at SolarWinds managing security operations. December 12th, he was leading the response to one of the most scrutinized cybersecurity incidents in history.

Connecting from New York and Florence to Melbourne, Sean Martin and Marco Ciappelli caught up with their longtime friend ahead of his keynote at AISA CyberCon. The conversation reveals what actually happens when a CISO faces the unthinkable—and why the relationships you build before crisis hits determine whether you survive it.

Tim became the first CISO ever charged by the SEC, a distinction nobody wants but one that shaped his mission: if sharing his experience helps even one security leader prepare better, then the entire saga becomes worthwhile. He's candid about the settlement process still underway, the emotional weight of having strangers ask for selfies, and the mental toll that landed him in a Zurich hospital with a heart attack the week his SEC charges were announced.

"For them to hear something and hear the context—to hear us taking six months off development, 400 engineers focused completely on security for six months in pure focus—when you say it with emotion, it conveys the real cost," Tim explained. Written communication failed during the incident. People needed to talk, to hear, to feel the weight of decisions being made in real time.

What saved SolarWinds wasn't just technical capability. It was implicit trust. The war room team operated without second-guessing each other. The CIO handled deployment and investigation. Engineering figured out how the build system was compromised. Marketing and legal managed their domains. Tim didn't waste cycles checking their work because trust was already built.

"If we didn't have that, we would've been second-guessing what other people did," he said. That trust came from relationships established long before December 2020, from a culture where people knew their roles and respected each other's expertise.

Now Tim's focused on mentoring the next generation through the RSA Conference CSO Bootcamp, helping aspiring CISOs and security leaders at smaller companies build the knowledge, community, and relationships they'll need when—not if—their own December 12th arrives. He tailors every talk to his audience, never delivering the same speech twice. Context matters in crisis, but it matters in communication too.

Australia played a significant role during SolarWinds' incident response, with the Australian government partnering closely in January 2021. Tim hadn't been back in a decade, making his return to Melbourne for CyberCon particularly meaningful. He's there to share lessons earned the hardest way possible, and to remind security leaders that stress management, safe spaces, and knowing when to compartmentalize aren't luxuries—they're survival skills.

His keynote covers the different stages of incident response, how culture drives crisis outcomes, and why the teams that step up matter more than the ones that run away. For anyone leading security teams, Tim's message is clear: build trust now, before you need it.

AISA CyberCon Melbourne runs October 15-17, 2025
Coverage provided by ITSPmagazine

GUEST:
Tim Brown, CISO at SolarWinds | On LinkedIn: https://www.linkedin.com/in/tim-brown-ciso/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to share an Event Briefing as part of our event coverage? Learn More
Everyone Is Protecting My Password, But Who Is Protecting My Toilet Paper? - Interview with Amberley Brady | AISA CyberCon Melbourne 2025 Coverage | On Location with Sean Martin and Marco Ciappelli

AISA CyberCon Melbourne | October 15-17, 2025

Empty shelves trigger something primal in us now. We've lived through the panic, the uncertainty, the realization that our food supply isn't as secure as we thought. Amberley Brady hasn't forgotten that feeling, and she's turned it into action.

Speaking with her from Florence to Sydney ahead of AISA CyberCon in Melbourne, I discovered someone who came to cybersecurity through an unexpected path—studying law, working in policy, but driven by a singular passion for food security. When COVID-19 hit Australia in early 2020 and grocery store shelves emptied, Amberley couldn't shake the question: what happens if this keeps happening?

Her answer was to build realfoodprice.com.au, a platform tracking food pricing transparency across Australia's supply chain. It's based on the Hungarian model, which within three months saved consumers 50 million euros simply by making prices visible from farmer to wholesaler to consumer. The markup disappeared almost overnight when transparency arrived.

"Once you demonstrate transparency along the supply chain, you see where the markup is," Amberley explained. She gave me an example that hit home: watermelon farmers were getting paid 40 cents per kilo while their production costs ran between $1.00 and $1.50. Meanwhile, consumers paid $2.50 to $2.99 year-round. Someone in the middle was profiting while farmers lost money on every harvest.

But this isn't just about fair pricing—it's about critical infrastructure that nobody's protecting. Australia produces food for 70 million people, far more than its own population needs. That food moves through systems, across borders, through supply chains that depend entirely on technology most farmers never think about in cybersecurity terms.

The new autonomous tractors collecting soil data? That information goes somewhere. The sensors monitoring crop conditions? Those connect to systems someone else controls. China recognized this vulnerability years ago—with 20% of the world's population but only 7% of arable land, they understood that food security is national security.

At CyberCon, Amberley is presenting two sessions that challenge the cybersecurity community to expand their thinking. "Don't Outsource Your Thinking" tackles what she calls "complacency creep"—our growing trust in AI that makes us stop questioning, stop analyzing with our gut instinct. She argues for an Essential Nine in Australia's cybersecurity framework, adding the human firewall to the technical Essential Eight.

Her second talk, cheekily titled "Everyone Is Protecting My Password, But No One's Protecting My Toilet Paper," addresses food security directly. It's provocative, but that's the point. We saw what happened in Japan recently with the rice crisis—the same panic buying, the same distrust, the same empty shelves that COVID taught us to fear.

"We will run to the store," Amberley said. "That's going to be human behavior because we've lived through that time." And here's the cybersecurity angle: those panics can be manufactured. A fake image of empty shelves, an AI-generated video, strategic disinformation—all it takes is triggering that collective memory.

Amberley describes herself as an early disruptor in the agritech cybersecurity space, and she's right. Most cybersecurity professionals think about hospitals, utilities, financial systems. They don't think about the autonomous vehicles in fields, the sensor networks in soil, the supply chain software moving food across continents.

But she's starting the conversation, and CyberCon's audience—increasingly diverse, including people from HR, risk management, and policy—is ready for it. Because at the end of the day, everyone has to eat. And if we don't start thinking about the cyber vulnerabilities in how we grow, move, and price food, we're leaving our most basic need unprotected.

AISA CyberCon Melbourne runs October 15-17, 2025
Virtual coverage provided by ITSPmagazine

GUEST:
Amberley Brady, Food Security & Cybersecurity Advocate, Founder of realfoodprice.com.au | On LinkedIn: https://www.linkedin.com/in/amberley-b-a62022353/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to share an Event Briefing as part of our event coverage? Learn More
SEASON 4 PREMIERE: The world of Putting 2&2 Together has been turned upside down. Hot Off the Press is in chaos. Gunshots have been fired, but who shot who? All we can do is pick up the pieces and go on before something else goes wrong. — And who says it won't?

Based on the play Two and Two Together by Peter Cosmas Sofronas. Written and Directed by Peter Cosmas Sofronas. Produced by Peter Cosmas Sofronas with Dan Murray. Starring (in alphabetical order) Samuel Berbel as Max, Gordon Ellis as Sean Martin, Adam Everett as the News Anchor, Matthew Garlin as Paul Shaw, Nick Gould as Matt Sharpe, Adam Heroux as David Sharpe, Dan Murray as Tommy Hanson, Alexander Pirnie as Walter Gettelman, and Rachael Rabinovitz as Hayley Gettelman. Credits and Narration by Leonard Caplan. Sound Engineering by Dan Murray. Sound Editing by Peter Cosmas Sofronas. Theme Music by Valerie Forgione.

Support the show

Scripts of Two and Two Together and the first two seasons of Putting 2&2 Together can be purchased at Amazon.com. Merchandise available at TeeSpring. Donations can be made at Buy Me a Coffee. For further information, please visit puttingtwoandtwotogether.com.
Beyond Blame: Navigating the Digital World with Our Kids

AISA CyberCon Melbourne | October 15-17, 2025

There's something fundamentally broken in how we approach online safety for young people. We're quick to point fingers—at tech companies, at schools, at kids themselves—but Jacqueline Jayne (JJ) wants to change that conversation entirely.

Speaking with her from Florence while she prepared for her session at AISA CyberCon Melbourne this week, it became clear that JJ understands what many in the cybersecurity world miss: this isn't a technical problem that needs a technical solution. It's a human problem that requires us to look in the mirror.

"The online world reflects what we've built for them," JJ told me, referring to our generation. "Now we need to step up and help fix it."

Her session, "Beyond Blame: Keeping Our Kids Safe Online," tackles something most cybersecurity professionals avoid—the uncomfortable truth that being an IT expert doesn't automatically make you equipped to protect the young people in your life. Last year's presentation at CyberCon drew a full house, with nearly every hand raised when she asked who came because of a kid in their world.

That's the fascinating contradiction JJ exposes: rooms full of cybersecurity professionals who secure networks and defend against sophisticated attacks, yet find themselves lost when their own children navigate TikTok, Roblox, or encrypted messaging apps.

The timing couldn't be more relevant. With Australia implementing a social media ban for anyone under 16 starting December 10, 2025, and similar restrictions appearing globally, parents and carers face unprecedented challenges. But as JJ points out, banning isn't understanding, and restriction isn't education.

One revelation from our conversation particularly struck me—the hidden language of emojis. What seems innocent to adults carries entirely different meanings across demographics, from teenage subcultures to, disturbingly, predatory networks online. An explosion emoji doesn't just mean "boom" anymore. Context matters, and most adults are speaking a different digital dialect than their kids.

JJ, who successfully guided her now 19-year-old son through the gaming and social media years, isn't offering simple solutions because there aren't any. What she provides instead are conversation starters, resources tailored to different age groups, and even AI prompts that parents can customize for their specific situations.

The session reflects a broader shift happening at events like CyberCon. It's no longer just IT professionals in the room. HR representatives, risk managers, educators, and parents are showing up because they've realized that digital safety doesn't respect departmental boundaries or professional expertise.

"We were analog brains in a digital world," JJ said, capturing our generational position perfectly. But today's kids? They're born into this interconnectedness, and COVID accelerated everything to a point where taking it away isn't an option.

The real question isn't who to blame. It's what role each of us plays in creating a safer digital environment. And that's a conversation worth having—whether you're at the Convention and Exhibition Centre in Melbourne this week or joining virtually from anywhere else.

AISA CyberCon Melbourne runs October 15-17, 2025
Virtual coverage provided by ITSPmagazine

___________

GUEST:
Jacqueline (JJ) Jayne, Reducing human error in cyber and teaching 1 million people online safety | On LinkedIn: https://www.linkedin.com/in/jacquelinejayne/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to share an Event Briefing as part of our event coverage? Learn More
During his keynote at SecTor 2025, HD Moore, founder and CEO of runZero and widely recognized for creating Metasploit, invites the cybersecurity community to rethink the foundational “rules” we continue to follow—often without question. In conversation with Sean Martin and Marco Ciappelli for ITSPmagazine's on-location event coverage, Moore breaks down where our security doctrines came from, why some became obsolete, and which ones still hold water.

One standout example? The rule to “change your passwords every 30 days.” Moore explains how this outdated guidance—rooted in assumptions from the early 2000s when password sharing was rampant—led to predictable patterns and frustrated users. Today, the advice has flipped: focus on strong, unique passwords per service, stored securely via password managers.

But this keynote isn't just about passwords. Moore uses this lens to explore how many security “truths” were formed in response to technical limitations or outdated behaviors—things like shared network trust, brittle segmentation, and fragile authentication models. As technology matures, so too should the rules. Enter passkeys, hardware tokens, and enclave-based authentication. These aren't just new tools—they're a fundamental shift in where and how we anchor trust.

Moore also calls out an uncomfortable truth: the very products we rely on to protect our systems—firewalls, endpoint managers, and security appliances—are now among the top vectors for breach, per Mandiant's latest report. That revelation struck a chord with conference attendees, who appreciated Moore's willingness to speak plainly about systemic security debt.

He also discusses the inescapable vulnerabilities in AI agent flows, likening prompt injection attacks to the early days of cross-site scripting. The tech itself invites risk, he warns, and we'll need new frameworks—not just tweaks to old ones—to manage what comes next.

This conversation is a must-listen for anyone questioning whether our security playbooks are still fit for purpose—or simply carried forward by habit.

___________

GUEST:
HD Moore, Founder and CEO of runZero | On LinkedIn: https://www.linkedin.com/in/hdmoore/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com

RESOURCES:
Keynote: The Once and Future Rules of Cybersecurity: https://www.blackhat.com/sector/2025/briefings/schedule/#keynote-the-once-and-future-rules-of-cybersecurity-49596
Learn more and catch more stories from our SecTor 2025 coverage: https://www.itspmagazine.com/cybersecurity-technology-society-events/sector-cybersecurity-conference-toronto-2025
Mandiant M-Trends Breach Report: https://cloud.google.com/blog/topics/threat-intelligence/m-trends-2025/
OPM Data Breach Summary: https://oversight.house.gov/report/opm-data-breach-government-jeopardized-national-security-generation/
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to share an Event Briefing as part of our event coverage? Learn More
In this issue of the Future of Cyber newsletter, Sean Martin digs into a topic that's quietly reshaping how software gets built—and how it breaks: the rise of AI-powered coding tools like ChatGPT, Claude, and GitHub Copilot.

These tools promise speed, efficiency, and reduced boilerplate—but what are the hidden trade-offs? What happens when the tools go offline, or when the systems built through them are so abstracted that even the engineers maintaining them don't fully understand what they're working with?

Drawing from conversations across the cybersecurity, legal, and developer communities—including a recent legal tech conference where law firms are empowering attorneys to “vibe code” internal tools—this article doesn't take a hard stance. Instead, it raises urgent questions:

Are we creating shadow logic no one can trace?
Do developers still understand the systems they're shipping?
What happens when incident response teams face AI-generated code with no documentation?
Are AI-generated systems introducing silent fragility into critical infrastructure?

The piece also highlights insights from a recent podcast conversation with security architect Izar Tarandach, who compares AI coding to junior development: fast and functional, but in need of serious oversight. He warns that organizations rushing to automate development may be building brittle systems on shaky foundations, especially when security practices are assumed rather than applied.

This is not a fear-driven screed or a rejection of AI. Rather, it's a call to assess new dependencies, rethink development accountability, and start building contingency plans before outages, hallucinations, or misconfigurations force the issue.

If you're a CISO, developer, architect, risk manager—or anyone involved in software delivery or security—this article is designed to make you pause, think, and ideally, respond.
⬥GUEST⬥
Pieter VanIperen, CISO and CIO of AlphaSense | On LinkedIn: https://www.linkedin.com/in/pietervaniperen/

⬥HOST⬥
Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

⬥EPISODE NOTES⬥
Real-World Principles for Real-World Security: A Conversation with Pieter VanIperen

Pieter VanIperen, the Chief Information Security and Technology Officer at AlphaSense, joins Sean Martin for a no-nonsense conversation that strips away the noise around cybersecurity leadership. With experience spanning media, fintech, healthcare, and SaaS—including roles at Salesforce, Disney, Fox, and Clear—Pieter brings a rare clarity to what actually works in building and running a security program that serves the business.

He shares why being “comfortable being uncomfortable” is an essential trait for today's security leaders—not just reacting to incidents, but thriving in ambiguity. That distinction matters, especially when every new technology trend, vendor pitch, or policy update introduces more complexity than clarity. Pieter encourages CISOs to lead by knowing when to go deep and when to zoom out, especially in areas like compliance, AI, and IT operations where leadership must translate risks into outcomes the business cares about.

One of the strongest points he makes is around threat intelligence: it must be contextual. “Generic threat intel is an oxymoron,” he argues, pointing out how the volume of tools and alerts often distracts from actual risks. Instead, Pieter advocates for simplifying based on principles like ownership, real impact, and operational context. If a tool hasn't been turned on for two months and no one noticed, he says, “do you even need it?”

The episode also offers frank insight into vendor relationships. Pieter calls out the harm in trying to “tell a CISO what problems they have” rather than listening. He explains why true partnerships are based on trust, humility, and a long-term commitment—not transactional sales quotas. “If you disappear when I need you most, you're not part of the solution,” he says.

For CISOs and vendors alike, this episode is packed with perspective you can't Google. Tune in to challenge your assumptions—and maybe your entire security stack.

⬥SPONSORS⬥
ThreatLocker: https://itspm.ag/threatlocker-r974

⬥RESOURCES⬥

⬥ADDITIONAL INFORMATION⬥
✨ More Redefining CyberSecurity Podcast:
SBOMs were supposed to be the ingredient label for software—bringing transparency, faster response, and stronger trust. But reality shows otherwise. Fewer than 1% of GitHub projects have policy-driven SBOMs. Only 15% of developer SBOM questions get answered. And while 86% of EU firms claim supply chain policies, just 47% actually fund them.

So why do SBOMs stall as compliance artifacts instead of risk-reduction tools? And what happens when they do work?

In this episode of AppSec Contradictions, Sean Martin examines:
Why SBOM adoption is lagging
The cost of static SBOMs for developers, AppSec teams, and business leaders
Real-world examples where SBOMs deliver measurable value
How AISBOMs are extending transparency into AI models and data

Catch the full companion article in the Future of Cybersecurity newsletter for deeper analysis and more research.
Podcast: REAL Mentors Podcast
Episode: How To Be Successful in Work AND in Life | Chad Willardson | Ep. 50
Pub date: 2025-09-30

Chad Willardson went from top 2% at Merrill Lynch, speaking on Wall Street stages, to walking away from the corporate “matrix” to build a life where family and freedom come first. Today, he's the founder of Pacific Capital, a fiduciary wealth advisory firm serving entrepreneurs and families, a five-time author, and a leading voice challenging the hustle culture with a new framework: work-life integration.

Raised with strong family and faith values but no entrepreneurial wealth, Chad learned early that true success isn't just about a big bank account—it's about being present at home and building wealth that lasts across generations. His books Smart, Not Spoiled and Why Work-Life Balance Is a Lie reveal how to raise grounded kids, manage money differently, and create both financial freedom and family legacy at the same time.

This episode explores:
Growing up in a family that valued presence and faith over money—and how that shaped his path
Turning down lucrative Wall Street jobs as a newlywed to prioritize marriage and fatherhood
Leaving Merrill Lynch after 9 years despite high performance, and the freedom entrepreneurship unlocked
Why entrepreneurs are “wealth wired differently”—and why traditional advice fails them
Smart, Not Spoiled: practical ways to raise kids who understand money, work ethic, and value creation
Why financial literacy should be taught in schools from 1st grade through college
The viral Babysitter Guide that became a parenting and leadership lesson
Missing the Kentucky Derby “main event” to make his son's playoff game—and why he'd do it 100 out of 100 times
Work-life integration: putting family on the calendar first, then business
How money mindsets are inherited—and how anyone can rewire theirs to build wealth
Why financial freedom is possible for everyone, regardless of background

This episode is about faith, family, and freedom. Chad's story will challenge you to stop chasing hustle culture, start defining success on your own terms, and create wealth without sacrificing the relationships that matter most.
When we talk about AI at cybersecurity conferences these days, one term is impossible to ignore: agentic AI. But behind the excitement around AI-driven productivity and autonomous workflows lies an unresolved—and increasingly urgent—security issue: identity.

In this episode, Sean Martin and Marco Ciappelli speak with Cristin Flynn Goodwin, keynote speaker at SecTor 2025, about the intersection of AI agents, identity management, and legal risk. Drawing from decades at the center of major security incidents—most recently as the head cybersecurity lawyer at Microsoft—Cristin frames today's AI hype within a longstanding identity crisis that organizations still haven't solved.

Why It Matters Now
Agentic AI changes the game. AI agents can act independently, replicate themselves, and disappear in seconds. That's great for automation—but terrifying for risk teams. Cristin flags the pressing need to identify and authenticate these ephemeral agents. Should they be digitally signed? Should there be a new standards body managing agent identities? Right now, we don't know.

Meanwhile, attackers are already adapting. AI tools are being used to create flawless phishing emails, spoofed banking agents, and convincing digital personas. Add that to the fact that many consumers and companies still haven't implemented strong MFA, and the risk multiplier becomes clear.

The Legal View
From a legal standpoint, Cristin emphasizes how regulations like New York's DFS Cybersecurity Regulation are putting pressure on CISOs to tighten IAM controls. But what about individuals? “It's an unfair fight,” she says—no consumer can outpace a nation-state attacker armed with AI tooling.

This keynote preview also calls attention to shadow AI agents: tools employees may create outside the control of IT or security. As Cristin warns, they could become “offensive digital insiders”—another dimension of the insider threat amplified by AI.

Looking Ahead
This is a must-listen episode for CISOs, security architects, policymakers, and anyone thinking about AI safety and digital trust. 
From the potential need for real-time, verifiable agent credentials to the looming collision of agentic AI with quantum computing, this conversation kicks off SecTor 2025 with urgency and clarity.Catch the full episode now, and don't miss Cristin's keynote on October 1.___________Guest:Cristin Flynn Goodwin, Senior Consultant, Good Harbor Security Risk Management | On LinkedIn: https://www.linkedin.com/in/cristin-flynn-goodwin-24359b4/Hosts:Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.comMarco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com___________Episode SponsorsThreatLocker: https://itspm.ag/threatlocker-r974BlackCloak: https://itspm.ag/itspbcweb___________ResourcesKeynote: Agentic AI and Identity: The Biggest Problem We're Not Solving: https://www.blackhat.com/sector/2025/briefings/schedule/#keynote-agentic-ai-and-identity-the-biggest-problem-were-not-solving-49591Learn more and catch more stories from our SecTor 2025 coverage: https://www.itspmagazine.com/cybersecurity-technology-society-events/sector-cybersecurity-conference-toronto-2025New York Department of Financial Services Cybersecurity Regulation: https://www.dfs.ny.gov/industry_guidance/cybersecurityGood Harbor Security Risk Management (Richard Clarke's firm): https://www.goodharbor.net/Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to share an Event Briefing as part of our event coverage? Learn More
Neoborn Caveman, your green-tea-slurping host, invites his Purple Rabbit crew (that's you, not the parasitic overlords) to an open tea-house conversation. Sip along as we explore government overreach, from the 1952 UK ID card abolition to modern digital ID scams like Oracle's TikTok ties threatening sovereignty. Neoborn shares personal health journeys, promoting natural remedies like green tea and rejecting victim-playing culture. He calls out media manipulation—think asteroid fear-mongering and AI truth-twisting—and warns against generalizing groups. From Eurovision boycotts to Canadian policy oversteps, this episode urges preserving stories to counter division, learning from history, and embracing your unique worth to stay free-spirited. Gather for more unfiltered episodes at patreon.com/theneoborncavemanshow. With the special appearance of Sean Martin (only in the Patreon episode). Music guests are Sweet Water, Broken Colors, pMad and many others.

Key Takeaways
• Question digital IDs and government motives; the UK's 1952 ID abolition shows control can be reversed.
• Data privacy is under threat; Oracle-TikTok deals and Mediterranean data schemes demand resistance.
• Natural remedies, like green tea, can support health, as shown in Neoborn's personal experiments.
• Media and AI distort reality; bots and fear-mongering (e.g., Apophis asteroid) undermine truth—rely on logic.
• Human connections through stories heal division and isolation, fostering real bonds.
• Storytelling preserves personal and historical truths, countering manipulation and neglect.
• Generalizing groups (ethnicity, politics) fuels hate—judge actions, not people, to avoid historical traps.
• Historical lessons (UK IDs, population exchanges) warn against unchecked power—act proactively.
• Embrace your unique value; growth through trials silences naysayers, inner and outer.

Sound Bites
• “Are we the lost souls or who we are? Are we the victims of the new Project Blue Beam coming?”
• “I don't need drugs to breathe. It's interesting, right?”
• “Don't generalize. If you say all Chinese are bad, then what about Jackie Chan?”
• “Only the unloved hate, the immature.”
• “You are special, you are amazing, you are one of a kind.”
• “Prevent before it happens. You know it's a scheme, a scam and a political maneuver.”

Timestamps
00:00 Welcome to The Neoborn Caveman Show
00:47 Exploring Project Blue Beam and Psyops
01:12 Green Tea Rituals and Freedom's Erosion
05:15 Personal Challenges and Societal Issues
07:40 Social Media and Asteroid Fear-Mongering
10:04 Digital IDs and Government Overreach
12:24 Data Privacy and Tech Control
14:47 Government Lies and Public Deception
17:16 Canadian Overreach and Freedom Convoy
19:39 Natural Remedies and Big Pharma Critique
21:43 Media Manipulation and AI Truth-Twisting
29:51 Open Tea House Conversations
32:13 Human Connections Over News and Noise
34:25 Kids' Punk Rock and Creative Expression
36:30 Building Real Human Connections
38:54 Storytelling to Preserve Humanity
40:48 Excuses vs. Genuine Connection
46:07 History's Dark Lessons on Control
48:30 Eurovision Boycotts and Political Art
50:51 Rejecting Generalizations in Israel-Palestine
55:21 Rejecting Generalizations and Division
57:14 Historical Context for Unity
59:44 Only the Unloved Hate
01:00:39 UK's ID Card History Lesson
01:04:17 Resisting Digital Control Now
01:05:52 Embracing Your Unique Greatness

Humanity centered satirical takes on the world & news + music - with a marble mouthed host. Free speech marinated in comedy. Supporting Purple Rabbits.

Hosted on Acast. See acast.com/privacy for more information.
Neoborn Caveman invites Sean Martin to talk about his current projects, including music videos that incorporate storytelling and personal experiences from his military background. He emphasizes the importance of addressing social justice issues and the role of government accountability. They also talk about mental health, particularly coping with PTSD, and the impact of social media on public perception and confirmation bias. Sean shares insights on the music industry, the creative process behind his upcoming album, and the healing power of music.

Key Takeaways
• Storytelling in music videos can create deeper connections.
• Military experiences can inform artistic expression and social commentary.
• Coping with PTSD requires ongoing effort and self-investment.
• Perspective is crucial in understanding emotions and reactions.
• Social media can amplify confirmation bias and misinformation.
• Narcissism is increasingly prevalent in society due to mass communication.
• Music serves as a powerful tool for healing and connection.
• The music industry has changed, requiring new strategies for success.
• Empathy and compassion are essential for societal improvement.
• Art should challenge norms and provoke thought.

Sound Bites
• "You can't just follow orders blindly."
• "Coping with PTSD is a constant work."
• "Life isn't meant to be easy."

Keywords: music, storytelling, military, PTSD, social justice, mental health, narcissism, music industry, creativity, healing

Humanity centered satirical takes on the world & news + music - with a marble mouthed host. Free speech marinated in comedy. Supporting Purple Rabbits.

Hosted on Acast. See acast.com/privacy for more information.
⬥GUEST⬥Aunshul Rege, Director at The CARE Lab at Temple University | On Linkedin: https://www.linkedin.com/in/aunshul-rege-26526b59/⬥CO-HOST⬥Julie Haney, Computer scientist and Human-Centered Cybersecurity Program Lead, National Institute of Standards and Technology | On LinkedIn: https://www.linkedin.com/in/julie-haney-037449119/⬥HOST⬥Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com⬥EPISODE NOTES⬥Cybersecurity Is for Everyone — If We Teach It That WayCybersecurity impacts us all, yet most people still see it as a tech-centric domain reserved for experts in computer science or IT. Dr. Aunshul Rege, Associate Professor in the Department of Criminal Justice at Temple University, challenges that perception through her research, outreach, and education programs — all grounded in community, empathy, and human behavior.In this episode, Dr. Rege joins Sean Martin and co-host Julie Haney to share her multi-layered approach to cybersecurity awareness and education. Drawing from her unique background that spans computer science and criminology, she explains how understanding human behavior is critical to understanding and addressing digital risk.One powerful initiative she describes brings university students into the community to teach cyber hygiene to seniors — a demographic often left out of traditional training programs. These student-led sessions focus on practical topics like scams and password safety, delivered in clear, respectful, and engaging ways. The result? Not just education, but trust-building, conversation, and long-term community engagement.Dr. Rege also leads interdisciplinary social engineering competitions that invite students from diverse academic backgrounds — including theater, nursing, business, and criminal justice — to explore real-world cyber scenarios. These events prove that you don't need to code to contribute meaningfully to cybersecurity. You just need curiosity, communication skills, and a willingness to learn.Looking ahead, Temple University is launching a new Bachelor of Arts in Cybersecurity and Human Behavior — a program that weaves in community engagement, liberal arts, and applied practice to prepare students for real-world roles beyond traditional technical paths.If you're a security leader looking to improve awareness programs, a university educator shaping the next generation, or someone simply curious about where you fit in the cyber puzzle, this episode offers a fresh perspective: cybersecurity works best when it's human-first.⬥SPONSORS⬥ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥Dr. Aunshul Rege is an Associate Professor here, and much of her work is conducted under this department: https://liberalarts.temple.edu/academics/departments-and-programs/criminal-justiceTemple Digital Equity Plan (2022): https://www.phila.gov/media/20220412162153/Philadelphia-Digital-Equity-Plan-FINAL.pdfTemple University Digital Equity Center / Digital Access Center: https://news.temple.edu/news/2022-12-06/temple-launches-digital-equity-center-north-philadelphiaNICE Cybersecurity Workforce Framework: https://www.nist.gov/itl/applied-cybersecurity/nice/nice-framework-resource-center⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity Podcast:
Sean Martin is the lead vocalist, guitarist, and driving force behind The Quarantined, a band renowned for its raw, unflinching sound and fearless exploration of challenging subjects. A songwriter who isn't afraid to confront trauma, injustice, and the darker corners of the human experience, Sean brings an intensity and honesty to his music that connects deeply with listeners.Much of his writing is rooted in his own battle with PTSD, which has shaped both his perspective and his art. Songs like “Shadow,” written during a time of personal crisis, channel the weight of dread, intrusive thoughts, and sleepless nights into powerful, cathartic music. For Sean, creating is more than just making records—it's a way of reclaiming control, pushing back against oppressive systems, and transforming pain into something that inspires resilience.On stage and in the studio, Sean delivers with unrelenting passion, blending heavy riffs, haunting melodies, and lyrics that don't shy away from uncomfortable truths. His vision for The Quarantined goes beyond just music; it's about sparking awareness, encouraging defiance against injustice, and giving a voice to those struggling in silence.Highlights from Toby Gribben's Friday afternoon show on Shout Radio. Featuring chat with top showbiz guests. Hosted on Acast. See acast.com/privacy for more information.
The decision to leave a successful corporate position and start a company requires more than just identifying a market opportunity. For Shankar Somasundaram, it required witnessing firsthand how traditional cybersecurity approaches consistently failed in the environments that matter most to society: hospitals, manufacturing plants, power facilities, and critical infrastructure.Somasundaram's path to founding Asimily began with diverse technical experience spanning telecommunications and early machine learning development. This foundation proved essential when he transitioned to cybersecurity, eventually building and growing the IoT security division at a major enterprise security company.During his corporate tenure, Somasundaram gained direct exposure to security challenges across healthcare systems, industrial facilities, utilities, manufacturing plants, and oil and gas operations. Each vertical revealed the same fundamental problem: existing security solutions were designed for traditional IT environments where confidentiality and integrity took precedence, but operational technology environments operated under entirely different rules.The mismatch became clear through everyday operational realities. Hospital ultrasound machines couldn't be taken offline during procedures for security updates. Manufacturing production lines couldn't be rebooted for patches without scheduling expensive downtime. Power plant control systems required continuous availability to serve communities. These environments prioritized operational continuity above traditional security controls.Beyond technical challenges, Somasundaram observed a persistent communication gap between security and operations teams. IT security professionals spoke in terms of vulnerabilities and patch management. Operations teams focused on uptime, safety protocols, and production schedules. Neither group had effective frameworks for translating their concerns into language the other could understand and act upon.This divide created frustration for Chief Security Officers who understood risks existed but lacked clear paths to mitigation that wouldn't disrupt critical business operations. Organizations could identify thousands of vulnerabilities across their operational technology environments, but struggled to prioritize which issues actually posed meaningful risks given their specific operational contexts.Somasundaram recognized an opportunity to approach this problem differently. Rather than building another vulnerability scanner or forcing operational environments to conform to IT security models, he envisioned a platform that would provide contextual risk analysis and actionable mitigation strategies tailored to operational requirements.The decision to leave corporate security and start Asimily wasn't impulsive. Somasundaram had previous entrepreneurial experience and understood the startup process. He waited for the right convergence of market need, personal readiness, and strategic opportunity. When corporate priorities shifted through acquisitions, the conditions aligned for his departure.Asimily's founding mission centered on bridging the gap between operational technology and information technology teams. The company wouldn't just build another security tool; it would create a translation layer enabling different organizational departments to collaborate effectively on risk reduction.This approach required understanding multiple stakeholder perspectives within client organizations. 
Sometimes the primary user would be a Chief Information Security Officer. Other times, it might be a manufacturing operations head managing production floors, or a clinical operations director in healthcare. The platform needed to serve all these perspectives while maintaining technical depth.Somasundaram's product engineering background informed this multi-stakeholder approach. His experience with complex system integration—from telecommunications infrastructure to machine learning algorithms—provided insight into how security platforms could integrate with existing IT infrastructure while addressing operational technology requirements.The vision extended beyond traditional vulnerability management to comprehensive risk analysis considering operational context, business impact, and regulatory requirements. Rather than treating all vulnerabilities equally, Asimily would analyze each device within its specific environment and use case, providing organizations with actionable intelligence for informed decision-making.Somasundaram's entrepreneurial journey illustrates how diverse technical experience, industry knowledge, and strategic timing converge to address complex market problems. His transition from corporate executive to startup founder demonstrates how deep industry exposure can reveal opportunities to solve problems that established players might overlook or underestimate.Today, as healthcare systems, manufacturing facilities, and critical infrastructure become increasingly connected, the vision Somasundaram brought to Asimily's founding has proven both timely and necessary. The company's development reflects not just market demand, but the value of approaching familiar problems from fresh perspectives informed by real operational experience.Learn more about Asimily: itspm.ag/asimily-104921Note: This story contains promotional content. Learn more.Guest: Shankar Somasundaram, CEO & Founder, Asimily | On LinkedIn: https://www.linkedin.com/in/shankar-somasundaram-a7315b/Company Directory: https://www.itspmagazine.com/directory/asimilyResourcesLearn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programsNewsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-upAre you interested in telling your story?https://www.itspmagazine.com/telling-your-story Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Threat modeling is often called the foundation of secure software design—anticipating attackers, uncovering flaws, and embedding resilience before a single line of code is written. But does it really work in practice?

In this episode of AppSec Contradictions, Sean Martin explores why threat modeling so often fails to deliver:
• It's treated as a one-time exercise, not a continuous process
• Research shows teams who put risk first discover 2x more high-priority threats
• Yet fewer than 4 in 10 organizations use systematic threat modeling at scale

Drawing on insights from SANS, Forrester, and Gartner, Sean breaks down the gap between theory and reality—and why evolving our processes, not just our models, is the only path forward.
Join us on RadioBypass for an in-depth interview with Sean Martin, frontman of The Quarantined. Known for their hard-hitting riffs and socially conscious lyrics, The Quarantined bring a raw energy and message-driven sound to the rock scene.

In this conversation, Sean opens up about the band's origins, the meaning behind their songs, and the independent grind. We also discuss their latest single, “Shadow,” and what's next for The Quarantined. Whether you're here for the music, the stories, or the insight into the modern rock landscape, you won't want to miss this one. We will also be playing two killer songs from The Quarantined!

Highlights of this interview:
• The story behind The Quarantined's formation
• Songwriting with purpose and meaning
• Navigating the independent music scene
• Upcoming projects and what's on the horizon

Turn it up and discover Rock and Roll music that DESERVES to be heard—only on RadioBypass.
Wayne “Radar” Riley and Sean Martin joined Gary Williams on the show today. Williams began the show on Rory McIlroy's Irish Open win and the U.S. win in the Walker Cup. “Radar” Riley discussed how big the win was for McIlroy, the Americans playing in Europe, whether Scottie Scheffler should be there, and gave us a few Europeans that American golf fans should keep their eyes on next year. Martin talked about the Fall being a time for golf to go global, qualification for cards the following year, and giving the top players time off.
AI is everywhere in application security today — but instead of fixing the problem of false positives, it often makes the noise worse. In this first episode of AppSec Contradictions, Sean Martin explores why AI in application security is failing to deliver on its promises.

False positives dominate AppSec programs, with analysts wasting time on irrelevant alerts, developers struggling with insecure AI-written code, and business leaders watching ROI erode. Industry experts like Forrester and Gartner warn that without strong governance, AI risks amplifying chaos instead of clarifying risk.

This episode breaks down:
• Why 70% of analyst time is wasted on false positives
• How AI-generated code introduces new security risks
• What “alert fatigue” means for developers, security teams, and business leaders
• Why automating bad processes creates more noise, not less
Broadcasting from Florence and Los Angeles, I Had One of Those Conversations...

You know the kind—where you start discussing one thing and suddenly realize you're mapping the entire landscape of how different societies approach technology. That's exactly what happened when Rob Black and I connected across the Atlantic for the pilot episode of ITSPmagazine Europe: The Transatlantic Broadcast.

Rob was calling from what he optimistically described as "sunny" West Sussex (complete with biblical downpours and Four Seasons weather in one afternoon), while I enjoyed actual California sunshine. But this geographic distance perfectly captured what we were launching: a genuine exploration of how European perspectives on cybersecurity, technology, and society differ from—and complement—American approaches.

The conversation emerged from something we'd discovered at InfoSecurity Europe earlier this year. After recording several episodes together with Sean Martin, we realized we'd stumbled onto something crucial: most global technology discourse happens through an American lens, even when discussing fundamentally European challenges. Digital sovereignty isn't just a policy buzzword in Brussels—it represents a completely different philosophy about how democratic societies should interact with technology.

Rob Black: Bridging Defense Research and Digital Reality
Rob brings credentials that perfectly embody the European approach to cybersecurity—one that integrates geopolitics, human sciences, and operational reality in ways that purely technical perspectives miss. As UK Cyber Citizen of the Year 2024, he's recognized for contributions that span UK Ministry of Defense research on human elements in cyber operations, international relations theory, and hands-on work with university students developing next-generation cybersecurity leadership skills.

But what struck me during our pilot wasn't his impressive background—it was his ability to connect macro-level geopolitical cyber operations with the daily impossible decisions that Chief Information Security Officers across Europe face. These leaders don't see themselves as combatants in a digital war, but they're absolutely operating on front lines where nation-state actors, criminal enterprises, and hybrid threats converge.

Rob's international relations expertise adds crucial context that American cybersecurity discourse often overlooks. We're witnessing cyber operations as extensions of statecraft—the ongoing conflict in Ukraine demonstrates how narrative battles and digital infrastructure attacks interweave with kinetic warfare. European nations are developing their own approaches to cyber deterrence, often fundamentally different from American strategies.

European Values Embedded in Technology Choices
What emerged from our conversation was something I've observed but rarely heard articulated so clearly: Europe approaches technology governance through distinctly different cultural and philosophical frameworks than America. This isn't just about regulation—though the EU's leadership from GDPR through the AI Act certainly shapes global standards. It's about fundamental values embedded in technological choices.

Rob highlighted algorithmic bias as a perfect example. When AI systems are developed primarily in Silicon Valley, they embed specific cultural assumptions and training data that may not reflect European experiences, values, or diverse linguistic traditions. 
The implications cascade across everything from hiring algorithms to content moderation to criminal justice applications.

We discussed how this connects to broader patterns of technological adoption. I'd recently written about how the transistor radio revolution of the 1960s paralleled today's smartphone-driven transformation—both technologies were designed for specific purposes but adopted by users in ways inventors never anticipated. The transistor radio became a tool of cultural rebellion; smartphones became instruments of both connection and surveillance.

But here's what's different now: the stakes are global, the pace is accelerated, and the platforms are controlled by a handful of American and Chinese companies. European voices in these conversations aren't just valuable—they're essential for understanding how different democratic societies can maintain their values while embracing technological transformation.

The Sociological Dimensions Technology Discourse Misses
My background in political science and sociology of communication keeps pulling me toward questions that pure technologists might skip: How do different European cultures interpret privacy rights differently? Why do Nordic countries approach digital government services so differently than Mediterranean nations? What happens when AI training data reflects primarily Anglo-American cultural assumptions but gets deployed across 27 EU member states with distinct languages and traditions?

Rob's perspective adds the geopolitical layer that's often missing from cybersecurity conversations. We're not just discussing technical vulnerabilities—we're examining how different societies organize themselves digitally, how they balance individual privacy against collective security, and how they maintain democratic values while defending against authoritarian digital influence operations.

Perhaps most importantly, we're both convinced that the next generation of European cybersecurity leaders needs fundamentally different skills than previous generations. Technical expertise remains crucial, but they also need to communicate complex risks to non-technical decision-makers, operate comfortably with uncertainty rather than seeking perfect solutions, and understand that cybersecurity decisions are ultimately political decisions about what kind of society we want to maintain.

Why European Perspectives Matter Globally
Europe represents 27 different nations with distinct histories, languages, and approaches to technology governance, yet they're increasingly coordinating digital policies through EU frameworks. This complexity is fascinating and the implications are global. When Europe implements new AI regulations or data protection standards, Silicon Valley adjusts its practices worldwide.

But European perspectives are too often filtered through American media or reduced to regulatory footnotes in technology publications. We wanted to create space for European voices to explain their approaches in their own terms—not as responses to American innovation, but as distinct philosophical and practical approaches to technology's role in democratic society.

Rob pointed out something crucial during our conversation: we're living through a moment where "every concept that we've thought about in terms of how humans react to each other and how they react to the world around them now needs to be reconsidered in light of how humans react through a computer mediated existence." 
This isn't abstract philosophizing—it's the practical challenge facing policymakers, educators, and security professionals across Europe.

Building Transatlantic Understanding, Not Division
The "Transatlantic Broadcast" name reflects our core mission: connecting perspectives across borders rather than reinforcing them. Technology challenges—from cybersecurity threats to AI governance to digital rights—don't respect national boundaries. Solutions require understanding how different democratic societies approach these challenges while maintaining their distinct values and traditions.

Rob and I come from different backgrounds—his focused on defense research and international relations, mine on communication theory and sociological analysis—but we share curiosity about how technology shapes society and how society shapes technology in return. Sean Martin brings the American cybersecurity industry perspective that completes our analytical triangle.

Cross-Border Collaboration for European Digital Future
This pilot episode represents just the beginning of what we hope becomes a sustained conversation. We're planning discussions with European academics developing new frameworks for digital rights, policymakers implementing AI governance across member states, industry leaders building privacy-first alternatives to Silicon Valley platforms, and civil society advocates working to ensure technology serves democratic values.

We want to understand how digital transformation looks different across European cultures, how regulatory approaches evolve through multi-stakeholder processes, and how European innovation develops characteristics that reflect distinctly European values and approaches to technological development.

The Invitation to Continue This Conversation
Broadcasting from our respective sides of the Atlantic, we're extending an invitation to join this ongoing dialogue. Whether you're developing cybersecurity policy in Brussels, building startups in Berlin, teaching digital literacy in Barcelona, or researching AI ethics in Amsterdam, your perspective contributes to understanding how democratic societies can thrive in an increasingly digital world.

European voices aren't afterthoughts in global technology discourse—they're fundamental contributors to understanding how diverse democratic societies can maintain their values while embracing technological change. This conversation needs academic researchers, policy practitioners, industry innovators, and engaged citizens from across Europe and beyond.

If this resonates with your own observations about technology's role in society, subscribe to follow our journey as we explore these themes with guests from across Europe and the transatlantic technology community.

And if you want to dig deeper into these questions or share your own perspective on European approaches to cybersecurity and technology governance, I'd love to continue the conversation directly. Get in touch with us on Linkedin!

Marco Ciappelli
Broadcasting from Los Angeles (USA) & Florence (IT)
On Linkedin: https://www.linkedin.com/in/marco-ciappelli

Rob Black
Broadcasting from London (UK)
On Linkedin: https://www.linkedin.com/in/rob-black-30440819

Sean Martin
Broadcasting from New York City (USA)
On Linkedin: https://www.linkedin.com/in/imsmartin

The transatlantic conversation about technology, society, and democratic values starts now.
AI Dependency Crisis + EV Infrastructure Failures: Tech Reality Check 2025

When Two Infrastructure Promises Collide with Reality
The promise was simple: AI would augment human intelligence, and electric vehicles would transform transportation. The reality in 2025? Both are hitting infrastructure walls that expose uncomfortable truths about how technology actually scales.

Sean Martin and Marco Ciappelli didn't plan to connect these dots in their latest Random and Unscripted weekly recap, but the conversation naturally evolved from AI dependency concerns to electric vehicle infrastructure challenges—revealing how both represent the same fundamental problem: mistaking technological capability for systemic readiness.

"The AI is telling us what success looks like and we're measuring against that, and who knows if it's right or wrong," Sean observed, describing what's become an AI dependency crisis in cybersecurity teams. Organizations aren't just using AI as a tool; they're letting it define their decision-making frameworks without maintaining the critical thinking skills to evaluate those frameworks.

Marco connected this to their recent Black Hat analysis, describing the "paradox loop"—where teams lose both the ability to take independent action and think clearly because they're constantly feeding questions to AI, creating echo chambers of circular reasoning. "We're gonna be screwed," he said with characteristic directness. "We go back to something being magic again."

This isn't academic hand-wringing. Both hosts developed their expertise when understanding fundamental technology was mandatory—when you had to grasp cables, connections, and core systems to make anything work. Their concern is for teams that might never develop that foundational knowledge, mistaking AI convenience for actual competence.

The electric vehicle discussion, triggered by Marco's conversation with Swedish consultant Matt Larson, revealed parallel infrastructure failures. "Upgrading to electric vehicles isn't like updating software," Sean noted, recalling his own experience renting an EV and losing an hour to charging—"That's not how you're gonna sell it."

Larson's suggestion of an "Apollo Program" for EV infrastructure acknowledges what the industry often ignores: some technological transitions require massive, coordinated investment beyond individual company capabilities. The cars work; the surrounding ecosystem barely exists. Sound familiar to anyone implementing AI without considering organizational infrastructure?

From his Object First webinar on backup systems, Sean extracted a deceptively simple insight: immutability matters precisely because bad actors specifically target backups to enable ransomware success. "You might think you're safe and resilient until something happens and you realize you're not."

Marco's philosophical take—comparing immutable backups to never stepping in the same river twice—highlights why both cybersecurity and infrastructure transitions demand unchanging foundations even as everything else evolves rapidly.

The episode's most significant development was their expanded event coverage announcement. Moving beyond traditional cybersecurity conferences to cover IBC Amsterdam (broadcasting technology since 1967), automotive security events, gaming conferences, and virtual reality gatherings represents recognition that infrastructure challenges cross every industry.

"That's where things really get interesting," Sean noted about broader tech events. 
When cybersecurity professionals only discuss security in isolation, they miss how infrastructure problems manifest across music production, autonomous vehicles, live streaming, and emerging technologies.

Both AI dependency and EV infrastructure failures share the same root cause: assuming technological capability automatically translates to systemic implementation. The gap between "this works in a lab" and "this works in reality" represents the most critical challenge facing technology leaders in 2025.

Their call to action extends beyond cybersecurity: if you know about events that address infrastructure challenges at the intersection of technology and society, reach out. The "usual suspects" of security conferences aren't where these broader infrastructure conversations are happening.

What infrastructure gaps are you seeing between technology promises and implementation reality? Join the conversation on LinkedIn or connect through ITSPmagazine.
________________
Hosts links:
I had one of those conversations that reminded me why I'm so passionate about exploring the intersection of technology and society. Speaking with Mark Smith, a board member at IBC and co-lead of their accelerator program, I found myself transported back to my roots in communication and media studies, but with eyes wide open to what's coming next.Mark has spent over 30 years in media technology, including 23 years building Mobile World Congress in Barcelona. When someone with that depth of experience gets excited about what's happening now, you pay attention. And what's happening at IBC 2025 in Amsterdam this September is nothing short of a redefinition of how we create, distribute, and authenticate content.The numbers alone are staggering: 1,350 exhibitors across 14 halls, nearly 300 speakers, 45,000 visitors. But what struck me wasn't the scale—it's the philosophical shift happening in how we think about media production. We're witnessing television's centennial year, with the first demonstrations happening in 1925, and yet we're simultaneously seeing the birth of entirely new forms of creative expression.What fascinated me most was Mark's description of their Accelerator Media Innovation Program. Since 2019, they've run over 50 projects involving 350 organizations, creating what he calls "a safe environment" for collaboration. This isn't just about showcasing new gadgets—it's about solving real challenges that keep media professionals awake at night. In our Hybrid Analog Digital Society, the traditional boundaries between broadcaster and audience, between creator and consumer, are dissolving faster than ever.The AI revolution in media production particularly caught my attention. Mark spoke about "AI assistant agents" and "agentic AI" with the enthusiasm of someone who sees liberation rather than replacement. As he put it, "It's an opportunity to take out a lot of laborious processes." But more importantly, he emphasized that it's creating new jobs—who would have thought "AI prompter" would become a legitimate profession?This perspective challenges the dystopian narrative often surrounding AI adoption. Instead of fearing the technology, the media industry seems to be embracing it as a tool for enhanced creativity. Mark's excitement was infectious when describing how AI can remove the "boring" aspects of production, allowing creative minds to focus on what they do best—tell stories that matter.But here's where it gets really interesting from a sociological perspective: the other side of the screen. We talked about how streaming revolutionized content consumption, giving viewers unprecedented control over their experience. Yet Mark observed something I've noticed too—while the technology exists for viewers to be their own directors (choosing camera angles in sports, for instance), many prefer to trust the professional's vision. We're not necessarily seeking more control; we're seeking more relevance and authenticity.This brings us to one of the most critical challenges of our time: content provenance. In a world where anyone can create content that looks professional, how do we distinguish between authentic journalism and manufactured narratives? Mark highlighted their work on C2PA (content provenance initiative), developing tools that can sign and verify media sources, tracking where content has been manipulated.This isn't just a technical challenge—it's a societal imperative. As Mark noted, YouTube is now the second most viewed platform in the UK. 
When user-generated content competes directly with traditional media, we need new frameworks for understanding truth and authenticity. The old editorial gatekeepers are gone; we need technological solutions that preserve trust while enabling creativity.What gives me hope is the approach I heard from Mark and his colleagues. They're not trying to control technology's impact on society—they're trying to shape it consciously. The IBC Accelerator Program represents something profound: an industry taking responsibility for its own transformation, creating spaces for collaboration rather than competition, focusing on solving real problems rather than just building cool technology.The Google Hackfest they're launching this year perfectly embodies this philosophy. Young broadcast engineers and software developers working together on real challenges, supported by established companies like Formula E. It's not about replacing human creativity with artificial intelligence—it's about augmenting human potential with technological tools.As I wrapped up our conversation, I found myself thinking about my own journey from studying sociology of communication in a pre-internet world to hosting podcasts about our digital transformation. Technology doesn't just change how we communicate—it changes who we are as communicators, as creators, as human beings sharing stories.IBC 2025 isn't just a trade show; it's a glimpse into how we're choosing to redefine our relationship with media technology. And that choice—that conscious decision to shape rather than simply react—gives me genuine optimism about our Hybrid Analog Digital Society.Subscribe to Redefining Society and Technology Podcast for more conversations exploring how we're consciously shaping our technological future. Your thoughts and reflections always enrich these discussions.
⬥GUEST⬥Andy Ellis, Legendary CISO [https://howtociso.com] | On LinkedIn: https://www.linkedin.com/in/csoandy/⬥HOST⬥Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com⬥EPISODE NOTES⬥In this episode of Redefining CyberSecurity, host Sean Martin speaks with Andy Ellis, former CSO at Akamai and current independent advisor, about the shifting expectations of security leadership in today's SaaS-powered, AI-enabled business environment.Andy highlights that many organizations—especially mid-sized startups—struggle not because they lack resources, but because they don't know how to contextualize what security means to their business goals. Often, security professionals aren't equipped to communicate with executives or boards in a way that builds shared understanding. That's where advisors like Andy step in: not to provide a playbook, but to help translate and align.One of the core ideas discussed is the reframing of security as an enabler rather than a gatekeeper. With businesses built almost entirely on SaaS platforms and outsourced operations, IT and security should no longer be siloed. Andy encourages security teams to “own the stack”—not just protect it—by integrating IT management, vendor oversight, and security into a single discipline.The conversation also explores how AI and automation empower employees at every level to “vibe code” their own solutions, shifting innovation away from centralized control. This democratization of tech raises new opportunities—and risks—that security teams must support, not resist. Success comes from guiding, not gatekeeping.Andy shares practical ways CISOs can build influence, including a deceptively simple yet powerful technique: ask every stakeholder what security practice they hate the most and what critical practice is missing. These questions uncover quick wins that earn political capital—critical fuel for driving long-term transformation.From his “First 91 Days” guide for CISOs to his book 1% Leadership, Andy offers not just theory but actionable frameworks for influencing culture, improving retention, and measuring success in ways that matter.Whether you're a CISO, a founder, or an aspiring security leader, this episode will challenge how you think about the role security plays in business—and what it means to lead from the middle.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥Inspiring Post: https://www.linkedin.com/posts/csoandy_how-to-ciso-the-first-91-days-ugcPost-7330619155353632768-BXQT/Book: “How to CISO: The First 91-Day Guide” by Andy Ellis — https://howtociso.com/library/first-91-days-guide/Book: “1% Leadership: Master the Small Daily Habits that Build Exceptional Teams” — https://www.amazon.com/1-Leadership-Daily-Habits-Exceptional/dp/B0BSV7T2KZ⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity Podcast:
What happens when a cybersecurity incident requires legal precision, operational coordination, and business empathy—all at once? That's the core question addressed in this origin story with Bryan Marlatt, Chief Regional Officer for North America at CyXcel.Bryan brings over 30 years of experience in IT and cybersecurity, with a history as a CISO, consultant, and advisor. He now helps lead an organization that sits at the intersection of law, cyber, and geopolitics—an uncommon combination that reflects the complexity of modern risk. CyXcel was founded to address this reality head-on, integrating legal counsel, cybersecurity expertise, and operational insight into a single, business-first consulting model.Rather than treat cybersecurity as a checklist or a technical hurdle, Bryan frames it as a service that should start with the business itself: its goals, values, partnerships, and operating environment. That's why their engagements often begin with conversations with sales, finance, or operations—not just the CIO or CISO. It's about understanding what needs to be protected and why, before prescribing how.CyXcel supports clients before, during, and after incidents—ranging from tailored tabletop exercises to legal coordination during breach response and post-incident recovery planning. Their work spans critical sectors like healthcare, utilities, finance, manufacturing, and agriculture—where technology, law, and regulation often converge under pressure.Importantly, Bryan emphasizes the need for tailored guidance, not generic frameworks. He notes that many companies don't realize how incomplete their protections are until it's too late. In one example, he recounts a hospital system that chose to “pay the fine” rather than invest in cybersecurity—a decision that risks reputational and operational harm far beyond the regulatory penalty.From privacy laws and third-party contract reviews to incident forensics and geopolitical risk analysis, this episode reveals how cybersecurity consulting is evolving to meet a broader—and more human—set of business needs.Learn more about CyXcel: https://itspm.ag/cyxcel-922331Note: This story contains promotional content. Learn more.Guest: Bryan Marlatt, Chief Regional Officer (North America) at CyXcel | On LinkedIn: https://www.linkedin.com/in/marlattb/ResourcesLearn more and catch more stories from CyXcel: https://www.itspmagazine.com/directory/cyxcelLearn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programsNewsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-upAre you interested in telling your story?https://www.itspmagazine.com/telling-your-story
This year at Black Hat USA 2025, the conversation is impossible to escape: artificial intelligence. But while every vendor claims an AI-powered edge, the real question is how organizations can separate meaningful innovation from noise.

In our discussion with Evgeniy Kharam, former Vice President of Cybersecurity Architecture at Herjavec Group, Chief Strategy Officer (CSO) at Discern Security, and long-time security leader and author, the theme of AI confusion takes center stage. Evgeniy notes that CISOs and security architects don't have the time or resources to analyze what “AI” means in every product pitch. With over 4,000 vendors in the ecosystem, each layering its own flavor of AI, the burden falls on security leaders to distinguish hype from usable automation.

From Gondola Pitches to AI Overload
Evgeniy shares how his creative networking events—skiing, biking, and beyond—mirror the industry's need for genuine connection and trust. Just as his “gondola pitch” builds authentic engagement, buyers want clarity and honesty from technology providers. The proliferation of AI labels, however, makes that trust harder to establish.

Where AI Can Help
Evgeniy highlights areas where AI can reduce friction, from vulnerability management and detection to policy writing and compliance. Yet, even here, issues such as hallucinations, privacy tradeoffs, and ethics cannot be ignored. When AI begins influencing employee monitoring or analyzing sensitive data, organizations face difficult questions about fairness, transparency, and control.

The Unspoken Challenge: Surveillance and Trust
As we discuss the balance between employee privacy and corporate protection, it becomes clear that AI introduces new layers of surveillance. In Europe, cultural and legal boundaries create clear separation between personal and professional lives. In North America, the lines blur, raising ethical debates that may ultimately be tested in courts.

The takeaway? AI has the potential to unlock workflows that were previously too costly or complex. But without transparency, governance, and a commitment to responsible use, the “AI in everything” trend risks overwhelming the very leaders it is meant to help.___________Guest:Evgeniy Kharam, Chief Strategy Officer (CSO), Discern Security | On LinkedIn: https://www.linkedin.com/in/ekharam/Hosts:Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.comMarco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com___________Episode SponsorsThreatLocker: https://itspm.ag/threatlocker-r974BlackCloak: https://itspm.ag/itspbcwebAkamai: https://itspm.ag/akamailbwcDropzoneAI: https://itspm.ag/dropzoneai-641Stellar Cyber: https://itspm.ag/stellar-9dj3___________ResourcesLearn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25ITSPmagazine Webinar: What's Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year's Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conferenceCatch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to tell your Brand Story Briefing as part of our event coverage? Learn More
"We're Becoming Dumb and Numb": Why Black Hat 2025's AI Hype Is Killing Cybersecurity -- And Our Ability to Think
Random and Unscripted Weekly Update Podcast with Sean Martin and Marco Ciappelli
__________________
Summary
Sean and Marco dissect Black Hat USA 2025, where every vendor claimed to have "agentic AI" solutions. They expose how marketing buzzwords create noise that frustrates CISOs seeking real value. Marco references the Greek myth of Talos - an ancient AI robot that seemed invincible until one fatal flaw destroyed it - as a metaphor for today's overinflated AI promises. The discussion spirals into deeper concerns: are we becoming too dependent on AI decision-making? They warn about echo chambers, lowest common denominators, and losing our ability to think critically. The solution? Stop selling perfection, embrace product limitations, and keep humans in control.
__________________
10 Notable Quotes
Sean:
• "It's hard for them to siphon the noise. Sift through the noise, I should say, and figure out what the heck is really going on."
• "If we completely just use it for the easy button, we'll stop thinking and we won't use it as a tool to make things better."
• "We'll stop thinking and we won't use it as a tool to make our minds better, to make our decisions better."
• "We are told then that this is the reality. This is what good looks like."
• "Maybe there's a different way to even look at things. So it's kind of become uniform... a very low common denominator that is just good enough for everybody."
Marco:
• "Do you really wanna trust the weapon to just go and shoot everybody? At least you can tell it's a human factor and that's the people that ultimately decide."
• "If we don't make decision anymore, we're gonna turn out in a lot of those sci-fi stories, like the time machine where we become dumb."
• "We all perceive reality to be different from what it is, and then it creates a circular knowledge learning where we use AI to create the knowledge, then to ask the question, then to give the answers."
• "We're just becoming dumb and numb. More than dumb, but we become numb to everything else because we're just not thinking with our own head."
• "You're selling the illusion of security and that could be something that then you replicate in other industries."

Picture this: You walk into the world's largest cybersecurity conference, and every single vendor booth is screaming the same thing – "agentic AI." Different companies, different products, but somehow they all taste like the same marketing milkshake.

That's exactly what Sean Martin and Marco Ciappelli witnessed at Black Hat USA 2025, and their latest Random and Unscripted with Sean and Marco episode pulls no punches in exposing what's really happening behind the buzzwords.

"Marketing just took all the cool technology that each vendor had, put it in a blender and made a shake that just tastes the same," Marco reveals on Random and Unscripted with Sean and Marco, describing how the conference floor felt like one giant echo chamber where innovation got lost in translation.

But this isn't just another rant about marketing speak. The Random and Unscripted with Sean and Marco conversation takes a darker turn when Marco introduces the ancient Greek myth of Talos – a bronze giant powered by divine ichor who was tasked with autonomously defending Crete. Powerful, seemingly invincible, until one small vulnerability brought the entire system crashing down.

Sound familiar?

"Do you really wanna trust the weapon to just go and shoot everybody?" 
Marco asks, drawing parallels between ancient mythology and today's rush to hand over decision-making to AI systems we don't fully understand.Sean, meanwhile, talked to frustrated CISOs throughout the event who shared a common complaint: "It's hard for them to sift through the noise and figure out what the heck is really going on." When every vendor claims their AI is autonomous and perfect, how do you choose? How do you even know what you're buying?The real danger, they argue on Random and Unscripted with Sean and Marco, isn't just bad purchasing decisions. It's what happens when we stop thinking altogether."If we completely just use it for the easy button, we'll stop thinking and we won't use it as a tool to make our minds better," Sean warns. We risk settling for what he calls the "lowest common denominator" – a world where AI tells us what success looks like, and we never question whether we could do better.Marco goes even further, describing a "circular knowledge learning" trap where "we use AI to create the knowledge, then to ask the question, then to give the answers." The result? "We're just becoming dumb and numb. More than dumb, but we become numb to everything else because we're just not thinking with our own head."Their solution isn't to abandon AI – it's to get honest about what it can and can't do. "Stop looking for the easy button and stop selling the easy button," Marco urges vendors on Random and Unscripted with Sean and Marco. "Your product is probably as good as it is."Sean adds: "Don't be afraid to share your blemishes, share your weaknesses. Share your gaps."Because here's the thing CISOs know that vendors often forget: "CISOs are not stupid. They talk to each other. The truth will come out."In an industry built on protecting against deception, maybe it's time to stop deceiving ourselves about what AI can actually deliver. ________________ Keywordscybersecurity, artificialintelligence, blackhat2025, agentic, ai, marketing, ciso, cybersec, infosec, technology, leadership, vendor, innovation, automation, security, tech, AI, machinelearning, enterprise, business________________Hosts links:
At Black Hat USA 2025, artificial intelligence wasn't the shiny new thing — it was the baseline. Nearly every product launch, feature update, and hallway conversation had an “AI-powered” stamp on it. But when AI becomes the lowest common denominator for security, the questions shift.

In this episode, I read my latest opinion piece exploring what happens when the tools we build to protect us are the same ones that can obscure reality — or rewrite it entirely. Drawing from the Lock Note discussion, Jennifer Granick's keynote on threat modeling and constitutional law, my own CISO hallway conversations, and a deep review of 60+ vendor announcements, I examine the operational, legal, and governance risks that emerge when speed and scale take priority over transparency and accountability.

We talk about model poisoning — not just in the technical sense, but in how our industry narrative can get corrupted by hype and shallow problem-solving. We look at the dangers of replacing entry-level security roles with black-box automation, where a single model misstep can cascade into thousands of bad calls at machine speed. And yes, we address the potential liability for CISOs and executives who let it happen without oversight.

Using Mikko Hyppönen's “Game of Tetris” metaphor, I explore how successes vanish quietly while failures pile up for all to see — and why in the AI era, that stack can build faster than ever.

If AI is everywhere, what defines the premium layer above the baseline? How do we ensure we can still define success, measure it accurately, and prove it when challenged?

Listen in, and then join the conversation: Can you trust the “reality” your systems present — and can you prove it?

________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.

Sincerely,
Sean Martin and TAPE3

________

✦ Resources
Article: When Artificial Intelligence Becomes the Baseline: Will We Even Know What Reality Is AInymore? https://www.linkedin.com/pulse/when-artificial-intelligence-becomes-baseline-we-even-martin-cissp-4idqe/
The Future of Cybersecurity Article: How Novel Is Novelty? Security Leaders Try To Cut Through the Cybersecurity Vendor Echo Chamber at Black Hat 2025: https://www.linkedin.com/pulse/how-novel-novelty-security-leaders-try-cut-through-sean-martin-cissp-xtune/
Black Hat 2025 On Location Closing Recap Video with Sean Martin, CISSP and Marco Ciappelli: https://youtu.be/13xP-LEwtEA
Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25
Article: When Virtual Reality Is A Commodity, Will True Reality Come At A Premium?
https://sean-martin.medium.com/when-virtual-reality-is-a-commodity-will-true-reality-come-at-a-premium-4a97bccb4d72
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
ITSPmagazine Studio — A Brand & Marketing Advisory for Cybersecurity and Tech Companies: https://www.itspmagazine.studio/
ITSPmagazine Webinar: What's Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year's Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference

________

Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine, which he co-founded with his good friend Marco Ciappelli to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

To learn more about Sean, visit his personal website.
At Black Hat 2025, Sean Martin sits down with Ofir Stein, CTO and Co-Founder of Apono, to discuss the pressing challenges of identity and access management in today's hybrid, AI-driven environments. Stein's background in technology infrastructure and DevOps, paired with his co-founder's deep cybersecurity expertise, positions the company to address one of the most common yet critical problems in enterprise security: how to secure permissions without slowing the pace of business.

Organizations often face a tug-of-war between security teams seeking to minimize risk and engineering or business units pushing for rapid access to systems. Stein explains that traditional approaches to access control — where permissions are either always on or granted through manual processes — create friction and risk. Over-provisioned accounts become prime targets for attackers, while delayed access slows innovation.

Apono addresses this through a Zero Standing Privilege approach, where no user — human or non-human — retains permanent permissions. Instead, access is dynamically granted based on business context and automatically revoked when no longer needed. This ensures engineers and systems get the right access at the right time, without exposing unnecessary attack surfaces.

The platform integrates seamlessly with existing identity providers, governance systems, and IT workflows, allowing organizations to centralize visibility and control without replacing existing tools. Dynamic, context-based policies replace static rules, enabling access that adapts to changing conditions, including the unpredictable needs of AI agents and automated workflows.

Stein also highlights continuous discovery and anomaly detection capabilities, enabling organizations to see and act on changes in privilege usage in real time. By coupling visibility with automated policy enforcement, organizations can not only identify over-privileged accounts but also remediate them immediately — avoiding the cycle of one-off audits followed by privilege creep.

The result is a solution that scales with modern enterprise needs, reduces risk, and empowers both security teams and end users. As Stein notes, giving engineers control over their own access — including the ability to revoke it — fosters a culture of shared responsibility for security, rather than one of gatekeeping.

Learn more about Apono: https://itspm.ag/apono-1034

Note: This story contains promotional content. Learn more.

Guest: Ofir Stein, CTO and Co-Founder of Apono | On LinkedIn: https://www.linkedin.com/in/ofir-stein/

Resources
Learn more and catch more stories from Apono: https://www.itspmagazine.com/directory/apono
Learn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programs
Newsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/
Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-up
Are you interested in telling your story? https://www.itspmagazine.com/telling-your-story

Keywords: sean martin, ofir stein, apono, zero standing privilege, access management, identity security, privilege creep, just in time access, ai security, governance, cloud security, black hat, black hat usa 2025, cybersecurity, permissions
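For readers who want a concrete feel for the Zero Standing Privilege pattern described above, here is a minimal sketch of just-in-time access with automatic expiry. This is not Apono's API or implementation; every class, field, and value is invented for illustration.

```python
# Illustrative only: a minimal just-in-time access model showing the
# Zero Standing Privilege idea. All names are hypothetical; this is not
# Apono's API or code.
import time
from dataclasses import dataclass, field


@dataclass
class Grant:
    principal: str      # human user or non-human identity (service, AI agent)
    resource: str       # e.g. "prod-postgres"
    justification: str  # business context captured at request time
    expires_at: float   # epoch seconds; access disappears on its own


@dataclass
class JITAccessBroker:
    grants: list = field(default_factory=list)

    def request_access(self, principal, resource, justification, ttl_seconds=3600):
        """Grant access for a bounded window instead of a standing permission."""
        grant = Grant(principal, resource, justification,
                      expires_at=time.time() + ttl_seconds)
        self.grants.append(grant)
        return grant

    def has_access(self, principal, resource):
        """Checked at use time: expired grants simply no longer match."""
        now = time.time()
        return any(g.principal == principal and g.resource == resource
                   and g.expires_at > now for g in self.grants)

    def revoke(self, principal, resource):
        """Engineers (or automation) can hand back access early."""
        self.grants = [g for g in self.grants
                       if not (g.principal == principal and g.resource == resource)]


if __name__ == "__main__":
    broker = JITAccessBroker()
    broker.request_access("alice", "prod-postgres",
                          justification="incident #4821 triage", ttl_seconds=900)
    print(broker.has_access("alice", "prod-postgres"))   # True during the window
    broker.revoke("alice", "prod-postgres")
    print(broker.has_access("alice", "prod-postgres"))   # False once handed back
```

The design point is simply that access expires on its own: once the window closes, has_access returns False, so a forgotten grant never turns into a standing privilege.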
In an era where organizations depend heavily on commercial applications to run their operations, the integrity of those applications has become a top security concern. Saša Zdjelar, Chief Trust Officer at ReversingLabs and Operating Partner at Crosspoint Capital, shares how protecting the software supply chain now extends far beyond open source risk.

Zdjelar outlines how modern applications are built from a mix of first-party, contracted, open source, and proprietary third-party components. By the time software reaches production, its lineage spans geographies, development teams, and sometimes even AI-generated code. Incidents like SolarWinds, Kaseya, and CircleCI demonstrate that trusted vendors are no longer immune to compromise, and commercial software can introduce critical vulnerabilities or malicious payloads deep into enterprise systems.

Regulatory drivers are increasing scrutiny. Executive Order 14028, Europe's Cyber Resilience Act, DORA, and U.S. Department of Defense software sourcing restrictions all require greater transparency, such as a Software Bill of Materials (SBOM). However, Zdjelar cautions that SBOMs—while valuable—are like ingredient lists without recipes: they don't reveal if a product is secure, just what's in it.

ReversingLabs addresses this gap with a no-compromise analysis engine capable of deconstructing any file, of any size or complexity, to assess its safety. This capability enables organizations to make risk-based decisions, continuously monitor for unexpected changes between software versions, and operationalize controls at points such as procurement, SCCM deployments, or file transfers into critical environments.

For CISOs, this represents a true technical control where previously only contractual clauses, questionnaires, or insurance policies existed. By placing analysis at the front of the software lifecycle, organizations can reduce reliance on costly manual testing and sandboxing, improve detection of tampering or hidden behavior, and even influence cyber insurance rates.

The takeaway is clear: software supply chain security is a board-level concern, and the focus must expand beyond open source. With the right controls, organizations can avoid becoming the next headline-making breach and maintain trust with customers, partners, and regulators.

Learn more about ReversingLabs: https://itspm.ag/reversinglabs-v57b

Note: This story contains promotional content. Learn more.

Guest: Saša Zdjelar, Chief Trust Officer at ReversingLabs and Operating Partner at Crosspoint Capital | On LinkedIn: https://www.linkedin.com/in/sasazdjelar/

Resources
Learn more and catch more stories from ReversingLabs: https://www.itspmagazine.com/directory/reversinglabs
Learn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programs
Newsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/
Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-up
Are you interested in telling your story? https://www.itspmagazine.com/telling-your-story

Keywords: Black Hat 2025, Black Hat USA, sean martin, saša zdjelar, software supply chain security, commercial software risk, binary analysis, software bill of materials, sbom security, malicious code detection, ciso strategies, third party software risk, software tampering detection, malware analysis tools, devsecops security, application security testing, cybersecurity compliance
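To ground the "ingredient list without a recipe" analogy, here is a small sketch that reads a CycloneDX-style SBOM and prints its declared components. The field names follow the public CycloneDX JSON shape, but the data is invented, and the sketch deliberately stops where an SBOM stops: it can list what is supposed to be inside, not whether the shipped binary is actually safe.

```python
# Illustrative only: reading a CycloneDX-style SBOM (inlined here as JSON)
# to show what an "ingredient list" gives you -- component names and
# versions -- and what it cannot give you on its own: evidence about the
# behavior of the compiled artifact. The data is made up for this sketch.
import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "openssl",    "version": "3.0.13"},
    {"type": "library", "name": "log4j-core", "version": "2.17.2"},
    {"type": "library", "name": "leftpad-ng", "version": "0.0.9"}
  ]
}
"""

sbom = json.loads(sbom_json)
print(f"Declared components ({sbom['bomFormat']} {sbom['specVersion']}):")
for component in sbom.get("components", []):
    print(f"  - {component['name']} {component['version']} ({component['type']})")

# What the list alone cannot answer: was anything added or swapped after it
# was written, and does the delivered binary contain tampering or malicious
# behavior? That gap is why the episode pairs SBOMs with binary analysis.
```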
At Black Hat USA 2025, Sean Martin, co-founder of ITSPmagazine, sat down with Brett Stone-Gross, Senior Director of Threat Intelligence at Zscaler, to discuss the findings from the company's latest ransomware report. Over the past five years, the research has tracked how attack patterns, targets, and business models have shifted—most notably from file encryption to data theft and extortion.

Brett explains that many ransomware groups now find it more profitable—and less risky—to steal sensitive data and threaten to leak it unless paid, rather than encrypt files and disrupt operations. This change also allows attackers to stay out of the headlines and avoid immediate law enforcement pressure, while still extracting massive payouts. One case saw a Fortune 50 company pay $75 million to prevent the leak of 100 terabytes of sensitive medical data—without a single file being encrypted.

The report highlights variation in attacker methods. Some groups focus on single large targets; others, like the group Cl0p, exploit vulnerabilities in widely used file transfer applications, making supply chain compromise a preferred tactic. Once inside, attackers validate their claims by providing file trees and sample data—proving the theft is real.

Certain industries remain disproportionately affected. Healthcare, manufacturing, and technology are perennial top targets, with oil and gas seeing a sharp increase this year. Many victims operate with legacy systems and are slow to adopt modern security measures, making them vulnerable. Geographically, the U.S. continues to be hit hardest, accounting for roughly half of all observed ransomware incidents.

The conversation also addresses why organizations fail to detect such massive data theft—sometimes hundreds of gigabytes per day over weeks. Poor monitoring, limited security staffing, and alert fatigue all contribute. Brett emphasizes that reducing exposure starts with eliminating unnecessary internet-facing services and embracing zero trust architectures to prevent lateral movement.

The ransomware report serves not just as a data source but as a practical guide. By mapping observed attacker behaviors to defensive strategies, organizations can better identify and close their most dangerous gaps—before becoming another statistic in next year's findings.

Learn more about Zscaler: https://itspm.ag/zscaler-327152

Note: This story contains promotional content. Learn more.

Guest: Brett Stone-Gross, Senior Director of Threat Intelligence at Zscaler | On LinkedIn: https://www.linkedin.com/in/brett-stone-gross/

Resources
Learn more and catch more stories from Zscaler: https://www.itspmagazine.com/directory/zscaler
Learn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programs
Newsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/
Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-up
Are you interested in telling your story? https://www.itspmagazine.com/telling-your-story

Keywords: sean martin, brett stone-gross, ransomware, data extortion, cyber attacks, zero trust security, threat intelligence, data breach, cyber defense, network security, file transfer vulnerability, data protection, black hat, black hat usa 2025, zscaler
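The monitoring failure Brett describes (hundreds of gigabytes leaving over days with nobody noticing) usually comes down to never baselining outbound volume per host and never alerting on extreme deviations. The toy sketch below shows one hypothetical way to express that check; the host names, volumes, and thresholds are invented, and a real deployment would read flow logs, proxy logs, or DLP telemetry rather than a hard-coded dictionary.

```python
# Illustrative only: flag hosts whose latest daily outbound volume is an
# extreme outlier versus their own recent baseline. Data and thresholds
# are invented for this sketch.
from statistics import mean, pstdev

# Daily outbound gigabytes per host (e.g., from flow logs), oldest to newest.
daily_outbound_gb = {
    "build-server-01": [2.1, 1.8, 2.4, 2.0, 2.2, 310.0],   # sudden 300+ GB day
    "laptop-finance-7": [0.4, 0.6, 0.5, 0.7, 0.5, 0.6],
}

def flag_exfil_candidates(history_by_host, min_sigma=4.0, min_jump_gb=50.0):
    """Flag hosts whose latest day far exceeds their own rolling baseline."""
    flagged = []
    for host, series in history_by_host.items():
        baseline, latest = series[:-1], series[-1]
        mu = mean(baseline)
        sigma = pstdev(baseline) or 0.1   # avoid divide-by-zero on flat baselines
        if latest - mu >= min_jump_gb and (latest - mu) / sigma >= min_sigma:
            flagged.append((host, latest, mu))
    return flagged

for host, today_gb, baseline_gb in flag_exfil_candidates(daily_outbound_gb):
    print(f"ALERT {host}: {today_gb:.0f} GB out today vs. ~{baseline_gb:.1f} GB/day baseline")
```

The point is not the statistics; it is that without any per-host baseline, a week of bulk exfiltration looks like just another busy server.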
Mike Wayne, who leads global sales at BlinkOps, joins ITSPmagazine host Sean Martin to discuss how organizations can harness agentic AI to transform security operations—and much more.

The conversation begins with a clear reality: business processes are complex, and when security is added into the mix, orchestrating workflows efficiently becomes even more challenging. BlinkOps addresses this by providing a platform that not only automates security tasks but also extends across HR, finance, sales, and marketing. By enabling automation in areas like employee onboarding/offboarding or access management, the platform helps organizations improve efficiency, reduce risk, and free human talent for higher-value work.

Mike explains that while traditional SOAR tools require heavy scripting and ongoing maintenance, BlinkOps takes a different approach. Its security co-pilot allows users to describe automations in plain language, which are then generated—90% complete—by the system. Whether the user is a SOC analyst or an HR manager, the platform supports low-code and no-code capabilities, making automation accessible to “citizen developers” across the organization.

The concept of micro agents is central. Instead of relying on large, complex AI models that can hallucinate or act unpredictably, BlinkOps uses focused, purpose-built agents with smaller context windows. These agents handle specific tasks—such as enriching security alerts—within larger workflows, ensuring accuracy and control.

The benefits are tangible. One customer's triage agent processed 400 alerts in just eight days without direct human intervention, while another saved $1.8 million in manual endpoint deployment costs over a single month. Outcomes like reduced mean time to respond (MTTR) and faster time to automation are key drivers for adoption, especially when facing zero-day vulnerabilities where speed is critical.

BlinkOps runs as SaaS, hybrid, or in secure environments like GovCloud, making it adaptable for organizations of all sizes and compliance requirements.

The takeaway is clear: AI-driven automation doesn't just improve security operations—it creates new efficiencies across the enterprise. As Mike puts it, when a process can be automated, “just blink it.”

Learn more about BlinkOps: https://itspm.ag/blinkops-942780

Note: This story contains promotional content. Learn more.

Guest: Mike Wayne, Vice President, Global Sales at BlinkOps | On LinkedIn: https://www.linkedin.com/in/mikejwayne/

Resources
Learn more and catch more stories from BlinkOps: https://www.itspmagazine.com/directory/blinkops
Learn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programs
Newsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/
Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-up
Are you interested in telling your story? https://www.itspmagazine.com/telling-your-story

Keywords: sean martin, mike wayne, blink ops, ai automation, agentic ai, micro agents, security automation, soc automation, workflow automation, zero day response, alert triage, enrichment agent, low code automation, cyber security ai, enterprise automation, black hat usa, black hat 2025
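As a thought experiment on the micro-agent idea described above, the sketch below shows a narrow enrichment step embedded in a larger triage workflow. It is not BlinkOps code; the agent, the fake threat-intel table, and the escalation rule are all hypothetical. The point is that a purpose-built step with a deliberately small view of the alert is easier to test and trust than one model that sees everything.

```python
# Illustrative only: a narrow "micro agent" that enriches a security alert
# inside a larger workflow. All names and the threat-intel lookup are
# hypothetical stand-ins, not a vendor API.
from dataclasses import dataclass

FAKE_THREAT_INTEL = {  # stand-in for a real reputation feed
    "203.0.113.44": {"reputation": "malicious", "tags": ["botnet-c2"]},
}

@dataclass
class EnrichmentAgent:
    """Its decision logic reads only a small, fixed slice of the alert."""
    allowed_fields = ("source_ip", "alert_type")

    def run(self, alert: dict) -> dict:
        context = {k: alert[k] for k in self.allowed_fields if k in alert}
        intel = FAKE_THREAT_INTEL.get(context.get("source_ip"), {})
        return {**alert,
                "enrichment": {"ip_reputation": intel.get("reputation", "unknown"),
                               "tags": intel.get("tags", [])}}

def triage_workflow(alert: dict) -> str:
    """Larger workflow: enrichment is one focused step among several."""
    enriched = EnrichmentAgent().run(alert)
    if enriched["enrichment"]["ip_reputation"] == "malicious":
        return f"ESCALATE: {enriched['alert_type']} from known-bad {enriched['source_ip']}"
    return "CLOSE: no corroborating intel"

print(triage_workflow({"source_ip": "203.0.113.44",
                       "alert_type": "suspicious-login",
                       "user": "svc-backup"}))
```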
With the 2025 season of men's majors fully in the rearview mirror, we've called upon friend of the pod Sean Martin to challenge Soly and TC in the latest edition of our quiz show, as DJ tests the guys on their knowledge of what we saw at Augusta, Quail Hollow, Oakmont, and Portrush.

Join us in our support of the Evans Scholars Foundation: https://nolayingup.com/esf

Support our Sponsors: Rhoback | The Stack System

If you enjoyed this episode, consider joining The Nest: No Laying Up's community of avid golfers. Nest members help us maintain our light commercial interruptions (3 minutes of ads per 90 minutes of content) and receive access to exclusive content, discounts in the pro shop, and an annual member gift. It's a $90 annual membership, and you can sign up or learn more at nolayingup.com/join

Subscribe to the No Laying Up Newsletter here: https://newsletter.nolayingup.com/
Subscribe to the No Laying Up Podcast channel here: https://www.youtube.com/@NoLayingUpPodcast

Learn more about your ad choices. Visit megaphone.fm/adchoices