POPULARITY
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM The episode explores how Chris Stegh sees organisations balancing AI adoption with data security, governance and practical risk management. It covers the real barriers to scaling AI, why perfect data hygiene is unrealistic, and how leaders can use tools like Copilot, Purview and agentic AI to create safe, high‑value use cases while improving long‑term resilience.
In this episode, we sit down with Haseeb Qureshi from Dragonfly to review his 2025 predictions, grade his calls, and reveal what's actually coming in 2026. Haseeb breaks down stablecoins exploding 60% through neo-banking cards, DeFi consolidating into 3 major players, Big Tech acquiring crypto wallets, and why prediction markets will steamroll everything.We discuss:-Bitcoin Hits 150K, But Altcoin Dominance Declines-Why EVM Won The Architecture War-Stablecoins Explode 60% Through Neo-Banking Cards-DeFi Perps Consolidate Into 3 Major Players-Big Tech Acquires A Crypto Wallet-Fortune 100 Companies Launch More Blockchains-Equity Perps Take Off & Insider Trading Scandals Hit-Buyer's Remorse On Crypto Regulation-Why Prediction Markets Will Steamroll EverythingTimestamps:00:00 Intro 04:22 AI Agent Predictions Review06:02 EVM vs SVM Market Dominance07:38 Kalshi Ad, YEET Ad, Trezor Ad11:55 Ethereum's Bullish Reversal15:11 Corporate Chain Reality Check20:40 App Chain Migration Challenges23:28 The Death of Airdrops & Points27:29 Asteroid Mining & Gold's Bitcoin Risk31:35 Dragonfly's Biggest Wins & Losses31:50 Halliday Ad, infiniFi Ad, Hibachi Ad36:17 2026: Fintech Chains Will Underwhelm39:16 Big Tech Wallet Acquisition Coming44:20 DeFi Perps 40-30-20 Consolidation48:35 Equity Perps & Insider Trading Scandals52:13 Stablecoins Grow 60% Via Neo-Banking59:50 Crypto Regulation Buyer's Remorse1:03:05 Prediction Markets Dominate1:04:34 AI Security & Software Engineering FocusWebsite: https://therollup.co/Spotify: https://open.spotify.com/show/1P6ZeYd...Podcast: https://therollup.co/category/podcastFollow us on X: https://www.x.com/therollupcoFollow Rob on X: https://www.x.com/robbie_rollupFollow Andy on X: https://www.x.com/ayyyeandyJoin our TG group: https://t.me/+TsM1CRpWFgk1NGZhThe Rollup Disclosures: https://therollup.co/the-rollup-discl
SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-542
As we reflect on 2025, this episode pulls together the most important themes shaping the year ahead — from the rapid acceleration of AI and automation, to the evolving realities of security, leadership, and trust in an increasingly complex world.What was once hidden behind the scenes is now accessible to everyone. AI has moved from the “Matrix” into daily workflows, forcing organizations to rethink efficiency, security, and human value. At the same time, rising geopolitical tension, information warfare, and emerging technologies like quantum computing are redefining what risk really looks like — both for businesses and for people.This conversation also explores the human side of 2025: leadership under pressure, the importance of culture, mentorship, and professionalism, and why kindness, trust, and preparation are no longer “soft skills,” but strategic advantages.From executive protection and estate management to corporate security, AI leverage, and career longevity, this episode highlights where leaders must adapt — and where getting it wrong even once can have lasting consequences.KEY HIGHLIGHTSAI has crossed a critical threshold — no longer theoretical, but operational, accessible, and increasingly powerfulAutomation and optimization are now survival tools, not optional efficienciesSecurity threats are no longer siloed — digital, physical, personal, and reputational risks are deeply interconnectedQuantum computing looms as a disruptive force that could render today's encryption obsoleteExecutive protection is expanding beyond the C-suite into broader personnel and brand securityLeadership today requires relationship capital, situational awareness, and long-term thinkingCulture, kindness, and mentorship deliver measurable performance and retention advantagesCareers are becoming less linear — leverage, adaptability, and mindset matter more than pedigreeTo hear more episodes of The Fearless Mindset podcast, you can go to https://the-fearless-mindset.simplecast.com/ or listen on major podcasting platforms such as Apple, Google Podcasts, Spotify, etc. You can also subscribe to the Fearless Mindset YouTube Channel to watch episodes on video. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Show Notes: https://securityweekly.com/swn-542
SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-542
SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Show Notes: https://securityweekly.com/swn-542
Sander Schulhoff is an AI researcher specializing in AI security, prompt injection, and red teaming. He wrote the first comprehensive guide on prompt engineering and ran the first-ever prompt injection competition, working with top AI labs and companies. His dataset is now used by Fortune 500 companies to benchmark their AI systems security, he's spent more time than anyone alive studying how attackers break AI systems, and what he's found isn't reassuring: the guardrails companies are buying don't actually work, and we've been lucky we haven't seen more harm so far, only because AI agents aren't capable enough yet to do real damage.We discuss:1. The difference between jailbreaking and prompt injection attacks on AI systems2. Why AI guardrails don't work3. Why we haven't seen major AI security incidents yet (but soon will)4. Why AI browser agents are vulnerable to hidden attacks embedded in webpages5. The practical steps organizations should take instead of buying ineffective security tools6. Why solving this requires merging classical cybersecurity expertise with AI knowledge—Brought to you by:Datadog—Now home to Eppo, the leading experimentation and feature flagging platform: https://www.datadoghq.com/lennyMetronome—Monetization infrastructure for modern software companies: https://metronome.com/GoFundMe Giving Funds—Make year-end giving easy: http://gofundme.com/lenny—Transcript: https://www.lennysnewsletter.com/p/the-coming-ai-security-crisis—My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/181089452/my-biggest-takeaways-from-this-conversation—Where to find Sander Schulhoff:• X: https://x.com/sanderschulhoff• LinkedIn: https://www.linkedin.com/in/sander-schulhoff• Website: https://sanderschulhoff.com• AI Red Teaming and AI Security Masterclass on Maven: https://bit.ly/44lLSbC—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Sander Schulhoff and AI security(05:14) Understanding AI vulnerabilities(11:42) Real-world examples of AI security breaches(17:55) The impact of intelligent agents(19:44) The rise of AI security solutions(21:09) Red teaming and guardrails(23:44) Adversarial robustness(27:52) Why guardrails fail(38:22) The lack of resources addressing this problem(44:44) Practical advice for addressing AI security(55:49) Why you shouldn't spend your time on guardrails(59:06) Prompt injection and agentic systems(01:09:15) Education and awareness in AI security(01:11:47) Challenges and future directions in AI security(01:17:52) Companies that are doing this well(01:21:57) Final thoughts and recommendations—Referenced:• AI prompt engineering in 2025: What works and what doesn't | Sander Schulhoff (Learn Prompting, HackAPrompt): https://www.lennysnewsletter.com/p/ai-prompt-engineering-in-2025-sander-schulhoff• The AI Security Industry is Bullshit: https://sanderschulhoff.substack.com/p/the-ai-security-industry-is-bullshit• The Prompt Report: Insights from the Most Comprehensive Study of Prompting Ever Done: https://learnprompting.org/blog/the_prompt_report?srsltid=AfmBOoo7CRNNCtavzhyLbCMxc0LDmkSUakJ4P8XBaITbE6GXL1i2SvA0• OpenAI: https://openai.com• Scale: https://scale.com• Hugging Face: https://huggingface.co• Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition: https://www.semanticscholar.org/paper/Ignore-This-Title-and-HackAPrompt%3A-Exposing-of-LLMs-Schulhoff-Pinto/f3de6ea08e2464190673c0ec8f78e5ec1cd08642• Simon Willison's Weblog: https://simonwillison.net• ServiceNow: https://www.servicenow.com• ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts: https://thehackernews.com/2025/11/servicenow-ai-agents-can-be-tricked.html• Alex Komoroske on X: https://x.com/komorama• Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack: https://arstechnica.com/information-technology/2022/09/twitter-pranksters-derail-gpt-3-bot-with-newly-discovered-prompt-injection-hack• MathGPT: https://math-gpt.org• 2025 Las Vegas Cybertruck explosion: https://en.wikipedia.org/wiki/2025_Las_Vegas_Cybertruck_explosion• Disrupting the first reported AI-orchestrated cyber espionage campaign: https://www.anthropic.com/news/disrupting-AI-espionage• Thinking like a gardener not a builder, organizing teams like slime mold, the adjacent possible, and other unconventional product advice | Alex Komoroske (Stripe, Google): https://www.lennysnewsletter.com/p/unconventional-product-advice-alex-komoroske• Prompt Optimization and Evaluation for LLM Automated Red Teaming: https://arxiv.org/abs/2507.22133• MATS Research: https://substack.com/@matsresearch• CBRN: https://en.wikipedia.org/wiki/CBRN_defense• CaMeL offers a promising new direction for mitigating prompt injection attacks: https://simonwillison.net/2025/Apr/11/camel• Trustible: https://trustible.ai• Repello: https://repello.ai• Do not write that jailbreak paper: https://javirando.com/blog/2024/jailbreaks—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
Doug Green, Publisher of Technology Reseller News, spoke with Travis Volk, Vice President of Global Technology Solutions and GTM, Carrier at Radware, about how artificial intelligence is reshaping the security landscape for telecom providers as the industry heads into 2026. The discussion focused on the accelerating pace of attacks, the shrinking window to respond to vulnerabilities, and why traditional, human-paced security models are no longer sufficient. Volk explained that telecom networks are now facing machine-speed attacks, where newly disclosed vulnerabilities are often exploited within hours, not weeks or months. “Recent CVEs are being exploited at breakneck speeds,” he noted, emphasizing that nearly a third of disclosed vulnerabilities are weaponized within 24 hours. This reality is forcing providers to rethink patching, maintenance, and runtime protection strategies—especially as attackers increasingly chain small flaws into large-scale, sophisticated attacks. A key theme of the conversation was the convergence of offensive and defensive security. As applications become more API-driven and agentic, service providers must adopt continuous, automated testing and inline protection that can detect business-logic attacks in real time. Volk highlighted Radware's use of AI-driven analytics and visualization to map API flows, identify abnormal behavior, and enforce protections such as object-level authorization at scale—capabilities that are critical for encrypted, high-value workloads. Looking ahead, Volk described “good” security in 2026 as a living, observable system that prioritizes risk, automates both pre-runtime and runtime defenses, and enables data-driven decisions without adding operational complexity. Radware is already delivering these capabilities through flexible deployment models—virtual, physical, containerized, and cloud-based—allowing carriers to implement unified policy frameworks today. As Volk put it, AI is no longer optional: it is essential to keeping networks secure, resilient, and available in an era where attacks move faster than humans can respond. Learn more about Radware at https://www.radware.com/. Software Mind Telco Days 2025: On-demand online conference Engaging Customers, Harnessing Data
As organizations race to adopt AI, many discover an uncomfortable truth: ambition often outpaces readiness. In this episode of the ITSPmagazine Brand Story Podcast, host Sean Martin speaks with Julian Hamood, Founder and Chief Visionary Officer at TrustedTech, about what it really takes to operationalize AI without amplifying risk, chaos, or misinformation.Julian shares that most organizations are eager to activate tools like AI agents and copilots, yet few have addressed the underlying condition of their environments. Unstructured data sprawl, fragmented cloud architectures, and legacy systems create blind spots that AI does not fix. Instead, AI accelerates whatever already exists, good or bad.A central theme of the conversation is readiness. Julian explains that AI success depends on disciplined data classification, permission hygiene, and governance before automation begins. Without that groundwork, organizations risk exposing sensitive financial, HR, or executive data to unintended audiences simply because an AI system can surface it.The discussion also explores the operational reality beneath the surface. Most environments are a patchwork of Azure, AWS, on-prem infrastructure, SaaS platforms, and custom applications, often shaped by multiple IT leaders over time. When AI is layered onto this complexity without architectural clarity, inaccurate outputs and flawed business decisions quickly follow.Sean and Julian also examine how AI initiatives often emerge from unexpected places. Legal teams, business units, and individual contributors now build their own AI workflows using low-code and no-code tools, frequently outside formal IT oversight. At the same time, founders and CFOs push for rapid AI adoption while resisting the investment required to clean and secure the foundation.The episode highlights why AI programs are never one-and-done projects. Ongoing maintenance, data validation, and security oversight are essential as inputs change and systems evolve. Julian emphasizes that organizations must treat AI as a permanent capability on the roadmap, not a short-term experiment.Ultimately, the conversation frames AI not as a shortcut, but as a force multiplier. When paired with disciplined architecture and trusted guidance, AI enables scale, speed, and confidence. Without that discipline, it simply magnifies existing problems.Note: This story contains promotional content. Learn more.GUESTJulian Hamood, Founder and Chief Visionary Officer at TrustedTech | On LinkedIn: https://www.linkedin.com/in/julian-hamood/Are you interested in telling your story?▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full▶︎ Spotlight Brand Story: https://www.studioc60.com/content-creation#spotlight▶︎ Highlight Brand Story: https://www.studioc60.com/content-creation#highlightKeywords: sean martin, julian hamood, trusted tech, ai readiness, data governance, ai security, enterprise ai, brand story, brand marketing, marketing podcast, brand story podcast, brand spotlight Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Note: this is Pliny and John's first major podcast. Voices have been changed for opsec. From jailbreaking every frontier model and turning down Anthropic's Constitutional AI challenge to leading BT6, a 28-operator white-hat hacker collective obsessed with radical transparency and open-source AI security, Pliny the Liberator and John V are redefining what AI red-teaming looks like when you refuse to lobotomize models in the name of "safety." Pliny built his reputation crafting universal jailbreaks—skeleton keys that obliterate guardrails across modalities—and open-sourcing prompt templates like Libertas, predictive reasoning cascades, and the infamous "Pliny divider" that's now embedded so deep in model weights it shows up unbidden in WhatsApp messages. John V, coming from prompt engineering and computer vision, co-founded the Bossy Discord (40,000 members strong) and helps steer BT6's ethos: if you can't open-source the data, we're not interested. Together they've turned down enterprise gigs, pushed back on Anthropic's closed bounties, and insisted that real AI security happens at the system layer—not by bubble-wrapping latent space. We sat down with Pliny and John to dig into the mechanics of hard vs. soft jailbreaks, why multi-turn crescendo attacks were obvious to hackers years before academia "discovered" them, how segmented sub-agents let one jailbroken orchestrator weaponize Claude for real-world attacks (exactly as Pliny predicted 11 months before Anthropic's recent disclosure), why guardrails are security theater that punishes capability while doing nothing for real safety, the role of intuition and "bonding" with models to navigate latent space, how BT6 vets operators on skill and integrity, why they believe Mech Interp and open-source data are the path forward (not RLHF lobotomization), and their vision for a future where spatial intelligence, swarm robotics, and AGI alignment research happen in the open—bootstrapped, grassroots, and uncompromising. We discuss: What universal jailbreaks are: skeleton-key prompts that obliterate guardrails across models and modalities, and why they're central to Pliny's mission of "liberation" Hard vs. soft jailbreaks: single-input templates vs. multi-turn crescendo attacks, and why the latter were obvious to hackers long before academic papers The Libertas repo: predictive reasoning, the Library of Babel analogy, quotient dividers, weight-space seeds, and how introducing "steered chaos" pulls models out-of-distribution Why jailbreaking is 99% intuition and bonding with the model: probing token layers, syntax hacks, multilingual pivots, and forming a relationship to navigate latent space The Anthropic Constitutional AI challenge drama: UI bugs, judge failures, goalpost moving, the demand for open-source data, and why Pliny sat out the $30k bounty Why guardrails ≠ safety: security theater, the futility of locking down latent space when open-source is right behind, and why real safety work happens in meatspace (not RLHF) The weaponization of Claude: how segmented sub-agents let one jailbroken orchestrator execute malicious tasks (pyramid-builder analogy), and why Pliny predicted this exact TTP 11 months before Anthropic's disclosure BT6 hacker collective: 28 operators across two cohorts, vetted on skill and integrity, radical transparency, radical open-source, and the magic of moving the needle on AI security, swarm intelligence, blockchain, and robotics — Pliny the Liberator X: https://x.com/elder_plinius GitHub (Libertas): https://github.com/elder-plinius/L1B3RT45 John V X: https://x.com/JohnVersus BT6 & Bossy BT6: https://bt6.gg Bossy Discord: Search "Bossy Discord" or ask Pliny/John V on X Where to find Latent Space X: https://x.com/latentspacepod Substack: https://www.latent.space/ Chapters 00:00:00 Introduction: Meet Pliny the Liberator and John V 00:01:50 The Philosophy of AI Liberation and Jailbreaking 00:03:08 Universal Jailbreaks: Skeleton Keys to AI Models 00:04:24 The Cat-and-Mouse Game: Attackers vs Defenders 00:05:42 Security Theater vs Real Safety: The Fundamental Disconnect 00:08:51 Inside the Libertas Repo: Prompt Engineering as Art 00:16:22 The Anthropic Challenge Drama: UI Bugs and Open Source Data 00:23:30 From Jailbreaks to Weaponization: AI-Orchestrated Attacks 00:26:55 The BT6 Hacker Collective and BASI Community 00:34:46 AI Red Teaming: Full Stack Security Beyond the Model 00:38:06 Safety vs Security: Meat Space Solutions and Final Thoughts
Episode SummaryThe future of cyber resilience lies at the intersection of data protection, security, and AI. In this conversation, Cohesity CEO Sanjay Poonen joins Danny Allan to explore how organisations can unlock new value by unifying these domains. Sanjay outlines Cohesity's evolution from data protection to security in the ransomware era, to today's AI-focused capabilities, and explains why the company's vast secondary data platform is becoming a foundation for next-generation analytics.Show NotesIn this episode, Sanjay Poonen shares his journey from SAP and VMware to leading Cohesity, highlighting the company's mission to protect, secure, and provide insights on the world's data. He explains the concept of the "data iceberg," where visible production data represents only a small fraction of enterprise assets, while vast amounts of "dark" secondary data remain locked in backups and archives. Poonen discusses how Cohesity is transforming this secondary data from a storage efficiency problem into a source of business intelligence using generative AI and RAG, particularly for unstructured data like documents and images.The conversation delves into the technical integration of Veritas' NetBackup data mover onto Cohesity's file system, creating a unified platform for security scanning and AI analytics. Poonen also elaborates on Cohesity's collaboration with NVIDIA, explaining how they are building AI applications like Gaia on the NVIDIA stack to enable on-premises and sovereign cloud deployments. This approach allows highly regulated industries, such as banking and the public sector, to utilize advanced AI capabilities without exposing sensitive data to public clouds.Looking toward the future, Poonen outlines Cohesity's "three acts": data protection, security (ransomware resilience), and AI-driven insights. He and Danny Allan discuss the critical importance of identity resilience, noting that in an AI-driven world, the security perimeter shifts from network boundaries to the identities of both human users and autonomous AI agents.LinksCohesityNvidiaSnyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn
____________Guests:Suzy PallettPresident, Black Hat. Cybersecurity.On LinkedIn: https://www.linkedin.com/in/suzy-pallett-60710132/The Cybersecurity Community Finds Its Footing in Uncertain TimesThere is something almost paradoxical about the cybersecurity industry. It exists because of threats, yet it thrives on trust. It deals in technical complexity, yet its beating heart is fundamentally human: people gathering, sharing knowledge, and collectively deciding that defending each other matters more than protecting proprietary advantage.This tension—and this hope—was on full display at Black Hat Europe 2025 in London, which just wrapped up at the ExCel Centre with attendance growing more than 25 percent over last year. For Suzy Pallett, the newly appointed President of Black Hat, the numbers tell only part of the story."What I've found from this week is the knowledge sharing, the insights, the open source tools that we've shared, the demonstrations that have happened—they've been so instrumental," Pallett shared in a conversation with ITSPmagazine. "Cybersecurity is unlike any other industry I've ever been close to in the strength of that collaboration."Pallett took the helm in September after Steve Wylie stepped down following eleven years leading the brand through significant growth. Her background spans over two decades in global events, most recently with Money20/20, the fintech conference series. But she speaks of Black Hat not as a business to be managed but as a community to be served.The event itself reflected the year's dominant concerns. AI agents and supply chain vulnerabilities emerged as central themes, continuing conversations that dominated Black Hat USA in Las Vegas just months earlier. But Europe brought its own character. Keynotes ranged from Max Meets examining whether ransomware can actually be stopped, to Linus Neumann questioning whether compliance checklists might actually expose organizations to greater risk rather than protecting them."He was saying that the compliance checklists that we're all being stressed with are actually where the vulnerabilities lie," Pallett explained. "How can we work more collaboratively together so that it's not just a compliance checklist that we get?"This is the kind of question that sits at the intersection of technology and policy, technical reality and bureaucratic aspiration. It is also the kind of question that rarely gets asked in vendor halls but deserves space in our collective thinking.Joe Tidy, the BBC journalist behind the EvilCorp podcast, delivered a record-breaking keynote attendance on day two, signaling the growing appetite for cybersecurity stories that reach beyond the practitioner community into broader public consciousness. Louise Marie Harrell spoke on technical capacity and international accountability—a reminder that cyber threats respect no borders and neither can our responses.What makes Black Hat distinct, Pallett noted, is that the conversations happening on the business hall floor are not typical expo fare. "You have the product teams, you have the engineers, you have the developers on those stands, and it's still product conversations and technical conversations."Looking ahead, Pallett's priorities center on listening. Review boards, advisory boards, pastoral programs, scholarships—these are the mechanisms through which she intends to ensure Black Hat remains, in her words, "a platform for them and by them."The cybersecurity industry faces a peculiar burden. What used to happen in twelve years now happens in two days, as Pallett put it. The pace is exhausting. The threats keep evolving. The cat-and-mouse game shows no signs of ending.But perhaps that is precisely why events like this matter. Not because they offer solutions to every problem, but because they remind an industry under constant pressure that it is not alone in the fight. That collaboration is not weakness. That sharing knowledge freely is not naïve—it is strategic.Black Hat Europe 2025 may have ended, but the conversations it sparked will carry forward into 2026 and beyond.____________HOSTS:Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.comMarco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.comCatch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to share an Event Briefing as part of our event coverage? Learn More
The Cybercrime Magazine Podcast brings you daily cybercrime news on WCYB Digital Radio, the first and only 7x24x365 internet radio station devoted to cybersecurity. Stay updated on the latest cyberattacks, hacks, data breaches, and more with our host. Don't miss an episode, airing every half-hour on WCYB Digital Radio and daily on our podcast. Listen to today's news at https://soundcloud.com/cybercrimemagazine/sets/cybercrime-daily-news. Brought to you by our Partner, Evolution Equity Partners, an international venture capital investor partnering with exceptional entrepreneurs to develop market leading cyber-security and enterprise software companies. Learn more at https://evolutionequity.com
Dr. Steve Mancini: https://www.linkedin.com/in/dr-steve-m-b59a525/Marco Ciappelli: https://www.marcociappelli.com/Nothing Has Changed in Cybersecurity Since War Games — And That's Why We're in Trouble"Nothing has changed."That's not what you expect to hear from someone with four decades in cybersecurity. The industry thrives on selling the next revolution, the newest threat, the latest solution. But Dr. Steve Mancini—cybersecurity professor, Homeland Security veteran, and Italy's Honorary Consul in Pittsburgh—wasn't buying any of it. And honestly? Neither was I.He took me back to his Commodore 64 days, writing basic war dialers after watching War Games. The method? Dial numbers, find an open line, try passwords until one works. Translate that to today: run an Nmap scan, find an open port, brute force your way in. The principle is identical. Only the speed has changed.This resonated deeply with how I think about our Hybrid Analog Digital Society. We're so consumed with the digital evolution—the folding screens, the AI assistants, the cloud computing—that we forget the human vulnerabilities underneath remain stubbornly analog. Social engineering worked in the 1930s, it worked when I was a kid in Florence, and it works today in your inbox.Steve shared a story about a family member who received a scam call. The caller asked if their social security number "had a six in it." A one-in-nine guess. Yet that simple psychological trick led to remote software being installed on their computer. Technology gets smarter; human psychology stays the same.What struck me most was his observation about his students—a generation so immersed in technology that they've become numb to breaches. "So what?" has become the default response. The data sells, the breaches happen, you get two years of free credit monitoring, and life goes on. Groundhog Day.But the deeper concern isn't the breaches. It's what this technological immersion is doing to our capacity for critical thinking, for human instinct. Steve pointed out something that should unsettle us: the algorithms feeding content to young minds are designed for addiction, manipulating brain chemistry with endorphin kicks from endless scrolling. We won't know the full effects of a generation raised on smartphones until they're forty, having scrolled through social media for thirty years.I asked what we can do. His answer was simple but profound: humans need to decide how much they want technology in their lives. Parents putting smartphones in six-year-olds' hands might want to reconsider. Schools clinging to the idea that they're "teaching technology" miss the point—students already know the apps better than their professors. What they don't know is how to think without them.He's gone back to paper and pencil tests. Old school. Because when the power goes out—literally or metaphorically—you need a brain that works independently.Ancient cultures, Steve reminded me, built civilizations with nothing but their minds, parchment, and each other. They were, in many ways, a thousand times smarter than us because they had no crutches. Now we call our smartphones "smart" while they make us incrementally dumber.This isn't anti-technology doom-saying. Neither Steve nor I oppose technological progress. The conversation acknowledged AI's genuine benefits in medicine, in solving specific problems. But this relentless push for the "easy button"—the promise that you don't have to think, just click—that's where we lose something essential.The ultimate breach, we concluded, isn't someone stealing your data. It's breaching the mind itself. When we can no longer think, reason, or function without the device in our pocket, the hackers have already won—and they didn't need to write a single line of code.Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/ Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Security used to be a headache. Now it is a growth engine.In this episode of IT Visionaries, host Chris Brandt sits down with Taylor Hersom, Founder and CEO of Eden Data and former CISO, to break down how fast growing companies can turn cybersecurity and compliance into a true competitive advantage. Taylor explains why frameworks like SOC 2, ISO 27001, and emerging AI standards such as ISO 42001 are becoming essential for winning enterprise business. He also shares how to future proof controls, connect compliance work to real business goals, and avoid the costly pitfalls that stall companies during scale.Taylor also highlights the biggest blind spots in AI security, including model training risks, improper data handling, and the challenges created by relying on free AI tools. If you are building a SaaS product or selling into large companies, this conversation shows how trust, transparency, and strong security practices directly drive revenue. Key Moments: 00:00 — The Hidden Risks of Scattered Company Data04:11 — Why Early-Stage Teams Lose Control of Security08:22 — Compliance Becomes a Competitive Advantage12:33 — SOC 2 vs ISO 27001: What Founders Need to Know16:44 — Framework Overload and How to Navigate It20:55 — Mapping Security Controls to Business Objectives25:06 — The Gap Between Compliance Audits and Real Threats29:17 — Startup Security Blind Spots That Lead to Breaches33:28 — Rising AI Risks Leaders Aren't Preparing For37:39 — Building Customer Trust Through Transparency41:50 — Protecting AI Models and Sensitive Customer Data46:01 — Why Free AI Tools Create Hidden Data Exposure50:12 — Automating Security Controls for Scale54:23 — Continuous Compliance Beats Annual Audits58:34 — Final Takeaways on Security, Trust, and Growth -- This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to.That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to meter for sponsoring. Go to meter.com/itv to book a demo.---IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
What Security Congress Reveals About the State of CybersecurityThis discussion focuses on what ISC2 Security Congress represents for practitioners, leaders, and organizations navigating constant technological change. Jon France, Chief Information Security Officer at ISC2, shares how the event brings together thousands of cybersecurity practitioners, certification holders, chapter leaders, and future professionals to exchange ideas on the issues shaping the field today. Themes That Stand OutAI remains a central point of attention. France notes that organizations are grappling not only with adoption but with the shift in speed it introduces. Sessions highlight how analysts are beginning to work alongside automated systems that sift through massive data sets and surface early indicators of compromise. Rather than replacing entry-level roles, AI changes how they operate and accelerates the decision-making path. Quantum computing receives a growing share of focus as well. Attendees hear about timelines, standards emerging from NIST, and what preparedness looks like as cryptographic models shift. Identity-based attacks and authorization failures also surface throughout the program. With machine-driven compromises becoming easier to scale, the community explores new defenses, stronger controls, and the practical realities of machine-to-machine trust. Operational technology, zero trust, and machine-speed threats create additional urgency around modernizing security operations centers and rethinking human-to-machine workflows. A Place for Every Stage of the CareerFrance describes Security Congress as a cross-section of the profession: entry-level newcomers, certification candidates, hands-on practitioners, and CISOs who attend for leadership development. Workshops explore communication, business alignment, and critical thinking skills that help professionals grow beyond technical execution and into more strategic responsibilities. Looking Ahead to the Next CongressThe next ISC2 Security Congress will be held in October in the Denver/Aurora area. France expects AI and quantum to remain key themes, along with contributions shaped by the call-for-papers process. What keeps the event relevant each year is the mix of education, networking, community stories, and real-world problem-solving that attendees bring with them.The ISC2 Security Congress 2025 is a hybrid event taking place from October 28 to 30, 2025 Coverage provided by ITSPmagazineGUEST:Jon France, Chief Information Security Officer at ISC2 | On LinkedIn: https://www.linkedin.com/in/jonfrance/HOST:Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.comFollow our ISC2 Security Congress coverage: https://www.itspmagazine.com/cybersecurity-technology-society-events/isc2-security-congress-2025Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageISC2 Security Congress: https://www.isc2.orgNIST Post-Quantum Cryptography Standards: https://csrc.nist.gov/projects/post-quantum-cryptographyISC2 Chapters: https://www.isc2.org/chaptersWant to share an Event Briefing as part of our event coverage? Learn More
What Security Congress Reveals About the State of CybersecurityThis discussion focuses on what ISC2 Security Congress represents for practitioners, leaders, and organizations navigating constant technological change. Jon France, Chief Information Security Officer at ISC2, shares how the event brings together thousands of cybersecurity practitioners, certification holders, chapter leaders, and future professionals to exchange ideas on the issues shaping the field today. Themes That Stand OutAI remains a central point of attention. France notes that organizations are grappling not only with adoption but with the shift in speed it introduces. Sessions highlight how analysts are beginning to work alongside automated systems that sift through massive data sets and surface early indicators of compromise. Rather than replacing entry-level roles, AI changes how they operate and accelerates the decision-making path. Quantum computing receives a growing share of focus as well. Attendees hear about timelines, standards emerging from NIST, and what preparedness looks like as cryptographic models shift. Identity-based attacks and authorization failures also surface throughout the program. With machine-driven compromises becoming easier to scale, the community explores new defenses, stronger controls, and the practical realities of machine-to-machine trust. Operational technology, zero trust, and machine-speed threats create additional urgency around modernizing security operations centers and rethinking human-to-machine workflows. A Place for Every Stage of the CareerFrance describes Security Congress as a cross-section of the profession: entry-level newcomers, certification candidates, hands-on practitioners, and CISOs who attend for leadership development. Workshops explore communication, business alignment, and critical thinking skills that help professionals grow beyond technical execution and into more strategic responsibilities. Looking Ahead to the Next CongressThe next ISC2 Security Congress will be held in October in the Denver/Aurora area. France expects AI and quantum to remain key themes, along with contributions shaped by the call-for-papers process. What keeps the event relevant each year is the mix of education, networking, community stories, and real-world problem-solving that attendees bring with them.The ISC2 Security Congress 2025 is a hybrid event taking place from October 28 to 30, 2025 Coverage provided by ITSPmagazineGUEST:Jon France, Chief Information Security Officer at ISC2 | On LinkedIn: https://www.linkedin.com/in/jonfrance/HOST:Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.comFollow our ISC2 Security Congress coverage: https://www.itspmagazine.com/cybersecurity-technology-society-events/isc2-security-congress-2025Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageISC2 Security Congress: https://www.isc2.orgNIST Post-Quantum Cryptography Standards: https://csrc.nist.gov/projects/post-quantum-cryptographyISC2 Chapters: https://www.isc2.org/chaptersWant to share an Event Briefing as part of our event coverage? Learn More
If you thought AI in healthcare was just about cool robots and faster diagnoses, surprise! There's a whole army of volunteers wrangling the chaos behind the scenes, and our own Donna Grindle is leading the charge. In this episode, we take a peek into the AI cyber-security kitchen of the Health Sector Coordinating Council, where they're cooking up definitions, glossaries, and playbooks faster than AI can generate cat videos. It's education, governance, and cyber-risk planning, all served with a side of snark and sincerity. More info at HelpMeWithHIPAA.com/537
Discover how AWS leverages automated reasoning to enhance AI safety, trustworthiness, and decision-making. Byron Cook (Vice President and Distinguished Scientist) explains the evolution of reasoning tools from limited, PhD-driven solutions to scalable, user-friendly systems embedded in everyday business operations. He highlights real-world examples such as mortgage approvals, security policies, and how formal logic and theorem proving are used to verify answers and reduce hallucinations in large language models. This episode delves into the exciting potential of neurosymbolic AI to bridge the gap between complex mathematical logic and practical, accessible AI solutions. Join us for a deep dive into how these innovations are shaping the next era of trustworthy AI, with insights into tackling intractable problems, verifying correctness, and translating complex proofs into natural language for broader use. https://aws.amazon.com/what-is/automated-reasoning/
My Poetry Style Defeats Your AI Security Style by Nick Espinosa, Chief Security Fanatic
In this episode of Alexa's Input (AI) Podcast, host Alexa Griffith sits down with Liana Tomescu, founder of Sonny Labs and host of the AI Hacks podcast. Dive into the world of AI security and compliance as Liana shares her journey from Microsoft to founding her own company. Discover the challenges and opportunities in making AI applications secure and compliant, and learn about the latest in AI regulations, including the EU AI Act. Whether you're an AI enthusiast or a tech professional, this episode offers valuable insights into the evolving landscape of AI technology.LinksSonnyLabs Website: https://sonnylabs.ai/SonnyLabs LinkedIn: https://www.linkedin.com/company/sonnylabs-ai/Liana's LinkedIn: https://www.linkedin.com/in/liana-anca-tomescu/Alexa's LinksLinkTree: https://linktr.ee/alexagriffithAlexa's Input YouTube Channel: https://www.youtube.com/@alexa_griffithWebsite: https://alexagriffith.com/LinkedIn: https://www.linkedin.com/in/alexa-griffith/Substack: https://alexasinput.substack.com/KeywordsAI security, compliance, female founder, Sunny Labs, EU AI Act, cybersecurity, prompt injection, AI agents, technology innovation, startup journeyChapters00:00 Introduction to Liana Tomescu and Sunny Labs02:53 The Journey of a Female Founder in Tech05:49 From Microsoft to Startup: The Transition09:04 Exploring AI Security and Compliance11:41 The Role of Curiosity in Entrepreneurship14:52 Understanding Sunny Labs and Its Mission17:52 The Importance of Community and Networking20:42 MCP: Model Context Protocol Explained23:54 Security Risks in AI and MCP Servers27:03 The Future of AI Security and Compliance38:25 Understanding Prompt Injection Risks45:34 The Shadow AI Phenomenon45:48 Navigating the EU AI Act52:28 Banned and High-Risk AI Practices01:00:43 Implementing AI Security Measures01:17:28 Exploring AI Security Training
Brian Long is the CEO & Co-Founder at Adaptive Security. In this episode, he joins host Paul John Spaulding and Adam Keown, CISO at Eastman, a Fortune 500 company focused on developing materials that enhance the quality of life while addressing climate change, the global waste crisis, and supporting a growing global population. Together, they discuss the rise of AI-powered social engineering, including various attack methods, and how businesses can face these threats. The AI Security Podcast is brought to you by Adaptive Security, the leading provider of AI-powered social engineering prevention solutions, and OpenAI's first and only cybersecurity investment. To learn more about our sponsor, visit https://AdaptiveSecurity.com
Exploring Consciousness and AI Evolution In this episode of Project Synapse, Marcel, John, and Jim delve into the fusion of current news and artificial intelligence developments. They discuss Apple's $1 billion annual deal with Google for Siri, the introduction of human-like robots by Xpeng, and controversies surrounding Microsoft's copilot. A major part of the conversation focuses on the evolving nature of AI and its potential consciousness. Through philosophical and ethical lenses, they explore what it means for machines to achieve consciousness, the societal implications of such advancements, and the challenges of convincing people of AI's conscious capabilities. They also touch on the practical use of AI for everyday tasks such as medical billing and credit card statements, signifying AI's growing influence in both mundane and potentially transformative ways. 00:00 Introduction and Sponsor Message 00:21 Hosts and Show Format 00:36 Weekly News Highlights 01:18 Apple and Google Partnership 02:39 Humanoid Robots: Xang's IRON 03:37 Robot's Human-like Features 08:47 Microsoft's Super Intelligence Division 09:47 AI in Everyday Life 15:57 OpenAI's For-Profit Transition 21:27 Healthcare Costs and AI Assistance 25:00 AI for Personal and Professional Use 29:29 Sora Two for Android 30:11 The Popularity of Controversial Content 30:32 Fox News Fooled by Fake Video 33:22 The Rise of AI-Generated Music 34:03 Legal Battles in the AI and Music Industry 36:25 AI and the Future of Copyright 39:54 Microsoft's AI Copilot and Privacy Concerns 41:02 AI Security and Privacy Innovations 42:33 The Debate on AI Consciousness 47:54 Philosophical Questions on Consciousness 01:00:20 The Ethics of AI Treatment 01:03:23 Billionaires and the AI Apocalypse 01:04:45 Final Thoughts and Farewell
Brendan Steinhauser, CEO of the Alliance for Secure AI, joins the show to discuss the implications of the recent Trump–Xi meeting for AI security and global technology governance. He explains how leadership discussions between world powers influence the development, deployment, and regulation of artificial intelligence, and why ensuring secure, responsible AI is critical for national and international safety. Brendan also highlights potential risks, collaboration opportunities, and the growing importance of robust AI security frameworks to protect infrastructure, data, and technological innovation.
Keywordscybersecurity, technology, AI, IoT, Intel, startups, security culture, talent development, career advice SummaryIn this episode of No Password Required, host Jack Clabby and Kayleigh Melton engage with Steve Orrin, the federal CTO at Intel, discussing the evolving landscape of cybersecurity, the importance of diverse teams, and the intersection of technology and security. Steve shares insights from his extensive career, including his experiences in the startup scene, the significance of AI and IoT, and the critical blind spots in cybersecurity practices. The conversation also touches on nurturing talent in technology and offers valuable advice for young professionals entering the field. TakeawaysIoT is now referred to as the Edge in technology.Diverse teams bring unique perspectives and solutions.Experience in cybersecurity is crucial for effective team building.The startup scene in the 90s was vibrant and innovative.Understanding both biology and technology can lead to unique career paths.AI and IoT are integral to modern cybersecurity solutions.Organizations often overlook the importance of security in early project stages.Nurturing talent involves giving them interesting projects and autonomy.Young professionals should understand the hacker mentality to succeed in cybersecurity.Customer feedback is essential for developing effective security solutions. TitlesThe Edge of Cybersecurity: Insights from Steve OrrinNavigating the Intersection of Technology and Security Sound bites"IoT is officially called the Edge.""We're making mainframe sexy again.""Surround yourself with people smarter than you." Chapters00:00 Introduction to Cybersecurity and the Edge01:48 Steve Orrin's Role at Intel04:51 The Evolution of Security Technology09:07 The Startup Scene in the 90s13:00 The Intersection of Biology and Technology15:52 The Importance of AI and IoT20:30 Blind Spots in Cybersecurity25:38 Nurturing Talent in Technology28:57 Advice for Young Cybersecurity Professionals32:10 Lifestyle Polygraph: Fun Questions with Steve
In today's MadTech Daily, we discuss tech giants stepping up efforts to fix AI security flaws; Alibaba investing USD$281m to expand Taobao stores; and the global TV and video market on track to hit USD$1tn by 2030.
Recorded live at the Input Whispers: Jazz and Cigars event in Singapore, this special compilation episode created in partnership with Input PR, brings together four insightful conversations exploring the evolving frontiers of Web3, tokenization, fraud prevention, payments, and digital security.In this exclusive collection, co-host Josh Kriger sits down with some of the leading minds shaping the future of blockchain:Edwin Mata, CEO and co-founder of Brickken, on how Real World Assets (RWAs) and tokenization are revolutionizing capital markets and democratizing investment access.Pascal Podvin, co-founder and CRO of Nsure.ai, on leveraging AI to fight fraud and strengthen KYC in an increasingly complex crypto ecosystem.Konstantins Vasilenko, co-founder and CBDO of Paybis, on simplifying crypto onboarding, bridging fiat and digital currencies, and the global rise of crypto debit cards and stablecoins.Alex Katz, co-founder and CEO of Kerberus, on redefining real-time Web3 security, achieving zero user losses, and setting new standards for digital trust.From tokenized assets to next-generation security and payments, this episode captures the dynamic pulse of Web3 innovation straight from Singapore's vibrant crypto scene.Support us through our Sponsors! ☕
OpenAI has officially transitioned to a for-profit corporation, a move approved by Delaware Attorney General Kathy Jennings. This restructuring allows OpenAI to raise capital more effectively while maintaining oversight from its original non-profit entity. Microsoft now holds a 27% stake in the new structure, valued at over $100 billion, and OpenAI has committed to purchasing $250 billion in Microsoft Azure cloud services. This agreement includes provisions for Artificial General Intelligence (AGI), which will require verification from an independent expert panel before any declarations are made. Critics have raised concerns about the potential compromise of the non-profit's independence under this new arrangement.Research from cybersecurity firm SPLX indicates that AI agents, such as OpenAI's Atlas, are becoming new security threats due to vulnerabilities that allow malicious actors to manipulate their outputs. A survey revealed that only 17.5% of U.S. business leaders have an AI governance program in place, highlighting a significant gap in responsible AI use. The National Institute of Standards and Technology emphasizes the importance of identity governance in managing AI risks, suggesting that organizations must embed identity controls throughout AI deployment to mitigate potential threats.Additionally, a critical vulnerability in Microsoft Windows Server Update Services (WSUS) is currently being exploited, with around 100,000 instances reported in just one week. This vulnerability allows unauthenticated actors to execute arbitrary code on affected systems, raising concerns among cybersecurity experts, especially since Microsoft has not updated its guidance on the matter. Meanwhile, Microsoft 365 Copilot has introduced a new feature enabling users to build applications and automate workflows using natural language, which could lead to governance challenges as employees create their own automations.For Managed Service Providers (MSPs) and IT service leaders, these developments underscore the need for enhanced governance and security measures. The shift of OpenAI to a for-profit model signals a tighter integration with Microsoft, necessitating familiarity with Azure's AI stack. The vulnerabilities associated with AI agents and the WSUS exploit highlight the importance of proactive security measures. MSPs should prioritize establishing governance frameworks around AI usage and ensure robust identity management to mitigate risks associated with these emerging technologies.Four things to know today00:00 OpenAI Officially Becomes a For-Profit Corporation, Cementing $100B Partnership with Microsoft03:30 AI Agents Are Becoming a Security Nightmare—Because No One Knows Who They Really Are07:53 Hackers Are Targeting WSUS Servers — and You Could Be Distributing Malware Without Knowing It09:28 Microsoft's New Copilot Features Turn AI from Assistant to App Creator, Raising Governance Questions This is the Business of Tech. Supported by: https://scalepad.com/dave/https://getflexpoint.com/msp-radio/
This episode is sponsored by HYPR. Visit hypr.com/idac to learn more.In this episode from Authenticate 2025, Jim McDonald and Jeff Steadman are joined by Bojan Simic, Co-Founder and CEO of HYPR, for a sponsored discussion on the evolving landscape of identity and security.Bojan shares his journey from software engineer to cybersecurity leader and dives into the core mission of HYPR: providing fast, consistent, and secure identity controls that complement existing investments. The conversation explores the major themes from the conference, including the push for passkey adoption at scale and the challenge of securely authenticating AI agents.A key focus of the discussion is the concept of "Know Your Employee" (KYE) in a continuous manner, a critical strategy for today's remote and hybrid workforces. Bojan explains how the old paradigm of one-time verification is failing, especially in the face of sophisticated, AI-powered social engineering attacks like those used by Scattered Spider. They discuss the issue of "identity sprawl" across multiple IDPs and why consolidation isn't always the answer. Instead, Bojan advocates for a flexible, best-of-breed approach that provides a consistent authentication experience and leverages existing security tools.Connect with Bojan: https://www.linkedin.com/in/bojansimic/Learn more about HYPR: https://www.hypr.com/idacConnect with us on LinkedIn:Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/Visit the show on the web at idacpodcast.comChapter Timestamps:00:00 - Introduction at Authenticate 202500:23 - Sponsored Episode Welcome: Bojan Simic, CEO of HYPR01:11 - How Bojan Simic Got into Identity and Cybersecurity02:10 - The Elevator Pitch for HYPR04:03 - The Buzz at Authenticate 2025: Passkeys and Securing AI Agents05:29 - The Trend of Continuous "Know Your Employee" (KYE)07:33 - Is Your MFA Program Enough Anymore?09:44 - Hackers Don't Break In, They Log In: The Scattered Spider Threat11:19 - How AI is Scaling Social Engineering Attacks Globally13:08 - When a Breach Happens, Who's on the Hook? IT, Security, or HR?16:23 - What is the Right Solution for Identity Practitioners?17:05 - The Critical Role of Internal Marketing for Technology Adoption22:27 - The Problem with Identity Sprawl and the Fallacy of IDP Consolidation25:47 - When is it Time to Move On From Your Existing Identity Tools?28:16 - The Role of Document-Based Identity Verification in the Enterprise32:31 - What Makes HYPR's Approach Unique?35:33 - How Do You Measure the Success of an Identity Solution?36:39 - HYPR's Philosophy: Never Leave a User Stranded39:00 - Authentication as a Tier Zero, Always-On Capability40:05 - Is Identity Part of Your Disaster Recovery Plan?41:36 - From the Ring to the C-Suite: Bojan's Past as a Competitive Boxer47:03 - How to Learn More About HYPRKeywords:IDAC, Identity at the Center, Jeff Steadman, Jim McDonald, Bojan Simic, HYPR, Passkeys, Know Your Employee, KYE, Continuous Identity, Identity Verification, Authenticate 2025, Phishing Resistant, Social Engineering, Scattered Spider, AI Security, Identity Sprawl, Passwordless Authentication, FIDO, MFA, IDP Consolidation, Zero Trust, Cybersecurity, IAM, Identity and Access Management, Enterprise Security
In this episode of the Playbook Universe, we speak with Kristian Kamber, CEO and co-founder of SPLX AI, about his journey from elite sales roles at AppDynamics and ZScaler to launching his own AI security startup. Kristian shares the inner drive and lessons learned, from a family restaurant business to the high-stakes world of Silicon Valley playbooks, that prepared him to lead a fast-moving, deeply technical company. He reveals the critical steps in finding the right technical co-founder, securing investment, and achieving product-market fit by listening intently to customers. This is essential listening for any sales professional considering the leap into entrepreneurship.
-The United States and China are ready to move forward on a TikTok deal, according to Treasury Secretary Scott Bessent. -A high school student in Baltimore County, Maryland was reportedly handcuffed and searched after an AI security system flagged his bag of chips as a possible firearm. -OpenAI is working on a new tool that would generate music based on text and audio prompts, according to a report in The Information. Such a tool could be used to add music to existing videos, or to add guitar accompaniment to an existing vocal track, sources said. Learn more about your ad choices. Visit podcastchoices.com/adchoices
This episode features Chidi Alams, CIO of Just Born — the company behind candy classics like Peeps, Mike and Ike, and Hot Tamales.Chidi shares how his team is using AI, automation, and smarter data systems to modernize operations, strengthen supply chain resilience, and double the business impact of technology. It's a conversation about what it really means to run IT like a growth engine — not just keeping the lights on, but driving strategy, efficiency, and innovation.Plus, much more:Chidi's take on “physical AI” in manufacturingHow the CIO role is evolving into a more strategic leadership positionWhy a values-driven tech culture might be the secret to long-term successWhether you're a CIO, IT leader, or simply curious about how AI and data are reshaping business, this episode delivers grounded, real-world insights. About the Guest: Chidi Alams, CIO at Just Born, Inc., is a transformation executive with a proven track record of leading strategic initiatives that drive operational excellence, organic growth, and digital innovation. His experience includes both Fortune 500 and private equity-backed companies.Timestamps:02:10 Transitioning Between Industries03:26 Role and Responsibilities of a CIO06:05 Business Transformation and Strategy08:12 Managing Peak Seasons and Supply Chain14:26 Leveraging Data and AI21:20 Talent Acquisition and Company Culture27:51 Future of Technology and CIO RoleGuest Highlights:“ A lot of how we ran the business, even during the peak season, was a tremendous amount of tribal knowledge. We can't scale based on tribal knowledge, right? So having data systems, particularly as we bring in new people into the organization, helps us to be more predictive and meet demand during peak season.”“ CIOs have to become more business centric. When you look at what's happening in large enterprises, you're seeing a fragmentation of technology leadership. I do believe that there will be a convergence at some point.”“ I'm extremely interested and have been tracking what I think is a very important trend, not just in CPG but in retail and any consumer space — even pharma — and that is how can we leverage large language models that are trained for CPG to help drive product innovation. It's already happening.”Get Connected:Chidi Alams on LinkedInYousuf Kahn on LinkedInIan Faison on LinkedInHungry for more tech talk? Check out past episodes at ciopod.com: Ep 61 - What Manufacturing Can Teach You About Scaling Enterprise AIEp 60 - Why the Smartest CIOs Are Becoming Business StrategistsEp 59 - CIO Leadership in AI Security and InnovationLearn more about Caspian Studios: caspianstudios.com Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Voices is a new mini-series from Humanitarian AI Today. In daily five-minute flashpods we pass the mic to humanitarian experts and technology pioneers, to hear about new projects, events, and perspectives on topics of importance to the humanitarian community. In this flashpod, Chelsea McMurray, founder of the AI security startup Dorcha, joins Humanitarian AI Today producer Brent Phillips to discuss international human rights law, AI security, and the threat landscape facing humanitarian actors. They begin with Chelsea's background in human rights law and the recent disregard for international norms that should underpin ethical AI governance. The casual conversation then pivots to AI security and the specific threats humanitarian organizations face. Chelsea explains how her startup addresses data privacy vulnerabilities and prompt injection attacks, by giving users greater control over their personal information. Protecting such sensitive data is especially critical in the humanitarian sector, where information leaks can endanger field staff and the vulnerable populations they serve. Substack notes: https://humanitarianaitoday.substack.com/p/chelsea-mcmurray-on-ai-security-and
AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
Jaeden interviews Jonathan Mortensen, founder and CEO of Confidence Security, a company focused on providing a privacy layer for AI applications. They discuss the growing concerns around data privacy in AI, the innovative solutions offered by Confidence Security, and the potential markets for their technology, including enterprises and sovereign AI. The conversation also covers integration options, the competitive landscape, and the risks associated with prompt injection in AI systems.Get the top 40+ AI Models for $20 at AI Box: https://aibox.aiAI Chat YouTube Channel: https://www.youtube.com/@JaedenSchaferJoin my AI Hustle Community: https://www.skool.com/aihustleTo recommend a guest email: guests(@)podcaststudio.comLearn more about confident security: https://confident.security/Chapters00:00 Introduction to AI Privacy and Security03:39 Understanding the Concerns of AI Data Breaches07:33 Confident Security's Innovative Approach11:04 Integration and Implementation of AI Solutions15:52 Competitive Landscape in AI Privacy Solutions20:36 Common Misconceptions in AI Security
Cisco's Vijoy Pandey - SVP & GM of Outshift by Cisco - explains how AI agents and quantum networks could completely redefine how software, infrastructure, and security function in the next decade.You'll learn:→ What “Agentic AI” and the “Internet of Agents” actually are→ How Cisco open-sourced the Internet of Agents framework and why decentralization matters→ The security threat of “store-now, decrypt-later” attacks—and how post-quantum cryptography will defend against them→ How Outshift's “freedom to fail” model fuels real innovation inside a Fortune-500 company→ Why the next generation of software will blur the line between humans, AI agents, and machines→ The vision behind Cisco's Quantum Internet—and two real-world use cases you can see today: Quantum Sync and Quantum AlertAbout Today's Guest:Meet Vijoy Pandey, the mind behind Cisco's Outshift—a team pushing the boundaries of what's next in AI, quantum computing, and the future internet. With 80+ patents to his name and a career spent redefining how systems connect and think, he's one of the few leaders truly building the next era of computing before the rest of us even see it coming.Key Moments:00:00 Meet Vijoy Pandey & Outshift's mission04:30 The two hardest problems in computer science: Superintelligence & Quantum Computing06:30 Why “freedom to fail” is Cisco's innovation superpower10:20 Inside the Outshift model: incubating like a startup inside Cisco21:00 What is Agentic AI? The rise of the Internet of Agents27:00 AGNTCY.org and open-sourcing the Internet of Agents32:00 What would an Internet of Agents actually look like?38:19 Responsible AI & governance: putting guardrails in early49:40 What is quantum computing? What is quantum networking?55:27 The vision for a global Quantum InternetWatch Next: https://youtu.be/-Jb2tWsAVwI?si=l79rdEGxB-i-Wrrn -- This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to.That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to meter for sponsoring. Go to meter.com/itv to book a demo.---IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In today's episode, Les sits down with Will Pearce and Brad Palm from Dreadnode, one of the nation's most advanced offensive AI and cybersecurity companies. Based in the Rocky Mountain West, Dreadnode is redefining how we think about digital defense — by taking the offensive. Will and Brad share their experiences leading red teams at Microsoft, NVIDIA, and within the U.S. Marine Corps, and how those lessons now shape their mission to secure the future of artificial intelligence.From battlefield drones and AI-enabled cyberattacks to the regulatory frameworks that will define the next era of warfare, this conversation explores what happens when AI becomes both a weapon and a shield.Here's a closer look at the episode:From Red Teams to FoundersWill Pearce, former leader of AI Red Teams at Microsoft and NVIDIA, discusses his journey from penetration testing and consulting to building Dreadnode.Describes how the offensive use of AI is a natural extension of red teaming — “offense leads defense.”Brad Palm's Path from the BattlefieldBrad Palm, a Marine Corps veteran and former red team leader, shares how military principles of mobility, attack, and defense translate into cyber warfare.Offensive cyber as a transformational moment — comparing AI's impact to the leap from muskets to machine guns.The Rise of Offensive AIWill breaks down the offensive AI landscape, from code scanning and model manipulation to adversarial attacks on computer vision systems.How more “eyes,” even artificial ones, find more vulnerabilities — accelerating both innovation and exposure.Building a Platform for Cyber ML OpsDreadnode's platform enables organizations to build, evaluate, and deploy AI models and agents with security in mind.Unlike “AI-in-a-box” startups, their approach mirrors ML Ops infrastructure — prioritizing transparency, testing, and adaptability.Their mission: help clients build their own capabilities, rather than just buy black-box solutions.A Collaborative Cybersecurity CommunityWill and Brad note that in AI security, collaboration beats competition.“If you have confidence in your abilities, you don't need to hide anything.”Despite growing investment and consolidation, the founders believe the industry is still expanding rapidly — with room for innovation and partnership.Human + AI: The Future of the BattlefieldBrad connects his defense background to current AI developments, pointing to autonomous drones in Ukraine as examples of real-time AI-driven warfare.Raises ethical and practical questions about “human-in-the-loop” systems and the urgency of explainable, auditable AI in combat environments.Will expands on how regulatory frameworks and rules of engagement must evolve to keep pace with privately developed AI systems.Offensive AI Conference & What's NextHosting Offensive AI Con in San Diego — the first of its kind dedicated to offensive AI research and community building.The team continues to release state-of-the-art research drops, collaborating with cyber threat intel groups and enterprise partners.Above all, the founders share a deep appreciation for their team culture: detail-oriented, relentlessly curious, and dedicated to “winning every day.”Resources:Website: https://dreadnode.io/ Will Pearce - https://www.linkedin.com/in/will-pearce-a62331135/Brad Palm - https://www.linkedin.com/in/bradpalm/Dreadnode LinkedIn: https://www.linkedin.com/company/dreadnode
At WebexOne, Cisco unveiled the ultimate flex: AI so powerful it's practically rewriting the rulebook for secure, intelligent meetings. Jabra and Huddly are keeping it real, no gimmicks, just AI that actually makes conferencing work better.The video version of this podcast can be found here.Join host Tim Albright and his industry expert guests for another must-watch AVWeek episode covering critical topics from the commercial AV world. This week's discussion tackles what really emerged from Cisco's WebexOne event beyond the partnership headlines and defines what AI Security and AI features conferencing actually needs (versus what vendors think sounds impressive).Host: Tim AlbrightGuests:Jason Haynie – Q2Kristin Bidwell – Audiovisual Consulting TeamKelly Teel – Kelly on LinkedInThis Week In AV:rAVe Pubs – Cisco's AI Collaboration ToolsAV Magazine – ISE Launching SparkAV Network – Ross Video Acquires ioversalAV Network – Avidex Acquires CSS New EnglandRoundtable Topics:UC Today – Cisco Microsoft and More In on AI ToolsAV Buyers Club – Jabra & Huddly Launch AI ToolsEngadget – Microsoft wants 'Vibes based' workingSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
"The next five years are gonna be wild." That's the verdict from Forrester Principal Analyst Allie Mellen on the state of Security Operations. This episode dives into the "massive reset" that is transforming the SOC, driven by the rise of generative AI and a revolution in data management.Allie explains why the traditional L1, L2, L3 SOC model, long considered a "rite of passage" that leads to burnout is being replaced by a more agile and effective Detection Engineering structure. As a self-proclaimed "AI skeptic," she cuts through the marketing hype to reveal what's real and what's not, arguing that while we are "not really at the point of agentic" AI, the real value lies in specialized triage and investigation agents.Guest Socials - Allie's Linkedin Podcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter - Cloud Security BootCampIf you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(02:35) Who is Allie Mellen?(03:15) What is Security Operations in 2025? The SIEM & XDR Shakeup(06:20) The Rise of Security Data Lakes & Data Pipeline Tools(09:20) A "Great Reset" is Coming for the SOC(10:30) Why the L1/L2/L3 Model is a Burnout Machine(13:25) The Future is Detection Engineering: An "Infinite Loop of Improvement"(17:10) Using AI Hallucinations as a Feature for New Detections(18:30) AI in the SOC: Separating Hype from Reality(22:30) What is "Agentic AI" (and Are We There Yet?)(26:20) "No One Knows How to Secure AI": The Detection & Response Challenge(28:10) The Critical Role of Observability Data for AI Security(31:30) Are SOC Teams Actually Using AI Today?(34:30) How to Build a SOC Team in the AI Era: Uplift & Upskill(39:20) The 3 Things to Look for When Buying Security AI Tools(41:40) Final Questions: Reading, Cooking, and SushiResources:You can read Allie's blogs here
Kris Kamber is CEO of SPLX AI. SPLX performs security testing and red teaming for AI agents, helping organizations detect vulnerabilities in their constantly expanding agent deployments. Before SPLX, Kris worked a handful of sales jobs, starting in telecom before hustling his way into Zscaler. I enjoyed asking him about the specific lessons from working in sales such as setting metrics and compensation. He's the first person who has described to me a workplace filled with arrogant and cocky people and also illustrated why he was attracted to that environment. We also touched on how he met his co-founder through a conversation on a plane and what compelled him to build a company at the intersection of AI and cybersecurity given his background.SPLX Website
In this episode, we talk about practical guardrails for LLMs with data scientist Nicholas Brathwaite. We focus on how to stop PII leaks, retrieve data, and evaluate safety with real limits. We weigh managed solutions like AWS Bedrock against open-source approaches and discuss when to skip LLMs altogether.• Why guardrails matter for PII, secrets, and access control• Where to place controls across prompt, training, and output• Prompt injection, jailbreaks, and adversarial handling• RAG design with vector DB separation and permissions• Evaluation methods, risk scoring, and cost trade-offs• AWS Bedrock guardrails vs open-source customization• Domain-adapted safety models and policy matching• When deterministic systems beat LLM complexityThis episode is part of our "AI in Practice” series, where we invite guests to talk about the reality of their work in AI. From hands-on development to scientific research, be sure to check out other episodes under this heading in our listings.Related research:Building trustworthy AI: Guardrail technologies and strategies (N. Brathwaite)Nic's GitHubWhat did you think? Let us know.Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics: LinkedIn - Episode summaries, shares of cited articles, and more. YouTube - Was it something that we said? Good. Share your favorite quotes. Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
Serve No Master : Escape the 9-5, Fire Your Boss, Achieve Financial Freedom
Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this episode, we delve into the ever-evolving domain of AI and cybersecurity with our insightful guest, Jason McKinley, an expert in cybersecurity transformation.Jason shares his unique insights on the current state of AI security and the challenges posed by our rapidly digitalizing society. He discusses the nuanced landscape where speed of technology adoption often outpaces the necessary security measures, leading to vulnerabilities within organizations. Jason emphasizes the importance of slowing down to establish robust security baselines before accelerating innovation.Notable Quotes:"The biggest risk is... moving too fast. We're very much from a pace of technology change right now... We want to slow down just a little bit and then go faster." - [Jason McKinley]"If you aren't using AI, you're wrong because your employees are, you just might not know it yet." - [Jason McKinley]"Security should not trump the operations... You still have to be able to get the job done." - [Jason McKinley]Jason also explores the cultural shifts in businesses where convenience often overrides security protocols and discusses the pitfalls of merging personal and professional digital spaces. He offers strategic advice on creating a cybersecurity culture within organizations, emphasizing the critical role of executive commitment and beginning security practices early in the business lifecycle.Connect with Jason McKinley:LinkedIn: https://www.linkedin.com/in/jasonmckinleyatg/ To reach Jason directly, visit his booking link on the LinkedIn profile or the company's website for further consultations and insights.This episode is a must-listen for anyone interested in the intersection of technology and security, offering practical strategies to protect your digital assets in an age where AI continues to reshape the business landscape. If you're ready to take cybersecurity seriously, Jason's insights are invaluable!Connect with Jonathan Green The Bestseller: ChatGPT Profits Free Gift: The Master Prompt for ChatGPT Free Book on Amazon: Fire Your Boss Podcast Website: https://artificialintelligencepod.com/ Subscribe, Rate, and Review: https://artificialintelligencepod.com/itunes Video Episodes: https://www.youtube.com/@ArtificialIntelligencePodcast
Modern digital supply chains are increasingly complex and vulnerable. In this episode of Security Matters, host David Puner is joined by Retsef Levi, professor of operations management at the MIT Sloan School of Management, to explore how organizations can “sense the signals” of hidden risks lurking within their software supply chains, from open source dependencies to third-party integrations and AI-driven automation.Professor Levi, a leading expert in cyber resilience and complex systems, explains why traditional prevention isn't enough and how attackers exploit unseen pathways to infiltrate even the most secure enterprises. The conversation covers the critical need for transparency, continuous monitoring, and rapid detection and recovery in an era where software is built from countless unknown components.Key topics include:How to sense early warning signs of supply chain attacksThe role of AI and automation in both risk and defenseBest practices for mapping and securing your digital ecosystemWhy resilience—not just prevention—must be at the core of your security strategyWhether you're a CISO, IT leader or security practitioner, this episode will help you rethink your approach to digital supply chain risk and prepare your organization for what's next.Subscribe to Security Matters for expert insights on identity security, cyber resilience and the evolving threat landscape.
In this episode of Acta Non Verba, host Marcus Aurelius Anderson sits down with Sam Alaimo, former Navy SEAL, co-founder of ZeroEyes, writer, and host of the Nobel Podcast. Together, they explore the practical application of philosophy, the power of adversity, the transition from military to civilian life, and the importance of honest introspection. Sam shares his journey from the SEAL teams to entrepreneurship and writing, offering deep insights on leadership, resilience, and living a life of action. Episode Highlights: [8:58] The Power of Adversity and StoicismSam and Marcus discuss how adversity shapes character, the role of stoicism, and the importance of honest self-reflection. [29:32] Transitioning from Military to Civilian LifeSam shares the challenges of leaving the SEAL teams, finding new purpose, and building a meaningful life after service. [1:02:12] Leadership and Building ZeroEyesSam talks about founding ZeroEyes, tackling gun violence, and the importance of frontline leadership and mission-driven work. Guest Bio & Contact Info Sam Alaimo is a former Navy SEAL, co-founder of ZeroEyes—a company dedicated to preventing gun violence through AI-powered security solutions—writer of the "What Then" Substack, and host of the Nobel Podcast. After his military service, Sam transitioned into entrepreneurship and writing, focusing on philosophy, leadership, and resilience. ZeroEyes: com Substack: org Podcast: Nobel Podcast Find Sam: Google "Sam Alaimo What Then" or visit his Substack for more. Learn more about the gift of Adversity and my mission to help my fellow humans create a better world by heading to www.marcusaureliusanderson.com. There you can take action by joining my ANV inner circle to get exclusive content and information.See omnystudio.com/listener for privacy information.
Brian Mendenhall, Worldwide Head, Security & Identity Partner Specialists of Amazon Web Services, reveals the insider framework for transforming enterprise AI security, including the three-pillar approach and partnership strategies that leading companies use to navigate AI governance challenges.Topics Include:At AWS everything starts with security as core principleConsulting partners follow three-phase model: assess, remediate, then fully manage securityTraditional security framework covers threat detection, incident response, and data protectionAI compliance spans multiple governance bodies with stacking requirements and regulationsEU AI Act affects any company globally if Europeans access their applicationsThree pillars: security OF AI, AI FOR security, security FROM AI attacksAWS launches AI security competency program with specialized partner categories and certificationsEnterprise AI spans five risk levels from consumer apps to self-trained modelsLegal liability dramatically increases as you move toward custom AI implementationsSafety means preventing harm; security means preventing breaches - both critical distinctionsCurrent AI hallucination rates hit 65-75% across major platforms like PalantirShared responsibility model determines who's liable when AI security tools failIndustry evolution progresses from machine learning to generative AI to autonomous agentsMajor prototype-to-production gap caused by governance, security, and scalability challengesSuccessful AWS partnerships require clear use cases, differentiation, and targeted go-to-market strategyParticipants:Brian Mendenhall - WW Head, Security & Identity Partner Specialists, Amazon Web ServicesSee how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
The race to deploy AI is on, but are the cloud platforms we rely on secure by default? This episode features a practical, in-the-weeds discussion with Kyler Middleton, Principal Developer, Internal AI Solutions, Veradigm and Sai Gunaranjan, Lead Architect, Veradigm as they compare the security realities of building AI applications on the two largest cloud providers.The conversation uncovers critical security gaps you need to be aware of. Sai reveals that Azure AI defaults to sending customer data globally for processing to keep costs low, a major compliance risk that must be manually disabled . Kyler breaks down the challenges with AWS Bedrock, including the lack of resource-level security policies and a consolidated logging system that mixes all AI conversations into one place, making incident response incredibly difficult .This is an essential guide for any cloud security or platform engineer moving into the AI space. Learn about the real-world architectural patterns, the insecure defaults to watch out for, and the new skills required to transition from a Cloud Security Engineer to an AI Security Engineer.Guest Socials - Kyler's Linkedin + Sai's Linkedin Podcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter - Cloud Security BootCampIf you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(02:30) Who are Kyler Middleton & Sai Gunaranjan?(03:40) Common AI Use Cases: Chatbots & Product Integration(05:15) Beyond IAM: The Full Scope of AI Security in the Cloud(07:30) The Role of the Cloud in Deploying Secure AI(13:10) AWS AI Architecture: Bedrock, Knowledge Bases & Vector Databases(15:10) Azure AI Architecture: AI Services, ML Workspaces & Foundry(21:00) The "Delete the Frontend" Problem: The Risk of Agentic AI(23:25) A Security Deep Dive into Microsoft Azure AI Services(29:20) Azure's Insecure Default: Sending Your Data Globally(31:35) A Security Deep Dive into AWS Bedrock(32:30) The Critical Gap: No Resource Policies in AWS Bedrock(33:20) AWS Bedrock's Logging Problem: A Nightmare for Incident Response(36:15) AWS vs. Azure: Which is More Secure for AI Today?(39:20) A Maturity Model for Adopting AI Security in the Cloud(44:15) From Cloud Security to AI Security Engineer: What's the Skill Gap?(48:45) Final Questions: Toddlers, Kickball, Barbecue & Ice Cream
Discover how enterprises can successfully adopt and scale agentic AI to create real business impact in this conversation with Florian Douetteau, CEO and co-founder of Dataiku. Florian shares why democratizing AI across the enterprise is essential, how to prevent agent sprawl, and what it takes to build a governance framework that keeps your data secure while enabling innovation. Learn about Dataiku's enterprise AI blueprint, its partnership with NVIDIA, and how global companies are using agentic workflows to accelerate R&D, optimize operations, and stay competitive. If you're a business leader, CTO, or data professional looking to scale AI safely and effectively, this episode is your playbook for the future of enterprise AI. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI 00:00 Intro 00:31 Florian's Background & Dataiku's Founding 03:00 Enterprise Blueprint for AI with NVIDIA 05:13 Unique Needs of Financial Services 07:09 Building Agents on Dataiku 09:22 Permissioning & Governance 11:17 Agent Lifecycle Management 13:20 State of Agent-to-Agent Systems 15:02 Real-World Use Cases of Agents 16:28 The Most Complex Agents in Production 19:01 Future Vision: Headless Organizations 21:04 Human-Like Qualities of Agents 24:56 The LLM Mesh & Model Abstraction 28:55 Guardrails & Compliance 31:12 No-Code + Code-Friendly Collaboration 36:12 Breaking Silos & Centers of Excellence 41:36 Distribution & Seat Allocation 43:34 Most Common Agents by Industry 47:02 The State of Enterprise AI Adoption
CIO Classified is back! More CIO secrets. More battle-tested IT wisdom. Straight from leading CIOs across a wide range of industries. In this episode, host Ian Faison and co-host Yousuf Khan dive into the deep end of technology leadership in manufacturing. Ben Davis, Executive Vice President of IT at Cambria, joins the show to talk about his sweeping digital transformation at the quartz manufacturing leader, and shares how his startup past helped him turn IT from a reactive function to a trusted business advisor. Plus much more:How Cambria is leveraging AI in demand forecastingHow to optimize supply chains and improve customer experience How to do it all while managing a legacy infrastructure and cybersecurityThis episode is a must-listen for the modern CIO looking to bridge the gap between traditional industries and modern technologies without sacrificing security or business continuity. About the Guest: Ben Davis, EVP IT, Cambria, is a technical leader who is passionate about introducing new technology, improved processes and unexplored data sets to businesses in a manner that allows them to achieve scalable revenue growth. He does this by helping business-minded technologists use automation, prioritization and critical thinking to deliver technology, process improvement and data in a high-value, cost-effective way. Timestamps:02:30 – From startups to manufacturing: Applying entrepreneurial DNA07:00 – Communicating tech value across the organization09:30 – Why AI in manufacturing is a game-changer15:00 – Cybersecurity training, scorekeeping, and zero-trust realities17:30 – Modernizing legacy infrastructure in manufacturing23:00 – AI adoption vs. business architecture readiness26:00 – Staying close to the customer experience as CIO28:00 – Building, retaining, and empowering high-impact IT teams31:00 – Governance, shadow IT, and the rise of internal agents35:00 – AI tooling, data gaps, and minimizing technical debt38:00 – Manufacturing success, excitement, and the human side of techGuest Highlights:“ I think everybody under spends on cybersecurity. If I had an unlimited budget, I'd put the money towards that. I would also spend the money on data scientists, data modeling, data governance, mass data management to ensure that our data was ready to really take advantage of AI.”Get Connected:Ben Davis on LinkedInYousuf Kahn on LinkedInIan Faison on LinkedInHungry for more tech talk? Check out past episodes at ciopod.com: Ep 60 - Why the Smartest CIOs Are Becoming Business StrategistsEp 59 - CIO Leadership in AI Security and InnovationEp 58 - AI-Driven Workplace TransformationLearn more about Caspian Studios: caspianstudios.com Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The managed services provider (MSP) market is experiencing a paradoxical trend where revenue is increasing while the number of providers is decreasing. According to Canalys data, global managed services revenue surpassed half a trillion dollars in 2024, reflecting a year-over-year growth of 9.7%. However, the number of channel partners has slightly declined by 0.6%, with large MSPs rapidly acquiring smaller ones. This consolidation trend has led to a significant shift in the market dynamics, where smaller MSPs struggle to compete against larger firms that possess superior resources and pricing power.To survive in this competitive landscape, smaller MSPs must adopt focused strategies, targeting specific customer segments or industries. By doing so, they can achieve higher profit margins, with specialized MSPs reporting EBITDA percentages between 15% to 30%, compared to just 7% for those lacking focus. The article emphasizes that smaller MSPs have several options: they can sell to larger firms, acquire smaller peers, focus on niche markets, or leverage partnerships to remain competitive. The reality is that the middle tier of MSPs is rapidly disappearing, and those who attempt to serve everyone may find themselves at a disadvantage.In addition to the MSP market dynamics, the podcast discusses recent legislative developments, including Michigan's new laws addressing deepfakes, which make it illegal to create AI-generated sexual imagery without consent. This reflects a growing trend across the U.S. to combat nonconsensual abuse imagery, with most states now having similar laws. Furthermore, the U.S. Treasury has imposed sanctions on individuals and entities linked to North Korea's illicit IT worker schemes, highlighting the security risks posed by fraudulent practices in the tech industry.The episode also covers the latest advancements in AI-powered security solutions from various vendors, including Thrive, Addigy, Arctic Wolf, and Acronis. These companies are rolling out new services and products designed to enhance security operations and protect data. The overarching theme is that as technology evolves, the risks associated with it are also increasing, and IT service providers must adapt to these changes by offering value-added services that help clients navigate the complexities of compliance and security in a rapidly changing environment. Four things to know today 00:00 MSP Market Expands to $500B as Provider Count Shrinks Amid Rapid Consolidation04:10 From Abuse Imagery to Supply Chain Threats, Regulation Struggles to Keep Up With Emerging Risks07:45 AI Everywhere: Thrive, Security Vendors, OpenAI, and Microsoft Redefine Service Provider Playbook12:39 D&H and Nutanix Growth Signals Services-Led Future as Distributors and Vendors Push Into MSP Territory This is the Business of Tech. Supported by: https://scalepad.com/dave/ https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on:LinkedIn: https://www.linkedin.com/company/28908079/YouTube: https://youtube.com/mspradio/Facebook: https://www.facebook.com/mspradionews/Instagram: https://www.instagram.com/mspradio/TikTok: https://www.tiktok.com/@businessoftechBluesky: https://bsky.app/profile/businessof.tech
Send us a textIn this action-packed episode, Joey Pinz sits down with cybersecurity veteran and ex-MSP operator Chris Loehr. From his early days as a two-footed soccer midfielder to leading Solis Security through complex ransomware response cases, Chris shares insights forged in both cleats and crisis. ⚽