At Black Hat 2025, Sean Martin sits down with Ofir Stein, CTO and Co-Founder of Apono, to discuss the pressing challenges of identity and access management in today's hybrid, AI-driven environments. Stein's background in technology infrastructure and DevOps, paired with his co-founder's deep cybersecurity expertise, positions the company to address one of the most common yet critical problems in enterprise security: how to secure permissions without slowing the pace of business.

Organizations often face a tug-of-war between security teams seeking to minimize risk and engineering or business units pushing for rapid access to systems. Stein explains that traditional approaches to access control — where permissions are either always on or granted through manual processes — create friction and risk. Over-provisioned accounts become prime targets for attackers, while delayed access slows innovation.

Apono addresses this through a Zero Standing Privilege approach, where no user — human or non-human — retains permanent permissions. Instead, access is dynamically granted based on business context and automatically revoked when no longer needed. This ensures engineers and systems get the right access at the right time, without exposing unnecessary attack surfaces.

The platform integrates seamlessly with existing identity providers, governance systems, and IT workflows, allowing organizations to centralize visibility and control without replacing existing tools. Dynamic, context-based policies replace static rules, enabling access that adapts to changing conditions, including the unpredictable needs of AI agents and automated workflows.

Stein also highlights continuous discovery and anomaly detection capabilities, enabling organizations to see and act on changes in privilege usage in real time. By coupling visibility with automated policy enforcement, organizations can not only identify over-privileged accounts but also remediate them immediately — avoiding the cycle of one-off audits followed by privilege creep.

The result is a solution that scales with modern enterprise needs, reduces risk, and empowers both security teams and end users. As Stein notes, giving engineers control over their own access — including the ability to revoke it — fosters a culture of shared responsibility for security, rather than one of gatekeeping.

Learn more about Apono: https://itspm.ag/apono-1034

Note: This story contains promotional content.

Guest: Ofir Stein, CTO and Co-Founder of Apono | On LinkedIn: https://www.linkedin.com/in/ofir-stein/

Resources:
Learn more and catch more stories from Apono: https://www.itspmagazine.com/directory/apono
Learn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programs
Newsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/
Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-up
Are you interested in telling your story? https://www.itspmagazine.com/telling-your-story

Keywords: sean martin, ofir stein, apono, zero standing privilege, access management, identity security, privilege creep, just in time access, ai security, governance, cloud security, black hat, black hat usa 2025, cybersecurity, permissions
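The Zero Standing Privilege idea described above — time-boxed, just-in-time grants instead of permanent permissions, with automatic revocation — can be illustrated with a minimal sketch. All names here are hypothetical for illustration; this is not Apono's implementation.

```python
import time

class JustInTimeAccess:
    """Toy model of Zero Standing Privilege: no grant is permanent."""

    def __init__(self):
        self._grants = {}  # (user, resource) -> expiry timestamp

    def grant(self, user, resource, ttl_seconds):
        """Grant time-boxed access instead of a standing permission."""
        self._grants[(user, resource)] = time.time() + ttl_seconds

    def revoke(self, user, resource):
        """Users (or policy) can revoke access before it expires."""
        self._grants.pop((user, resource), None)

    def is_allowed(self, user, resource):
        """No entry, or an expired entry, means no access."""
        expiry = self._grants.get((user, resource))
        if expiry is None or time.time() >= expiry:
            self._grants.pop((user, resource), None)  # lazy auto-revoke
            return False
        return True

jit = JustInTimeAccess()
jit.grant("engineer@example.com", "prod-db", ttl_seconds=3600)
print(jit.is_allowed("engineer@example.com", "prod-db"))   # True
print(jit.is_allowed("engineer@example.com", "staging"))   # False
```

A real system would derive the TTL and approval from business context rather than a hard-coded number, but the invariant is the same: absent an active, unexpired grant, the default answer is no.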
Shay Levi (@shaylevi2, CEO @UnframeAI) & Larissa Schneider (COO @UnframeAI) discuss the complexities of building an enterprise-grade AI platform. Topics include what an AI platform is, the advantages of adoption, and the efficiencies gained.

SHOW: 948
SHOW TRANSCRIPT: The Cloudcast #948 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST: "CLOUDCAST BASICS"

SPONSORS:
[VASION] Vasion Print eliminates the need for print servers by enabling secure, cloud-based printing from any device, anywhere. Get a custom demo to see the difference for yourself.
[DoIT] Visit doit.com (that's d-o-i-t.com) to unlock intent-aware FinOps at scale with DoiT Cloud Intelligence.

SHOW NOTES: Unframe website

Topic 1 - Shay & Larissa, welcome to the show! Give everyone a brief introduction and a little about your background.
Topic 2 - Today, we're discussing AI Security and Enterprise Platforms. What are the problems or issues you see with AI development today?
Topic 3 - Is this where an AI platform comes into play? I'm seeing more and more about this term and wondering what it truly means to be a platform. What is your definition of a platform, and what are the advantages?
Topic 4 - Shay, considering your background in APIs and API security, how does that knowledge transfer into this space?
Topic 5 - Larissa, with your background in operations, where do you see the inefficiencies in AI development and lifecycle management of the AI models and the datasets?
Topic 6 - Let's talk about Unframe. Give everyone an overview. Is this a SaaS service? How and where does it fit into your typical AI development stack?

FEEDBACK?
Email: show at the cloudcast dot net
Bluesky: @cloudcastpod.bsky.social
Twitter/X: @cloudcastpod
Instagram: @cloudcastpod
TikTok: @cloudcastpod
On today's show: AI models, quantum computing, mental health, smartphones, cryptocurrency, Tesla, regenerative braking, oil reserves, wind energy, military applications, open source, electric vehicles, hologram technology, parenting, cybersecurity, energy transition, nuclear power, fusion, hydrocarbons, solar energy, material science, AI, tariffs, manufacturing, redistricting, RNA vaccines, youth violence, Trump, China. Have fun!
David Selinger (aka "Selly") is the founder and CEO of Deep Sentinel, a security company blending AI with live human monitoring to stop crime in real time. From Amazon to Redfin to AI security, Dave Selinger has built a real-time protection system now scaling fast with $15M in Series B funding from top investors.

In this episode, Selly breaks down how Deep Sentinel works — from crime prediction models and real-time police calls to training AI to spot danger before it happens. He explains how the company went from idea to reality, how it stacks up against traditional alarms, and why his military mentors shaped his leadership style.

This isn't just about cameras. It's about making AI useful, delivering outcomes that matter, and building a team with zero tolerance for compromise. You'll also hear Selly's thoughts on parenting, college, career detours, and how early obsessions with tech led him from Stanford to Jeff Bezos's office to the front lines of crime prevention.

Main Topics:
• How Deep Sentinel stops crime before it happens using AI and live guards
• Why traditional alarm systems fail — and what real security should look like
• Lessons from military mentors on leadership, discipline, and zero compromise
• The challenge of scaling real-time protection for homes and businesses
• How Selly's early work at Amazon (with Jeff Bezos) and Redfin shaped his tech mindset
• Raising kids with curiosity, independence, and meaningful support
• Why the future of security depends on speed, customization, and trust

Chapters with Timestamps:
[00:00:00] Introduction and Initial Scenario
[00:00:42] Podcasting and Audience Engagement
[00:02:06] AI and Podcasting Insights
[00:03:17] Real-Life Security Challenges
[00:03:58] Deep Sentinel's Unique Approach
[00:04:49] Customer Experiences and Success Stories
[00:11:34] Public-Private Partnerships in Security
[00:15:52] Advanced Security Solutions and AI Integration
[00:27:45] Exploring Security Challenges and Solutions
[00:29:27] Military Influence and No Compromise Mentality
[00:33:35] Childhood Passions and Career Pathways
[00:36:02] Parental Support and Personal Growth
[00:41:43] College Education and Career Advice
[00:48:14] Amazon Experience and Innovations
[00:54:23] Founding Redfin and Its Impact
[00:56:29] Deep Sentinel's Growth and Future

Deep Sentinel: Website | LinkedIn | YouTube | Series B Funding

Related Episodes:
Ankit Somani | From Google to Conifer: Rare-Earth-Free Motors, $20M Seed, and Rethinking College
How AI Is Changing College Counseling and Admissions with Senan Khawaja, CEO of Kollegio
AI Content Detection & Digital Ethics with Madeleine Lambert

Entrepreneur Perspectives is produced by QuietLoud Studios — a modern media network and a KazSource brand.

Get in touch with Eric Kasimov: X | LinkedIn

Credits: Music by Jess & Ricky: SoundCloud
Three Buddy Problem - Episode 57: Brandon Dixon (PassiveTotal/RiskIQ, Microsoft) leads a deep dive into the collision of AI and cybersecurity. We tackle Google's "Big Sleep" project, XBOW's HackerOne automation hype, and the long-running tension between big-tech ownership of critical security tools and the community's need for open access. Plus: the future of SOC automation and AI-assisted pen testing, how agentic AI could transform cyber talent bottlenecks and operational inefficiencies, geopolitical debates over backdoors in GPUs, and the strategic implications of China's AI model development.

Cast: Brandon Dixon (https://www.linkedin.com/in/brandonsdixon/), Juan Andres Guerrero-Saade (https://twitter.com/juanandres_gs), and Ryan Naraine (https://twitter.com/ryanaraine).
Is your company's AI strategy opening you up to massive security risks? In this powerful conversation, Claudionor Coelho, Chief AI Officer at Zscaler, reveals the hidden dangers of agentic AI — and how Fortune 500 companies are unintentionally exposing sensitive data through generative AI tools.

Claudionor shares real-world examples of AI vulnerabilities, how attackers can exploit agent systems to extract private data, and why AI security must be treated with the same urgency as data security. From salary leaks to corporate data breaches, this discussion is a wake-up call for executives, CISOs, and AI teams alike.
Artificial intelligence agents are significantly transforming the landscape of software-as-a-service (SaaS) pricing, moving away from traditional per-seat licenses towards usage- and outcome-based models. Gartner predicts that by 2030, 40% of enterprise spending on software will shift to these new pricing structures, prompting businesses to reassess how they perceive value in digital operations. As AI features become more prevalent in enterprise software, organizations must navigate potential risks such as data silos and vendor lock-in, emphasizing the need for transparent pricing and governance strategies.

The podcast discusses the growing concerns surrounding AI's impact on the workforce, with a survey revealing that 61% of white-collar workers fear job displacement within three years. Despite these fears, many workers report that AI has enhanced their creativity and productivity, suggesting that AI should be viewed as a tool for augmentation rather than replacement. This perspective can facilitate smoother adoption of AI technologies, positioning service providers as trusted change managers.

In the realm of security, Microsoft faces scrutiny over a critical vulnerability in its new NLWeb protocol, which could allow unauthorized access to sensitive files. Meanwhile, Cloudflare has accused the AI startup Perplexity of violating no-crawl directives by stealthily scraping content from websites, raising concerns about unauthorized data access. Additionally, Anthropic's research highlights how AI personalities can be influenced by training data, underscoring the importance of data governance in AI development.

The episode also touches on Delta Air Lines' clarification regarding its AI-assisted dynamic pricing model, which aims to use aggregated data rather than individual customer information. In the fashion industry, the use of AI-generated models has sparked debate over authenticity and representation, as brands seek cost-effective solutions for content creation. Overall, the discussion emphasizes the need for service providers to balance AI's cost-saving potential with the importance of maintaining trust and authenticity in consumer-facing applications.

Four things to know today:
00:00 AI Agents Drive SaaS Pricing Shift, Raise Vendor Lock-In Concerns, and Reshape Worker Attitudes
04:45 OpenAI Releases First Open-Weight Models in Six Years as AI Security and Ethics Face Fresh Challenges
09:50 Delta Defends AI Pricing as Non-Personalized, While Fashion Faces Backlash Over AI-Generated Models
12:47 Microsoft Ends Windows 11 SE, Debuts Edge AI Copilot, as SentinelOne Expands into Generative AI Security

Supported by:
https://scalepad.com/dave/
https://getflexpoint.com/msp-radio/

Tell us about a newsletter! https://bit.ly/biztechnewsletter/
All our Sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
Co-hosts Mark Thompson and Steve Little explore OpenAI's groundbreaking ChatGPT Agent, demonstrating how this autonomous tool can research, analyze, and perform complex tasks on your behalf.

Next, they address important security concerns to consider in the new world of AI agents, introducing practical guidelines for protecting sensitive family data and avoiding prompt injection attacks.

This week's Tip of the Week provides a back-to-basics guide on what AI is and its four core strengths: summarization, extraction, generation, and translation.

In RapidFire, they discuss OpenAI's rumored office suite, Microsoft's and Google's own efforts to integrate AI into their office suites, and recently announced AI infrastructure investments, including Meta's Manhattan-sized data center and President Trump's new AI Action Plan.

The hosts also announce their new Family History AI Show Academy, a five-week course beginning in October 2025. See https://tixoom.app/fhaishow/ for more details.

Timestamps:
In the News:
05:20 ChatGPT Agent: Autonomous Research Assistant for Genealogists
22:49 Safe and Secure in the Age of AI
Tip of the Week:
36:20 What is AI and What is it Good For? Back to Basics
RapidFire:
50:57 OpenAI's Office Suite Rumors
53:56 Microsoft and Google Bring AI to Their Office Suites
60:17 Big AI Infrastructure: Manhattan-Sized Data Centers

Resource Links:
Introduction to Family History AI: https://tixoom.app/fhaishow/
Do agents work in the browser? https://www.bensbites.com/p/do-agents-work-in-the-browser
Introducing ChatGPT agent: bridging research and action: https://openai.com/index/introducing-chatgpt-agent/
OpenAI's new ChatGPT Agent can control an entire computer and do tasks for you: https://www.theverge.com/ai-artificial-intelligence/709158/openai-new-release-chatgpt-agent-operator-deep-research
OpenAI's New ChatGPT Agent Tries to Do It All: https://www.wired.com/story/openai-chatgpt-agent-launch/
Agent demo post: https://x.com/rowancheung/status/1945896543263080736
OpenAI Quietly Designed a Rival to Google Workspace, Microsoft Office: https://www.theinformation.com/articles/openai-quietly-designed-rival-google-workspace-microsoft-office
OpenAI Is Quietly Creating Tools to Take on Microsoft Office and Google Workspace: https://www.theglobeandmail.com/investing/markets/stocks/MSFT/pressreleases/33074368/openai-is-quietly-creating-tools-to-take-on-microsoft-office-and-google-workspace-googl/
What's new in Microsoft 365 Copilot? https://techcommunity.microsoft.com/blog/microsoft365copilotblog/what%E2%80%99s-new-in-microsoft-365-copilot--june-2025/4427592
Google Workspace enables the future of AI-powered work for every business: https://workspace.google.com/blog/product-announcements/empowering-businesses-with-AI
Google Workspace Review: Will it Serve My Needs? https://www.emailtooltester.com/en/blog/google-workspace-review/

Tags: Artificial Intelligence, Genealogy, Family History, AI Agents, ChatGPT Agent, OpenAI, Computer Use, AI Security, Prompt Injection, Database Analysis, RootsMagic, Cemetery Records, AI Office Suite, Microsoft 365 Copilot, Google Workspace, Data Centers, AI Infrastructure, Natural Language Processing, Large Language Models, Context Windows, AI Education, Family History AI Show Academy, AI Reasoning Models, Autonomous Research, AI Ethics
Episode 245: Intro: Welcome to the next episode of Pi Perspectives. Today Matt welcomes David DeJesus of Argus365. David and his team offer an interesting security solution built around Artificial Intelligence. The guys discuss the synergy between private investigations and security consulting. Stick around to the end and learn how you can participate as a qualified partner with this technology. Please welcome David DeJesus and your host, NY Private Investigator, Matt Spaier.

Links:
Matt's email: MatthewS@Satellitepi.com
LinkedIn: Matthew Spaier
www.investigators-toolbox.com
David on LinkedIn: David Dejesus
Email: David@Argus365.com
https://www.argus365.com/
PI-Perspectives YouTube link: https://www.youtube.com/channel/UCYB3MaUg8k5w3k7UuvT6s0g

Sponsors:
https://piinstitute.com/
https://www.skopenow.com
https://researchfpr.com/
https://www.trackops.com

FBI Tip Line: https://tips.fbi.gov/home
https://www.fbi.gov/contact-us/field-offices/newyork/about - (212) 384-1000

WAD info: https://wad100conf.net/
In this episode of Techzine Talks, Coen and Sander discuss the mega-acquisition of CyberArk by Palo Alto Networks for 25 billion dollars. We dive deep into what this consolidation means for the cybersecurity industry and why identity access management, and privileged access management in particular, is becoming so crucial.

The acquisition underscores the strategic shift toward identity-first security, driven by AI and machine identities. We analyze whether this works out well for existing CyberArk customers and how it fits into the broader trend of security platform consolidation.

Chapters:
0:20 Welcome and technical updates
1:03 The 25-billion-dollar acquisition
2:13 Palo Alto's platform strategy
4:21 What are Identity and Privileged Access Management?
6:18 Why now? The AI and machine identity factor
10:09 CyberArk as market leader in PAM
10:15 Is the acquisition final?
15:52 Consequences for customers and innovation
19:21 SentinelOne rumors and market consolidation
21:55 Summary and outlook
Kent C. Dodds is back with bold ideas and a game-changing vision for the future of AI and web development. In this episode, we dive into the Model Context Protocol (MCP), the power behind Epic AI Pro, and how developers can start building Jarvis-like assistants today. From replacing websites with MCP servers to reimagining voice interfaces and AI security, Kent lays out the roadmap for what's next, and why it matters right now. Don't miss this fast-paced conversation about the tools and tech reshaping everything.

Links:
Website: https://kentcdodds.com
X: https://x.com/kentcdodds
GitHub: https://github.com/kentcdodds
YouTube: https://www.youtube.com/c/kentcdodds-vids
Twitch: https://www.twitch.tv/kentcdodds
LinkedIn: https://www.linkedin.com/in/kentcdodds

Resources:
Please make Jarvis (so I don't have to): https://www.epicai.pro/please-make-jarvis
AI Engineering Posts by Kent C. Dodds: https://www.epicai.pro/posts

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com (mailto:emily.kochanek@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)

Special Guest: Kent C. Dodds.
Vrajesh is co-founder and CEO at Operant AI. Operant AI is a holistic AI security platform, helping organizations discover, detect, and defend AI deployments. Operant raised a $10 million Series A last fall from Felicis and SineWave, and the company continues to expand its offerings within AI security. Before Operant, Vrajesh worked at Apple, Qualcomm, Arm, and Scaled Inference, rounding out an exceptionally technical background with several quality technology companies.

In the episode we discuss his career transition from extremely technical, kernel-level engineering to management, how he thinks about timing a market, and how the vision for Operant's product was cemented from day one rather than bolted together over time.

Website
Ahead of Black Hat USA 2025, Sean Martin and Marco Ciappelli sit down once again with Rupesh Chokshi, Senior Vice President and General Manager of the Application Security Group at Akamai, for a forward-looking conversation on the state of AI security. From new threat trends to enterprise missteps, Rupesh lays out three focal points for this year's security conversation: protecting generative AI at runtime, addressing the surge in AI scraper bots, and defending the APIs that serve as the foundation for AI systems.

Rupesh shares that Akamai is now detecting over 150 billion AI scraping attempts — a staggering signal of the scale and sophistication of machine-to-machine activity. These scraper bots are not only siphoning off data but also undermining digital business models by bypassing monetization channels, especially in publishing, media, and content-driven sectors.

While AI introduces productivity gains and operational efficiency, it also introduces new and uncharted risks. Agentic AI, where autonomous systems operate on behalf of users or other systems, is pushing cybersecurity teams to rethink their strategies. Traditional firewalls aren't enough, because these threats don't behave like yesterday's attacks. Prompt injection, toxic output, and AI-generated hallucinations are some of the issues now surfacing in enterprise environments, with over 70% of organizations already experiencing AI-related incidents.

This brings the focus to the runtime. Akamai's newly launched Firewall for AI is purpose-built to detect and mitigate risks in generative AI and LLM applications — without disrupting performance. Designed to flag issues like toxic output, remote code execution, or compliance violations, it operates with real-time visibility across inputs and outputs. It's not just about defense — it's about building trust as AI moves deeper into decision-making and workflow automation.

CISOs, says Rupesh, need to shift from high-level discussions to deep, tactical understanding of where and how their organizations are deploying AI. This means not only securing AI but also working hand-in-hand with the business to establish governance, drive discovery, and embed security into the fabric of innovation.

Learn more about Akamai: https://itspm.ag/akamailbwc

Note: This story contains promotional content.

Guest: Rupesh Chokshi, SVP & General Manager, Application Security, Akamai | https://www.linkedin.com/in/rupeshchokshi/

Hosts:
Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com

Resources:
Learn more and catch more stories from Akamai: https://www.itspmagazine.com/directory/akamai
Learn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programs
Newsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/
Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-up
Are you interested in telling your story? https://www.itspmagazine.com/telling-your-story
This episode is sponsored by Natoma. Visit https://www.natoma.id/ to learn more.

Join Jeff from the IDAC Podcast as he dives into a deep conversation with Paresh Bhaya, the co-founder of Natoma. In this sponsored episode, Paresh shares his journey into the identity space, discusses how Natoma helps enterprises accelerate AI adoption without compromising security, and provides insights into the rising importance of MCP and A2A protocols. Learn about the challenges and opportunities at the intersection of AI and security, the importance of dynamic access controls, and the significance of ensuring proper authentication and authorization in the growing world of agentic AI. Paresh also delights us with his memorable hike up Mount Whitney. Don't miss out!

00:00 Introduction and Sponsor Announcement
00:34 Guest Introduction: Paresh Bhaya from Natoma
01:14 Paresh's Journey into Identity
04:04 Natoma's Mission and AI Security
06:25 The Story Behind Natoma's Name
09:29 Natoma's Unique Approach to AI Security
18:32 Understanding MCP and A2A Protocols
25:20 Community Development and Adoption
25:56 Agent Interactions and Security Challenges
27:19 Navigating Product Development
29:17 Ensuring Secure Connections
36:10 Deploying and Managing MCP Servers
42:40 Shadow AI and Governance
44:17 Personal Anecdotes and Conclusion

Connect with Paresh: https://www.linkedin.com/in/paresh-bhaya/
Learn more about Natoma: https://www.natoma.id/

Connect with us on LinkedIn:
Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/
Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/

Visit the show on the web at idacpodcast.com

Keywords: IDAC, Identity at the Center, Jeff Steadman, Jim McDonald, Natoma, Paresh Bhaya, Artificial Intelligence, AI, AI Security, Identity and Access Management, IAM, Enterprise Security, AI Adoption, Technology, Innovation, Cybersecurity, Machine Learning, AI Risks, Secure AI, #idac
Sid Mangalik and Andrew Clark explore the unique governance challenges of agentic AI systems, highlighting the compounding error rates, security risks, and hidden costs that organizations must address when implementing multi-step AI processes.

Show notes:
• Agentic AI systems require governance at every step: perception, reasoning, action, and learning
• Error rates compound dramatically in multi-step processes: a model that is 90% accurate per step is only about 65% accurate over four steps
• Two-way information flow creates new security and confidentiality vulnerabilities. For example, targeted prompting to improve awareness comes at the cost of performance. (arXiv, May 24, 2025)
• Traditional governance approaches are insufficient for the complexity of agentic systems
• Organizations must implement granular monitoring, logging, and validation for each component
• Human-in-the-loop oversight is not a substitute for robust governance frameworks
• The true cost of agentic systems includes governance overhead, monitoring tools, and human expertise

Make sure you check out Part 1: Mechanism design, Part 2: Utility functions, and Part 3: Linear programming. If you're building agentic AI systems, we'd love to hear your questions and experiences. Contact us.

What we're reading:
We took a reading "break" this episode to celebrate Sid! This month, he successfully defended his Ph.D. thesis on "Psychological Health and Belief Measurement at Scale Through Language." Say congrats!

What did you think? Let us know. Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
LinkedIn - Episode summaries, shares of cited articles, and more.
YouTube - Was it something that we said? Good. Share your favorite quotes.
Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
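The compounding-error figure from the show notes above is simple to verify: if each step of a pipeline succeeds independently with probability p, the whole chain succeeds with probability p raised to the number of steps.

```python
# Check the episode's claim: 90% per-step accuracy over four steps
# leaves roughly 65% end-to-end accuracy, assuming independent steps.
def end_to_end_accuracy(per_step: float, steps: int) -> float:
    """Probability that every step of the chain succeeds."""
    return per_step ** steps

print(round(end_to_end_accuracy(0.90, 4), 4))  # 0.6561, i.e. ~65%
```

The independence assumption is the optimistic case; correlated failures across steps can make the real number worse, which is why the episode argues for validation at every step rather than only at the end.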
I cover some issues caused by the latest Windows Updates, give an update on the M&S cyber attack, discuss a landmark for Google's AI security platform, and much more!

Reference Links: https://www.rorymon.com/blog/google-ai-security-milestone-in-box-windows-apps-to-be-updated-issues-caused-by-updates/
Curl is a widely used open source tool and library for transferring data. On today's Day Two DevOps we talk with curl creator Daniel Stenberg. Daniel gives us a brief history of curl and where it's used (practically everywhere). We also discuss the impact of AI on curl. Open source projects are often starved for...
Dive deep into the complex world of AI adoption with host Kevin Dean and special guest Kodjo Hogan, a cybersecurity leader and Director of Information Security GRC for Chainalysis Inc. This episode explores the crucial, often-overlooked intersection of Artificial Intelligence, robust security, and human psychology.

Want to talk AI strategy for your company? Contact us at: https://www.manobyte.com/contact-us
Get in touch with Kevin at: https://kevinjdean.com/
Get in touch with guest Kodjo Hogan here: https://www.linkedin.com/in/kodjohogan/

In this insightful conversation, Kodjo cuts through the hype to address the serious concerns around AI risk, governance, and data integrity. You'll learn:
• What true AI governance means and why it must be deeply embedded into your adoption strategy.
• How to define specific, auditable AI use cases to ensure reliable and non-biased outcomes.
• The hidden dangers of "AI psychosis" and how AI's "yes-man" tendency can impact human judgment and decision-making.
• The critical risks of model poisoning and data hallucination, and their real-world consequences.
• The surprising, yet powerful role blockchain could play in preserving data integrity and combating deepfakes in the AI era.
• Actionable steps for business leaders to build a security-first culture around AI, involving cross-functional teams including HR and legal.

If you're a leader grappling with how to make smart, secure, and strategic decisions about AI in your organization, this episode provides the clarity and practical insights you need to move forward with confidence.
In this thoughtful and grounded episode of Joey Pinz Discipline Conversations, Joey sits down with Ken Tripp of Netwrix to discuss the evolving challenges MSPs face — and how true partner-led collaboration can help solve them. Recorded live at Pax8 Beyond 2025, this conversation weaves together cybersecurity, personal transformation, and the need for industry-wide unity.

Ken explains how Netwrix helps MSPs secure and profit from Microsoft, especially in relation to Copilot rollouts, compliance obligations, and scaling client environments without adding technical overhead. He discusses the shared responsibility model and how Netwrix streamlines identity, permissions, and data classification through AI — reducing labor costs and delivering predictable value to MSPs managing dozens or hundreds of tenants.

The conversation also turns personal: Ken shares his 120-pound weight loss journey following a major health scare and how discipline and routine helped him reshape his life. That same clarity, he says, is needed in the MSP space — not just from vendors, but through shared change and joint accountability across the ecosystem.
For episode 540 of the BlockHash Podcast, host Brandon Zemp is joined by Mike Lieberman, CTO and Co-Founder of Kusari.

Kusari began in 2022 with the goal of securing the software supply chain. The founders are passionate about this problem because they constantly faced the same issues themselves: identifying the software they were using and protecting against threats to that software. This led to slow responses to security vulnerabilities, uncertainty about licensing and compliance, and even basic maintenance challenges. Kusari brings transparency and security to software supply chains, providing clarity and actionable insights.

⏳ Timestamps:
0:00 | Introduction
1:10 | Who is Mike Lieberman?
6:10 | What is Kusari?
15:37 | Open-source software GUAC
20:00 | Threat landscape in 2025
28:43 | AI for software security
31:03 | Decentralized AI models
32:40 | Quantum computing
39:27 | Kusari roadmap 2025
44:32 | Kusari website, socials & community
In this episode of Reality 2.0, Doc and Katherine return after a long hiatus to discuss a range of topics including AI and security concerns, the evolution of cloud-native technologies, and the growing complexity of AI-related projects within various Linux Foundation groups. The conversation also touches on approaches to AI and privacy, the potential for AI to assist in personal and professional tasks, and the importance of standardizing and simplifying best practices for AI deployment. The episode wraps up with insights on the innovative 'My Terms' project, aimed at flipping the cookie consent model to better respect user privacy. The hosts also emphasize the importance of constructive conversations and maintaining optimism about the future of technology.

00:00 Welcome Back to Reality 2.0
00:36 Upcoming Open Source Summit
01:03 Linux Foundation and AI Initiatives
04:20 Apple's Approach to Personal AI
05:11 Challenges of AI and Data Privacy
07:16 Potential of Personal AI Models
11:10 Human Interaction with AI
26:50 Innovations in Cookie Consent
31:08 Commitment to More Frequent Episodes
33:16 Closing Remarks and Future Plans

Site/Blog/Newsletter (https://www.reality2cast.com)
Facebook (https://www.facebook.com/reality2cast)
Twitter (https://twitter.com/reality2cast)
Mastodon (https://linuxrocks.online/@reality2cast)
Want to master the art of negotiation in business and life? In this episode of the Grow, Sell and Retire podcast, I sat down with Molly Blomquist, a 20-year US government veteran turned negotiation expert. Molly shared game-changing insights on preparing for negotiations, the power of silence, reading body language, and the importance of empathy and sincerity. Her secret? Active listening, staying flexible, and making real human connections to achieve the best outcomes. If you want to level up your negotiation game, you'll want to hear Molly's top tips!

www.mollyblomquist.com
https://www.linkedin.com/in/molly-blomquist/
In this episode of Cybersecurity Today, host Jim Love discusses urgent cybersecurity threats and concerns. Cisco has issued emergency patches for two maximum-severity vulnerabilities in its Identity Services Engine (ISE) that could allow complete network takeover; organizations are urged to update immediately. A popular WordPress theme, Motors, has a critical vulnerability leading to mass exploitation and unauthorized admin account creation. A new ransomware group, Dire Wolf, has emerged, targeting the manufacturing and technology sectors with sophisticated double-extortion tactics. Lastly, an Accenture report reveals a dangerous gap between executive confidence and actual AI security preparedness, suggesting most major companies are not ready to handle AI-driven threats. The episode emphasizes the urgent need for immediate action and heightened awareness in the cybersecurity landscape.

00:00 Introduction and Headlines
00:26 Cisco's Critical Security Flaws
03:06 WordPress Theme Vulnerability Exploitation
05:57 Dire Wolf Ransomware Group Emerges
08:27 Accenture Report on AI Security Overconfidence
11:00 Conclusion and Upcoming Schedule
In this episode of Security Matters, host David Puner sits down with Deepak Taneja, co-founder of Zilla Security and General Manager of Identity Governance at CyberArk, to explore why 2025 marks a pivotal moment for identity security. From the explosion of machine identities—now outnumbering human identities 80 to 1—to the convergence of IGA, PAM, and AI-driven automation, Deepak shares insights from his decades-long career at the forefront of identity innovation.

Listeners will learn:
• Why legacy identity governance models are breaking under cloud scale
• How AI agents are reshaping entitlement management and threat detection
• What organizations must do to secure non-human identities and interlinked dependencies
• Why time-to-value and outcome-driven metrics are essential for modern IGA success

Whether you're a CISO, identity architect, or security strategist, this episode delivers actionable guidance for navigating the evolving identity security landscape.
Ahmad Shadid is the Founder and CEO of O.XYZ, an ecosystem with a mission to build the world's first sovereign super intelligence. As Ahmad put it, "AI must be a tool for the people, not a weapon for profit." O.XYZ is a complex ecosystem, starting with its core, O.Super Intelligence, which will help guide decisions, solve complex problems, and interact with people in the ecosystem; a toolbox of AI-powered products that help you solve various problems using artificial intelligence; and O.RESEARCH, O.INFRA, O.CHARITY, O.CAPITAL, and O.CHAIN as parts of the ecosystem.

Previously, Ahmad was CEO of IO.net, leading the company to a $4.5 billion valuation in under a year. His leadership propelled IO.net to secure $2 million in a seed round at a $10 million fully diluted valuation in June 2023, followed by a groundbreaking $40 million Series A round at a $1 billion FDV in March 2024. This rapid growth culminated in the successful launch of the $IO coin on Binance at a remarkable $4.5 billion FDV in June 2024.

Ahmad is the visionary behind DeAIO, an Autonomous AI Organization: the next step in the evolution of DAOs, aiming to revolutionize AI governance and development. Demonstrating his commitment to innovation, he has personally invested $130M into the development of DeAIO.
O.XYZ builds on Ahmad's legacy, aiming to redefine AI and showcase how decentralized technology can drive common progress and serve people.

In this conversation, we discuss:
- Creating the First AI CEO
- O.CAPTAIN
- The future of AI & Crypto
- Flipping the Narrative: AI that Helps, Not Replaces
- Your EGO vs AI
- Why security ops and code review will become increasingly important
- The feeling of working for AI and not a person
- Building a company fully managed by AI
- Living in Doha, Qatar
- An AI CEO that adapts workloads based on your energy and well-being

O.XYZ
Website: www.o.xyz
X: @o_fndn
Telegram: t.me/oxyz_community

Ahmad Shadid
X: @shadid_io
LinkedIn: Ahmad Shadid

---------------------------------------------------------------------------------
This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders who demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers.

PrimeXBT is running an exclusive promotion for listeners of the podcast: after making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50. This promotion is available for a month after activation. Click the link below:
PrimeXBT x CRYPTONEWS50
Cyber threats are not static—and HITRUST knows assurance can't be either. That's why HITRUST's Michael Moore is leading efforts to ensure the HITRUST framework evolves in step with the threat environment, business needs, and the technologies teams are using to respond.

In this episode, Moore outlines how the HITRUST Cyber Threat Adaptive (CTA) program transforms traditional assessment models into something far more dynamic. Instead of relying on outdated frameworks or conducting audits that only capture a point-in-time view, HITRUST is using real-time threat intelligence, breach data, and frameworks like MITRE ATT&CK and MITRE ATLAS to continuously evaluate and update its assessment requirements.

The E1 and I1 assessments—designed for organizations at different points in their security maturity—serve as flexible baselines that shift with current risk. Moore explains that by leveraging CTA, HITRUST can add or update controls in response to rising attack patterns, such as the resurgence of phishing or the emergence of AI-driven exploits. These updates are informed by a broad ecosystem of signals, including insurance claims data and AI-parsed breach reports, offering both frequency and impact context.

One of the key advantages Moore highlights is the ability for security teams to benefit from these updates without having to conduct their own exhaustive analysis. As Moore puts it, “You get it by proxy of using our frameworks.” In addition to streamlining how teams manage and demonstrate compliance, the evolving assessments also support conversations with business leaders and boards—giving them visibility into how well the organization is prepared for the threats that matter most right now.

HITRUST is also planning to bring more of this intelligence into its assessment platform and reports, including showing how individual assessments align with the top threats at the time of certification.
This not only strengthens third-party assurance but also enables more confident internal decision-making—whether that's about improving phishing defenses or updating incident response playbooks. From AI-enabled moderation of threats to proactive regulatory mapping, HITRUST is building the connective tissue between risk intelligence and real-world action.

Note: This story contains promotional content. Learn more.

Guest: Michael Moore, Senior Manager, Digital Innovation at HITRUST | On LinkedIn: https://www.linkedin.com/in/mhmoore04/

Hosts:
Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | https://www.seanmartin.com/
Marco Ciappelli, Co-Founder at ITSPmagazine and Host of Redefining Society Podcast & Audio Signals Podcast | https://www.marcociappelli.com/
______________________
Keywords: sean martin, marco ciappelli, michael moore, hitrust, cybersecurity, threat intelligence, risk management, compliance, assurance, ai security, brand story, brand marketing, marketing podcast, brand story podcast
______________________
Resources
Visit the HITRUST Website to learn more: https://itspm.ag/itsphitweb
Learn more and catch more stories from HITRUST on ITSPmagazine: https://www.itspmagazine.com/directory/hitrust
Learn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programs
Newsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/
Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-up
Are you interested in telling your story? https://www.itspmagazine.com/telling-your-story
In this thought-provoking episode of Project Synapse, host Jim and his friends Marcel Gagne and John Pinard delve into the complexities of artificial intelligence, especially in the context of cybersecurity. The discussion kicks off by revisiting a blog post by Sam Altman about reaching a 'Gentle Singularity' in AI development, where the progress towards artificial superintelligence seems inevitable. They explore the idea of AI surpassing human intelligence and the implications of machines learning to write their own code. Throughout their engaging conversation, they emphasize the need to integrate security into AI systems from the start, rather than as an afterthought, citing recent vulnerabilities like EchoLeak, the zero-click flaw in Microsoft Copilot. Detouring into stories from the past and pondering philosophical questions, they wrap up by urging a balanced approach where speed and thoughtful planning coexist, and by calling for human welfare to be prioritized in technological advancements. This episode serves as a captivating blend of storytelling, technical insights, and ethical debates.

00:00 Introduction to Project Synapse
00:38 AI Vulnerabilities and Cybersecurity Concerns
02:22 The Gentle Singularity and AI Evolution
04:54 Human and AI Intelligence: A Comparison
07:05 AI Hallucinations and Emotional Intelligence
12:10 The Future of AI and Its Limitations
27:53 Security Flaws in AI Systems
30:20 The Need for Robust AI Security
32:22 The Ubiquity of AI in Modern Society
32:49 Understanding Neural Networks and Model Security
34:11 Challenges in AI Security and Human Behavior
36:45 The Evolution of Steganography and Prompt Injection
39:28 AI in Automation and Manufacturing
40:49 Crime as a Business and Security Implications
42:49 Balancing Speed and Security in AI Development
53:08 Corporate Responsibility and Ethical Considerations
57:31 The Future of AI and Human Values
This special remastered episode of the Lawyerist Podcast features Stephanie's conversation with Geoff Woods, author of The AI-Driven Leader. We're re-releasing it due to positive feedback on the depth of this discussion, ensuring you'll gain new insights and "aha!" moments with every listen. In this episode, we explore AI's transformative power, viewing it not as a threat, but as a liberator that enhances our work. We dive into the five core human skills to emphasize in an AI-driven world: strategic thinking, problem-solving, communication, collaboration, and creation. We demonstrate how to leverage AI strategically, from evaluating business plans to acting as a growth-minded board member, and you'll hear how we're integrating AI into our own leadership meetings. Geoff shares real-world examples of using AI as a "thought partner" to stress-test major strategic decisions, even creating an "AI board of advisors." He also provides practical applications for lawyers, such as using AI to review NDAs, stress-test legal arguments, and role-play closing arguments with AI as your jury. To guide your own AI journey, Geoff outlines his "CRIT" framework (Context, Role, Interview, Task) for effective prompting and highlights the importance of understanding AI model settings for data privacy and confidentiality.

Listen to our other episodes on the AI revolution:
#555: How to Use AI and Universal Design to Empower Diverse Thinkers with Susan Tanner: Apple Podcasts | Spotify | Lawyerist
#553: AI Tools and Processes Every Lawyer Should Use with Catherine Sanders Reach: Apple Podcasts | Spotify | Lawyerist
#550: Beyond Content: How AI is Changing Law Firm Marketing, with Gyi Tsakalaki and Conrad Saam: Apple Podcasts | Spotify | Lawyerist

Have thoughts about today's episode? Join the conversation on LinkedIn, Facebook, Instagram, and X! If today's podcast resonates with you and you haven't read The Small Firm Roadmap Revisited yet, get the first chapter right now for free!
Looking for help beyond the book? See if our coaching community is right for you. Access more resources from Lawyerist at lawyerist.com.

The AI-Driven Leader Chapters/Timestamps:
0:00 - Episode Introduction and Why This Remastered Version is Special
1:22 - AI as the Next Big Shift for Lawyers
6:28 - Geoff Woods: Redefining Leadership in the AI Era
9:11 - The Five Core Human Skills Enhanced by AI
10:36 - Strategic AI: Beyond Basic Tasks
14:24 - AI as Your Strategic Thought Partner
19:47 - Navigating AI: Threat vs. Opportunity for Lawyers
20:56 - Practical AI Applications: NDA Review and Valuation
28:51 - Building Your AI Habit: The "CRIT" Framework
32:19 - AI Security and Data Privacy for Legal Professionals
34:40 - The Risk of Inaction and Building the Future Firm
Windows Hello's Facial Authentication Update
Microsoft updated Windows Hello to require both infrared and color cameras for facial authentication, addressing a spoofing vulnerability. This enhances security but disables functionality in low-light settings, potentially inconveniencing users and pushing some toward alternatives like Linux for more flexible authentication.

EchoLeak and AI Security
'EchoLeak' is a zero-click vulnerability in Microsoft 365 Copilot, discovered by Aim Labs, that allows data exfiltration via malicious emails exploiting an "LLM scope violation." It reveals the risks of AI systems that combine external inputs with internal data, emphasizing the need for robust guardrails.

Denmark's Shift to LibreOffice and Linux
Denmark is adopting LibreOffice and Linux to boost digital sovereignty, reduce reliance on foreign tech like Microsoft, and mitigate geopolitical and cost-related risks. This follows a 72% rise in Microsoft software costs over five years.

Chinese AI Firms Bypassing U.S. Chip Controls
Chinese AI companies are evading U.S. chip export restrictions by processing data in third countries like Malaysia, using tactics such as physically transporting data and setting up shell entities to access high-end chips and return trained AI models.

Mattel and OpenAI Partnership
Mattel's collaboration with OpenAI to create AI-enhanced toys introduces engaging, safe experiences for kids but raises privacy and security concerns, highlighting the need for zero-trust models in handling children's data.

Apple's Passkey Import/Export Feature
Apple's new FIDO-based passkey import/export feature allows secure credential transfers across platforms, enhancing both security and convenience. It uses biometric or PIN authentication, replacing less secure methods and improving interoperability.

Airlines Selling Passenger Data to DHS
The Airlines Reporting Corporation, owned by U.S. airlines, sold domestic flight data, including names and itineraries, to DHS's Customs and Border Protection, with a contract clause hiding the source. This raises privacy concerns about government tracking without transparency.

WhatsApp's New Ad Policy
WhatsApp's introduction of ads in its "Updates" section deviates from its original "no ads" philosophy. While the ads are limited and chat encryption is preserved, this shift alters the ad-free experience that attracted its two billion users.

https://rprescottstearns.blogspot.com/2025/06/broken-windows-it-privacy-and-security.html
In this episode, Dr. Dave Chatterjee is joined by Burnie Legette, Director of IoT and AI at Intel Corporation and a former professional football player. Their conversation explores the evolving landscape of AI deployment within the public sector, with a particular focus on the security challenges and governance strategies required to harness AI responsibly. Drawing on his cross-sectoral experience, Burnie offers insights into the cultural, technical, and ethical nuances of AI adoption. Dr. Chatterjee brings in his empirically grounded Commitment-Preparedness-Discipline (CPD) cybersecurity governance framework to emphasize the importance of planning, transparency, and stakeholder engagement.

To access and download the entire podcast summary with discussion highlights: https://www.dchatte.com/episode-87-ai-security-in-the-public-sector-balancing-innovation-and-risk/

Connect with Host Dr. Dave Chatterjee and Subscribe to the Podcast
Please subscribe to the podcast so you don't miss any new episodes! And please leave the show a rating if you like what you hear. New episodes are released every two weeks. Connect with Dr. Chatterjee on these platforms:
LinkedIn: https://www.linkedin.com/in/dchatte/
Website: https://dchatte.com/
Cybersecurity Readiness Book: https://www.amazon.com/Cybersecurity-Readiness-Holistic-High-Performance-Approach/dp/1071837338
https://us.sagepub.com/en-us/nam/cybersecurity-readiness/book275712

Latest Publications & Press Releases:
“Meet Dr. Dave Chatterjee, the mind behind the Commitment-Preparedness-Discipline method for cybersecurity,” Chicago Tribune, February 24, 2025.
“Dr. Dave Chatterjee On A Proactive Behavioral Approach To Cyber Readiness,” Forbes, February 21, 2025.
“Ignorance is not bliss: A human-centered whole-of-enterprise approach to cybersecurity preparedness”
“Dr. Dave Chatterjee Hosts Global Podcast Series on Cyber Readiness,” Yahoo! Finance, Dec 16, 2024.
“Dr. Dave Chatterjee Hosts Global Podcast Series on Cyber Readiness,” Marketers Media, Dec 12, 2024.
Guest: Daniel Fabian, Principal Digital Arsonist, Google

Topic: Your RSA talk highlights lessons learned from two years of AI red teaming at Google.
- Could you share one or two of the most surprising or counterintuitive findings you encountered during this process?
- What are some of the key differences or unique challenges you've observed when testing AI-powered applications compared to traditional software systems?
- Can you provide an example of a specific TTP that has proven effective against AI systems and discuss the implications for security teams looking to detect it?
- What practical advice would you give to organizations that are starting to incorporate AI red teaming into their security development lifecycle? What are some initial steps or resources you would recommend they explore to deepen their understanding of this evolving field?

Resources:
Video (LinkedIn, YouTube)
Google's AI Red Team: the ethical hackers making AI safer
EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes?
EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw
EP198 GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons
Lessons from AI Red Teaming – And How to Apply Them Proactively [RSA 2025]
On the latest episode of After Earnings, we spoke with Okta CEO Todd McKinnon about how his company aims to become the one-stop shop for digital ID across all businesses and applications. Highlights include:
• How Okta is working toward a password-free future.
• Why, when it comes to security, Okta sees itself as the superior choice versus Microsoft every time.
• How Okta is responding to the emerging security threats posed by AI and quantum computing.

00:00 START
01:30 Okta vs. Microsoft on security
04:10 Selling to both developers and IT teams
08:34 The future of identity verification
12:55 Security challenges posed by quantum computing
17:40 Developing standards for AI security
19:31 Okta's strong earnings and the market's response

$OKTA
After Earnings is brought to you by Stakeholder Labs and Morning Brew. For more, go to https://www.afterearnings.com

Follow Us
X: https://twitter.com/AfterEarnings
TikTok: https://www.tiktok.com/@AfterEarnings
Instagram: https://www.instagram.com/afterearnings_/

Reach Out
Email: afterearnings@morningbrew.com

Learn more about your ad choices. Visit megaphone.fm/adchoices
Security is increasingly viewed as a strategic business advantage rather than just a necessary cost center. The dialogue explores how companies are leveraging their security posture to gain competitive advantages in sales cycles and build customer trust.

• Taylor's journey from aspiring physical therapist to cybersecurity expert through a chance college course
• The importance of diverse experience across different security domains for career longevity
• How healthcare organizations have become prime targets due to valuable data and outdated security
• The emerging AI arms race creating unprecedented security challenges and opportunities
• Voice cloning technology enabling sophisticated social engineering attacks, including an almost-successful $20 million fraud
• Emerging trends in security validation, with tools pulling data directly from security systems
• The shift from viewing security as a cost center to leveraging it as a sales advantage
• Why enterprises are driving security standards more effectively than regulators

Eden Data provides outsourced security, compliance, and privacy services for technology companies at all stages, from pre-revenue startups to publicly traded enterprises, helping them build robust security programs aligned with regulatory frameworks and customer expectations.

Digital Disruption with Geoff Nielson: Discover how technology is reshaping our lives and livelihoods. Listen on: Apple Podcasts | Spotify

Support the show

Follow the Podcast on Social Media!
Tesla Referral Code: https://ts.la/joseph675128
YouTube: https://www.youtube.com/@securityunfilteredpodcast
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
In this episode of Cybersecurity Today, host Jim Love discusses critical AI-related security issues, such as the EchoLeak vulnerability in Microsoft's AI, the universal integration risks of the MCP protocol, and Meta's privacy violations in Europe. The episode also explores the dangers of internet-exposed cameras as discovered by BitSight, highlighting the urgent need for enhanced AI security and the legal repercussions for companies like Meta.

00:00 Introduction to AI Security Issues
00:24 Echo Leak: The Zero-Click AI Vulnerability
03:17 MCP Protocol: Universal Interface, Universal Vulnerabilities
07:01 Meta's Privacy Scandal: Local Host Tracking
10:11 The Peep Show: Internet-Connected Cameras Exposed
12:08 Conclusion and Call to Action
Podcast: KBKAST (LS 31 · TOP 5%)
Episode: Episode 314 Deep Dive: Imran Husain | Cybersecurity Threats in the Manufacturing World
Pub date: 2025-06-11

In this episode, we sit down with Imran Husain, Chief Information Security Officer at MillerKnoll, as he discusses the evolving landscape of cybersecurity threats in the manufacturing sector. Imran explores the challenges that arise as manufacturing increasingly integrates with online technologies and IoT, highlighting the unique vulnerabilities posed by legacy systems and operational technology (OT). He shares insights on high-profile incidents like the Norsk Hydro ransomware attack, emphasizing the importance of cyber resilience, data backup, and incident recovery. Imran also offers a candid look at why critical tasks like backing up data are often neglected, the complexities of securing aging infrastructure, and the need for creative solutions such as network segmentation and IT/OT convergence.

A dedicated and trusted senior cybersecurity professional, Imran Husain has over 22 years of Fortune 1000 experience covering a broad array of domains, including risk management, cloud security, SecDevOps, AI security, and OT cyber practices. A critical, action-oriented leader, Imran brings strategic and technical expertise with a proven ability to build cyber programs that are proactive in threat detection, identifying and engaging in areas critical to the business while upholding its security posture. He specializes in manufacturing and supply chain distribution, focusing on how best to use security controls and processes to maximize coverage and reduce risk in a complex, multi-faceted environment.
A skilled communicator and change agent with a bias to action who cultivates an environment of learning and creative thinking, Imran champions open communication and collaboration to empower and inspire teams to exceed in their respective cyber commitments. He is currently the Global Chief Information Security Officer (CISO) at MillerKnoll, a publicly traded American company that produces office furniture, equipment, and home furnishings.

The podcast and artwork embedded on this page are from KBI.Media, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.
The Spring 2025 issue of AI Cyber Magazine details some of 2024's major AI security vulnerabilities and sheds light on the funding landscape. Confidence Staveley, Africa's most celebrated female cybersecurity leader, is the founder of the CyberSafe Foundation, a non-governmental organization on a mission to facilitate pockets of change that ensure a safer internet for everyone with digital access in Africa. In this episode, Confidence joins host Amanda Glassner to discuss. To learn more about Confidence, visit her website at https://confidencestaveley.com, and for more on the CyberSafe Foundation, visit https://cybersafefoundation.org.
AI agents hold the promise of continuously testing, scanning, and fixing code for security vulnerabilities, but we're still progressing toward that vision. Startups like Aptori are helping bridge the gap by building AI-powered security engineers for enterprises. Aptori maps an organization's codebase, APIs, and cloud infrastructure in real time to understand data flows and authorization logic, allowing it to detect and eventually remediate security issues. At Google Cloud Next, Aptori CEO Sumeet Singh discussed how earlier tools merely alerted developers to issues—often overwhelming them—but newer models like Gemini 2.5 Flash and Claude Sonnet 4 are improving automated code fixes, making them more practical. Singh and co-founder Travis Newhouse previously built AppFormix, which automated OpenStack cloud operations before being acquired by Juniper Networks. Their experiences with slow release cycles due to security bottlenecks inspired Aptori's focus. While the goal is autonomous agents, Singh emphasizes the need for transparency and deterministic elements in AI tools to ensure trust and reliability in enterprise security workflows.

Learn more from The New Stack about the latest insights in AI application security:
AI Is Changing Cybersecurity Fast and Most Analysts Aren't Ready
AI Security Agents Combat AI-Generated Code Risks
Developers Are Embracing AI To Streamline Threat Detection and Stay Ahead

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
A recent report from SailPoint reveals a significant contradiction in the IT sector: while 96% of IT professionals view artificial intelligence agents as a security risk, an overwhelming 98% still plan to expand their use within organizations over the next year. The study highlights that although 84% of respondents currently utilize AI agents, only 44% have established governance policies for their behavior. This lack of oversight is concerning, especially as 80% of respondents reported that these agents have acted in unexpected and potentially harmful ways. The need for stringent governance and security protocols for AI agents is becoming increasingly urgent.

In the realm of cloud computing, dissatisfaction is on the rise among organizations, with Gartner estimating that up to 25% may face significant disappointment due to unexpected costs and management complexities. Many organizations lack coherent cloud strategies, leading to issues like vendor lock-in. A notable example is 37signals, which faced a $3.2 million bill for cloud services, prompting a migration back to on-premises infrastructure. As organizations adopt multi-cloud strategies, Gartner warns that more than half may not achieve their expected outcomes, further complicating the landscape.

The podcast also discusses a new Texas law requiring Apple and Google to verify the ages of users accessing their app stores, a move that shifts the liability of age enforcement onto these tech giants. This trend reflects a broader governmental push to redefine digital intermediaries as compliance gatekeepers, which could lead to increased regulatory burdens for tech companies.
As data sovereignty becomes a priority, organizations are urged to adapt their strategies to align with new privacy and age verification mandates.

Lastly, the episode touches on intriguing revelations, such as the CIA's covert use of a Star Wars fan site for secure communications and the persistence of outdated operating systems like Windows XP in various sectors. These stories underscore the complexities of digital infrastructure and the importance of understanding data privacy implications. As reliance on voice-activated technologies grows, the need for IT providers to educate clients about data retention and privacy policies becomes critical, especially in a landscape where everyday devices can act as silent data hoarders.

Four things to know today
00:00 IT Leaders Expand AI Agent Use Despite Governance Gaps and Cloud Disillusionment
06:08 Dell Surges on AI Server Demand While HP Struggles With Tariffs and Consumer Weakness
09:17 Texas Law Forces Apple and Google to Enforce Age Verification, Marking Shift in Platform Liability
10:50 CIA Spy Site, Smart Speaker Surveillance, and Legacy Software Reveal Overlooked Digital Threat Surfaces

Supported by: https://afi.ai/office-365-backup/
All our Sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech

Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
Is Artificial Intelligence the ultimate security dragon we need to slay, or a powerful ally we must train? Recorded LIVE at BSidesSF, this special episode dives headfirst into the most pressing debates around AI security.

Join host Ashish Rajan as he navigates the complex landscape of AI threats and opportunities with two leading experts:
Jackie Bow (Anthropic): Championing the "How to Train Your Dragon" approach, Jackie reveals how we can leverage AI, and even its 'hallucinations,' for advanced threat detection, response, and creative security solutions.
Kane Narraway (Canva): Taking the "Knight/Wizard" stance, Kane illuminates the critical challenges in securing AI systems, understanding the new layers of risk, and the complexities of AI threat modeling.
As Artificial Intelligence reshapes our world, understanding the new threat landscape and how to secure AI-driven systems is more crucial than ever. We spoke to Ankur Shah, Co-Founder and CEO of Straiker, about navigating this rapidly evolving frontier.

In this episode, we unpack the complexities of securing AI, from the fundamental shifts in application architecture to the emerging attack vectors. Discover why Ankur believes "you can only secure AI with AI" and how organizations can prepare for a future where "your imagination is the new limit," but so too are the potential vulnerabilities.

Guest Socials - Ankur's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter
- Cloud Security BootCamp

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity Podcast

Questions asked:
(00:00) Introduction
(00:30) Meet Ankur Shah (CEO, Straiker)
(01:54) Current AI Deployments in Organizations (Copilots & Agents)
(04:48) AI vs. Traditional Security: Why Old Methods Fail for AI Apps
(07:07) AI Application Types: Native, Immigrant & Explorer Explained
(10:49) AI's Impact on the Evolving Cyber Threat Landscape
(17:34) Ankur Shah on Core AI Security Principles (Visibility, Governance, Guardrails)
(22:26) The AI Security Vendor Landscape (Acquisitions & Startups)
(24:20) Current AI Security Practices in Organizations: What's Working?
(25:42) AI Security & Hyperscalers (AWS, Azure, Google Cloud): Pros & Cons
(26:56) What is AI Inference? Explained for Cybersecurity Pros
(33:51) Overlooked AI Attack Surfaces: Hidden Risks in AI Security
(35:12) How to Uplift Your Security Program for AI
(37:47) Rapid Fire: Fun Questions with Ankur Shah

Thank you to this episode's sponsor - Straiker.ai
While the global health community rends its garments and gnashes its teeth in Switzerland at the 78th World Health Assembly, Dr. Mike Reid, Associate Director of the Center for Global Health Diplomacy at UCSF, joins Ben in an entertaining and wide-ranging exploration of a positive, forward-looking agenda for global health. Topics include global health security, One Health, mis- and disinformation in the doctor-patient relationship, health technology, and specific future uses and pitfalls of AI to improve access to healthcare in developing countries. Mike offers a promise of a future episode on channelling philanthropic dollars into sovereign wealth funds for global health investments. And finally they reflect on their upbringing in the UK with its “free at the point of delivery” National Health Service, and argue over which of the modern Cambridge University Colleges they went to most resembles a multi-storey car park.

00:00 Introduction and Overview
00:09 World Health Assembly Insights
01:18 Guest Introduction: Dr. Mike Reid
03:40 Mike Reid's Background and Career
05:58 Global Health Security and Solidarity
11:28 The One Health Agenda
14:12 Artificial Intelligence in Global Health
37:26 Navigating Healthcare Systems
43:48 Closing Remarks and Future Topics

Mike's Substack: https://reimaginingglobalhealth.substack.com/
Guest: Christine Sizemore, Cloud Security Architect, Google Cloud

Topics:
Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain?
I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain?
We like to say that history might not repeat itself but it does rhyme – what are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains?
We've talked a lot about technology and process – what are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development?
We are all hearing about agentic security – so can we just ask the AI to secure itself?
Top 3 things to do to secure the AI software supply chain for a typical org?

Resources:
Video
“Securing AI Supply Chain: Like Software, Only Not” blog (and paper)
“Securing the AI software supply chain” webcast
EP210 Cloud Security Surprises: Real Stories, Real Lessons, Real "Oh No!" Moments
Protect AI issue database
“Staying on top of AI Developments”
“Office of the CISO 2024 Year in Review: AI Trust and Security”
“Your Roadmap to Secure AI: A Recap” (2024)
"RSA 2025: AI's Promise vs. Security's Past — A Reality Check" (references our "data as code" presentation)
Guest: Diana Kelley, CSO at Protect AI

Topics:
Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right?
What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it?
How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?
Top differences between LLM/chatbot AI security vs. AI agent security?

Resources:
“Airline held liable for its chatbot giving passenger bad advice - what this means for travellers”
“ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem' Forever”
Secure by Design for AI by Protect AI
“Securing AI Supply Chain: Like Software, Only Not”
OWASP Top 10 for Large Language Model Applications
OWASP Top 10 for AI Agents (draft)
MITRE ATLAS
“Demystifying AI Security: New Paper on Real-World SAIF Applications” (and paper)
LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes
➡ Get full visibility, risk insights, red teaming, and governance for your AI models, AI agents, RAGs, and more—so you can securely deploy AI-powered applications with ul.live/mend

In this episode, I speak with Bar-El Tayouri, Head of AI Security at Mend.io, about the rapidly evolving landscape of application and AI security—especially as multi-agent systems and fuzzy interfaces redefine the attack surface.

We talk about:

• Modern AppSec Meets AI Agents
How traditional AppSec falls short when it comes to AI-era components like agents, MCP servers, system prompts, and model artifacts—and why security now depends on mapping, monitoring, and understanding this entire stack.

• Threat Discovery, Simulation, and Mitigation
How Mend’s AI security suite identifies unknown AI usage across an org, simulates dynamic attacks (like prompt injection via PDFs), and provides developers with precise, in-code guidance to reduce risk without slowing innovation.

• Why We’re Rethinking Identity, Risk, and Governance
Why securing AI systems isn’t just about new threats—it’s about re-implementing old lessons: identity access, separation of duties, and system modeling. And why every CISO needs to integrate security into the dev workflow instead of relying on blunt-force blocking.
Subscribe to the newsletter at: https://danielmiessler.com/subscribe
Join the UL community at: https://danielmiessler.com/upgrade
Follow on X: https://x.com/danielmiessler
Follow on LinkedIn: https://www.linkedin.com/in/danielmiessler

Chapters:
00:00 - From Game Hacking to AI Security: Bar-El’s Tech Journey
03:51 - Why Application Security Is Still the Most Exciting Challenge
04:39 - The Real AppSec Bottleneck: Prioritization, Not Detection
06:25 - Explosive Growth of AI Components Inside Applications
12:48 - Why MCP Servers Are a Massive Blind Spot in AI Security
15:02 - Guardrails Aren’t Keeping Up With Agent Power
16:15 - Why AI Security Is Maturing Faster Than Previous Tech Waves
20:59 - Traditional AppSec Tools Can’t Handle AI Risk Detection
26:01 - How Mend Maps, Discovers, and Simulates AI Threats
34:02 - What Ideal Customers Ask For When Securing AI
38:01 - Beyond Guardrails: Mend’s Guide Rails for In-Code Mitigation
41:49 - Multi-Agent Systems Are the Next Security Nightmare
45:47 - Final Advice for CISOs: Enable, Don’t Disable Developers

Become a Member: https://danielmiessler.com/upgrade
See omnystudio.com/listener for privacy information.
Helen Oakley, Senior Director of Product Security at SAP, and Dmitry Raidman, Co-founder and CTO of Cybeats, joined us live at the RSAC Conference to bring clarity to one of the most urgent topics in cybersecurity: transparency in the software and AI supply chain. Their message is direct—organizations not only need to understand what's in their software, they need to understand the origin, integrity, and impact of those components, especially as artificial intelligence becomes more deeply integrated into business operations.

SBOMs Are Not Optional Anymore
Software Bills of Materials (SBOMs) have long been a recommended best practice, but they're now reaching a point of necessity. As Dmitry noted, organizations are increasingly requiring SBOMs before making purchase decisions—“If you're not going to give me an SBOM, I'm not going to buy your product.” With regulatory pressure mounting through frameworks like the EU Cyber Resilience Act (CRA), the demand for transparency is being driven not just by compliance, but by real operational value. Companies adopting SBOMs are seeing tangible returns—saving hundreds of hours on risk analysis and response, while also improving internal visibility.

Bringing AI into the SBOM Fold
But what happens when the software includes AI models, data pipelines, and autonomous agents? Helen and Dmitry are leading a community-driven initiative to create AI-specific SBOMs—referred to as AI SBOMs or AISBOMs—to capture critical metadata beyond just the code. This includes model architectures, training data, energy consumption, and more. These elements are vital for risk management, especially when organizations may be unknowingly deploying models with embedded vulnerabilities or opaque dependencies.

A Tool for the Community, Built by the Community
In an important milestone for the industry, Helen and Dmitry also introduced the first open source tool capable of generating CycloneDX-formatted AISBOMs for models hosted on Hugging Face.
This practical step bridges the gap between standards and implementation—helping organizations move from theoretical compliance to actionable insight. The community's response has been overwhelmingly positive, signaling a clear demand for tools that turn complexity into clarity.

Why Security Leaders Should Pay Attention
The real value of an SBOM—whether for software or AI—is not just external compliance. It's about knowing what you have, recognizing your crown jewels, and understanding where your risks lie. As AI compounds existing vulnerabilities and introduces new ones, starting with transparency is no longer a suggestion—it's a strategic necessity.

Want to see how this all fits together? Hear it directly from Helen and Dmitry in this episode.
___________
Guests:
Helen Oakley, Senior Director of Product Security at SAP | https://www.linkedin.com/in/helen-oakley/
Dmitry Raidman, Co-founder and CTO of Cybeats | https://www.linkedin.com/in/draidman/

Hosts:
Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com
___________
Episode Sponsors
ThreatLocker: https://itspm.ag/threatlocker-r974
Akamai: https://itspm.ag/akamailbwc
BlackCloak: https://itspm.ag/itspbcweb
SandboxAQ: https://itspm.ag/sandboxaq-j2en
Archer: https://itspm.ag/rsaarchweb
Dropzone AI: https://itspm.ag/dropzoneai-641
ISACA: https://itspm.ag/isaca-96808
ObjectFirst: https://itspm.ag/object-first-2gjl
Edera: https://itspm.ag/edera-434868
___________
Resources
LinkedIn Post with Links: https://www.linkedin.com/posts/helen-oakley_ai-sbom-aisbom-activity-7323123172852015106-TJea
Learn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsa-conference-usa-2025-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverage
______________________
KEYWORDS
helen oakley, dmitry raidman, sean martin, rsac 2025, sbom, aisbom, ai security, software supply chain, transparency, open source, event coverage, on location, conference
______________________
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to tell your Brand Story Briefing as part of our event coverage? Learn More
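To make the AISBOM idea concrete, here is a minimal sketch of what a CycloneDX-style AI SBOM document might look like, built in plain Python. The field layout follows the CycloneDX JSON format (which added a machine-learning-model component type in spec version 1.5); the model name, supplier, and training-data values are hypothetical placeholders, and this is not the tool discussed in the episode.

```python
import json

def make_aisbom(model_name: str, supplier: str, training_data: str) -> dict:
    """Build a minimal CycloneDX-style BOM describing one ML model.

    The structure mirrors the CycloneDX 1.5 JSON layout; model-specific
    metadata (training data, architecture, energy use, ...) is carried
    in the component's free-form "properties" list.
    """
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                # CycloneDX 1.5 defines this component type for AI models
                "type": "machine-learning-model",
                "name": model_name,
                "supplier": {"name": supplier},
                "properties": [
                    {"name": "training-data", "value": training_data},
                ],
            }
        ],
    }

# Hypothetical example model, for illustration only
bom = make_aisbom("example-sentiment-model", "Example Lab", "public reviews corpus")
print(json.dumps(bom, indent=2))
```

The point is not the specific fields but the shift in what an SBOM carries: alongside code dependencies, an AISBOM records where a model came from and what it was trained on, which is exactly the metadata a risk review needs.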
AI governance: the next frontier for AI security. But what framework should you use? ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems. But how do you get certified? What does the process look like? Martin Tschammer, Head of Security at Synthesia, joins Business Security Weekly to share his ISO 42001 certification journey. From corporate culture to the witness audit, Martin walks us through the certification process and the benefits his team has gained from it. If you're considering ISO 42001 certification, this interview is a must-see.

In the leadership and communications section: Are 2 CEOs Better Than 1? Here Are The Benefits and Drawbacks You Must Consider; CISOs rethink hiring to emphasize skills over degrees and experience; Why Clear Executive Communication Is a Silent Driver of Organizational Success; and more!

Visit https://www.securityweekly.com/bsw for all the latest episodes!
Show Notes: https://securityweekly.com/bsw-392