Podcasts about AI security

  • 222 PODCASTS
  • 353 EPISODES
  • 38m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • LATEST: Jul 29, 2025

POPULARITY (trend chart, 2017-2024)


Best podcasts about AI security

Latest podcast episodes about AI security

PodRocket - A web development podcast from LogRocket
Building Jarvis: MCP and the future of AI with Kent C. Dodds

Jul 29, 2025 · 37:15


Kent C. Dodds is back with bold ideas and a game-changing vision for the future of AI and web development. In this episode, we dive into the Model Context Protocol (MCP), the power behind Epic AI Pro, and how developers can start building Jarvis-like assistants today. From replacing websites with MCP servers to reimagining voice interfaces and AI security, Kent lays out the roadmap for what's next, and why it matters right now. Don't miss this fast-paced conversation about the tools and tech reshaping everything. Links Website: https://kentcdodds.com X: https://x.com/kentcdodds Github: https://github.com/kentcdodds YouTube: https://www.youtube.com/c/kentcdodds-vids Twitch: https://www.twitch.tv/kentcdodds LinkedIn: https://www.linkedin.com/in/kentcdodds Resources Please make Jarvis (so I don't have to): https://www.epicai.pro/please-make-jarvis AI Engineering Posts by Kent C. Dodds: https://www.epicai.pro/posts We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com (mailto:emily.kochanek@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Kent C. Dodds.

ITSPmagazine | Technology. Cybersecurity. Society
Bots, APIs, and Runtime Risk: What Exposures Are Driving AI Security Innovation in 2025 | An Akamai Pre-Event Coverage of Black Hat USA 2025 Las Vegas | Brand Story with Rupesh Chokshi

Jul 25, 2025 · 21:47


Ahead of Black Hat USA 2025, Sean Martin and Marco Ciappelli sit down once again with Rupesh Chokshi, Senior Vice President and General Manager of the Application Security Group at Akamai, for a forward-looking conversation on the state of AI security. From new threat trends to enterprise missteps, Rupesh lays out three focal points for this year's security conversation: protecting generative AI at runtime, addressing the surge in AI scraper bots, and defending the APIs that serve as the foundation for AI systems.

Rupesh shares that Akamai is now detecting over 150 billion AI scraping attempts—a staggering signal of the scale and sophistication of machine-to-machine activity. These scraper bots are not only siphoning off data but also undermining digital business models by bypassing monetization channels, especially in publishing, media, and content-driven sectors.

While AI introduces productivity gains and operational efficiency, it also introduces new and uncharted risks. Agentic AI, where autonomous systems operate on behalf of users or other systems, is pushing cybersecurity teams to rethink their strategies. Traditional firewalls aren't enough—because these threats don't behave like yesterday's attacks. Prompt injection, toxic output, and AI-generated hallucinations are some of the issues now surfacing in enterprise environments, with over 70% of organizations already experiencing AI-related incidents.

This brings the focus to the runtime. Akamai's newly launched Firewall for AI is purpose-built to detect and mitigate risks in generative AI and LLM applications—without disrupting performance. Designed to flag issues like toxic output, remote code execution, or compliance violations, it operates with real-time visibility across inputs and outputs. It's not just about defense—it's about building trust as AI moves deeper into decision-making and workflow automation.

CISOs, says Rupesh, need to shift from high-level discussions to deep, tactical understanding of where and how their organizations are deploying AI. This means not only securing AI but also working hand-in-hand with the business to establish governance, drive discovery, and embed security into the fabric of innovation.

Learn more about Akamai: https://itspm.ag/akamailbwc

Note: This story contains promotional content. Learn more.

Guests: Rupesh Chokshi, SVP & General Manager, Application Security, Akamai | https://www.linkedin.com/in/rupeshchokshi/

Hosts:
Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com

______________________

Resources
Learn more and catch more stories from Akamai: https://www.itspmagazine.com/directory/akamai
Learn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programs
Newsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/
Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-up
Are you interested in telling your story? https://www.itspmagazine.com/telling-your-story

Identity At The Center
#363 - Sponsor Spotlight - Natoma

Jul 23, 2025 · 50:03


This episode is sponsored by Natoma. Visit https://www.natoma.id/ to learn more.

Join Jeff from the IDAC Podcast as he dives into a deep conversation with Paresh Bhaya, the co-founder of Natoma. In this sponsored episode, Paresh shares his journey into the identity space, discusses how Natoma helps enterprises accelerate AI adoption without compromising security, and provides insights into the rising importance of MCP and A2A protocols. Learn about the challenges and opportunities at the intersection of AI and security, the importance of dynamic access controls, and the significance of ensuring proper authentication and authorization in the growing world of agentic AI. Paresh also delights us with his memorable hike up Mount Whitney. Don't miss out!

00:00 Introduction and Sponsor Announcement
00:34 Guest Introduction: Paresh Bhaya from Natoma
01:14 Paresh's Journey into Identity
04:04 Natoma's Mission and AI Security
06:25 The Story Behind Natoma's Name
09:29 Natoma's Unique Approach to AI Security
18:32 Understanding MCP and A2A Protocols
25:20 Community Development and Adoption
25:56 Agent Interactions and Security Challenges
27:19 Navigating Product Development
29:17 Ensuring Secure Connections
36:10 Deploying and Managing MCP Servers
42:40 Shadow AI and Governance
44:17 Personal Anecdotes and Conclusion

Connect with Paresh: https://www.linkedin.com/in/paresh-bhaya/
Learn more about Natoma: https://www.natoma.id/
Connect with us on LinkedIn:
Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/
Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/
Visit the show on the web at idacpodcast.com

Keywords: IDAC, Identity at the Center, Jeff Steadman, Jim McDonald, Natoma, Paresh Bhaya, Artificial Intelligence, AI, AI Security, Identity and Access Management, IAM, Enterprise Security, AI Adoption, Technology, Innovation, Cybersecurity, Machine Learning, AI Risks, Secure AI, #idac

The AI Fundamentalists
AI governance: Building smarter AI agents from the fundamentals, part 4

Jul 22, 2025 · 37:25 · Transcription Available


Sid Mangalik and Andrew Clark explore the unique governance challenges of agentic AI systems, highlighting the compounding error rates, security risks, and hidden costs that organizations must address when implementing multi-step AI processes.

Show notes:
• Agentic AI systems require governance at every step: perception, reasoning, action, and learning
• Error rates compound dramatically in multi-step processes - a 90% accurate model per step becomes only 65% accurate over four steps (see the quick check below)
• Two-way information flow creates new security and confidentiality vulnerabilities. For example, targeted prompting to improve awareness comes at the cost of performance. (arXiv, May 24, 2025)
• Traditional governance approaches are insufficient for the complexity of agentic systems
• Organizations must implement granular monitoring, logging, and validation for each component
• Human-in-the-loop oversight is not a substitute for robust governance frameworks
• The true cost of agentic systems includes governance overhead, monitoring tools, and human expertise

Make sure you check out Part 1: Mechanism design, Part 2: Utility functions, and Part 3: Linear programming. If you're building agentic AI systems, we'd love to hear your questions and experiences. Contact us.

What we're reading: We took a reading "break" this episode to celebrate Sid! This month, he successfully defended his Ph.D. thesis on "Psychological Health and Belief Measurement at Scale Through Language." Say congrats!

What did you think? Let us know. Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
LinkedIn - Episode summaries, shares of cited articles, and more.
YouTube - Was it something that we said? Good. Share your favorite quotes.
Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
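A quick sanity check of that compounding claim, as a minimal Python sketch assuming each step succeeds independently with the same per-step accuracy (numbers taken from the bullet above; the variable names are illustrative, not from the episode):

# Independent per-step accuracy compounds multiplicatively over a chain of steps.
accuracy_per_step = 0.90
steps = 4
end_to_end = accuracy_per_step ** steps
print(round(end_to_end, 4))  # 0.6561, i.e. roughly 65% end-to-end accuracy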

5bytespodcast
Google AI Security Milestone! In-box Windows Apps to be Updated! Issues Caused by Updates!

Jul 17, 2025 · 21:33


I cover some issues caused by the latest Windows Updates, give an update on the M&S cyber attack, discuss a landmark for Google's AI security platform, and much more! Reference Links: https://www.rorymon.com/blog/google-ai-security-milestone-in-box-windows-apps-to-be-updated-issues-caused-by-updates/

Packet Pushers - Full Podcast Feed
D2DO277: AI Security Submissions at Curl Dev

Jul 16, 2025 · 35:10


Curl is a widely used open source tool and library for transferring data. On today’s Day Two DevOps we talk with curl creator Daniel Stenberg. Daniel gives us a brief history of curl and where it’s used (practically everywhere). We also discuss the impact of AI on curl. Open source projects are often starved for... Read more »

Packet Pushers - Fat Pipe
D2DO277: AI Security Submissions at Curl Dev

Jul 16, 2025 · 35:10


Curl is a widely used open source tool and library for transferring data. On today’s Day Two DevOps we talk with curl creator Daniel Stenberg. Daniel gives us a brief history of curl and where it’s used (practically everywhere). We also discuss the impact of AI on curl. Open source projects are often starved for... Read more »

Day 2 Cloud
D2DO277: AI Security Submissions at Curl Dev

Jul 16, 2025 · 35:10


Curl is a widely used open source tool and library for transferring data. On today’s Day Two DevOps we talk with curl creator Daniel Stenberg. Daniel gives us a brief history of curl and where it’s used (practically everywhere). We also discuss the impact of AI on curl. Open source projects are often starved for... Read more »

Building Materials Marketing Unboxed
AI, Security, and Human Psychology

Jul 16, 2025 · 53:39


Dive deep into the complex world of AI adoption with host Kevin Dean and special guest Kodjo Hogan, a cybersecurity leader and Director of Information Security GRC for Chainalysis Inc. This episode explores the crucial, often-overlooked intersection of Artificial Intelligence, robust security, and human psychology.

You want to talk AI strategy for your company? Contact us at: https://www.manobyte.com/contact-us
Get in touch with Kevin at: https://kevinjdean.com/
Get in touch with guest Kodjo Hogan here: https://www.linkedin.com/in/kodjohogan/

In this insightful conversation, Kodjo cuts through the hype to address the serious concerns around AI risk, governance, and data integrity. You'll learn:
What true AI governance means and why it must be deeply embedded into your adoption strategy.
How to define specific, auditable AI use cases to ensure reliable and non-biased outcomes.
The hidden dangers of "AI psychosis" and how AI's "yes-man" tendency can impact human judgment and decision-making.
The critical risks of model poisoning and data hallucination, and their real-world consequences.
The surprising, yet powerful role blockchain could play in preserving data integrity and combating deepfakes in the AI era.
Actionable steps for business leaders to build a security-first culture around AI, involving cross-functional teams including HR and legal.

If you're a leader grappling with how to make smart, secure, and strategic decisions about AI in your organization, this episode provides the clarity and practical insights you need to move forward with confidence.

Joey Pinz Discipline Conversations
#680 Pax8 Beyond-Ken Tripp:

Jul 9, 2025 · 20:11


In this thoughtful and grounded episode of Joey Pinz Discipline Conversations, Joey sits down with Ken Tripp of Netwrix to discuss the evolving challenges MSPs face — and how true partner-led collaboration can help solve them. Recorded live at Pax8 Beyond 2025, this conversation weaves cybersecurity, personal transformation, and the need for industry-wide unity.

Ken explains how Netwrix helps MSPs secure and profit from Microsoft, especially in relation to Copilot rollouts, compliance obligations, and scaling client environments without adding technical overhead. He discusses the shared responsibility model and how Netwrix streamlines identity, permissions, and data classification through AI — reducing labor costs and delivering predictable value to MSPs managing dozens or hundreds of tenants.

The conversation also turns personal: Ken shares his 120-pound weight loss journey following a major health scare and how discipline and routine helped him reshape his life. That same clarity, he says, is needed in the MSP space — not just from vendors, but through shared change and joint accountability across the ecosystem.

BlockHash: Exploring the Blockchain
Ep. 540 Mike Lieberman | Open-source Software, Threat Prevention and AI Security with Kusari

Jul 8, 2025 · 46:14


For episode 540 of the BlockHash Podcast, host Brandon Zemp is joined by Mike Lieberman, CTO and Co-Founder of Kusari.

Kusari began in 2022 with the goal to secure the software supply chain. They are passionate about this problem, as they constantly faced the same issue: identifying the software they're using and protecting against threats to that software. This led to slow response to security vulnerabilities, uncertainty about licensing and compliance, and even basic maintenance challenges. Kusari brings transparency and security to software supply chains, providing clarity and actionable insights.

⏳ Timestamps:
0:00 | Introduction
1:10 | Who is Mike Lieberman?
6:10 | What is Kusari?
15:37 | Open-source software GUAC
20:00 | Threat landscape in 2025
28:43 | AI for software security
31:03 | Decentralized AI models
32:40 | Quantum computing
39:27 | Kusari roadmap 2025
44:32 | Kusari website, socials & community

Reality 2.0
Episode 158: Reality 2025: Bridging AI, Security, and Open Source Challenges

Jul 3, 2025 · 34:08


In this episode of Reality 2.0, Doc and Katherine return after a long hiatus to discuss a range of topics including AI and security concerns, the evolution of cloud-native technologies, and the growing complexity of AI-related projects within various Linux Foundation groups. The conversation also touches on approaches to AI and privacy, the potential for AI to assist in personal and professional tasks, and the importance of standardizing and simplifying best practices for AI deployment. The episode wraps up with insights on the innovative 'My Terms' project aimed at flipping the cookie consent model to better respect user privacy. The hosts also emphasize the importance of constructive conversations and maintaining optimism about the future of technology. 00:00 Welcome Back to Reality 2.0 00:36 Upcoming Open Source Summit 01:03 Linux Foundation and AI Initiatives 04:20 Apple's Approach to Personal AI 05:11 Challenges of AI and Data Privacy 07:16 Potential of Personal AI Models 11:10 Human Interaction with AI 26:50 Innovations in Cookie Consent 31:08 Commitment to More Frequent Episodes 33:16 Closing Remarks and Future Plans Site/Blog/Newsletter (https://www.reality2cast.com) FaceBook (https://www.facebook.com/reality2cast) Twitter (https://twitter.com/reality2cast) Mastodon (https://linuxrocks.online/@reality2cast)

Grow Sell and Retire
Negotiator's Playbook: Body Language, Sincerity, and AI Security with Molly Bloomquist

Jun 30, 2025 · 29:08


Want to master the art of negotiation in business and life? In this episode of the Grow, Sell and Retire podcast, I sat down with Molly Bloomquist—a 20-year US government vet turned negotiation expert. Molly shared game-changing insights on preparing for negotiations, the power of silence, reading body language, and the importance of empathy and sincerity. Her secret? Active listening, staying flexible, and making real human connections to achieve the best outcomes. If you want to level up your negotiation game, you'll want to hear Molly's top tips!

www.mollyblomquist.com
https://www.linkedin.com/in/molly-blomquist/

Cyber Security Today
Max Severity Flaws, Massive Exploits, and AI Security: A Cybersecurity Briefing

Jun 27, 2025 · 11:23 · Transcription Available


In this episode of 'Cybersecurity Today,' host Jim Love discusses urgent cybersecurity threats and concerns. Cisco has issued emergency patches for two maximum severity vulnerabilities in its Identity Services Engine (ISE) that could allow complete network takeover; organizations are urged to update immediately. A popular WordPress theme, Motors, has a critical vulnerability leading to mass exploitation and unauthorized admin account creation. A new ransomware group, Dire Wolf, has emerged, targeting manufacturing and technology sectors with sophisticated double extortion tactics. Lastly, an Accenture report reveals a dangerous gap between executive confidence and actual AI security preparedness, suggesting most major companies are not ready to handle AI-driven threats. The episode emphasizes the urgent need for immediate action and heightened awareness in the cybersecurity landscape. 00:00 Introduction and Headlines 00:26 Cisco's Critical Security Flaws 03:06 WordPress Theme Vulnerability Exploitation 05:57 Dire Wolf Ransomware Group Emerges 08:27 Accenture Report on AI Security Overconfidence 11:00 Conclusion and Upcoming Schedule

Trust Issues
EP 10 - A new identity crisis: governance in the AI age

Jun 26, 2025 · 36:20


In this episode of Security Matters, host David Puner sits down with Deepak Taneja, co-founder of Zilla Security and General Manager of Identity Governance at CyberArk, to explore why 2025 marks a pivotal moment for identity security. From the explosion of machine identities—now outnumbering human identities 80 to 1—to the convergence of IGA, PAM, and AI-driven automation, Deepak shares insights from his decades-long career at the forefront of identity innovation.

Listeners will learn:
Why legacy identity governance models are breaking under cloud scale
How AI agents are reshaping entitlement management and threat detection
What organizations must do to secure non-human identities and interlinked dependencies
Why time-to-value and outcome-driven metrics are essential for modern IGA success

Whether you're a CISO, identity architect, or security strategist, this episode delivers actionable guidance for navigating the evolving identity security landscape.

CryptoNews Podcast
#451: Ahmad Shadid, CEO of O.XYZ, on Creating the First AI CEO, O.CAPTAIN, and Your EGO vs. AI

Jun 26, 2025 · 39:09


Ahmad Shadid is the Founder and CEO of O.XYZ, an ecosystem with a mission to build the world's first sovereign super intelligence. As Ahmad put it, "AI must be a tool for the people, not a weapon for profit." O.XYZ is a complex ecosystem, starting with its core – O.Super Intelligence, which will help guide decisions, solve complex problems, and interact with people in the ecosystem; a toolbox with AI-powered products — tools that help you solve various problems using artificial intelligence; and O.RESEARCH, O.INFRA, O.CHARITY, O.CAPITAL, and O.CHAIN as parts of the ecosystem.

Previously, Ahmad was CEO of IO.net, leading the company to a $4.5 billion valuation in under a year. His leadership propelled IO.net to secure $2 million in a seed round with a $10 million fully diluted valuation in June 2023, followed by a groundbreaking $40 million Series A round at a $1 billion FDV in March 2024. This rapid growth culminated in the successful launch of the $IO coin on Binance, with a remarkable $4.5 billion FDV in June 2024.

Ahmad is the visionary behind DeAIO – an Autonomous AI Organization, the next step in the evolution of DAOs aiming to revolutionize AI governance and development. Demonstrating his commitment to innovation, he has personally invested $130M into the development of DeAIO. O.XYZ builds on Ahmad's legacy, aiming to redefine AI and showcase how decentralized technology can drive common progress and serve people.

In this conversation, we discuss:
- Creating the First AI CEO
- O.CAPTAIN
- The future of AI & Crypto
- Flipping the Narrative: AI that Helps, Not Replaces
- Your EGO vs AI
- Security ops and code review will become increasingly important
- The feeling of working for AI and not a person
- Building a company fully managed by AI
- Living in Doha, Qatar
- The future of AI & Crypto
- An AI CEO that adapts workloads based on your energy and well-being

O.XYZ
Website: www.o.xyz
X: @o_fndn
Telegram: t.me/oxyz_community

Ahmad Shadid
X: @shadid_io
LinkedIn: Ahmad Shadid

---------------------------------------------------------------------------------

This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers.

PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50

This promotion is available for a month after activation. Click the link below:
PrimeXBT x CRYPTONEWS50

ITSPmagazine | Technology. Cybersecurity. Society
Building a Dynamic Framework for Cyber Risk and Control Alignment: A Threat-Adaptive Approach to Cybersecurity Readiness | A HITRUST Brand Story with Michael Moore

Jun 25, 2025 · 35:41


Cyber threats are not static—and HITRUST knows assurance can't be either. That's why HITRUST's Michael Moore is leading efforts to ensure the HITRUST framework evolves in step with the threat environment, business needs, and the technologies teams are using to respond.

In this episode, Moore outlines how the HITRUST Cyber Threat Adaptive (CTA) program transforms traditional assessment models into something far more dynamic. Instead of relying on outdated frameworks or conducting audits that only capture a point-in-time view, HITRUST is using real-time threat intelligence, breach data, and frameworks like MITRE ATT&CK and MITRE ATLAS to continuously evaluate and update its assessment requirements.

The E1 and I1 assessments—designed for organizations at different points in their security maturity—serve as flexible baselines that shift with current risk. Moore explains that by leveraging CTA, HITRUST can add or update controls in response to rising attack patterns, such as the resurgence of phishing or the emergence of AI-driven exploits. These updates are informed by a broad ecosystem of signals, including insurance claims data and AI-parsed breach reports, offering both frequency and impact context.

One of the key advantages Moore highlights is the ability for security teams to benefit from these updates without having to conduct their own exhaustive analysis. As Moore puts it, "You get it by proxy of using our frameworks." In addition to streamlining how teams manage and demonstrate compliance, the evolving assessments also support conversations with business leaders and boards—giving them visibility into how well the organization is prepared for the threats that matter most right now.

HITRUST is also planning to bring more of this intelligence into its assessment platform and reports, including showing how individual assessments align with the top threats at the time of certification. This not only strengthens third-party assurance but also enables more confident internal decision-making—whether that's about improving phishing defenses or updating incident response playbooks.

From AI-enabled moderation of threats to proactive regulatory mapping, HITRUST is building the connective tissue between risk intelligence and real-world action.

Note: This story contains promotional content. Learn more.

Guest: Michael Moore, Senior Manager, Digital Innovation at HITRUST | On LinkedIn: https://www.linkedin.com/in/mhmoore04/

Hosts:
Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | https://www.seanmartin.com/
Marco Ciappelli, Co-Founder at ITSPmagazine and Host of Redefining Society Podcast & Audio Signals Podcast | https://www.marcociappelli.com/

______________________

Keywords: sean martin, marco ciappelli, michael moore, hitrust, cybersecurity, threat intelligence, risk management, compliance, assurance, ai security, brand story, brand marketing, marketing podcast, brand story podcast

______________________

Resources
Visit the HITRUST Website to learn more: https://itspm.ag/itsphitweb
Learn more and catch more stories from HITRUST on ITSPmagazine: https://www.itspmagazine.com/directory/hitrust
Learn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programs
Newsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/
Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-up
Are you interested in telling your story? https://www.itspmagazine.com/telling-your-story

Cyber Security Today
AI Vulnerabilities and the Gentle Singularity: A Deep Dive with Project Synapse

Jun 21, 2025 · 60:59 · Transcription Available


In this thought-provoking episode of Project Synapse, host Jim and his friends Marcel Gagne and John Pinard delve into the complexities of artificial intelligence, especially in the context of cybersecurity. The discussion kicks off by revisiting a blog post by Sam Altman about reaching a 'Gentle Singularity' in AI development, where the progress towards artificial superintelligence seems inevitable. They explore the idea of AI surpassing human intelligence and the implications of machines learning to write their own code. Throughout their engaging conversation, they emphasize the need to integrate security into AI systems from the start, rather than as an afterthought, citing recent vulnerabilities like Echo Leak and Microsoft Copilot's Zero Click vulnerability. Derailing into stories from the past and pondering philosophical questions, they wrap up by urging for a balanced approach where speed and thoughtful planning coexist, and to prioritize human welfare in technological advancements. This episode serves as a captivating blend of storytelling, technical insights, and ethical debates. 00:00 Introduction to Project Synapse 00:38 AI Vulnerabilities and Cybersecurity Concerns 02:22 The Gentle Singularity and AI Evolution 04:54 Human and AI Intelligence: A Comparison 07:05 AI Hallucinations and Emotional Intelligence 12:10 The Future of AI and Its Limitations 27:53 Security Flaws in AI Systems 30:20 The Need for Robust AI Security 32:22 The Ubiquity of AI in Modern Society 32:49 Understanding Neural Networks and Model Security 34:11 Challenges in AI Security and Human Behavior 36:45 The Evolution of Steganography and Prompt Injection 39:28 AI in Automation and Manufacturing 40:49 Crime as a Business and Security Implications 42:49 Balancing Speed and Security in AI Development 53:08 Corporate Responsibility and Ethical Considerations 57:31 The Future of AI and Human Values

Lawyerist Podcast
#565: Becoming the AI Driven Leader, with Geoff Woods (Remastered)

Jun 19, 2025 · 39:08


This special remastered episode of the Lawyerist Podcast features Stephanie's conversation with Geoff Woods, author of The AI-Driven Leader. We're re-releasing it due to positive feedback on the depth of this discussion, ensuring you'll gain new insights and "aha!" moments with every listen.  In this episode, we explore AI's transformative power, viewing it not as a threat, but as a liberator that enhances our work. We dive into the five core human skills to emphasize in an AI-driven world: strategic thinking, problem-solving, communication, collaboration, and creation. We demonstrate how to leverage AI strategically, from evaluating business plans to acting as a growth-minded board member, and you'll hear how we're integrating AI into our own leadership meetings.  Geoff shares real-world examples of using AI as a "thought partner" to stress-test major strategic decisions, even creating an "AI board of advisors." He also provides practical applications for lawyers, such as using AI to review NDAs, stress-test legal arguments, and role-play closing arguments with AI as your jury. To guide your own AI journey, Geoff outlines his "CRIT" framework (Context, Role, Interview, Task) for effective prompting and highlights the importance of understanding AI model settings for data privacy and confidentiality.  Listen to our other episodes on the AI revolution:  #555: How to Use AI and Universal Design to Empower Diverse Thinkers with Susan Tanner Apple Podcasts | Spotify | Lawyerist   #553: AI Tools and Processes Every Lawyer Should Use with Catherine Sanders Reach Apple Podcasts  Spotify  Lawyerist     #550: Beyond Content: How AI is Changing Law Firm Marketing, with Gyi Tsakalaki and Conrad Saam: Apple Podcasts | Spotify | Lawyerist    Have thoughts about today's episode? Join the conversation on LinkedIn, Facebook, Instagram, and X!  If today's podcast resonates with you and you haven't read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? See if our coaching community is right for you.  Access more resources from Lawyerist at lawyerist.com.  The AI-Driven Leader   Chapters/Timestamps:  0:00 - Episode Introduction and Why This Remastered Version is Special   1:22 - AI as the Next Big Shift for Lawyers   6:28 - Geoff Woods: Redefining Leadership in the AI Era   9:11 - The Five Core Human Skills Enhanced by AI   10:36 - Strategic AI: Beyond Basic Tasks   14:24 - AI as Your Strategic Thought Partner   19:47 - Navigating AI: Threat vs. Opportunity for Lawyers   20:56 - Practical AI Applications: NDA Review and Valuation   28:51 - Building Your AI Habit: The "CRIT" Framework   32:19 - AI Security and Data Privacy for Legal Professionals   34:40 - The Risk of Inaction and Building the Future Firm    

Legal Talk Network - Law News and Legal Topics
#565: Becoming the AI Driven Leader, with Geoff Woods (Remastered)

Jun 19, 2025 · 39:08


This special remastered episode of the Lawyerist Podcast features Stephanie's conversation with Geoff Woods, author of The AI-Driven Leader. We're re-releasing it due to positive feedback on the depth of this discussion, ensuring you'll gain new insights and "aha!" moments with every listen.  In this episode, we explore AI's transformative power, viewing it not as a threat, but as a liberator that enhances our work. We dive into the five core human skills to emphasize in an AI-driven world: strategic thinking, problem-solving, communication, collaboration, and creation. We demonstrate how to leverage AI strategically, from evaluating business plans to acting as a growth-minded board member, and you'll hear how we're integrating AI into our own leadership meetings.  Geoff shares real-world examples of using AI as a "thought partner" to stress-test major strategic decisions, even creating an "AI board of advisors." He also provides practical applications for lawyers, such as using AI to review NDAs, stress-test legal arguments, and role-play closing arguments with AI as your jury. To guide your own AI journey, Geoff outlines his "CRIT" framework (Context, Role, Interview, Task) for effective prompting and highlights the importance of understanding AI model settings for data privacy and confidentiality.  Listen to our other episodes on the AI revolution:  #555: How to Use AI and Universal Design to Empower Diverse Thinkers with Susan Tanner Apple Podcasts | Spotify | Lawyerist   #553: AI Tools and Processes Every Lawyer Should Use with Catherine Sanders Reach Apple Podcasts  Spotify  Lawyerist     #550: Beyond Content: How AI is Changing Law Firm Marketing, with Gyi Tsakalaki and Conrad Saam: Apple Podcasts | Spotify | Lawyerist    Have thoughts about today's episode? Join the conversation on LinkedIn, Facebook, Instagram, and X!  If today's podcast resonates with you and you haven't read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? See if our coaching community is right for you.  Access more resources from Lawyerist at lawyerist.com.  The AI-Driven Leader   Chapters/Timestamps:  0:00 - Episode Introduction and Why This Remastered Version is Special   1:22 - AI as the Next Big Shift for Lawyers   6:28 - Geoff Woods: Redefining Leadership in the AI Era   9:11 - The Five Core Human Skills Enhanced by AI   10:36 - Strategic AI: Beyond Basic Tasks   14:24 - AI as Your Strategic Thought Partner   19:47 - Navigating AI: Threat vs. Opportunity for Lawyers   20:56 - Practical AI Applications: NDA Review and Valuation   28:51 - Building Your AI Habit: The "CRIT" Framework   32:19 - AI Security and Data Privacy for Legal Professionals   34:40 - The Risk of Inaction and Building the Future Firm     Learn more about your ad choices. Visit megaphone.fm/adchoices

IT Privacy and Security Weekly update.
EP 247.5 Deep Dive: Broken Windows. The IT Privacy and Security Weekly Update for the Week Ending June 17th, 2025

Jun 19, 2025 · 14:48


Windows Hello's Facial Authentication Update: Microsoft updated Windows Hello to require both infrared and color cameras for facial authentication, addressing a spoofing vulnerability. This enhances security but disables functionality in low-light settings, potentially inconveniencing users and pushing some toward alternatives like Linux for flexible authentication.

EchoLeak and AI Security: 'EchoLeak' is a zero-click vulnerability in Microsoft 365 Copilot, discovered by Aim Labs, allowing data exfiltration via malicious emails exploiting an "LLM Scope Violation." It reveals risks in AI systems combining external inputs with internal data, emphasizing the need for robust guardrails.

Denmark's Shift to LibreOffice and Linux: Denmark is adopting LibreOffice and Linux to boost digital sovereignty, reduce reliance on foreign tech like Microsoft, and mitigate geopolitical and cost-related risks. This follows a 72% rise in Microsoft software costs over five years.

Chinese AI Firms Bypassing U.S. Chip Controls: Chinese AI companies evade U.S. chip export restrictions by processing data in third countries like Malaysia, using tactics like physically transporting data and setting up shell entities to access high-end chips and return trained AI models.

Mattel and OpenAI Partnership: Mattel's collaboration with OpenAI to create AI-enhanced toys introduces engaging, safe experiences for kids but raises privacy and security concerns, highlighting the need for "Zero trust" models in handling children's data.

Apple's Passkey Import/Export Feature: Apple's new FIDO-based passkey import/export feature allows secure credential transfers across platforms, enhancing security and convenience. It uses biometric or PIN authentication, replacing less secure methods and improving interoperability.

Airlines Selling Passenger Data to DHS: The Airlines Reporting Corporation, owned by U.S. airlines, sold domestic flight data to DHS's CBP, including names and itineraries, with a clause hiding the source. This raises privacy concerns about government tracking without transparency.

WhatsApp's New Ad Policy: WhatsApp's introduction of ads in its "Updates" section deviates from its original "no ads" philosophy. While limited and preserving chat encryption, this shift alters the ad-free experience that attracted its two billion users.

https://rprescottstearns.blogspot.com/2025/06/broken-windows-it-privacy-and-security.html

The Cybersecurity Readiness Podcast Series
AI Security in the Public Sector: Balancing Innovation and Risk

Jun 17, 2025 · 35:56


In this episode, Dr. Dave Chatterjee is joined by Burnie Legette, Director of IoT and AI at Intel Corporation and former professional football player. Their conversation explores the evolving landscape of AI deployment within the public sector, with a particular focus on the security challenges and governance strategies required to harness AI responsibly. Drawing on his cross-sectoral experience, Burnie offers insights into the cultural, technical, and ethical nuances of AI adoption. Dr. Chatterjee brings in his empirically grounded Commitment-Preparedness-Discipline (CPD) cybersecurity governance framework to emphasize the importance of planning, transparency, and stakeholder engagement.

To access and download the entire podcast summary with discussion highlights -- https://www.dchatte.com/episode-87-ai-security-in-the-public-sector-balancing-innovation-and-risk/

Connect with Host Dr. Dave Chatterjee and Subscribe to the Podcast
Please subscribe to the podcast so you don't miss any new episodes! And please leave the show a rating if you like what you hear. New episodes are released every two weeks. Connect with Dr. Chatterjee on these platforms:
LinkedIn: https://www.linkedin.com/in/dchatte/
Website: https://dchatte.com/
Cybersecurity Readiness Book: https://www.amazon.com/Cybersecurity-Readiness-Holistic-High-Performance-Approach/dp/1071837338
https://us.sagepub.com/en-us/nam/cybersecurity-readiness/book275712

Latest Publications & Press Releases:
"Meet Dr. Dave Chatterjee, the mind behind the Commitment-Preparedness-Discipline method for cybersecurity," Chicago Tribune, February 24, 2025.
"Dr. Dave Chatterjee On A Proactive Behavioral Approach To Cyber Readiness," Forbes, February 21, 2025.
Ignorance is not bliss: A human-centered whole-of-enterprise approach to cybersecurity preparedness
Dr. Dave Chatterjee Hosts Global Podcast Series on Cyber Readiness, Yahoo! Finance, Dec 16, 2024
Dr. Dave Chatterjee Hosts Global Podcast Series on Cyber Readiness, Marketers Media, Dec 12, 2024.

Cloud Security Podcast by Google
EP230 AI Red Teaming: Surprises, Strategies, and Lessons from Google

Jun 16, 2025 · 26:11


Guest: Daniel Fabian, Principal Digital Arsonist, Google Topic: Your RSA talk highlights lessons learned from two years of AI red teaming at Google. Could you share one or two of the most surprising or counterintuitive findings you encountered during this process? What are some of the key differences or unique challenges you've observed when testing AI-powered applications compared to traditional software systems? Can you provide an example of a specific TTP that has proven effective against AI systems and discuss the implications for security teams looking to detect it? What practical advice would you give to organizations that are starting to incorporate AI red teaming into their security development lifecycle? What are some initial steps or resources you would recommend they explore to deepen their understanding of this evolving field? Resources: Video (LinkedIn, YouTube) Google's AI Red Team: the ethical hackers making AI safer EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes? EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw EP198 GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons Lessons from AI Red Teaming – And How to Apply Them Proactively  [RSA 2025]

After Earnings
Okta CEO on Building a Password-Free Future & Developing Standards for AI Security

Jun 16, 2025 · 26:37


On the latest episode of After Earnings, we spoke with Okta CEO Todd McKinnon about how his company aims to become the one-stop shop for digital ID across all businesses and applications. Highlights include:
• How Okta is working toward a password-free future.
• Why - when it comes to security - Okta sees itself as the superior choice versus Microsoft every time.
• How Okta is responding to the emerging security threats posed by AI and quantum computing.

00:00 START
01:30 Okta vs. Microsoft on security
04:10 Selling to both developers and IT teams
08:34 The future of identity verification
12:55 Security challenges posed by quantum computing
17:40 Developing standards for AI security
19:31 Okta's strong earnings and the market's response

$OKTA

After Earnings is brought to you by Stakeholder Labs and Morning Brew. For more go to https://www.afterearnings.com

Follow Us
X: https://twitter.com/AfterEarnings
TikTok: https://www.tiktok.com/@AfterEarnings
Instagram: https://www.instagram.com/afterearnings_/

Reach Out
Email: afterearnings@morningbrew.com

Learn more about your ad choices. Visit megaphone.fm/adchoices

Security Unfiltered
The SHOCKING Truth About AI Security in Hospitals

Jun 16, 2025 · 45:33 · Transcription Available


Security is increasingly viewed as a strategic business advantage rather than just a necessary cost center. The dialogue explores how companies are leveraging their security posture to gain competitive advantages in sales cycles and build customer trust.

• Taylor's journey from aspiring physical therapist to cybersecurity expert through a chance college course
• The importance of diverse experience across different security domains for career longevity
• How healthcare organizations have become prime targets due to valuable data and outdated security
• The emerging AI arms race creating unprecedented security challenges and opportunities
• Voice cloning technology enabling sophisticated social engineering attacks, including an almost successful $20 million fraud
• Emerging trends in security validation with tools pulling data directly from security systems
• The shift from viewing security as a cost center to leveraging it as a sales advantage
• Why enterprises are driving security standards more effectively than regulators

Eden Data provides outsourced security, compliance, and privacy services for technology companies at all stages, from pre-revenue startups to publicly traded enterprises, helping them build robust security programs aligned with regulatory frameworks and customer expectations.

Digital Disruption with Geoff Nielson: Discover how technology is reshaping our lives and livelihoods. Listen on: Apple Podcasts, Spotify

Support the show

Follow the Podcast on Social Media!
Tesla Referral Code: https://ts.la/joseph675128
YouTube: https://www.youtube.com/@securityunfilteredpodcast
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast

Cyber Security Today
AI Security Threats: Echo Leak, MCP Vulnerabilities, Meta's Privacy Scandal, and the 'Peep Show'

Jun 13, 2025 · 12:55 · Transcription Available


  In this episode of Cybersecurity Today, host Jim Love discusses critical AI-related security issues, such as the Echo Leak vulnerability in Microsoft's AI, MCP's universal integration risks, and Meta's privacy violations in Europe. The episode also explores the dangers of internet-exposed cameras as discovered by BitSight, highlighting the urgent need for enhanced AI security and the legal repercussions for companies like Meta. 00:00 Introduction to AI Security Issues 00:24 Echo Leak: The Zero-Click AI Vulnerability 03:17 MCP Protocol: Universal Interface, Universal Vulnerabilities 07:01 Meta's Privacy Scandal: Local Host Tracking 10:11 The Peep Show: Internet-Connected Cameras Exposed 12:08 Conclusion and Call to Action

@BEERISAC: CPS/ICS Security Podcast Playlist
Episode 314 Deep Dive: Imran Husain | Cybersecurity Threats in the Manufacturing World

Jun 13, 2025 · 41:00


Podcast: KBKAST (LS 31 · TOP 5%)
Episode: Episode 314 Deep Dive: Imran Husain | Cybersecurity Threats in the Manufacturing World
Pub date: 2025-06-11

In this episode, we sit down with Imran Husain, Chief Information Security Officer at MillerKnoll, as he discusses the evolving landscape of cybersecurity threats in the manufacturing sector. Imran explores the challenges that arise as manufacturing increasingly integrates with online technologies and IoT, highlighting the unique vulnerabilities posed by legacy systems and operational technology (OT). He shares insights on high-profile incidents like the Norsk Hydro ransomware attack, emphasizing the importance of cyber resilience, data backup, and incident recovery. Imran also offers a candid look at why critical tasks like backing up data are often neglected, the complexities of securing aging infrastructure, and the need for creative solutions such as network segmentation and IT/OT convergence.

A dedicated and trusted senior cybersecurity professional, Imran Husain has over 22 years of Fortune 1000 experience that covers a broad array of domains, including risk management, cloud security, SecDevOps, AI security, and OT cyber practices. A critical, action-oriented leader, Imran brings strategic and technical expertise with a proven ability to build cyber programs that are proactive in their threat detection, identifying and engaging in critical areas of the business while upholding their security posture. He specializes in manufacturing and supply chain distribution, focusing on how to best use security controls and processes to maximize coverage and reduce risk in a complex, multi-faceted environment. A skilled communicator and change agent with a bias to action who cultivates an environment of learning and creative thinking, Imran champions open communication and collaboration to empower and inspire teams to exceed in their respective cyber commitments. He is currently the Global Chief Information Security Officer (CISO) at MillerKnoll, a publicly traded American company that produces office furniture, equipment, and home furnishings.

The podcast and artwork embedded on this page are from KBI.Media, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.

Cybercrime Magazine Podcast
AI Security. Major Vulnerabilities & Funding Landscape. Confidence Staveley, CyberSafe Foundation.

Jun 10, 2025 · 16:22


The Spring 2025 issue of AI Cyber Magazine details some of 2024's major AI security vulnerabilities and sheds light on the funding landscape. Confidence Staveley, Africa's most celebrated female cybersecurity leader, is the founder of the Cybersafe Foundation, a Non-Governmental Organization on a mission to facilitate pockets of changes that ensure a safer internet for everyone with digital access in Africa. In this episode, Confidence joins host Amanda Glassner to discuss. To learn more about Confidence, visit her website at https://confidencestaveley.com, and for more on the CyberSafe Foundation, visit https://cybersafefoundation.org.

The New Stack Podcast
Aptori Is Building an Agentic AI Security Engineer

Jun 3, 2025 · 18:01


AI agents hold the promise of continuously testing, scanning, and fixing code for security vulnerabilities, but we're still progressing toward that vision. Startups like Aptori are helping bridge the gap by building AI-powered security engineers for enterprises. Aptori maps an organization's codebase, APIs, and cloud infrastructure in real time to understand data flows and authorization logic, allowing it to detect and eventually remediate security issues. At Google Cloud Next, Aptori CEO Sumeet Singh discussed how earlier tools merely alerted developers to issues—often overwhelming them—but newer models like Gemini 2.5 Flash and Claude Sonnet 4 are improving automated code fixes, making them more practical. Singh and co-founder Travis Newhouse previously built AppFormix, which automated OpenStack cloud operations before being acquired by Juniper Networks. Their experiences with slow release cycles due to security bottlenecks inspired Aptori's focus. While the goal is autonomous agents, Singh emphasizes the need for transparency and deterministic elements in AI tools to ensure trust and reliability in enterprise security workflows.

Learn more from The New Stack about the latest insights in AI application security:
AI Is Changing Cybersecurity Fast and Most Analysts Aren't Ready
AI Security Agents Combat AI-Generated Code Risks
Developers Are Embracing AI To Streamline Threat Detection and Stay Ahead

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Business of Tech
AI Security Risks Rise as IT Leaders Expand Use; Cloud Backlash and Texas Age Verification Law

May 30, 2025 · 15:59


A recent report from SailPoint reveals a significant contradiction in the IT sector: while 96% of IT professionals view artificial intelligence agents as a security risk, an overwhelming 98% still plan to expand their use within organizations over the next year. The study highlights that although 84% of respondents currently utilize AI agents, only 44% have established governance policies for their behavior. This lack of oversight is concerning, especially as 80% of respondents reported that these agents have acted in unexpected and potentially harmful ways. The need for stringent governance and security protocols for AI agents is becoming increasingly urgent.

In the realm of cloud computing, dissatisfaction is on the rise among organizations, with Gartner estimating that up to 25% may face significant disappointment due to unexpected costs and management complexities. Many organizations lack coherent cloud strategies, leading to issues like vendor lock-in. A notable example is 37Signals, which faced a $3.2 million bill for cloud services, prompting a migration back to on-premises infrastructure. As organizations adopt multi-cloud strategies, Gartner warns that more than half may not achieve their expected outcomes, further complicating the landscape.

The podcast also discusses a new Texas law requiring Apple and Google to verify the ages of users accessing their app stores, a move that shifts the liability of age enforcement onto these tech giants. This trend reflects a broader governmental push to redefine digital intermediaries as compliance gatekeepers, which could lead to increased regulatory burdens for tech companies. As data sovereignty becomes a priority, organizations are urged to adapt their strategies to align with new privacy and age verification mandates.

Lastly, the episode touches on intriguing revelations, such as the CIA's covert use of a Star Wars fan site for secure communications and the persistence of outdated operating systems like Windows XP in various sectors. These stories underscore the complexities of digital infrastructure and the importance of understanding data privacy implications. As reliance on voice-activated technologies grows, the need for IT providers to educate clients about data retention and privacy policies becomes critical, especially in a landscape where everyday devices can act as silent data hoarders.

Four things to know today:
00:00 IT Leaders Expand AI Agent Use Despite Governance Gaps and Cloud Disillusionment
06:08 Dell Surges on AI Server Demand While HP Struggles With Tariffs and Consumer Weakness
09:17 Texas Law Forces Apple and Google to Enforce Age Verification, Marking Shift in Platform Liability
10:50 CIA Spy Site, Smart Speaker Surveillance, and Legacy Software Reveal Overlooked Digital Threat Surfaces

Supported by: https://afi.ai/office-365-backup/
All our Sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/

Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech

Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech

Cloud Security Podcast
Securing AI: Threat Modeling & Detection

May 27, 2025 · 37:32


Is Artificial Intelligence the ultimate security dragon we need to slay, or a powerful ally we must train? Recorded LIVE at BSidesSF, this special episode dives headfirst into the most pressing debates around AI security.

Join host Ashish Rajan as he navigates the complex landscape of AI threats and opportunities with two leading experts:

Jackie Bow (Anthropic): Championing the "How to Train Your Dragon" approach, Jackie reveals how we can leverage AI, and even its 'hallucinations,' for advanced threat detection, response, and creative security solutions.

Kane Narraway (Canva): Taking the "Knight/Wizard" stance, Kane illuminates the critical challenges in securing AI systems, understanding the new layers of risk, and the complexities of AI threat modeling.

Cloud Security Podcast
CYBERSECURITY for AI: The New Threat Landscape & How Do We Secure It?

May 20, 2025 · 40:43


As Artificial Intelligence reshapes our world, understanding the new threat landscape and how to secure AI-driven systems is more crucial than ever. We spoke to Ankur Shah, Co-Founder and CEO of Straiker about navigating this rapidly evolving frontier.

In this episode, we unpack the complexities of securing AI, from the fundamental shifts in application architecture to the emerging attack vectors. Discover why Ankur believes "you can only secure AI with AI" and how organizations can prepare for a future where "your imagination is the new limit," but so too are the potential vulnerabilities.

Guest Socials - Ankur's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter
- Cloud Security BootCamp

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity Podcast

Questions asked:
(00:00) Introduction
(00:30) Meet Ankur Shah (CEO, Straiker)
(01:54) Current AI Deployments in Organizations (Copilots & Agents)
(04:48) AI vs. Traditional Security: Why Old Methods Fail for AI Apps
(07:07) AI Application Types: Native, Immigrant & Explorer Explained
(10:49) AI's Impact on the Evolving Cyber Threat Landscape
(17:34) Ankur Shah on Core AI Security Principles (Visibility, Governance, Guardrails)
(22:26) The AI Security Vendor Landscape (Acquisitions & Startups)
(24:20) Current AI Security Practices in Organizations: What's Working?
(25:42) AI Security & Hyperscalers (AWS, Azure, Google Cloud): Pros & Cons
(26:56) What is AI Inference? Explained for Cybersecurity Pros
(33:51) Overlooked AI Attack Surfaces: Hidden Risks in AI Security
(35:12) How to Uplift Your Security Program for AI
(37:47) Rapid Fire: Fun Questions with Ankur Shah

Thank you to this episode's sponsor - Straiker.ai

A Shot in the Arm Podcast with Ben Plumley
A New Dawn in Global Health: Technology & AI, Security & Solidarity, One Health and Governance, with Dr Mike Reid IGHS, UCSF

A Shot in the Arm Podcast with Ben Plumley

Play Episode Listen Later May 19, 2025 44:13


While the global health community rends its garments and gnashes its teeth in Switzerland at the 78th World Health Assembly, Dr Mike Reid, Associate Director of the Center for Global Health Diplomacy, UCSF, joins Ben in an entertaining and wide-ranging exploration of a positive, forward-looking agenda for global health. Topics include global health security, One Health, mis- and disinformation in the doctor-patient relationship, health technology, and specific future uses and pitfalls of AI to improve access to healthcare in developing countries. Mike offers a promise of a future episode on channelling philanthropic dollars into sovereign wealth funds for global health investments. And finally they reflect on their upbringing in the UK with its “free at the point of delivery” National Health Service, and argue over which of the modern Cambridge University colleges they went to most resembles a multi-story car park.
00:00 Introduction and Overview 00:09 World Health Assembly Insights 01:18 Guest Introduction: Dr Mike Reid 03:40 Mike Reid's Background and Career 05:58 Global Health Security and Solidarity 11:28 The One Health Agenda 14:12 Artificial Intelligence in Global Health 37:26 Navigating Healthcare Systems 43:48 Closing Remarks and Future Topics
Mike's Substack: https://reimaginingglobalhealth.substack.com/

Cloud Security Podcast by Google
EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams

Cloud Security Podcast by Google

Play Episode Listen Later May 19, 2025 24:39


Guest: Christine Sizemore, Cloud Security Architect, Google Cloud  Topics: Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain?  I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain?  We like to say that history might not repeat itself but it does rhyme – what are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains? We've talked a lot about technology and process–what are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development?  We are all hearing about agentic security – so can we just ask the AI to secure itself?  Top 3 things to do to secure AI software supply chain for a typical org?   Resources: Video “Securing AI Supply Chain: Like Software, Only Not” blog (and paper) “Securing the AI software supply chain” webcast EP210 Cloud Security Surprises: Real Stories, Real Lessons, Real "Oh No!" Moments Protect AI issue database “Staying on top of AI Developments”  “Office of the CISO 2024 Year in Review: AI Trust and Security” “Your Roadmap to Secure AI: A Recap” (2024) "RSA 2025: AI's Promise vs. Security's Past — A Reality Check" (references our "data as code" presentation)

Cloud Security Podcast by Google
EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps

Cloud Security Podcast by Google

Play Episode Listen Later May 12, 2025 30:40


Guest: Diana Kelley, CSO at Protect AI  Topics: Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right? What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better  when you do it? How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we? In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance? How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy? What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?  Top differences between LLM/chatbot AI security vs AI agent security?  Resources: “Airline held liable for its chatbot giving passenger bad advice - what this means for travellers” “ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem' Forever” Secure by Design for AI by Protect AI “Securing AI Supply Chain: Like Software, Only Not” OWASP Top 10 for Large Language Model Applications OWASP Top 10 for AI Agents  (draft) MITRE ATLAS “Demystifying AI Security: New Paper on Real-World SAIF Applications” (and paper) LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes

CIO Classified
Why the Smartest CIOs Are Becoming Business Strategists with Eric Johnson of PagerDuty

CIO Classified

Play Episode Listen Later May 8, 2025 42:18


Eric Johnson, CIO at PagerDuty, shares why today's most impactful CIOs are evolving into strategic business leaders. He explains how AI is driving a fundamental shift in how IT organizations operate—moving from reactive support functions to proactive, value-creating business enablers.
About the Guest: Eric Johnson is the Chief Information Officer at PagerDuty, responsible for PagerDuty's critical IT infrastructure, data management, and enterprise systems. Prior to joining PagerDuty, he was the CIO at SurveyMonkey, DocuSign, and Talend. Before that, Eric spent 12 years at Informatica driving the information technology vision and strategy as the company scaled to a modern SaaS architecture. He is an active advisor and board member to several early-stage companies and a regular contributor to IT thought leadership.
Timestamps: (05:20) Embrace shadow IT and AI tools (18:40) Changing role of the CIO (30:00) Security and cybersecurity awareness (33:35) Future of automation and AI
Guest Highlights: “In the CIO org, they need to be business experts as much as the partners that they work with… because AI and the use of it and finding those high value use cases, it's gonna take folks in the CIO org to be a lot more knowledgeable about how the company operates and processes.” “Obviously, certain roles are going to change much more than others, but I think across the board, roles are going to change.” “As these changes come, how do you reorient the organization—the humans in the organization—to be able to find that higher value work?”
Get Connected: Eric Johnson on LinkedIn | Ian Faison on LinkedIn
Resources: Learn more about PagerDuty: www.pagerduty.com
Hungry for more tech talk? Check out these past episodes: Ep 59 - CIO Leadership in AI Security and Innovation; Ep 58 - AI-Driven Workplace Transformation; Ep 57 - The CIO Roadmap to Executive Leadership
Learn more about Caspian Studios: caspianstudios.com
Can't get enough AI? Check out The New Automation Mindset Podcast for more in-depth conversations about strategy and leadership in AI, automation, and orchestration. Brought to you by the automation experts at Workato. Start Listening: www.workato.com/podcast

Unsupervised Learning
A Conversation with Bar-El Tayouri from Mend.io

Unsupervised Learning

Play Episode Listen Later May 6, 2025 45:53 Transcription Available


➡ Get full visibility, risk insights, red teaming, and governance for your AI models, AI agents, RAGs, and more—so you can securely deploy AI-powered applications with ul.live/mend
In this episode, I speak with Bar-El Tayouri, Head of AI Security at Mend.io, about the rapidly evolving landscape of application and AI security—especially as multi-agent systems and fuzzy interfaces redefine the attack surface. We talk about:
• Modern AppSec Meets AI Agents: How traditional AppSec falls short when it comes to AI-era components like agents, MCP servers, system prompts, and model artifacts—and why security now depends on mapping, monitoring, and understanding this entire stack.
• Threat Discovery, Simulation, and Mitigation: How Mend's AI security suite identifies unknown AI usage across an org, simulates dynamic attacks (like prompt injection via PDFs), and provides developers with precise, in-code guidance to reduce risk without slowing innovation.
• Why We're Rethinking Identity, Risk, and Governance: Why securing AI systems isn't just about new threats—it's about re-implementing old lessons: identity access, separation of duties, and system modeling. And why every CISO needs to integrate security into the dev workflow instead of relying on blunt-force blocking.
Subscribe to the newsletter at: https://danielmiessler.com/subscribe
Join the UL community at: https://danielmiessler.com/upgrade
Follow on X: https://x.com/danielmiessler
Follow on LinkedIn: https://www.linkedin.com/in/danielmiessler
Chapters: 00:00 - From Game Hacking to AI Security: Bar-El's Tech Journey 03:51 - Why Application Security Is Still the Most Exciting Challenge 04:39 - The Real AppSec Bottleneck: Prioritization, Not Detection 06:25 - Explosive Growth of AI Components Inside Applications 12:48 - Why MCP Servers Are a Massive Blind Spot in AI Security 15:02 - Guardrails Aren't Keeping Up With Agent Power 16:15 - Why AI Security Is Maturing Faster Than Previous Tech Waves 20:59 - Traditional AppSec Tools Can't Handle AI Risk Detection 26:01 - How Mend Maps, Discovers, and Simulates AI Threats 34:02 - What Ideal Customers Ask For When Securing AI 38:01 - Beyond Guardrails: Mend's Guide Rails for In-Code Mitigation 41:49 - Multi-Agent Systems Are the Next Security Nightmare 45:47 - Final Advice for CISOs: Enable, Don't Disable Developers
Become a Member: https://danielmiessler.com/upgrade
See omnystudio.com/listener for privacy information.

Telecom Reseller
Koshee Protect: Real-Time AI Security Without Compromising Privacy, Podcast

Telecom Reseller

Play Episode Listen Later May 5, 2025


This podcast is a part of a collection of podcasts recorded at ISC West 2025 and previously shared on social media. “We don't look for faces—we look for behaviors.” — Corbin Uselton, Koshee Security, speaking with Doug Green at ISC West 2025 At ISC West 2025, Technology Reseller News publisher Doug Green met with Corbin Uselton of Koshee Security to explore how the company is using AI to elevate surveillance while respecting privacy. Koshee's flagship product, Koshee Protect, uses on-site AI detection to monitor security camera feeds in real time—identifying suspicious behaviors such as theft, weapon detection, perimeter breaches, and more. “We're not doing facial recognition,” Uselton emphasized. “We're focused on behaviors—jumping a fence, concealing an item, or pulling a weapon—not identities.” The system is designed for a range of customers, from small retailers and gas stations to global logistics companies. It works by processing video locally, maintaining privacy compliance while sending immediate alerts with image frames when predefined threats or behaviors are detected. The platform is highly configurable, allowing users to set different detection parameters by camera, location, and time of day. Koshee also provides role-based alerts, enabling specific employees to receive notifications depending on the context—such as detecting weapons in restricted zones or after-hours movement on remote sites. With integration options for both direct customers and channel partners (like MDI), Koshee is enabling smarter, more responsive security without compromising data ethics. To learn more, visit koshee.ai.
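To picture the kind of policy Uselton describes, here is a minimal Python sketch of behavior-based detection rules scoped by camera and time of day, with role-based alert routing. Everything in it (camera ids, rule names, fields, roles) is a hypothetical illustration for this write-up, not Koshee's actual API or configuration schema.

from datetime import time

# Illustrative only: behavior-based detection rules keyed by camera, each with
# an optional active-hours window and a list of roles to notify. These names
# and fields are assumptions for the sketch, not Koshee's real schema.
DETECTION_POLICY = {
    "loading-dock-cam-2": [
        {"behavior": "perimeter_breach", "active": (time(20, 0), time(6, 0)), "notify": ["site-security"]},
        {"behavior": "weapon_detected", "active": None, "notify": ["site-security", "store-manager"]},  # None = always on
    ],
    "register-cam-1": [
        {"behavior": "item_concealment", "active": (time(8, 0), time(22, 0)), "notify": ["loss-prevention"]},
    ],
}

def roles_to_alert(camera, behavior, now):
    """Return the roles to notify for a detected behavior on a given camera."""
    for rule in DETECTION_POLICY.get(camera, []):
        if rule["behavior"] != behavior:
            continue
        window = rule["active"]
        if window is None:
            return rule["notify"]
        start, end = window
        # Handle windows that wrap past midnight (e.g. 20:00-06:00).
        in_window = start <= now <= end if start <= end else (now >= start or now <= end)
        if in_window:
            return rule["notify"]
    return []

# An item concealed at 23:30 on the register camera alerts no one (outside its
# 08:00-22:00 window); a perimeter breach on the dock camera at the same hour
# notifies site security.
print(roles_to_alert("register-cam-1", "item_concealment", time(23, 30)))      # []
print(roles_to_alert("loading-dock-cam-2", "perimeter_breach", time(23, 30)))  # ['site-security']

The design point the sketch tries to capture is the one the interview stresses: detection is keyed to behaviors and contexts rather than identities, so no facial recognition or identity data is needed for the alerts to be useful.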

Telemetry Now
Telemetry News Now: Palo Alto Boosts AI Security, Qwen 3 Released, Llama API, Spanish Internet Outage, IPv6 Making Headway

Telemetry Now

Play Episode Listen Later May 1, 2025 25:30


In this episode, Phil Gervasi and Justin Ryburn cover major developments in AI and networking, including Palo Alto Networks' $650M push into AI security, Alibaba's release of Qwen 3, and Meta's new Llama API. They also discuss Microsoft's AI-generated code stats, Asia's IPv6 milestone, and the massive Iberian power outage that disrupted internet traffic across multiple countries.

ITSPmagazine | Technology. Cybersecurity. Society
Building Trust Through AI and Software Transparency: The Real Value of SBOMs and AISBOMs | An RSAC Conference 2025 Conversation with Helen Oakley and Dmitry Raidman | On Location Coverage with Sean Martin and Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 30, 2025 19:37


Helen Oakley, Senior Director of Product Security at SAP, and Dmitry Raidman, Co-founder and CTO of Cybeats, joined us live at the RSAC Conference to bring clarity to one of the most urgent topics in cybersecurity: transparency in the software and AI supply chain. Their message is direct—organizations not only need to understand what's in their software, they need to understand the origin, integrity, and impact of those components, especially as artificial intelligence becomes more deeply integrated into business operations.
SBOMs Are Not Optional Anymore
Software Bills of Materials (SBOMs) have long been a recommended best practice, but they're now reaching a point of necessity. As Dmitry noted, organizations are increasingly requiring SBOMs before making purchase decisions—“If you're not going to give me an SBOM, I'm not going to buy your product.” With regulatory pressure mounting through frameworks like the EU Cyber Resilience Act (CRA), the demand for transparency is being driven not just by compliance, but by real operational value. Companies adopting SBOMs are seeing tangible returns—saving hundreds of hours on risk analysis and response, while also improving internal visibility.
Bringing AI into the SBOM Fold
But what happens when the software includes AI models, data pipelines, and autonomous agents? Helen and Dmitry are leading a community-driven initiative to create AI-specific SBOMs—referred to as AI SBOMs or AISBOMs—to capture critical metadata beyond just the code. This includes model architectures, training data, energy consumption, and more. These elements are vital for risk management, especially when organizations may be unknowingly deploying models with embedded vulnerabilities or opaque dependencies.
A Tool for the Community, Built by the Community
In an important milestone for the industry, Helen and Dmitry also introduced the first open source tool capable of generating CycloneDX-formatted AISBOMs for models hosted on Hugging Face. This practical step bridges the gap between standards and implementation—helping organizations move from theoretical compliance to actionable insight. The community's response has been overwhelmingly positive, signaling a clear demand for tools that turn complexity into clarity.
Why Security Leaders Should Pay Attention
The real value of an SBOM—whether for software or AI—is not just external compliance. It's about knowing what you have, recognizing your crown jewels, and understanding where your risks lie. As AI compounds existing vulnerabilities and introduces new ones, starting with transparency is no longer a suggestion—it's a strategic necessity.
Want to see how this all fits together?
Hear it directly from Helen and Dmitry in this episode.___________Guests: Helen Oakley, Senior Director of Product Security at SAP | https://www.linkedin.com/in/helen-oakley/Dmitry Raidman, Co-founder and CTO of Cybeats | https://www.linkedin.com/in/draidman/Hosts:Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.comMarco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com___________Episode SponsorsThreatLocker: https://itspm.ag/threatlocker-r974Akamai: https://itspm.ag/akamailbwcBlackCloak: https://itspm.ag/itspbcwebSandboxAQ: https://itspm.ag/sandboxaq-j2enArcher: https://itspm.ag/rsaarchwebDropzone AI: https://itspm.ag/dropzoneai-641ISACA: https://itspm.ag/isaca-96808ObjectFirst: https://itspm.ag/object-first-2gjlEdera: https://itspm.ag/edera-434868___________ResourcesLinkedIn Post with Links: https://www.linkedin.com/posts/helen-oakley_ai-sbom-aisbom-activity-7323123172852015106-TJeaLearn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsa-conference-usa-2025-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverage______________________KEYWORDShelen oakley, dmitry raidman, sean martin, rsac 2025, sbom, aisbom, ai security, software supply chain, transparency, open source, event coverage, on location, conference______________________Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to tell your Brand Story Briefing as part of our event coverage? Learn More
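To make the AISBOM idea above a little more concrete, here is a minimal Python sketch of the kind of CycloneDX-style JSON document an AI SBOM could contain for a Hugging Face-hosted model. The top-level layout (bomFormat, specVersion, components) follows CycloneDX, and recent CycloneDX versions do define a "machine-learning-model" component type and a modelCard block, but the specific model name, dataset, and property names below are hypothetical illustrations, not output from the open source tool Helen and Dmitry introduced.

import json

# A hand-rolled, minimal CycloneDX-style AI SBOM ("AISBOM"). Treat the exact
# fields as illustrative assumptions rather than a schema-validated document.
aisbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-org/example-llm",  # hypothetical Hugging Face repo id
            "version": "main",
            "modelCard": {
                "modelParameters": {
                    "task": "text-generation",
                    "architectureFamily": "transformer",
                    "datasets": [{"type": "dataset", "name": "example-training-corpus"}],
                }
            },
            # Free-form properties can carry metadata the core schema does not
            # model explicitly, such as training energy consumption.
            "properties": [{"name": "training-energy-kwh", "value": "unknown"}],
        }
    ],
}

print(json.dumps(aisbom, indent=2))

Even a skeleton like this makes the episode's point tangible: once the model, its training data, and its provenance are named in a machine-readable document, they can be inventoried, compared, and risk-assessed like any other software component.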

Cybercrime Magazine Podcast
Microcast: AI + Security: The Past, Present, and Future. A Documentary.

Cybercrime Magazine Podcast

Play Episode Listen Later Apr 28, 2025 3:33


Artificial Intelligence is everywhere. Seemingly overnight, the technology has transitioned from a sci-fi concept to a foundational pillar of modern business. While new developments are brimming with extraordinary potential, we can't ignore the looming shadow of unexpected risks. A new mini-documentary, produced by Cybercrime Magazine, and sponsored by Applied Quantum and Secure Quantum, features tech industry icons and experts who share insights around the past, present, and future of AI and security. This audio-only microcast is a preview. To watch the full documentary, visit https://www.youtube.com/watch?v=StI0tJFgU2o.

#ShiftHappens Podcast
Ep. 101: Scaling Smarter: How MSPs Can Leverage AI, Security, and Strategic Partnerships

#ShiftHappens Podcast

Play Episode Listen Later Apr 24, 2025 52:27


The Managed Service Provider (MSP) industry is undergoing a major shift as AI, automation, and cybersecurity redefine business operations. In this #shifthappens episode, Jorn Wittendorp, Founder of Ydentic, and Mario Carvajal, Chief Strategy and Marketing Officer at AvePoint, discuss the Ydentic-AvePoint acquisition, the trends affecting the industry, and strategies for MSPs to stay ahead.

Paul's Security Weekly
ISO 42001 Certification, CIOs Struggle to Align Strategies, and CISOs Rethink Hiring - Martin Tschammer - BSW #392

Paul's Security Weekly

Play Episode Listen Later Apr 23, 2025 63:55


AI Governance, the next frontier for AI Security. But what framework should you use? ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems. But how do you get certified? What does the certification process look like? Martin Tschammer, Head of Security at Synthesia, joins Business Security Weekly to share his ISO 42001 certification journey. From corporate culture to the witness audit, Martin walks us through the certification process and the benefits his team has gained from the certification. If you're considering ISO 42001 certification, this interview is a must-see. In the leadership and communications section: Are 2 CEOs Better Than 1? Here Are The Benefits and Drawbacks You Must Consider; CISOs rethink hiring to emphasize skills over degrees and experience; Why Clear Executive Communication Is a Silent Driver of Organizational Success; and more! Visit https://www.securityweekly.com/bsw for all the latest episodes! Show Notes: https://securityweekly.com/bsw-392

Cyber Security Today
Cybersecurity Today: Virtual Employees, AI Security Agents, and CVE Program Updates

Cyber Security Today

Play Episode Listen Later Apr 23, 2025 7:47 Transcription Available


In this episode of 'Cybersecurity Today,' host Jim Love discusses various pressing topics in the realm of cybersecurity. Highlights include Anthropic's prediction on AI-powered virtual employees and their potential security risks, Microsoft's introduction of AI security agents to mitigate workforce gaps and analyst burnout, and a pivotal court ruling allowing a data privacy class action against Shopify to proceed in California. Additionally, the show covers the last-minute extension of funding for the Common Vulnerabilities and Exposures (CVE) program by the US Cybersecurity and Infrastructure Security Agency, averting a potential crisis in cybersecurity coordination. These discussions underscore the evolving challenges and solutions within the cybersecurity landscape. 00:00 Introduction and Overview 00:26 AI Employees: Opportunities and Risks 01:48 Microsoft's AI Security Agents 03:58 Shopify's Legal Battle Over Data Privacy 05:12 CVE Program's Funding Crisis Averted 07:24 Conclusion and Contact Information

ITSPmagazine | Technology. Cybersecurity. Society
Quantum Security, Real Problems, and the Unifying Layer Behind It All | A Brand Story Conversation with Marc Manzano, General Manager of the Cybersecurity Group at SandboxAQ | A RSAC Conference 2025 Brand Story Pre-Event Conversation

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 21, 2025 9:31


We're on the road to RSAC 2025 — or maybe on a quantum-powered highway — and this time, Sean and I had the pleasure of chatting with someone who's not just riding the future wave, but actually building it. Marc Manzano, General Manager of the Cybersecurity Group at SandboxAQ, joined us for this Brand Story conversation ahead of the big conference in San Francisco. For those who haven't heard of SandboxAQ yet, here's a quick headline: they're a spin-out from Google, operating at the intersection of AI and quantum technologies. Yes — that intersection.
But let's keep our feet on the ground for a second, because this story isn't just about tech that sounds cool. It's about solving the very real, very painful problems that security teams face every day. Marc laid out their mission clearly: Active Guard, their flagship platform, is built to simplify and modernize two massive pain points in enterprise security — cryptographic asset management and non-human identity management. Think: rotating certificates without manual effort. Managing secrets and keys across cloud-native infrastructure. Automating compliance reporting for quantum-readiness. No fluff — just value, right out of the box.
And it's not just about plugging a new tool into your already overloaded stack. What impressed us is how SandboxAQ sees themselves as the unifying layer — enhancing interoperability across existing systems, extracting more intelligence from the tools you already use, and giving teams a unified view through a single pane of glass. And yes, we also touched on AI SecOps — because as AI becomes a standard part of infrastructure, so must security for it. Active Guard is already poised to give security teams visibility and control over this evolving layer.
Want to see it in action? Booth 6578, North Expo Hall. Swag will be there. Demos will be live. Conversations will be real. We'll be there too — recording a deeper Brand Story episode On Location during the event. Until then, enjoy this preview — and get ready to meet the future of cybersecurity.
⸻
Keywords: sandboxaq, active guard, rsa conference 2025, quantum cybersecurity, ai secops, cryptographic asset management, non-human identity, cybersecurity automation, security compliance, rsa 2025, cybersecurity innovation, certificate lifecycle management, secrets management, security operations, quantum readiness, rsa sandbox, cybersecurity saas, devsecops, interoperability, digital transformation
______________________
Guest: Marc Manzano, General Manager of the Cybersecurity Group at SandboxAQ
Marc Manzano on LinkedIn

ITSPmagazine | Technology. Cybersecurity. Society
Why “Permit by Exception” Might Be the Key to Business Resilience | A Brand Story with Rob Allen, Chief Product Officer at ThreatLocker | A RSAC Conference 2025 Brand Story Pre-Event Conversation

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 21, 2025 18:58


At this year's RSAC Conference, the team from ThreatLocker isn't just bringing tech—they're bringing a challenge. Rob Allen, Chief Product Officer at ThreatLocker, joins Sean Martin and Marco Ciappelli for a lively pre-conference episode that previews what attendees can expect at booth #854 in the South Expo Hall.
From rubber ducky hacks to reframing how we think about Zero Trust, the conversation highlights the ways ThreatLocker moves beyond the industry's typical focus on reactive detection. Allen shares how most cybersecurity approaches still default to allowing access unless a threat is known, and why that mindset continues to leave organizations vulnerable. Instead, ThreatLocker's philosophy is to “deny by default and permit by exception”—a strategy that, when managed effectively, provides maximum protection without slowing down business operations.
ThreatLocker's presence at the conference will feature live demos, short presentations, and hands-on challenges—including their popular Ducky Challenge, where participants test whether their endpoint defenses can prevent a rogue USB (disguised as a keyboard) from stealing their data. If your system passes, you win the rubber ducky. If it doesn't? They (temporarily) get your data. It's a simple but powerful reminder that what you think is secure might not be.
The booth won't just be about tech. The team is focused on conversations—reconnecting with customers, engaging new audiences, and exploring how the community is responding to a threat landscape that's growing more sophisticated by the day. Allen emphasizes the importance of in-person dialogue, not only to share what ThreatLocker is building but to learn how security leaders are adapting and where gaps still exist.
And yes, there will be merch—high-quality socks, t-shirts, and even a few surprise giveaways dropped at hotel doors (if you resist the temptation to open the envelope before visiting the booth).
For those looking to rethink endpoint protection or better understand how proactive controls can complement detection-based tools, this episode is your preview into a very different kind of cybersecurity conversation—one that starts with a challenge and ends with community.
Learn more about ThreatLocker: https://itspm.ag/threatlocker-r974
Guest: Rob Allen, Chief Product Officer, ThreatLocker | https://www.linkedin.com/in/threatlockerrob/
Resources
Learn more and catch more stories from ThreatLocker: https://www.itspmagazine.com/directory/threatlocker
Learn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsa-conference-usa-2025-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverage
______________________
Keywords: rsac conference, cybersecurity, endpoint, zero trust, rubber ducky, threat detection, data exfiltration, security strategy, deny by default, permit by exception, proactive security, security demos, usb attack, cyber resilience, network control, security mindset, rsac 2025, event coverage, on location, conference
____________________________
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcast
To see and hear more Redefining Society stories on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-society-podcast
Want to tell your Brand Story Briefing as part of our event coverage? Learn More

ITSPmagazine | Technology. Cybersecurity. Society
AI, Security, and the Hybrid World: Akamai's Vision for RSAC 2025 With Rupesh Chokshi, SVP & GM Application Security Akamai | A RSAC Conference 2025 Brand Story Pre-Event Conversation

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 18, 2025 21:50


The RSA Conference has long served as a meeting point for innovation and collaboration in cybersecurity—and in this pre-RSAC episode, ITSPmagazine co-founders Marco Ciappelli and Sean Martin welcome Akamai's Rupesh Chokshi to the conversation. With RSAC 2025 on the horizon, they discuss Akamai's presence at the event and dig into the challenges and opportunities surrounding AI, threat intelligence, and enterprise security.
Chokshi, who leads Akamai's Application Security business, describes a landscape marked by explosive growth in web and API attacks—and a parallel shift as enterprises embrace generative AI. The double-edged nature of AI is central to the discussion: while it offers breakthrough productivity and automation, it also creates new vulnerabilities. Akamai's dual focus, says Chokshi, is both using AI to strengthen defenses and securing AI-powered applications themselves.
The conversation touches on the scale and sophistication of modern threats, including an eye-opening stat: Akamai is now tracking over 500 million large language model (LLM)-driven scraping requests per day. As these threats extend from e-commerce to healthcare and beyond, Chokshi emphasizes the need for layered defense strategies and real-time adaptability.
Ciappelli brings a sociological lens to the AI discussion, noting the hype-to-reality shift the industry is experiencing. “We're no longer asking if AI will change the game,” he suggests. “We're asking how to implement it responsibly—and how to protect it.”
At RSAC 2025, Akamai will showcase a range of innovations, including updates to its Guardicore platform and new App & API Protection Hybrid solutions. Their booth (6245) will feature interactive demos, theater sessions, and one-on-one briefings. The Akamai team will also release a new edition of their State of the Internet report, packed with actionable threat data and insights.
The episode closes with a reminder: in a world that's both accelerating and fragmenting, cybersecurity must serve not just as a barrier—but as a catalyst.
“Security,” says Chokshi, “has to enable innovation, not hinder it.”⸻Keywords: RSAC 2025, Akamai, cybersecurity, generative AI, API protection, web attacks, application security, LLM scraping, Guardicore, State of the Internet report, Zero Trust, hybrid digital world, enterprise resilience, AI security, threat intelligence, prompt injection, data privacy, RSA Conference, Sean Martin, Marco Ciappelli______________________Guest: Rupesh Chokshi, SVP & GM, Akamai https://www.linkedin.com/in/rupeshchokshi/Hosts:Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine:  https://www.itspmagazine.com/sean-martinMarco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast & Audio Signals Podcast | On ITSPmagazine: https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli____________________________This Episode's SponsorsAKAMAI:https://itspm.ag/akamailbwc____________________________ResourcesLearn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsa-conference-usa-2025-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverageRupesh Chokshi Session at RSAC 2025The New Attack Frontier: Research Shows Apps & APIs Are the Targets - [PART1-W09]____________________________Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageTo see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcastTo see and hear more Redefining Society stories on ITSPmagazine, visit:https://www.itspmagazine.com/redefining-society-podcastWant to tell your Brand Story Briefing as part of our event coverage? Learn More

Govcon Giants Podcast
They Solved the #1 AI Security Threat the Government Couldn't Fix!

Govcon Giants Podcast

Play Episode Listen Later Apr 7, 2025 8:07


In this episode, we have an intriguing conversation with Jim and Jerry. We discuss the challenges and innovative solutions in the realm of artificial intelligence (AI) and software development. Discover how this innovative approach opens doors for professionals from various fields to contribute to AI and no-code development efforts. Tune in to this captivating episode and learn how these cutting-edge technologies are transforming the landscape of business and technology. Don't miss out on this episode of The Daily Windup, where you'll find insights, inspiration, and practical applications in under 10 minutes!

The Lawfare Podcast
Lawfare Daily: Alexandra Reeve Givens, Courtney Lang, and Nema Milaninia on the Paris AI Summit and the Pivot to AI Security

The Lawfare Podcast

Play Episode Listen Later Feb 25, 2025 48:17


Alexandra Reeve Givens, CEO of the Center for Democracy & Technology; Courtney Lang, Vice President of Policy for Trust, Data, and Technology at ITI and a Non-Resident Senior Fellow at the Atlantic Council GeoTech Center; and Nema Milaninia, a partner on the Special Matters & Government Investigations team at King & Spalding, join Kevin Frazier, Contributing Editor at Lawfare and Adjunct Professor at Delaware Law, to discuss the Paris AI Action Summit and whether it marks a formal pivot away from AI safety to AI security and, if so, what an embrace of AI security means for domestic and international AI governance.
We value your feedback! Help us improve by sharing your thoughts at lawfaremedia.org/survey. Your input ensures that we deliver what matters most to you. Thank you for your support—and, as always, for listening!
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.