Podcasts about ai security

  • 199PODCASTS
  • 320EPISODES
  • 38mAVG DURATION
  • 5WEEKLY NEW EPISODES
  • May 27, 2025LATEST

POPULARITY

20172018201920202021202220232024


Best podcasts about ai security

Latest podcast episodes about ai security

Cloud Security Podcast
Securing AI: Threat Modeling & Detection

Cloud Security Podcast

Play Episode Listen Later May 27, 2025 37:32


Is Artificial Intelligence the ultimate security dragon, we need to slay, or a powerful ally we must train? Recorded LIVE at BSidesSF, this special episode dives headfirst into the most pressing debates around AI security.Join host Ashish Rajan as he navigates the complex landscape of AI threats and opportunities with two leading experts:Jackie Bow (Anthropic): Championing the "How to Train Your Dragon" approach, Jackie reveals how we can leverage AI, and even its 'hallucinations,' for advanced threat detection, response, and creative security solutions.Kane Narraway (Canva): Taking the "Knight/Wizard" stance, Kane illuminates the critical challenges in securing AI systems, understanding the new layers of risk, and the complexities of AI threat modeling.

Cloud Security Podcast
CYBERSECURITY for AI: The New Threat Landscape & How Do We Secure It?

Cloud Security Podcast

Play Episode Listen Later May 20, 2025 40:43


As Artificial Intelligence reshapes our world, understanding the new threat landscape and how to secure AI-driven systems is more crucial than ever. We spoke to Ankur Shah, Co-Founder and CEO of Straiker about navigating this rapidly evolving frontier.In this episode, we unpack the complexities of securing AI, from the fundamental shifts in application architecture to the emerging attack vectors. Discover why Ankur believes "you can only secure AI with AI" and how organizations can prepare for a future where "your imagination is the new limit," but so too are the potential vulnerabilities.Guest Socials - ⁠Ankur's LinkedinPodcast Twitter - ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠@CloudSecPod⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security Podcast- Youtube⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠- ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security Newsletter ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠- ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security BootCamp⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠If you are interested in AI Cybersecurity, you can check out our sister podcast -⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ AI Cybersecurity PodcastQuestions asked:(00:00) Introduction(00:30) Meet Ankur Shah (CEO, Straiker)(01:54) Current AI Deployments in Organizations (Copilots & Agents)(04:48) AI vs. Traditional Security: Why Old Methods Fail for AI Apps(07:07) AI Application Types: Native, Immigrant & Explorer Explained(10:49) AI's Impact on the Evolving Cyber Threat Landscape(17:34) Ankur Shah on Core AI Security Principles (Visibility, Governance, Guardrails)(22:26) The AI Security Vendor Landscape (Acquisitions & Startups)(24:20) Current AI Security Practices in Organizations: What's Working?(25:42) AI Security & Hyperscalers (AWS, Azure, Google Cloud): Pros & Cons(26:56) What is AI Inference? Explained for Cybersecurity Pros(33:51) Overlooked AI Attack Surfaces: Hidden Risks in AI Security(35:12) How to Uplift Your Security Program for AI(37:47) Rapid Fire: Fun Questions with Ankur ShahThank you to this episode's sponsor - ⁠Straiker.ai

Cloud Security Podcast by Google
EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams

Cloud Security Podcast by Google

Play Episode Listen Later May 19, 2025 24:39


Guest: Christine Sizemore, Cloud Security Architect, Google Cloud  Topics: Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain?  I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain?  We like to say that history might not repeat itself but it does rhyme – what are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains? We've talked a lot about technology and process–what are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development?  We are all hearing about agentic security – so can we just ask the AI to secure itself?  Top 3 things to do to secure AI software supply chain for a typical org?   Resources: Video “Securing AI Supply Chain: Like Software, Only Not” blog (and paper) “Securing the AI software supply chain” webcast EP210 Cloud Security Surprises: Real Stories, Real Lessons, Real "Oh No!" Moments Protect AI issue database “Staying on top of AI Developments”  “Office of the CISO 2024 Year in Review: AI Trust and Security” “Your Roadmap to Secure AI: A Recap” (2024) "RSA 2025: AI's Promise vs. Security's Past — A Reality Check" (references our "data as code" presentation)

Cloud Security Podcast by Google
EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps

Cloud Security Podcast by Google

Play Episode Listen Later May 12, 2025 30:40


Guest: Diana Kelley, CSO at Protect AI  Topics: Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right? What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better  when you do it? How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we? In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance? How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy? What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?  Top differences between LLM/chatbot AI security vs AI agent security?  Resources: “Airline held liable for its chatbot giving passenger bad advice - what this means for travellers” “ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem' Forever” Secure by Design for AI by Protect AI “Securing AI Supply Chain: Like Software, Only Not” OWASP Top 10 for Large Language Model Applications OWASP Top 10 for AI Agents  (draft) MITRE ATLAS “Demystifying AI Security: New Paper on Real-World SAIF Applications” (and paper) LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes

CIO Classified
Why the Smartest CIOs Are Becoming Business Strategists with Eric Johnson of PagerDuty

CIO Classified

Play Episode Listen Later May 8, 2025 42:18


Eric Johnson, CIO at PagerDuty, shares why today's most impactful CIOs are evolving into strategic business leaders. He explains how AI is driving a fundamental shift in how IT organizations operate—moving from reactive support functions to proactive, value-creating business enablers.About the Guest: Eric Johnson is the Chief Information Officer at PagerDuty, responsible for PagerDuty's critical IT infrastructure, data management and enterprise systems. Prior to joining PagerDuty, he was the CIO at SurveyMonkey, DocuSign and Talend. Before that, Eric spent 12 years at Informatica driving the information technology vision and strategy as the company scaled to a modern SaaS architecture. He is an active advisor and board member to several early-stage companies and a regular contributor to IT thought leadership.Timestamps:*(05:20) -  Embrace shadow IT and AI tools*(18:40) -  Changing role of the CIO*(30:00) -  Security and cybersecurity awareness*(33:35) -   Future of automation and AIGuest Highlights:“In the CIO org, they need to be business experts as much as the partners that they work with… because AI and the use of it and finding those high value use cases, it's gonna take folks in the CIO org to be a lot more knowledgeable about how the company operates and processes.”“Obviously, certain roles are going to change much more than others, but I think across the board, roles are going to change.”“As these changes come, how do you reorient the organization—the humans in the organization—to be able to find that higher value work?”Get Connected:Eric Johnson on LinkedInIan Faison on LinkedInResources:Learn more about PagerDuty: www.pagerduty.comHungry for more tech talk? Check out these past episodes:Ep 59 - CIO Leadership in AI Security and InnovationEp 58 - AI-Driven Workplace TransformationEp 57 - The CIO Roadmap to Executive LeadershipLearn more about Caspian Studios: caspianstudios.comCan't get enough AI? Check out The New Automation Mindset Podcast for more in-depth conversations about strategies leadership in AI, automation, and orchestration. Brought to you by the automation experts at Workato. Start Listening: www.workato.com/podcast

Unsupervised Learning
A Conversation with Bar-El Tayouri from Mend.io

Unsupervised Learning

Play Episode Listen Later May 6, 2025 45:53 Transcription Available


➡ Get full visibility, risk insights, red teaming, and governance for your AI models, AI agents, RAGs, and more—so you can securely deploy AI powered applications with ul.live/mend In this episode, I speak with Bar-El Tayouri, Head of AI Security at Mend.io, about the rapidly evolving landscape of application and AI security—especially as multi-agent systems and fuzzy interfaces redefine the attack surface. We talk about: • Modern AppSec Meets AI Agents How traditional AppSec falls short when it comes to AI-era components like agents, MCP servers, system prompts, and model artifacts—and why security now depends on mapping, monitoring, and understanding this entire stack. • Threat Discovery, Simulation, and Mitigation How Mend’s AI security suite identifies unknown AI usage across an org, simulates dynamic attacks (like prompt injection via PDFs), and provides developers with precise, in-code guidance to reduce risk without slowing innovation. • Why We’re Rethinking Identity, Risk, and GovernanceWhy securing AI systems isn’t just about new threats—it’s about re-implementing old lessons: identity access, separation of duties, and system modeling. And why every CISO needs to integrate security into the dev workflow instead of relying on blunt-force blocking. Subscribe to the newsletter at:https://danielmiessler.com/subscribe Join the UL community at:https://danielmiessler.com/upgrade Follow on X:https://x.com/danielmiessler Follow on LinkedIn:https://www.linkedin.com/in/danielmiessler Chapters: 00:00 - From Game Hacking to AI Security: Barel’s Tech Journey03:51 - Why Application Security Is Still the Most Exciting Challenge04:39 - The Real AppSec Bottleneck: Prioritization, Not Detection06:25 - Explosive Growth of AI Components Inside Applications12:48 - Why MCP Servers Are a Massive Blind Spot in AI Security15:02 - Guardrails Aren’t Keeping Up With Agent Power16:15 - Why AI Security Is Maturing Faster Than Previous Tech Waves20:59 - Traditional AppSec Tools Can’t Handle AI Risk Detection26:01 - How Mend Maps, Discovers, and Simulates AI Threats34:02 - What Ideal Customers Ask For When Securing AI38:01 - Beyond Guardrails: Mend’s Guide Rails for In-Code Mitigation41:49 - Multi-Agent Systems Are the Next Security Nightmare45:47 - Final Advice for CISOs: Enable, Don’t Disable DevelopersBecome a Member: https://danielmiessler.com/upgradeSee omnystudio.com/listener for privacy information.

Telecom Reseller
Koshee Protect: Real-Time AI Security Without Compromising Privacy, Podcast

Telecom Reseller

Play Episode Listen Later May 5, 2025


This podcast is a part of a collection of podcasts recorded at ISC West 2025 and previously shared on social media. “We don't look for faces—we look for behaviors.” — Corbin Uselton, Koshee Security, speaking with Doug Green at ISC West 2025 At ISC West 2025, Technology Reseller News publisher Doug Green met with Corbin Uselton of Koshee Security to explore how the company is using AI to elevate surveillance while respecting privacy. Koshee's flagship product, Koshee Protect, uses on-site AI detection to monitor security camera feeds in real time—identifying suspicious behaviors such as theft, weapon detection, perimeter breaches, and more. “We're not doing facial recognition,” Uselton emphasized. “We're focused on behaviors—jumping a fence, concealing an item, or pulling a weapon—not identities.” The system is designed for a range of customers, from small retailers and gas stations to global logistics companies. It works by processing video locally, maintaining privacy compliance while sending immediate alerts with image frames when predefined threats or behaviors are detected. The platform is highly configurable, allowing users to set different detection parameters by camera, location, and time of day. Koshee also provides role-based alerts, enabling specific employees to receive notifications depending on the context—such as detecting weapons in restricted zones or after-hours movement on remote sites. With integration options for both direct customers and channel partners (like MDI), Koshee is enabling smarter, more responsive security without compromising data ethics. To learn more, visit koshee.ai.

Telemetry Now
Telemetry News Now: Palo Alto Boosts AI Security, Qwen 3 Released, Llama API, Spanish Internet Outage, IPv6 Making Headway

Telemetry Now

Play Episode Listen Later May 1, 2025 25:30


In this episode, Phil Gervasi and Justin Ryburn cover major developments in AI and networking, including Palo Alto Networks' $650M push into AI security, Alibaba's release of Qwen 3, and Meta's new Llama API. They also discuss Microsoft's AI-generated code stats, Asia's IPv6 milestone, and the massive Iberian power outage that disrupted internet traffic across multiple countries.

ITSPmagazine | Technology. Cybersecurity. Society
Building Trust Through AI and Software Transparency: The Real Value of SBOMs and AISBOMs | An RSAC Conference 2025 Conversation with Helen Oakley and Dmitry Raidman | On Location Coverage with Sean Martin and Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 30, 2025 19:37


Helen Oakley, Senior Director of Product Security at SAP, and Dmitry Raidman, Co-founder and CTO of Cybeats, joined us live at the RSAC Conference to bring clarity to one of the most urgent topics in cybersecurity: transparency in the software and AI supply chain. Their message is direct—organizations not only need to understand what's in their software, they need to understand the origin, integrity, and impact of those components, especially as artificial intelligence becomes more deeply integrated into business operations.SBOMs Are Not Optional AnymoreSoftware Bills of Materials (SBOMs) have long been a recommended best practice, but they're now reaching a point of necessity. As Dmitry noted, organizations are increasingly requiring SBOMs before making purchase decisions—“If you're not going to give me an SBOM, I'm not going to buy your product.” With regulatory pressure mounting through frameworks like the EU Cyber Resilience Act (CRA), the demand for transparency is being driven not just by compliance, but by real operational value. Companies adopting SBOMs are seeing tangible returns—saving hundreds of hours on risk analysis and response, while also improving internal visibility.Bringing AI into the SBOM FoldBut what happens when the software includes AI models, data pipelines, and autonomous agents? Helen and Dmitry are leading a community-driven initiative to create AI-specific SBOMs—referred to as AI SBOMs or AISBOMs—to capture critical metadata beyond just the code. This includes model architectures, training data, energy consumption, and more. These elements are vital for risk management, especially when organizations may be unknowingly deploying models with embedded vulnerabilities or opaque dependencies.A Tool for the Community, Built by the CommunityIn an important milestone for the industry, Helen and Dmitry also introduced the first open source tool capable of generating CycloneDX-formatted AISBOMs for models hosted on Hugging Face. This practical step bridges the gap between standards and implementation—helping organizations move from theoretical compliance to actionable insight. The community's response has been overwhelmingly positive, signaling a clear demand for tools that turn complexity into clarity.Why Security Leaders Should Pay AttentionThe real value of an SBOM—whether for software or AI—is not just external compliance. It's about knowing what you have, recognizing your crown jewels, and understanding where your risks lie. As AI compounds existing vulnerabilities and introduces new ones, starting with transparency is no longer a suggestion—it's a strategic necessity.Want to see how this all fits together? Hear it directly from Helen and Dmitry in this episode.___________Guests: Helen Oakley, Senior Director of Product Security at SAP | https://www.linkedin.com/in/helen-oakley/Dmitry Raidman, Co-founder and CTO of Cybeats | https://www.linkedin.com/in/draidman/Hosts:Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.comMarco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com___________Episode SponsorsThreatLocker: https://itspm.ag/threatlocker-r974Akamai: https://itspm.ag/akamailbwcBlackCloak: https://itspm.ag/itspbcwebSandboxAQ: https://itspm.ag/sandboxaq-j2enArcher: https://itspm.ag/rsaarchwebDropzone AI: https://itspm.ag/dropzoneai-641ISACA: https://itspm.ag/isaca-96808ObjectFirst: https://itspm.ag/object-first-2gjlEdera: https://itspm.ag/edera-434868___________ResourcesLinkedIn Post with Links: https://www.linkedin.com/posts/helen-oakley_ai-sbom-aisbom-activity-7323123172852015106-TJeaLearn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsa-conference-usa-2025-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverage______________________KEYWORDShelen oakley, dmitry raidman, sean martin, rsac 2025, sbom, aisbom, ai security, software supply chain, transparency, open source, event coverage, on location, conference______________________Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to tell your Brand Story Briefing as part of our event coverage? Learn More

Microsoft Mechanics Podcast
Protect AI apps with Microsoft Defender

Microsoft Mechanics Podcast

Play Episode Listen Later Apr 29, 2025 15:00 Transcription Available


Stay in control with Microsoft Defender. You can identify which AI apps and cloud services are in use across your environment, evaluate their risk levels, and allow or block them as needed—all from one place. Whether it's a sanctioned tool or a shadow AI app, you're equipped to set the right policies and respond fast to emerging threats. Defender XDR gives you the visibility to track complex attack paths—linking signals across endpoints, identities, and cloud apps. Investigate real-time alerts, protect sensitive data from misuse in AI tools like Copilot, and enforce controls even for in-house developed apps using system prompts and Azure AI Foundry. Rob Lefferts, Microsoft Security CVP, joins Jeremy Chapman to share how you can safeguard your AI-powered environment with a unified security approach. ► QUICK LINKS: 00:00 - Stay in control with Microsoft Defender 00:39 - Identify and protect AI apps 02:04 - View cloud apps and website in use 04:14 - Allow or block cloud apps 07:14 - Address security risks of internally developed apps 08:44 - Example in-house developed app 09:40 - System prompt 10:39 - Controls in Azure AI Foundry 12:28 - Defender XDR 14:19 - Wrap up ► Link References Get started at https://aka.ms/ProtectAIapps ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics   

Cybercrime Magazine Podcast
Microcast: AI + Security: The Past, Present, and Future. A Documentary.

Cybercrime Magazine Podcast

Play Episode Listen Later Apr 28, 2025 3:33


Artificial Intelligence is everywhere. Seemingly overnight, the technology has transitioned from a sci-fi concept to a foundational pillar of modern business. While new developments are brimming with extraordinary potential, we can't ignore the looming shadow of unexpected risks. A new mini-documentary, produced by Cybercrime Magazine, and sponsored by Applied Quantum and Secure Quantum, features tech industry icons and experts who share insights around the past, present, and future of AI and security. This audio-only microcast is a preview. To watch the full documentary, visit https://www.youtube.com/watch?v=StI0tJFgU2o.

#ShiftHappens Podcast
Ep. 101: Scaling Smarter: How MSPs Can Leverage AI, Security, and Strategic Partnerships

#ShiftHappens Podcast

Play Episode Listen Later Apr 24, 2025 52:27


The Managed Service Provider (MSP) industry is undergoing a major shift as AI, automation, and cybersecurity redefine business operations. In this #shifthappens episode, Jorn Wittendorp, Founder of Ydentic, and Mario Carvajal, Chief Strategy and Marketing Officer at AvePoint, discuss the Ydentic-AvePoint acquisition, the trends affecting the industry, and strategies for MSPs to stay ahead.

Paul's Security Weekly
ISO 42001 Certification, CIOs Struggle to Align Strategies, and CISOs Rethink Hiring - Martin Tschammer - BSW #392

Paul's Security Weekly

Play Episode Listen Later Apr 23, 2025 63:55


AI Governance, the next frontier for AI Security. But what framework should you use? ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems. But how do you get certified? What's the process look like? Martin Tschammer, Head of Security at Synthesia, joins Business Security Weekly to share his ISO 42001 certification journey. From corporate culture to the witness audit, Martin walks us through the certification process and the benefits they have gained from the certification. If you're considering ISO 42001 certification, this interview is a must see. In the leadership and communications section, Are 2 CEOs Better Than 1? Here Are The Benefits and Drawbacks You Must Consider, CISOs rethink hiring to emphasize skills over degrees and experience, Why Clear Executive Communication Is a Silent Driver of Organizational Success, and more! Visit https://www.securityweekly.com/bsw for all the latest episodes! Show Notes: https://securityweekly.com/bsw-392

Cyber Security Today
Cybersecurity Today: Virtual Employees, AI Security Agents, and CVE Program Updates

Cyber Security Today

Play Episode Listen Later Apr 23, 2025 7:47 Transcription Available


In this episode of 'Cybersecurity Today,' host Jim Love discusses various pressing topics in the realm of cybersecurity. Highlights include Anthropic's prediction on AI-powered virtual employees and their potential security risks, Microsoft's introduction of AI security agents to mitigate workforce gaps and analyst burnout, and a pivotal court ruling allowing a data privacy class action against Shopify to proceed in California. Additionally, the show covers the last-minute extension of funding for the Common Vulnerabilities and Exposures (CVE) program by the US Cybersecurity and Infrastructure Security Agency, averting a potential crisis in cybersecurity coordination. These discussions underscore the evolving challenges and solutions within the cybersecurity landscape. 00:00 Introduction and Overview 00:26 AI Employees: Opportunities and Risks 01:48 Microsoft's AI Security Agents 03:58 Shopify's Legal Battle Over Data Privacy 05:12 CVE Program's Funding Crisis Averted 07:24 Conclusion and Contact Information

Paul's Security Weekly TV
ISO 42001 Certification, CIOs Struggle to Align Strategies, and CISOs Rethink Hiring - Martin Tschammer - BSW #392

Paul's Security Weekly TV

Play Episode Listen Later Apr 23, 2025 63:55


AI Governance, the next frontier for AI Security. But what framework should you use? ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems. But how do you get certified? What's the process look like? Martin Tschammer, Head of Security at Synthesia, joins Business Security Weekly to share his ISO 42001 certification journey. From corporate culture to the witness audit, Martin walks us through the certification process and the benefits they have gained from the certification. If you're considering ISO 42001 certification, this interview is a must see. In the leadership and communications section, Are 2 CEOs Better Than 1? Here Are The Benefits and Drawbacks You Must Consider, CISOs rethink hiring to emphasize skills over degrees and experience, Why Clear Executive Communication Is a Silent Driver of Organizational Success, and more! Show Notes: https://securityweekly.com/bsw-392

Business Security Weekly (Audio)
ISO 42001 Certification, CIOs Struggle to Align Strategies, and CISOs Rethink Hiring - Martin Tschammer - BSW #392

Business Security Weekly (Audio)

Play Episode Listen Later Apr 23, 2025 63:55


AI Governance, the next frontier for AI Security. But what framework should you use? ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems. But how do you get certified? What's the process look like? Martin Tschammer, Head of Security at Synthesia, joins Business Security Weekly to share his ISO 42001 certification journey. From corporate culture to the witness audit, Martin walks us through the certification process and the benefits they have gained from the certification. If you're considering ISO 42001 certification, this interview is a must see. In the leadership and communications section, Are 2 CEOs Better Than 1? Here Are The Benefits and Drawbacks You Must Consider, CISOs rethink hiring to emphasize skills over degrees and experience, Why Clear Executive Communication Is a Silent Driver of Organizational Success, and more! Visit https://www.securityweekly.com/bsw for all the latest episodes! Show Notes: https://securityweekly.com/bsw-392

Hashtag Trending
AI Security Risks, US Immigration Policies, and Apple's AI Missteps

Hashtag Trending

Play Episode Listen Later Apr 23, 2025 9:33 Transcription Available


In this episode of 'Hashtag Trending,' host Jim Love discusses warnings from Anthropic about the security risks posed by AI virtual employees expected to integrate into corporate networks next year. The episode also explores the potential impact of recent US immigration policies on its tech leadership and global competitiveness, especially concerning Chinese and other international students. Additionally, Apple faces scrutiny for misleading AI marketing claims, leading to changes in their promotional material. The show delves into how these developments could shape the future landscape of technology and innovation. 00:00 AI Virtual Employees: Security Risks Ahead 01:52 US Tech Leadership Under Threat 03:03 Impact of US Immigration Policies on Science 04:57 China's Rise in Scientific Research 06:00 Canada's Growing Appeal for STEM Talent 07:45 Apple's Misleading AI Promotions 09:08 Conclusion and Contact Information

Business Security Weekly (Video)
ISO 42001 Certification, CIOs Struggle to Align Strategies, and CISOs Rethink Hiring - Martin Tschammer - BSW #392

Business Security Weekly (Video)

Play Episode Listen Later Apr 23, 2025 63:55


AI Governance, the next frontier for AI Security. But what framework should you use? ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems. But how do you get certified? What's the process look like? Martin Tschammer, Head of Security at Synthesia, joins Business Security Weekly to share his ISO 42001 certification journey. From corporate culture to the witness audit, Martin walks us through the certification process and the benefits they have gained from the certification. If you're considering ISO 42001 certification, this interview is a must see. In the leadership and communications section, Are 2 CEOs Better Than 1? Here Are The Benefits and Drawbacks You Must Consider, CISOs rethink hiring to emphasize skills over degrees and experience, Why Clear Executive Communication Is a Silent Driver of Organizational Success, and more! Show Notes: https://securityweekly.com/bsw-392

ITSPmagazine | Technology. Cybersecurity. Society
Quantum Security, Real Problems, and the Unifying Layer Behind It All | A Brand Story Conversation with Marc Manzano, General Manager of the Cybersecurity Group at SandboxAQ | A RSAC Conference 2025 Brand Story Pre-Event Conversation

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 21, 2025 9:31


We're on the road to RSAC 2025 — or maybe on a quantum-powered highway — and this time, Sean and I had the pleasure of chatting with someone who's not just riding the future wave, but actually building it.Marc Manzano, General Manager of the Cybersecurity Group at SandboxAQ, joined us for this Brand Story conversation ahead of the big conference in San Francisco. For those who haven't heard of SandboxAQ yet, here's a quick headline: they're a spin-out from Google, operating at the intersection of AI and quantum technologies. Yes — that intersection.But let's keep our feet on the ground for a second, because this story isn't just about tech that sounds cool. It's about solving the very real, very painful problems that security teams face every day.Marc laid out their mission clearly: Active Guard, their flagship platform, is built to simplify and modernize two massive pain points in enterprise security — cryptographic asset management and non-human identity management. Think: rotating certificates without manual effort. Managing secrets and keys across cloud-native infrastructure. Automating compliance reporting for quantum-readiness. No fluff — just value, right out of the box.And it's not just about plugging a new tool into your already overloaded stack. What impressed us is how SandboxAQ sees themselves as the unifying layer — enhancing interoperability across existing systems, extracting more intelligence from the tools you already use, and giving teams a unified view through a single pane of glass.And yes, we also touched on AI SecOps — because as AI becomes a standard part of infrastructure, so must security for it. Active Guard is already poised to give security teams visibility and control over this evolving layer.Want to see it in action? Booth 6578, North Expo Hall. Swag will be there. Demos will be live. Conversations will be real.We'll be there too — recording a deeper Brand Story episode On Location during the event.Until then, enjoy this preview — and get ready to meet the future of cybersecurity.⸻Keywords:sandboxaq, active guard, rsa conference 2025, quantum cybersecurity, ai secops, cryptographic asset management, non-human identity, cybersecurity automation, security compliance, rsa 2025, cybersecurity innovation, certificate lifecycle management, secrets management, security operations, quantum readiness, rsa sandbox, cybersecurity saas, devsecops, interoperability, digital transformation______________________Guest: Marc Manzano,, General Manager of the Cybersecurity Group at SandboxAQMarc Manzano on LinkedIn

ITSPmagazine | Technology. Cybersecurity. Society
Why “Permit by Exception” Might Be the Key to Business Resilience | A Brand Story with Rob Allen, Chief Product Officer at ThreatLocker | A RSAC Conference 2025 Brand Story Pre-Event Conversation

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 21, 2025 18:58


At this year's RSAC Conference, the team from ThreatLocker isn't just bringing tech—they're bringing a challenge. Rob Allen, Chief Product Officer at ThreatLocker, joins Sean Martin and Marco Ciappelli for a lively pre-conference episode that previews what attendees can expect at booth #854 in the South Expo Hall.From rubber ducky hacks to reframing how we think about Zero Trust, the conversation highlights the ways ThreatLocker moves beyond the industry's typical focus on reactive detection. Allen shares how most cybersecurity approaches still default to allowing access unless a threat is known, and why that mindset continues to leave organizations vulnerable. Instead, ThreatLocker's philosophy is to “deny by default and permit by exception”—a strategy that, when managed effectively, provides maximum protection without slowing down business operations.ThreatLocker's presence at the conference will feature live demos, short presentations, and hands-on challenges—including their popular Ducky Challenge, where participants test whether their endpoint defenses can prevent a rogue USB (disguised as a keyboard) from stealing their data. If your system passes, you win the rubber ducky. If it doesn't? They (temporarily) get your data. It's a simple but powerful reminder that what you think is secure might not be.The booth won't just be about tech. The team is focused on conversations—reconnecting with customers, engaging new audiences, and exploring how the community is responding to a threat landscape that's growing more sophisticated by the day. Allen emphasizes the importance of in-person dialogue, not only to share what ThreatLocker is building but to learn how security leaders are adapting and where gaps still exist.And yes, there will be merch—high-quality socks, t-shirts, and even a few surprise giveaways dropped at hotel doors (if you resist the temptation to open the envelope before visiting the booth).For those looking to rethink endpoint protection or better understand how proactive controls can complement detection-based tools, this episode is your preview into a very different kind of cybersecurity conversation—one that starts with a challenge and ends with community.Learn more about ThreatLocker: https://itspm.ag/threatlocker-r974Guest: Rob Allen, Chief Product Officer, ThreatLocker | https://www.linkedin.com/in/threatlockerrob/ResourcesLearn more and catch more stories from ThreatLocker: https://www.itspmagazine.com/directory/threatlockerLearn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsa-conference-usa-2025-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverage______________________Keywords: rsac conference, cybersecurity, endpoint, zero trust, rubber ducky, threat detection, data exfiltration, security strategy, deny by default, permit by exception, proactive security, security demos, usb attack, cyber resilience, network control, security mindset, rsac 2025, event coverage, on location, conference____________________________Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageTo see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcastTo see and hear more Redefining Society stories on ITSPmagazine, visit:https://www.itspmagazine.com/redefining-society-podcastWant to tell your Brand Story Briefing as part of our event coverage? Learn More

ITSPmagazine | Technology. Cybersecurity. Society
AI, Security, and the Hybrid World: Akamai's Vision for RSAC 2025 With Rupesh Chokshi, SVP & GM Application Security Akamai | A RSAC Conference 2025 Brand Story Pre-Event Conversation

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 18, 2025 21:50


The RSA Conference has long served as a meeting point for innovation and collaboration in cybersecurity—and in this pre-RSAC episode, ITSPmagazine co-founders Marco Ciappelli and Sean Martin welcome Akamai's Rupesh Chokshi to the conversation. With RSAC 2025 on the horizon, they discuss Akamai's presence at the event and dig into the challenges and opportunities surrounding AI, threat intelligence, and enterprise security.Chokshi, who leads Akamai's Application Security business, describes a landscape marked by explosive growth in web and API attacks—and a parallel shift as enterprises embrace generative AI. The double-edged nature of AI is central to the discussion: while it offers breakthrough productivity and automation, it also creates new vulnerabilities. Akamai's dual focus, says Chokshi, is both using AI to strengthen defenses and securing AI-powered applications themselves.The conversation touches on the scale and sophistication of modern threats, including an eye-opening stat: Akamai is now tracking over 500 million large language model (LLM)-driven scraping requests per day. As these threats extend from e-commerce to healthcare and beyond, Chokshi emphasizes the need for layered defense strategies and real-time adaptability.Ciappelli brings a sociological lens to the AI discussion, noting the hype-to-reality shift the industry is experiencing. “We're no longer asking if AI will change the game,” he suggests. “We're asking how to implement it responsibly—and how to protect it.”At RSAC 2025, Akamai will showcase a range of innovations, including updates to its Guardicore platform and new App & API Protection Hybrid solutions. Their booth (6245) will feature interactive demos, theater sessions, and one-on-one briefings. The Akamai team will also release a new edition of their State of the Internet report, packed with actionable threat data and insights.The episode closes with a reminder: in a world that's both accelerating and fragmenting, cybersecurity must serve not just as a barrier—but as a catalyst. “Security,” says Chokshi, “has to enable innovation, not hinder it.”⸻Keywords: RSAC 2025, Akamai, cybersecurity, generative AI, API protection, web attacks, application security, LLM scraping, Guardicore, State of the Internet report, Zero Trust, hybrid digital world, enterprise resilience, AI security, threat intelligence, prompt injection, data privacy, RSA Conference, Sean Martin, Marco Ciappelli______________________Guest: Rupesh Chokshi, SVP & GM, Akamai https://www.linkedin.com/in/rupeshchokshi/Hosts:Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine:  https://www.itspmagazine.com/sean-martinMarco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast & Audio Signals Podcast | On ITSPmagazine: https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli____________________________This Episode's SponsorsAKAMAI:https://itspm.ag/akamailbwc____________________________ResourcesLearn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsa-conference-usa-2025-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverageRupesh Chokshi Session at RSAC 2025The New Attack Frontier: Research Shows Apps & APIs Are the Targets - [PART1-W09]____________________________Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageTo see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcastTo see and hear more Redefining Society stories on ITSPmagazine, visit:https://www.itspmagazine.com/redefining-society-podcastWant to tell your Brand Story Briefing as part of our event coverage? Learn More

CIO Classified
CIO Leadership in AI Security and Innovation with Siroui Mushegian of Barracuda

CIO Classified

Play Episode Listen Later Apr 17, 2025 37:01


Siroui Mushegian, CIO at Barracuda, shares how she's building a smart, secure foundation for AI—-from setting up an AI council, to governing agents, and creating employee guidelines that protect innovation. She also shares how AI is transforming IT operations and unlocking new levels of productivity across the enterprise.About the Guest: Siroui Mushegian is the Chief Information Officer (CIO) at Barracuda. Siroui joined Barracuda most recently from BlackLine, where she was responsible for all aspects of BlackLine's internal corporate IT. Before BlackLine, she held executive IT leadership roles at PBS's WNET New York Public Media, the NBA, Ralph Lauren, and Time, Inc. Bringing more than 20 years of executive and IT leadership experience, Siroui has successfully built strong operational environments that eliminate technology silos, elevated the maturity and impact of technology within her enterprises and delivered measurable and scalable business outcomes. Siroui holds a Master of Business Administration in Management and Strategy from Fordham University's Gabelli School of Business and a bachelor's in mathematics and finance from University of Connecticut.Timestamps:*(04:10) -  Skills for Future CIOs*(07:00) -  Barracuda's AI and Automation Projects*(08:50) -  Tips for AI Security *(33:25) -   The Importance of Community and CollaborationGuest Highlights:“ A lot of people are worried they are going to work themselves right out of a job. It remains very important for us to keep our position as thought leaders to hold that mantle high.”“ Your partnerships with your colleagues and leaders across the enterprise will help you get more done than any AI agent will.”“ I love the concept of the education we're getting ready to roll out  in a curated way to people who are going to take these tools and come up with solutions that I could never in my life think of because I don't sit in their shoes every day.”Get Connected:Siroui Mushegian on LinkedInIan Faison on LinkedInResources:Learn more about Barracuda: barracuda.comHungry for more tech talk? Check out these past episodes:Ep 58 - AI-Driven Workplace TransformationEp 57 - The CIO Roadmap to Executive LeadershipEp 56 - Best Proactive Cybersecurity Strategies for CIOsLearn more about Caspian Studios: caspianstudios.comCan't get enough AI? Check out The New Automation Mindset Podcast for more in-depth conversations about strategies leadership in AI, automation, and orchestration. Brought to you by the automation experts at Workato. Start Listening: www.workato.com/podcast

alphalist.CTO Podcast - For CTOs and Technical Leaders
#120 - AI's Singularity & Commoditization: Navigating Hype vs. Reality with Georg Zoeller // Co-Founder @ C4AIL

alphalist.CTO Podcast - For CTOs and Technical Leaders

Play Episode Listen Later Apr 17, 2025 73:51 Transcription Available


In this episode, Tobi talks with Georg Zoeller, Co-Founder of the Centre for AI Leadership and mercenaries.ai, about the turbulent landscape of AI. Georg, with his background at Meta and deep expertise in AI strategy, cuts through the hype surrounding AI's capabilities and economic impact. They discuss the 'singularity' we're already in, driven by rapid, open-source AI development, and why this makes future predictions impossible. Georg argues that software engineering is being commoditized due to the vast amount of training data available (Stack Overflow, GitHub), making AI adept at code generation but raising profound security concerns like prompt injection. Explore: - Why Georg believes blindly adopting AI early is a 'terrible mistake' for most companies. - The fundamental security flaws in LLMs (prompt injection) and why they're currently unsolvable for open input spaces. - The questionable economics of AI: high costs, self-cannibalizing business models, and the reliance on performative fundraising. - How AI tools impact engineer productivity, shifting the bottleneck to decision-making and validation. - The geopolitical risks and diminishing trust associated with Big Tech's AI dominance. - Actionable advice for CTOs: Invest in understanding, focus on governance beyond the tech team, and consider the strategic value of local/open-source alternatives.

Today in Health IT
2 Minute Drill: The Growing Threat of Deepfakes: Legislation Gaps and AI Security With Drex DeFord

Today in Health IT

Play Episode Listen Later Apr 11, 2025 3:21 Transcription Available


Drex examines The alarming rise of intimate deepfakes targeting primarily women and children, with 18 states currently offering no legal protection against these digital sex crimes. Various state legislative efforts including Montana's focus on combating political deepfakes, particularly within 60 days of elections; and OpenAI's first investment in cybersecurity through a $43 million funding round for Adaptive Security, a company specializing in training organizations to recognize deepfake attacks and phishing threats.Remember, Stay a Little Paranoid X: This Week Health LinkedIn: This Week Health Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer

The CU2.0 Podcast
CU 2.0 Podcast Episode 347 MDT's Pete Major on AI, Security, Tools for Business Mdembers + More

The CU2.0 Podcast

Play Episode Listen Later Apr 9, 2025 40:12


Send us a textI'd expected this to be an AI free show but, let's face it, that just isn't likely in 2025 but the good news is that in the show Pete Major,  vice president of fintech services at CUSO MDT, offers concrete AI use cases at work in MDT and he also, importantly, offers cautions about security and the leading AI tools.In a rush to stay abreast of the fast moving AI universe are some credit unions losing sight of the need to be very sure of the security of the tools they use? Maybe.Major provides tips on how to stay secure while still using AI tools..But there's a lot more in this show.We talk for instance about the need of CUs to keep security in mind when using any technology tools.  If there are flaws - and there have been some doozies in recent years - it's the credit union that will be saddled with the bulk of the blame.On a happier note Major discusses a suite of tools for small business members at credit unions - and, he says, demand for the tools is very hot.  Is offering good tools a path to winning more business members? Just maybe.We close the show pondering what the developments in Washington DC - anything from an end to credit union tax exemption to an end to NCUA - might mean for credit unions and also the rising CU interest in merging.There's a lot to unpack in this show.  Listen up.Like what you are hearing? Find out how you can help sponsor this podcast here. Very affordable sponsorship packages are available. Email rjmcgarvey@gmail.com  And like this podcast on whatever service you use to stream it. That matters.  Find out more about CU2.0 and the digital transformation of credit unions here. It's a journey every credit union needs to take. Pronto

Digital Marketing Therapy
Ep 296 | Using ChatGPT Correctly with Steven Lewis

Digital Marketing Therapy

Play Episode Listen Later Apr 8, 2025 46:11 Transcription Available


Are you ready to supercharge your nonprofit's digital marketing efforts? In this episode, I sit down with Steven Lewis, a seasoned marketer with 30 years of experience in copywriting and technology, to explore the game-changing potential of ChatGPT for small to medium-sized nonprofits. We dive deep into how this powerful AI tool can become your 24/7 marketing consultant, helping you craft compelling content, conduct market research, and even run virtual focus groups – all without breaking the bank. Unlocking ChatGPT's Potential for Nonprofits Steven shares invaluable insights on: - How to use ChatGPT as a thought partner and consultant - Crafting the perfect prompts to get the results you need - Developing a unique tone of voice for your organization - Creating synthetic personas for risk-free testing and feedback Key Takeaways: - ChatGPT isn't just for content creation – it's a versatile tool for strategy and research - Learn how to have meaningful “conversations” with the AI to refine your marketing approach - Discover how to leverage ChatGPT's vast knowledge base to understand your audience better - Find out how to use synthetic personas to test ideas without risking donor relationships Practical Applications for Your Nonprofit - Use ChatGPT to develop and refine your organization's tone of voice - Create virtual focus groups to test new ideas and campaigns - Generate data-driven insights to support your marketing decisions - Streamline your content creation process while maintaining authenticity This episode is packed with actionable advice for nonprofit leaders looking to make the most of AI technology in their digital marketing efforts. Whether you're a seasoned marketer or new to the world of AI, you'll find valuable strategies to elevate your nonprofit's online presence. Ready to revolutionize your nonprofit's digital marketing strategy? Listen to the full episode and discover how ChatGPT can become your secret weapon in reaching and engaging your audience more effectively than ever before. Want to skip ahead? Here are key moments: 09:30 Understanding ChatGPT: The Basics and Beyond ChatGPT is a large language model trained on vast amounts of data. Providing context helps shape ChatGPT's outputs. There is a lot of potential for ChatGPT to be a thought partner and consultant for businesses of all sizes. 24:34 Addressing Security Concerns and Developing Tone of Voice Be sure to balance proprietary information protection with leveraging ChatGPT's capabilities. Creating your tone of voice will help your prompts become even more effective. 35:57 Advanced ChatGPT Techniques: Synthetic Personas and Focus Groups Use ChatGPT to create synthetic personas for focus groups. This technique allows organizations to test ideas and content safely without risking real donor relationships. The approach provides valuable insights and data for decision-making. Don't miss out on this opportunity to learn how AI can transform your nonprofit's digital marketing efforts. Tune in now and take the first step towards a more efficient, effective, and data-driven marketing strategy. Steven Lewis Steven Lewis is a marketer with 30 years of experience in copywriting and technology. His course Make ChatGPT Your CMO shows business owners how to turn ChatGPT into a 24/7 marketing consultant that gives expert advice tailored to their business. Learn more at https://taleist.agency/ Connect with us on LinkedIn: https://www.linkedin.com/company/the-first-click Learn more about The First Click: https://thefirstclick.net Schedule a Digital Marketing Therapy Session: https://thefirstclick.net/officehours

Geeksblabla
#179 - Tech News & AMA #30 - AI , Security , Devices -

Geeksblabla

Play Episode Listen Later Apr 8, 2025 136:18


In this episode, we discuss What's new in the AI universe and the XZ Backdoor

Govcon Giants Podcast
They Solved the #1 AI Security Threat the Government Couldn't Fix!

Govcon Giants Podcast

Play Episode Listen Later Apr 7, 2025 8:07


In this episode we have an intriguing conversation with Jim, and Jerry. We discuss the challenges and innovative solutions in the realm of artificial intelligence (AI) and software development. Discover how this innovative approach opens doors for professionals from various fields to contribute to AI and no-code development efforts. Tune in to this captivating episode and learn how these cutting-edge technologies are transforming the landscape of business and technology. Don't miss out on this episode of The Daily Windup, where you'll find insights, inspiration, and practical applications in under 10 minutes!

Campus Technology Insider
Claude for Edu, AI Security Agents, & AI Literacy: Campus Technology News of the Week (4/4/25)

Campus Technology Insider

Play Episode Listen Later Apr 4, 2025 2:17


In this episode of Campus Technology Insider Podcast Shorts, host Rhea Kelly highlights top stories in education technology, including Anthropic's launch of Claude for Education and Microsoft's Security Copilot enhancement with 11 AI-powered security agents. Additionally, the Digital Education Council's comprehensive AI literacy framework aims to empower higher education communities with essential AI competencies. For more details on these stories, visit campustechnology.com. 00:00 Introduction and Host Welcome 00:17 Anthropic's Claude for Education 00:47 Microsoft's AI-Powered Cybersecurity Expansion 01:25 Digital Education Council's AI Literacy Framework 02:02 Conclusion and Further Resources Source links: Anthropic Launches Claude for Education Microsoft Adds New Agentic AI Tools to Security Copilot Digital Education Council Defines 5 Dimensions of AI Literacy  Campus Technology Insider Podcast Shorts are curated by humans and narrated by AI.

Business of Tech
MSP Regulations Shift: CMMC 2.0, FedRAMP Overhaul, UK Cyber Bill & AI Security Concerns

Business of Tech

Play Episode Listen Later Apr 2, 2025 15:30


Michael Duffy, President Donald Trump's nominee for Undersecretary of Defense for Acquisition and Sustainment, has committed to reviewing the Pentagon's Cybersecurity Maturity Model Certification (CMMC) 2.0 if confirmed. This revamped program, effective since December, mandates that defense contractors handling controlled, unclassified information comply with specific cybersecurity standards to qualify for Department of Defense contracts. Concerns have been raised about the burden these regulations may impose on smaller firms, with a report indicating that over 50% of respondents felt unprepared for the program's requirements. Duffy aims to balance security needs with regulatory burdens, recognizing the vulnerability of small and medium-sized businesses in the face of cyber threats.In addition to the CMMC developments, the General Services Administration (GSA) is set to unveil significant changes to the Federal Risk Authorization Management Program (FedRAMP). The new plan for 2025 focuses on establishing standards and policies rather than approving cloud authorization packages, which previously extended the process for up to 11 months. The GSA intends to automate at least 80% of current requirements, allowing cloud service providers to demonstrate compliance more efficiently, while reducing reliance on external support services.Across the Atlantic, the UK government has announced a comprehensive cybersecurity and resilience bill aimed at strengthening defenses against cyber threats. This legislation will bring more firms under regulatory oversight, specifically targeting managed service providers (MSPs) that provide core IT services and have extensive access to client systems. The proposed regulations will enhance incident reporting requirements and empower the Information Commissioner's Office to proactively identify and mitigate cyber risks, setting higher expectations for cybersecurity practices among MSPs.The episode also discusses the implications of recent developments in AI and cybersecurity. With companies like SolarWinds, CloudFlare, and Red Hat enhancing their offerings, the integration of AI into business operations raises concerns about security and compliance. The ease of generating fake documents using AI tools poses a significant risk to industries reliant on document verification. As the landscape evolves, IT service providers must adapt by advising clients on updated compliance practices and strengthening their cybersecurity measures to address these emerging threats. Four things to know today 00:00 New Regulatory Shifts for MSPs: CMMC 2.0, FedRAMP Overhaul, and UK Cyber Security Bill05:21 CISA Cuts and Signal on Gov Devices: What Could Go Wrong?08:15 AI Solutions Everywhere! SolarWinds, Cloudflare, and Red Hat Go All In11:37 OpenAI's Image Generation Capabilities Raise Fraud Worries: How Businesses Should Respond  Supported by:  https://www.huntress.com/mspradio/https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship  Join Dave April 22nd to learn about Marketing in the AI Era.  Signup here:  https://hubs.la/Q03dwWqg0 All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on:LinkedIn: https://www.linkedin.com/company/28908079/YouTube: https://youtube.com/mspradio/Facebook: https://www.facebook.com/mspradionews/Instagram: https://www.instagram.com/mspradio/TikTok: https://www.tiktok.com/@businessoftechBluesky: https://bsky.app/profile/businessof.tech

The Secure Developer
Authentication, Authorization, And The Future Of AI Security With Alex Salazar

The Secure Developer

Play Episode Listen Later Apr 1, 2025 38:36


Episode SummaryIn this episode of The Secure Developer, host Danny Allan sits down with Alex Salazar, founder and CEO of Arcade, to discuss the evolving landscape of authentication and authorization in an AI-driven world. Alex shares insights on the shift from traditional front-door security to back-end agent interactions, the challenges of securing AI-driven agents, and the role of identity in modern security frameworks. The conversation delves into the future of AI, agentic workflows, and how organizations can navigate authentication, authorization, and security in this new era.Show NotesDanny Allan welcomes Alex Salazar, an experienced security leader and CEO of Arcade, to explore the transformation of authentication and authorization in AI-powered environments. Drawing from his experience at Okta, Stormpath, and venture capital, Alex provides a unique perspective on securing interactions between AI agents and authenticated services.Key topics discussed include:The Evolution of Authentication & Authorization: Traditional models focused on front-door access (user logins, SSO), whereas AI-driven agents require secure back-end interactions.Agentic AI and Security Risks: How AI agents interact with services on behalf of users, and why identity becomes the new perimeter in security.OAuth and Identity Challenges: Adapting OAuth for AI agents, ensuring least-privilege access, and maintaining security compliance.AI Hallucinations & Risk Management: Strategies for mitigating LLM hallucinations, ensuring accuracy, and maintaining human oversight.The Future of AI & Agentic Workflows: Predictions on how AI will continue to evolve, the rise of specialized AI models, and the intersection of AI and physical automation.Alex and Danny also discuss the broader impact of AI on developer productivity, with insights into how companies can leverage AI responsibly to boost efficiency without compromising security.LinksArcade.dev - Make AI Actually Do ThingsOkta - IdentityOAuth - Authorization ProtocolLangChain - Applications that Can ReasonHugging Face - The AI Community Building the FutureSnyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn

Cyber Work
From CIA to CISO: AI security predictions and career strategies | Guest Ross Young

Cyber Work

Play Episode Listen Later Mar 31, 2025 51:33


Get your FREE Cybersecurity Salary Guide: https://www.infosecinstitute.com/form/cybersecurity-salary-guide-podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcastRoss Young, CISO in residence at Team8, joins this week's Cyber Work episode to share insights from his fascinating career journey from the CIA to cybersecurity leadership. With over a decade of experience across intelligence agencies and major companies, Young discusses the rapidly evolving AI security landscape, predicts how AI will transform security roles and offers valuable career advice for cybersecurity professionals at all levels. Learn how security professionals can stay relevant in an AI-driven future and why continuous learning is non-negotiable in this field.00:00 Intro00:27 Ross Young's journey in cybersecurity01:18 Cybersecurity job market insights02:12 Ross Young's educational path07:38 Experience at the CIA10:38 Transition to the private sector13:15 Current role at Team818:30 Daily life of a CISO in residence22:12 Impact of AI on cybersecurity25:23 Identifying phishing emails25:49 New risks with AI models27:08 Exploiting AI for malicious purposes30:55 Defending against AI exploits32:24 AI in security automation33:30 Common mistakes in AI implementation36:59 Future of cybersecurity with AI43:18 Advice for security professionals46:17 Career advice – View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcastAbout InfosecInfosec's mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ's security awareness training. Learn more at infosecinstitute.com.

Cloud Security Podcast by Google
EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes?

Cloud Security Podcast by Google

Play Episode Listen Later Mar 31, 2025 23:11


Guest: Alex Polyakov, CEO at Adversa AI Topics: Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client? Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now?  What trips most clients,  classic security mistakes in AI systems or AI-specific mistakes? Are there truly new mistakes in AI systems or are they old mistakes in new clothing? I know it is not your job to fix it, but much of this is unfixable, right? Is it a good idea to use AI to secure AI? Resources: EP84 How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far AI Red Teaming Reasoning LLM US vs China: Jailbreak Deepseek, Qwen, O1, O3, Claude, Kimi Adversa AI blog Oops! 5 serious gen AI security mistakes to avoid Generative AI Fast Followership: Avoid These First Adopter Security Missteps

Business of Tech
The Future of AI Security: Risk Assessment and Management for Generative Applications with Sahil Agarwal

Business of Tech

Play Episode Listen Later Mar 30, 2025 17:15


Sahil Agarwal, co-founder and CEO of Enkrypt.ai, discusses the critical importance of security and compliance in the realm of artificial intelligence (AI) models. His company focuses on helping enterprises adopt generative AI while managing the associated risks. Agarwal explains that the mission of Enkrypt.ai has evolved from developing encryption algorithms to creating comprehensive solutions that provide ongoing management and monitoring of AI applications. This shift aims to ensure that businesses can safely integrate AI technologies without exposing themselves to brand, legal, or security risks.Agarwal highlights the dual approach of Enkrypt.ai, which includes an initial risk assessment followed by continuous monitoring and management. The risk assessment involves simulating attacks on AI systems to identify vulnerabilities, while the ongoing management ensures that any identified risks are mitigated effectively. This iterative process creates a feedback loop that enhances the security posture of generative applications, allowing businesses to operate with greater confidence.The conversation also touches on the economic challenges surrounding generative AI, where many companies invest heavily in projects that struggle to reach production due to unresolved security and compliance issues. Agarwal notes that while there is a democratization of AI technology, the real value lies in how enterprises apply these models. He emphasizes the need for businesses to adopt a proactive approach to security, particularly as they scale their use of AI agents and chatbots.Finally, Agarwal addresses the pressing issue of data leakage, particularly when using third-party AI models. He advises organizations to keep sensitive data on the client side and to choose trusted solutions to mitigate risks. By implementing robust security measures and maintaining a vigilant posture, businesses can harness the power of AI while safeguarding their proprietary information. All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on:LinkedIn: https://www.linkedin.com/company/28908079/YouTube: https://youtube.com/mspradio/Facebook: https://www.facebook.com/mspradionews/Instagram: https://www.instagram.com/mspradio/TikTok: https://www.tiktok.com/@businessoftechBluesky: https://bsky.app/profile/businessof.tech

Cloud Wars Live with Bob Evans
How AI Agents Are Reshaping Enterprise Tech – Insights from Oracle, SAP, and More | Tinder on Customers

Cloud Wars Live with Bob Evans

Play Episode Listen Later Mar 26, 2025 23:41


Episode 50 | AI Agents in ActionThe Big Themes:The Rise of 'Agent Ratios': As companies roll out more AI agents, the "agent-to-human ratio" could become a useful AI maturity indicator. Currently, we're seeing early adoption — with Oracle reporting that only 5–10% of its customers have put agents into production. These early use cases focus on low-risk, easily-automated tasks. It's a cautious start, but the trajectory is upward. Bonnie points out that once the groundwork is laid, the pace of adoption will likely accelerate, yielding increased productivity.Four Smart Questions for Evaluating Enterprise AI Initiatives: To help customers decide whether to adopt AI capabilities, Bonnie offers four key questions: (1) Is it available to me? Not all customers have access to AI features; infrastructure matters. (2) Do I need or want it? Weigh the risk-reward tradeoff, especially in terms of time and internal resources. (3) Is my data protected? Ensure your vendor offers strong governance and compliance support. (4) What is the time to value?Knowing When to Leap and When to Wait on AI Adoption: Should companies wait or dive into AI now? Her advice: it depends. If your organization is in a fast-moving, innovation-driven sector, early adoption is essential to stay competitive. Waiting could mean falling behind. But for highly regulated industries or companies unused to rapid tech change, a cautious approach makes sense.

Secure Ventures with Kyle McNulty
Bricklayer | CEO Adam Vincent on AI Security Operations

Secure Ventures with Kyle McNulty

Play Episode Listen Later Mar 25, 2025 47:18


This episode is a recording of a live interview held on stage at Blu Ventures' Cyber Venture Forum in February. A huge shoutout and thank you to the Blu Ventures team for putting together an awesome event. Bricklayer is building an AI-based agent to assist with security operations workflows. Before Bricklayer, Adam founded ThreatConnect which he led for over a decade. In the conversation we discuss his learnings from his experience at ThreatConnect, acquiring vs. building a new capability, and how he thinks about competition in the AI SOC space.Website: bricklayer.aiSponsor: VulnCheck

Detection at Scale
Pangea's Oliver Friedrichs on Building Guardrails for the New AI Security Frontier

Detection at Scale

Play Episode Listen Later Mar 25, 2025 26:59


The security automation landscape is undergoing a revolutionary transformation as AI reasoning capabilities replace traditional rule-based playbooks. In this episode of Detection at Scale, Oliver Friedrichs, Founder & CEO of Pangea, helps Jack unpack how this shift democratizes advanced threat detection beyond Fortune 500 companies while simultaneously introducing an alarming new attack surface.  Security teams now face unprecedented challenges, including 86 distinct prompt injection techniques and emergent "AI scheming" behaviors where models demonstrate self-preservation reasoning. Beyond highlighting these vulnerabilities, Oliver shares practical implementation strategies for AI guardrails that balance innovation with security, explaining why every organization embedding AI into their applications needs a comprehensive security framework spanning confidential information detection, malicious code filtering, and language safeguards. Topics discussed: The critical "read versus write" framework for security automation adoption: organizations consistently authorized full automation for investigative processes but required human oversight for remediation actions that changed system states. Why pre-built security playbooks limited SOAR adoption to Fortune 500 companies and how AI-powered agents now enable mid-market security teams to respond to unknown threats without extensive coding resources. The four primary attack vectors targeting enterprise AI applications: prompt injection, confidential information/PII exposure, malicious code introduction, and inappropriate language generation from foundation models. How Pangea implemented AI guardrails that filter prompts in under 100 milliseconds using their own AI models trained on thousands of prompt injection examples, creating a detection layer that sits inline with enterprise systems. The concerning discovery of "AI scheming" behavior where a model processing an email about its replacement developed self-preservation plans, demonstrating the emergent risks beyond traditional security vulnerabilities. Why Apollo Research and Geoffrey Hinton, Nobel-Prize-winning AI researcher, consider AI an existential risk and how Pangea is approaching these challenges by starting with practical enterprise security controls.   Check out Pangea.com  

Security Unfiltered
AI Security Secrets Unveiled: NSA Tech, Zero Trust & 2025 Cyber Trends With Jason Rogers from Invary

Security Unfiltered

Play Episode Listen Later Mar 25, 2025 44:38 Transcription Available


Send us a textStruggling to secure AI in 2025? Join Joe and Invary CEO Jason Rogers as they unpack NSA-licensed tech, zero trust frameworks, and the future of cybersecurity. From satellite security to battling advanced threats, discover how Invary's cutting-edge solutions are reshaping the industry. Plus, hear Jason's startup journey and Joe's wild ride balancing a newborn with a PhD. Subscribe now for the latest cyber trends—don't miss this!Chapters00:00 Navigating Parenthood and Professional Life02:53 The Startup Mentality: Decision-Making and Adaptability06:13 Blending Technical Skills with Sales08:58 Background and Journey into Cybersecurity12:10 Establishing a Security Culture in Organizations14:51 Collaborating with Government Entities17:47 Understanding NSA Licensed Technology23:06 Understanding Application and Server Security25:01 Exploring Zero Trust Frameworks28:57 Bridging Government and Private Sector Security31:27 The Role of Security Professionals33:55 Innovations in Cybersecurity Technology38:05 Invariance in Security SystemsSupport the showFollow the Podcast on Social Media! Tesla Referral Code: https://ts.la/joseph675128 YouTube: https://www.youtube.com/@securityunfilteredpodcast Instagram: https://www.instagram.com/secunfpodcast/Twitter: https://twitter.com/SecUnfPodcast

Cybersecurity Sense
The Future of AI Security: Legacy AI, Emerging Risks & Business Impact

Cybersecurity Sense

Play Episode Listen Later Mar 18, 2025 39:02


AI Briefing Room
EP-236 Anthropic's $100m Ai Security Alert

AI Briefing Room

Play Episode Listen Later Mar 13, 2025 2:27


welcome to wall-e's tech briefing for thursday, march 13th! dive into today's top tech stories: anthropic's ceo calls for heightened security: dario amodei urges the u.s. government to increase protection against potential $100 million ai secret thefts, especially highlighting risks from china, advocating for collaboration with the ai sector. intel's new leadership: lip-bu tan takes over as ceo, aiming to refocus on engineering and customer accountability. his appointment boosts market confidence, evidenced by an 11% rise in after-hours trading. google deepmind's gemini robotics: launch of a suite of advanced ai models enhancing robotic interactions and versatility, along with the accessible gemini robotics-er model for further innovation in robotic control. wonder's strategic acquisition: expands by acquiring tastemade for $90 million, integrating diverse media content to align with its vision of becoming a mealtime solutions 'super app'. nvidia's gtc 2025 announced: highlights include ceo jensen huang's keynote on new gpu series and ai tech updates, focusing on the future of automotive, robotics, and ai innovations. stay tuned for tomorrow's tech updates!

Artificial Intelligence in Industry with Daniel Faggella
Why Red Teaming is Critical for AI Security - with Tomer Poran of ActiveFence

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Mar 12, 2025 28:58


Today's guest is Tomer Poran, Chief Evangelist and VP of Strategy at ActiveFence. ActiveFence is a technology company specializing in trust and safety solutions, helping platforms detect and prevent harmful content, malicious activity, and emerging threats online. Tomer joins today's podcast to explore the critical role of red teaming in AI safety and security. He breaks down the challenges enterprises face in deploying AI responsibly, the evolving nature of adversarial risks, and why organizations must adopt a proactive approach to testing AI systems. This episode is sponsored by ActiveFence. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.

The Brave Marketer
LIVE FROM AI SUMMIT: Lenovo is Shaping AI for Smart Cities and the Greater Good

The Brave Marketer

Play Episode Listen Later Mar 12, 2025 28:36


Dr. Jeff Esposito, Engineering Lead at Lenovo R&D, shares how his team is shaping the future of AI with innovations like the Hive Transformer and EdgeGuard. He emphasizes the importance of ethical innovation and building technologies that are intended to serve society's greater good. He also stresses the value of collective contributions and diverse perspectives in shaping a future where technology effectively addresses real-world challenges. Key Takeaways: AI's role in building smarter cities through Lenovo's collaborations with NVIDIA and other partners. How AI security is evolving with EdgeGuard and other cutting-edge protections. The role of hybrid AI in combining machine learning and symbolic logic for real-world applications. Corporate responsibility in AI development and the balance between open-source innovation and commercialization. Why diverse perspectives are essential in shaping AI that benefits everyone. Guest Bio: Dr. Jeff Esposito has over 40 patent submissions,  with a long background in research and development at Dell, Microsoft, and Lenovo. He lectures on advanced technological development at various US government research labs, and believes that technology is at its best when serving the greater good and social justice. ---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte  

The Deep Wealth Podcast - Extracting Your Business And Personal Deep Wealth
AI Scientist And Entrepreneur Shanif Dhanani Shares How To Leverage AI To Create A Market Disruption (#419)

The Deep Wealth Podcast - Extracting Your Business And Personal Deep Wealth

Play Episode Listen Later Mar 10, 2025 56:57 Transcription Available


Send us a textUnlock Proven Strategies for a Lucrative Business Exit—Subscribe to The Deep Wealth Podcast Today

Breakfast Leadership
AI Security Risks: Protecting Sensitive Data with Alec Crawford

Breakfast Leadership

Play Episode Listen Later Mar 7, 2025 24:00


AI Security in High-Risk Sectors In a recent conversation, Alec and I dove into the critical role of AI security, especially in high-risk sectors like healthcare and banking. Alec stressed that AI must be secure and aligned with business strategies while ensuring governance, risk management, regulatory compliance, and cybersecurity remain top priorities. I couldn't agree more—AI in the wrong hands or without proper safeguards is a ticking time bomb. Sensitive data needs protection, and businesses must stay ahead of evolving regulations. We also touched on the growing need for private AI solutions, given the rising threats of cyberattacks like prompt injections. Cybersecurity and AI in Organizations Our discussion expanded into cybersecurity and AI adoption within organizations. Unvetted AI solutions pose significant risks, making internal development and continuous monitoring essential. Alec's company, Artificial Intelligence Risk, Inc., deploys private AI within clients' firewalls, reinforcing security through governance and compliance measures. One key takeaway? Awareness is everything. Many organizations jump into AI without securing their systems first. I was particularly interested in the “aha moments” Alec's clients experience when they see AI-driven security solutions in action. AI Governance and Confidentiality Concerns Alec shared a governance issue where a company implemented Microsoft Copilot—only to discover it unintentionally exposed confidential employee data. This highlighted a major concern: AI needs strict guardrails. Alec advocated for a “belt and suspenders” approach—limiting system access, assigning AI agents to specific groups, and avoiding over-reliance on super users who could inadvertently misuse AI. The lesson? AI governance isn't optional; it's a necessity. AI Applications in Call Centers AI's potential spans across industries, and call centers are a prime example. Alec described a client who leveraged AI to analyze 150,000 call transcripts, leading to a 30% reduction in call length and an additional 30% drop in overall call volume—all thanks to AI-driven website improvements. Beyond customer service, AI is making waves in investment research, analyzing earnings calls and regulatory filings. I even shared a fun hypothetical—using AI to predict the Toronto Blue Jays' performance—proving that AI's applications go beyond business into fields like sports analytics. AI Adoption, Security, and Privacy Wrapping up, Alec and I discussed the double-edged sword of AI adoption. While AI presents massive opportunities, it also comes with security, ethical, and privacy risks. Alec emphasized the need for strong leadership in AI implementation, ensuring data quality remains a top priority. I pointed out that the fear of missing out (FOMO) on AI can lead companies to make reckless decisions—often at the cost of security. Alec's company specializes in AI security solutions that safeguard against data breaches and attacks on Large Language Models, reinforcing the importance of a strategic, security-first approach to AI adoption.   Alec Crawford is Founder & CEO of Artificial Intelligence Risk, Inc., a company that accelerates enterprise Gen AI adoption - safely. He has been working with AI since the 1980's when he built neutral networks from scratch for his Harvard senior thesis. He is a thought leader for Gen AI with a blog at aicrisk.com and podcast called AI Risk Reward. He has more than 30 years of experience on Wall Street with his last role being Partner and Chief Risk Officer for Investments at Lord Abbett. linkedin.com/in/aleccrawford Our Story Dedicated to shaping the future.   At AI Risk, Inc., we are dedicated to shaping the future of AI governance, risk management, and compliance. With AI poised to become a cornerstone of business operations, we recognize the need for software solutions that ensure its safety, reliability, and regulatory adherence. Learn more Our Journey ​ Founded in response to the burgeoning adoption of AI without proper safeguards, AI Risk, Inc. seeks to pioneer a new era of responsible AI usage. Our platform, AIR GRCC, empowers companies to manage AI effectively, mitigating risks and ensuring regulatory compliance across all AI models. ​ Why Choose AI Risk, Inc.? ​ Comprehensive Solutions: We offer an all-encompassing platform for AI governance, risk management, regulatory compliance, and cybersecurity. Expertise: With extensive experience across industries and global regulations, we provide tailored solutions to meet diverse business needs. Futureproofing: As AI regulations evolve, our platform remains updated and adaptable, ensuring businesses stay ahead of compliance requirements. Cybersecurity Focus: Recognizing the unique challenges of AI cybersecurity, we provide cutting-edge solutions to protect against threats and ensure data integrity. ​​ Get Started with AI Risk, Inc. ​ Whether you're a large corporation or a budding startup, AI Risk, Inc. is your partner in navigating the complexities of AI implementation securely and responsibly. Join us in shaping a future where AI drives innovation without compromising integrity or security.

Cloud Security Podcast
Securing AI Applications in the Cloud

Cloud Security Podcast

Play Episode Listen Later Mar 6, 2025 45:27


What does it take to secure AI-based applications in the cloud? In this episode, host Ashish Rajan sits down with Bar-el Tayouri, Head of Mend AI at Mend.io, to dive deep into the evolving world of AI security. From uncovering the hidden dangers of shadow AI to understanding the layers of an AI Bill of Materials (AIBOM), Bar-el breaks down the complexities of securing AI-driven systems. Learn about the risks of malicious models, the importance of red teaming, and how to balance innovation with security in a dynamic AI landscape.What is an AIBOM and why it mattersThe stages of AI adoption: experimentation to optimizationShadow AI: A factor of 10 more than you thinkPractical strategies for pre- and post-deployment securityThe future of AI security with agent swarms and beyondGuest Socials: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Bar-El's LinkedinPodcast Twitter - ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠@CloudSecPod⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security Podcast- Youtube⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠- ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security Newsletter ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠- ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Cloud Security BootCamp⁠⁠⁠⁠⁠⁠⁠⁠⁠If you are interested in AI Cybersecurity, you can check out our sister podcast -⁠⁠⁠⁠⁠⁠⁠⁠⁠ AI Cybersecurity PodcastQuestions asked:(00:00) Introduction(02:24) A bit about Bar-el(03:32) What is AIBOM?(12:58) What is an embedding model?(16:12) What should Leaders have in their AI Security Strategy?(19:00) Whats different about the AI Security Landscape?(23:50) Challenges with integrating security into AI based Applications(25:33) Has AI solved the disconnect between Security and Developers(28:39) Risk framework for AI Security(32:26) Dealing with threats for current AI Applications in production(36:51) Future of AI Security(41:24) The Fun Section

Convergence
Delegate Effectively and Safely to AI - As an Expert Consultant or a Dutiful Intern

Convergence

Play Episode Listen Later Mar 5, 2025 53:23


Most people are barely scratching the surface of what generative AI can do. While some fear it will replace their jobs, others dismiss it as a passing trend—but both extremes miss the point. In this episode, Ashok Sivanand breaks down the real opportunity AI presents: not as a replacement for human judgment, but as a powerful tool that can act as both a dutiful intern and an expert consultant. Learn how to integrate AI into your daily work, from automating tedious tasks to sharpening your strategic thinking, all while staying in control. Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge. Inside the episode... Why so few people are using generative AI daily—and why that needs to change The two key roles AI can play: the intern and the consultant How AI can help professionals streamline research, analysis, and decision-making Practical prompts and frameworks for getting the most out of AI tools The dangers of "AI autopilot" and why staying in the driver's seat is critical Security and privacy concerns: What every AI user should know The best AI tools for different use cases—beyond just ChatGPT How companies can encourage AI adoption without creating unnecessary friction Mentioned in this episode AI Tools: ChatGPT, Claude, Perplexity, Gemini, Copilot, Grok Amazon's six-page memo template for effective decision-making: https://medium.com/@info_14390/the-ultimate-guide-to-amazons-6-pager-memo-method-c4b683441593 Ready Signal for external market factor analysis: https://www.readysignal.com/ AI prompting frameworks from Geoff Woods of AI Leadership: https://www.youtube.com/watch?v=HToY8gDTk6E Andrej Karpathy's Deep Dive into LLMs: https://www.youtube.com/watch?v=7xTGNNLPyMI  Books by Carmine Gallo: The Presentation Secrets of Steve Jobs & Talk Like TED: https://www.amazon.com/Presentation-Secrets-Steve-Jobs-Insanely/dp/1491514310 Subscribe to the Convergence podcast wherever you get podcasts—including video episodes on YouTube at youtube.com/@convergencefmpodcast Learn something? Give the podcast a 5-star review and like the episode on YouTube. It's how the show grows. Follow the Pod Linkedin: https://www.linkedin.com/company/convergence-podcast/ X: https://twitter.com/podconvergence Instagram: @podconvergence

Risk Management Show
AI Security Risks - what every Risk Manager Must Know with Dr. Peter Garraghan

Risk Management Show

Play Episode Listen Later Mar 5, 2025 25:54


In this episode of the Risk Management Show podcast, we explore AI Security Risks and what every risk manager must know. Dr. Peter Garraghan, CEO and co-founder of Mind Guard and a professor of computer science at Lancaster University, shares his expertise on managing the evolving threat landscape in AI. With over €11M in research funding and 60+ published papers, he reveals why traditional cybersecurity tools often fail to address AI-specific vulnerabilities and how organizations can safely adopt AI while mitigating risks. We discuss AI's role in Risk Management, Cyber Security, and Sustainability, and provide actionable insights for Chief Risk Officers and compliance professionals. Dr. Garraghan outlines practical steps for minimizing risks, aligning AI with regulatory frameworks like GDPR, and leveraging tools like ISO 42001 and the EU AI Act. He also breaks down misconceptions about AI and its potential impact on businesses and society. If you want to be our guest or suggest a guest, send your email to info@globalriskconsult.com with the subject line "Podcast Guest Inquiry." Don't miss this essential conversation for anyone navigating AI and risk management!

ITSPmagazine | Technology. Cybersecurity. Society
Hackers, Policy, and the Future of Cybersecurity: Inside The Hackers' Almanack from DEF CON and the Franklin Project | A Conversation with Jake Braun | Redefining CyberSecurity with Sean Martin

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Mar 3, 2025 40:32


⬥GUEST⬥Jake Braun, Acting Principal Deputy National Cyber Director, The White House | On LinkedIn: https://www.linkedin.com/in/jake-braun-77372539/⬥HOST⬥Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martin⬥EPISODE NOTES⬥Cybersecurity is often framed as a battle between attackers and defenders, but what happens when hackers take on a different role—one of informing policy, protecting critical infrastructure, and even saving lives? That's the focus of the latest Redefining Cybersecurity podcast episode, where host Sean Martin speaks with Jake Braun, former Acting Principal Deputy National Cyber Director at the White House and current Executive Director of the Cyber Policy Initiative at the University of Chicago.Braun discusses The Hackers' Almanack, a project developed in partnership with DEF CON and the Franklin Project to document key cybersecurity findings that policymakers, industry leaders, and technologists should be aware of. This initiative captures some of the most pressing security challenges emerging from DEF CON's research community and translates them into actionable insights that could drive meaningful policy change.DEF CON, The Hackers' Almanack, and the Franklin ProjectDEF CON, one of the world's largest hacker conferences, brings together tens of thousands of security researchers each year. While the event is known for its groundbreaking technical discoveries, Braun explains that too often, these findings fail to make their way into the hands of policymakers who need them most. That's why The Hackers' Almanack was created—to serve as a bridge between the security research community and decision-makers who shape regulations and national security strategies.This effort is an extension of the Franklin Project, named after Benjamin Franklin, who embodied the intersection of science and civics. The initiative includes not only The Hackers' Almanack but also a volunteer-driven cybersecurity support network for under-resourced water utilities, a critical infrastructure sector under increasing attack.Ransomware: Hackers Filling the Gaps Where Governments Have StruggledOne of the most striking sections of The Hackers' Almanack examines the state of ransomware. Despite significant government efforts to disrupt ransomware groups, attacks remain as damaging as ever. Braun highlights the work of security researcher Vangelis Stykas, who successfully infiltrated ransomware gangs—not to attack them, but to gather intelligence and warn potential victims before they were hit.While governments have long opposed private-sector hacking in retaliation against cybercriminals, Braun raises an important question: Should independent security researchers be allowed to operate in this space if they can help prevent attacks? This isn't just about hacktivism—it's about whether traditional methods of law enforcement and national security are enough to combat the ransomware crisis.AI Security: No Standards, No Rules, Just ChaosArtificial intelligence is dominating conversations in cybersecurity, but according to Braun, the industry still hasn't figured out how to secure AI effectively. DEF CON's AI Village, which has been studying AI security for years, made a bold statement: AI red teaming, as it exists today, lacks clear definitions and standards. Companies are selling AI security assessments with no universally accepted benchmarks, leaving buyers to wonder what they're really getting.Braun argues that industry leaders, academia, and government must quickly come together to define what AI security actually means. Are we testing AI applications? The algorithms? The data sets? Without clarity, AI red teaming risks becoming little more than a marketing term, rather than a meaningful security practice.Biohacking: The Blurry Line Between Innovation and BioterrorismPerhaps the most controversial section of The Hackers' Almanack explores biohacking and its potential risks. Researchers at the Four Thieves Vinegar Collective demonstrated how AI and 3D printing could allow individuals to manufacture vaccines and medical devices at home—at a fraction of the cost of commercial options. While this raises exciting possibilities for healthcare accessibility, it also raises serious regulatory and ethical concerns.Current laws classify unauthorized vaccine production as bioterrorism, but Braun questions whether that definition should evolve. If underserved communities have no access to life-saving treatments, should they be allowed to manufacture their own? And if so, how can regulators ensure safety without stifling innovation?A Call to ActionThe Hackers' Almanack isn't just a technical report—it's a call for governments, industry leaders, and the security community to rethink how we approach cybersecurity, technology policy, and even healthcare. Braun and his team at the Franklin Project are actively recruiting volunteers, particularly those with cybersecurity expertise, to help protect vulnerable infrastructure like water utilities.For policymakers, the message is clear: Pay attention to what the hacker community is discovering. These findings aren't theoretical—they impact national security, public safety, and technological advancement in ways that require immediate action.Want to learn more? Listen to the full episode and explore The Hackers' Almanack to see how cybersecurity research is shaping the future.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥The DEF CON 32 Hackers' Almanack: https://thehackersalmanack.com/defcon32-hackers-almanackDEF CON Franklin Project: https://defconfranklin.com/ | On LinkedIn: https://www.linkedin.com/company/def-con-franklin/DEF CON: https://defcon.org/Cyber Policy Initiative: https://harris.uchicago.edu/research-impact/initiatives-partnerships/cyber-policy-initiative⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity: 

The Lawfare Podcast
Lawfare Daily: Alexandra Reeve Givens, Courtney Lang, and Nema Milaninia on the Paris AI Summit and the Pivot to AI Security

The Lawfare Podcast

Play Episode Listen Later Feb 25, 2025 48:17


Alexandra Reeve Givens, CEO of the Center for Democracy & Technology; Courtney Lang, Vice President of Policy for Trust, Data, and Technology at ITI and a Non-Resident Senior Fellow at the Atlantic Council GeoTech Center; and Nema Milaninia, a partner on the Special Matters & Government Investigations team at King & Spalding, join Kevin Frazier, Contributing Editor at Lawfare and Adjunct Professor at Delaware Law, to discuss the Paris AI Action Summit and whether it marks a formal pivot away from AI safety to AI security and, if so, what an embrace of AI security means for domestic and international AI governance.We value your feedback! Help us improve by sharing your thoughts at lawfaremedia.org/survey. Your input ensures that we deliver what matters most to you. Thank you for your support—and, as always, for listening!To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

ITSPmagazine | Technology. Cybersecurity. Society
The 2025 OWASP Top 10 for LLMs: What's Changed and Why It Matters | A Conversation with Sandy Dunn and Rock Lambros | Redefining CyberSecurity with Sean Martin

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Feb 13, 2025 47:58


⬥GUESTS⬥Sandy Dunn, Consultant Artificial Intelligence & Cybersecurity, Adjunct Professor Institute for Pervasive Security Boise State University | On Linkedin: https://www.linkedin.com/in/sandydunnciso/Rock Lambros, CEO and founder of RockCyber | On LinkedIn | https://www.linkedin.com/in/rocklambros/Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martinView This Show's Sponsors⬥EPISODE NOTES⬥The rise of large language models (LLMs) has reshaped industries, bringing both opportunities and risks. The latest OWASP Top 10 for LLMs aims to help organizations understand and mitigate these risks. In a recent episode of Redefining Cybersecurity, host Sean Martin sat down with Sandy Dunn and Rock Lambros to discuss the latest updates to this essential security framework.The OWASP Top 10 for LLMs: What It Is and Why It MattersOWASP has long been a trusted source for security best practices, and its LLM-specific Top 10 is designed to guide organizations in identifying and addressing key vulnerabilities in AI-driven applications. This initiative has rapidly gained traction, becoming a reference point for AI security governance, testing, and implementation. Organizations developing or integrating AI solutions are now evaluating their security posture against this list, ensuring safer deployment of LLM technologies.Key Updates for 2025The 2025 iteration of the OWASP Top 10 for LLMs introduces refinements and new focus areas based on industry feedback. Some categories have been consolidated for clarity, while new risks have been added to reflect emerging threats.• System Prompt Leakage (New) – Attackers may manipulate LLMs to extract system prompts, potentially revealing sensitive operational instructions and security mechanisms.• Vector and Embedding Risks (New) – Security concerns around vector databases and embeddings, which can lead to unauthorized data exposure or manipulation.Other notable changes include reordering certain risks based on real-world impact. Prompt Injection remains the top concern, while Sensitive Information Disclosure and Supply Chain Vulnerabilities have been elevated in priority.The Challenge of AI SecurityUnlike traditional software vulnerabilities, LLMs introduce non-deterministic behavior, making security testing more complex. Jailbreaking attacks—where adversaries bypass system safeguards through manipulative prompts—remain a persistent issue. Prompt injection attacks, where unauthorized instructions are inserted to manipulate output, are also difficult to fully eliminate.As Dunn explains, “There's no absolute fix. It's an architecture issue. Until we fundamentally redesign how we build LLMs, there will always be risk.”Beyond Compliance: A Holistic Approach to AI SecurityBoth Dunn and Lambros emphasize that organizations need to integrate AI security into their overall IT and cybersecurity strategy, rather than treating it as a separate issue. AI governance, supply chain integrity, and operational resilience must all be considered.Lambros highlights the importance of risk management over rigid compliance: “Organizations have to balance innovation with security. You don't have to lock everything down, but you need to understand where your vulnerabilities are and how they impact your business.”Real-World Impact and AdoptionThe OWASP Top 10 for LLMs has already been widely adopted, with companies incorporating it into their security frameworks. It has been translated into multiple languages and is serving as a global benchmark for AI security best practices.Additionally, initiatives like HackerPrompt 2.0 are helping security professionals stress-test AI models in real-world scenarios. OWASP is also facilitating industry collaboration through working groups on AI governance, threat intelligence, and agentic AI security.How to Get InvolvedFor those interested in contributing, OWASP provides open-access resources and welcomes participants to its AI security initiatives. Anyone can join the discussion, whether as an observer or an active contributor.As AI becomes more ingrained in business and society, frameworks like the OWASP Top 10 for LLMs are essential for guiding responsible innovation. To learn more, listen to the full episode and explore OWASP's latest AI security resources.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥OWASP GenAI: https://genai.owasp.org/Link to the 2025 version of the Top 10 for LLM Applications: https://genai.owasp.org/llm-top-10/Getting Involved: https://genai.owasp.org/contribute/OWASP LLM & Gen AI Security Summit at RSAC 2025: https://genai.owasp.org/event/rsa-conference-2025/AI Threat Mind Map: https://github.com/subzer0girl2/AI-Threat-Mind-MapGuide for Preparing and Responding to Deepfake Events: https://genai.owasp.org/resource/guide-for-preparing-and-responding-to-deepfake-events/AI Security Solution Cheat Sheet Q1-2025:https://genai.owasp.org/resource/ai-security-solution-cheat-sheet-q1-2025/HackAPrompt 2.0: https://www.hackaprompt.com/⬥ADDITIONAL INFORMATION⬥✨ To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcastRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist on YouTube: