POPULARITY
The RSA Conference has long served as a meeting point for innovation and collaboration in cybersecurity—and in this pre-RSAC episode, ITSPmagazine co-founders Marco Ciappelli and Sean Martin welcome Akamai's Rupesh Chokshi to the conversation. With RSAC 2025 on the horizon, they discuss Akamai's presence at the event and dig into the challenges and opportunities surrounding AI, threat intelligence, and enterprise security.Chokshi, who leads Akamai's Application Security business, describes a landscape marked by explosive growth in web and API attacks—and a parallel shift as enterprises embrace generative AI. The double-edged nature of AI is central to the discussion: while it offers breakthrough productivity and automation, it also creates new vulnerabilities. Akamai's dual focus, says Chokshi, is both using AI to strengthen defenses and securing AI-powered applications themselves.The conversation touches on the scale and sophistication of modern threats, including an eye-opening stat: Akamai is now tracking over 500 million large language model (LLM)-driven scraping requests per day. As these threats extend from e-commerce to healthcare and beyond, Chokshi emphasizes the need for layered defense strategies and real-time adaptability.Ciappelli brings a sociological lens to the AI discussion, noting the hype-to-reality shift the industry is experiencing. “We're no longer asking if AI will change the game,” he suggests. “We're asking how to implement it responsibly—and how to protect it.”At RSAC 2025, Akamai will showcase a range of innovations, including updates to its Guardicore platform and new App & API Protection Hybrid solutions. Their booth (6245) will feature interactive demos, theater sessions, and one-on-one briefings. The Akamai team will also release a new edition of their State of the Internet report, packed with actionable threat data and insights.The episode closes with a reminder: in a world that's both accelerating and fragmenting, cybersecurity must serve not just as a barrier—but as a catalyst. “Security,” says Chokshi, “has to enable innovation, not hinder it.”⸻Keywords: RSAC 2025, Akamai, cybersecurity, generative AI, API protection, web attacks, application security, LLM scraping, Guardicore, State of the Internet report, Zero Trust, hybrid digital world, enterprise resilience, AI security, threat intelligence, prompt injection, data privacy, RSA Conference, Sean Martin, Marco Ciappelli______________________Guest: Rupesh Chokshi, SVP & GM, Akamai https://www.linkedin.com/in/rupeshchokshi/Hosts:Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martinMarco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast & Audio Signals Podcast | On ITSPmagazine: https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli____________________________This Episode's SponsorsAKAMAI:https://itspm.ag/akamailbwc____________________________ResourcesLearn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsa-conference-usa-2025-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverageRupesh Chokshi Session at RSAC 2025The New Attack Frontier: Research Shows Apps & APIs Are the Targets - [PART1-W09]____________________________Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageTo see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcastTo see and hear more Redefining Society stories on ITSPmagazine, visit:https://www.itspmagazine.com/redefining-society-podcastWant to tell your Brand Story Briefing as part of our event coverage? Learn More
In this episode, Tobi talks with Georg Zoeller, Co-Founder of the Centre for AI Leadership and mercenaries.ai, about the turbulent landscape of AI. Georg, with his background at Meta and deep expertise in AI strategy, cuts through the hype surrounding AI's capabilities and economic impact. They discuss the 'singularity' we're already in, driven by rapid, open-source AI development, and why this makes future predictions impossible. Georg argues that software engineering is being commoditized due to the vast amount of training data available (Stack Overflow, GitHub), making AI adept at code generation but raising profound security concerns like prompt injection. Explore: - Why Georg believes blindly adopting AI early is a 'terrible mistake' for most companies. - The fundamental security flaws in LLMs (prompt injection) and why they're currently unsolvable for open input spaces. - The questionable economics of AI: high costs, self-cannibalizing business models, and the reliance on performative fundraising. - How AI tools impact engineer productivity, shifting the bottleneck to decision-making and validation. - The geopolitical risks and diminishing trust associated with Big Tech's AI dominance. - Actionable advice for CTOs: Invest in understanding, focus on governance beyond the tech team, and consider the strategic value of local/open-source alternatives.
Drex examines The alarming rise of intimate deepfakes targeting primarily women and children, with 18 states currently offering no legal protection against these digital sex crimes. Various state legislative efforts including Montana's focus on combating political deepfakes, particularly within 60 days of elections; and OpenAI's first investment in cybersecurity through a $43 million funding round for Adaptive Security, a company specializing in training organizations to recognize deepfake attacks and phishing threats.Remember, Stay a Little Paranoid X: This Week Health LinkedIn: This Week Health Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer
Send us a textI'd expected this to be an AI free show but, let's face it, that just isn't likely in 2025 but the good news is that in the show Pete Major, vice president of fintech services at CUSO MDT, offers concrete AI use cases at work in MDT and he also, importantly, offers cautions about security and the leading AI tools.In a rush to stay abreast of the fast moving AI universe are some credit unions losing sight of the need to be very sure of the security of the tools they use? Maybe.Major provides tips on how to stay secure while still using AI tools..But there's a lot more in this show.We talk for instance about the need of CUs to keep security in mind when using any technology tools. If there are flaws - and there have been some doozies in recent years - it's the credit union that will be saddled with the bulk of the blame.On a happier note Major discusses a suite of tools for small business members at credit unions - and, he says, demand for the tools is very hot. Is offering good tools a path to winning more business members? Just maybe.We close the show pondering what the developments in Washington DC - anything from an end to credit union tax exemption to an end to NCUA - might mean for credit unions and also the rising CU interest in merging.There's a lot to unpack in this show. Listen up.Like what you are hearing? Find out how you can help sponsor this podcast here. Very affordable sponsorship packages are available. Email rjmcgarvey@gmail.com And like this podcast on whatever service you use to stream it. That matters. Find out more about CU2.0 and the digital transformation of credit unions here. It's a journey every credit union needs to take. Pronto
Are you ready to supercharge your nonprofit's digital marketing efforts? In this episode, I sit down with Steven Lewis, a seasoned marketer with 30 years of experience in copywriting and technology, to explore the game-changing potential of ChatGPT for small to medium-sized nonprofits. We dive deep into how this powerful AI tool can become your 24/7 marketing consultant, helping you craft compelling content, conduct market research, and even run virtual focus groups – all without breaking the bank. Unlocking ChatGPT's Potential for Nonprofits Steven shares invaluable insights on: - How to use ChatGPT as a thought partner and consultant - Crafting the perfect prompts to get the results you need - Developing a unique tone of voice for your organization - Creating synthetic personas for risk-free testing and feedback Key Takeaways: - ChatGPT isn't just for content creation – it's a versatile tool for strategy and research - Learn how to have meaningful “conversations” with the AI to refine your marketing approach - Discover how to leverage ChatGPT's vast knowledge base to understand your audience better - Find out how to use synthetic personas to test ideas without risking donor relationships Practical Applications for Your Nonprofit - Use ChatGPT to develop and refine your organization's tone of voice - Create virtual focus groups to test new ideas and campaigns - Generate data-driven insights to support your marketing decisions - Streamline your content creation process while maintaining authenticity This episode is packed with actionable advice for nonprofit leaders looking to make the most of AI technology in their digital marketing efforts. Whether you're a seasoned marketer or new to the world of AI, you'll find valuable strategies to elevate your nonprofit's online presence. Ready to revolutionize your nonprofit's digital marketing strategy? Listen to the full episode and discover how ChatGPT can become your secret weapon in reaching and engaging your audience more effectively than ever before. Want to skip ahead? Here are key moments: 09:30 Understanding ChatGPT: The Basics and Beyond ChatGPT is a large language model trained on vast amounts of data. Providing context helps shape ChatGPT's outputs. There is a lot of potential for ChatGPT to be a thought partner and consultant for businesses of all sizes. 24:34 Addressing Security Concerns and Developing Tone of Voice Be sure to balance proprietary information protection with leveraging ChatGPT's capabilities. Creating your tone of voice will help your prompts become even more effective. 35:57 Advanced ChatGPT Techniques: Synthetic Personas and Focus Groups Use ChatGPT to create synthetic personas for focus groups. This technique allows organizations to test ideas and content safely without risking real donor relationships. The approach provides valuable insights and data for decision-making. Don't miss out on this opportunity to learn how AI can transform your nonprofit's digital marketing efforts. Tune in now and take the first step towards a more efficient, effective, and data-driven marketing strategy. Steven Lewis Steven Lewis is a marketer with 30 years of experience in copywriting and technology. His course Make ChatGPT Your CMO shows business owners how to turn ChatGPT into a 24/7 marketing consultant that gives expert advice tailored to their business. Learn more at https://taleist.agency/ Connect with us on LinkedIn: https://www.linkedin.com/company/the-first-click Learn more about The First Click: https://thefirstclick.net Schedule a Digital Marketing Therapy Session: https://thefirstclick.net/officehours
In this episode, we discuss What's new in the AI universe and the XZ Backdoor
In this episode we have an intriguing conversation with Jim, and Jerry. We discuss the challenges and innovative solutions in the realm of artificial intelligence (AI) and software development. Discover how this innovative approach opens doors for professionals from various fields to contribute to AI and no-code development efforts. Tune in to this captivating episode and learn how these cutting-edge technologies are transforming the landscape of business and technology. Don't miss out on this episode of The Daily Windup, where you'll find insights, inspiration, and practical applications in under 10 minutes!
In this episode of Campus Technology Insider Podcast Shorts, host Rhea Kelly highlights top stories in education technology, including Anthropic's launch of Claude for Education and Microsoft's Security Copilot enhancement with 11 AI-powered security agents. Additionally, the Digital Education Council's comprehensive AI literacy framework aims to empower higher education communities with essential AI competencies. For more details on these stories, visit campustechnology.com. 00:00 Introduction and Host Welcome 00:17 Anthropic's Claude for Education 00:47 Microsoft's AI-Powered Cybersecurity Expansion 01:25 Digital Education Council's AI Literacy Framework 02:02 Conclusion and Further Resources Source links: Anthropic Launches Claude for Education Microsoft Adds New Agentic AI Tools to Security Copilot Digital Education Council Defines 5 Dimensions of AI Literacy Campus Technology Insider Podcast Shorts are curated by humans and narrated by AI.
Michael Duffy, President Donald Trump's nominee for Undersecretary of Defense for Acquisition and Sustainment, has committed to reviewing the Pentagon's Cybersecurity Maturity Model Certification (CMMC) 2.0 if confirmed. This revamped program, effective since December, mandates that defense contractors handling controlled, unclassified information comply with specific cybersecurity standards to qualify for Department of Defense contracts. Concerns have been raised about the burden these regulations may impose on smaller firms, with a report indicating that over 50% of respondents felt unprepared for the program's requirements. Duffy aims to balance security needs with regulatory burdens, recognizing the vulnerability of small and medium-sized businesses in the face of cyber threats.In addition to the CMMC developments, the General Services Administration (GSA) is set to unveil significant changes to the Federal Risk Authorization Management Program (FedRAMP). The new plan for 2025 focuses on establishing standards and policies rather than approving cloud authorization packages, which previously extended the process for up to 11 months. The GSA intends to automate at least 80% of current requirements, allowing cloud service providers to demonstrate compliance more efficiently, while reducing reliance on external support services.Across the Atlantic, the UK government has announced a comprehensive cybersecurity and resilience bill aimed at strengthening defenses against cyber threats. This legislation will bring more firms under regulatory oversight, specifically targeting managed service providers (MSPs) that provide core IT services and have extensive access to client systems. The proposed regulations will enhance incident reporting requirements and empower the Information Commissioner's Office to proactively identify and mitigate cyber risks, setting higher expectations for cybersecurity practices among MSPs.The episode also discusses the implications of recent developments in AI and cybersecurity. With companies like SolarWinds, CloudFlare, and Red Hat enhancing their offerings, the integration of AI into business operations raises concerns about security and compliance. The ease of generating fake documents using AI tools poses a significant risk to industries reliant on document verification. As the landscape evolves, IT service providers must adapt by advising clients on updated compliance practices and strengthening their cybersecurity measures to address these emerging threats. Four things to know today 00:00 New Regulatory Shifts for MSPs: CMMC 2.0, FedRAMP Overhaul, and UK Cyber Security Bill05:21 CISA Cuts and Signal on Gov Devices: What Could Go Wrong?08:15 AI Solutions Everywhere! SolarWinds, Cloudflare, and Red Hat Go All In11:37 OpenAI's Image Generation Capabilities Raise Fraud Worries: How Businesses Should Respond Supported by: https://www.huntress.com/mspradio/https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship Join Dave April 22nd to learn about Marketing in the AI Era. Signup here: https://hubs.la/Q03dwWqg0 All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on:LinkedIn: https://www.linkedin.com/company/28908079/YouTube: https://youtube.com/mspradio/Facebook: https://www.facebook.com/mspradionews/Instagram: https://www.instagram.com/mspradio/TikTok: https://www.tiktok.com/@businessoftechBluesky: https://bsky.app/profile/businessof.tech
Episode SummaryIn this episode of The Secure Developer, host Danny Allan sits down with Alex Salazar, founder and CEO of Arcade, to discuss the evolving landscape of authentication and authorization in an AI-driven world. Alex shares insights on the shift from traditional front-door security to back-end agent interactions, the challenges of securing AI-driven agents, and the role of identity in modern security frameworks. The conversation delves into the future of AI, agentic workflows, and how organizations can navigate authentication, authorization, and security in this new era.Show NotesDanny Allan welcomes Alex Salazar, an experienced security leader and CEO of Arcade, to explore the transformation of authentication and authorization in AI-powered environments. Drawing from his experience at Okta, Stormpath, and venture capital, Alex provides a unique perspective on securing interactions between AI agents and authenticated services.Key topics discussed include:The Evolution of Authentication & Authorization: Traditional models focused on front-door access (user logins, SSO), whereas AI-driven agents require secure back-end interactions.Agentic AI and Security Risks: How AI agents interact with services on behalf of users, and why identity becomes the new perimeter in security.OAuth and Identity Challenges: Adapting OAuth for AI agents, ensuring least-privilege access, and maintaining security compliance.AI Hallucinations & Risk Management: Strategies for mitigating LLM hallucinations, ensuring accuracy, and maintaining human oversight.The Future of AI & Agentic Workflows: Predictions on how AI will continue to evolve, the rise of specialized AI models, and the intersection of AI and physical automation.Alex and Danny also discuss the broader impact of AI on developer productivity, with insights into how companies can leverage AI responsibly to boost efficiency without compromising security.LinksArcade.dev - Make AI Actually Do ThingsOkta - IdentityOAuth - Authorization ProtocolLangChain - Applications that Can ReasonHugging Face - The AI Community Building the FutureSnyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn
Get your FREE Cybersecurity Salary Guide: https://www.infosecinstitute.com/form/cybersecurity-salary-guide-podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcastRoss Young, CISO in residence at Team8, joins this week's Cyber Work episode to share insights from his fascinating career journey from the CIA to cybersecurity leadership. With over a decade of experience across intelligence agencies and major companies, Young discusses the rapidly evolving AI security landscape, predicts how AI will transform security roles and offers valuable career advice for cybersecurity professionals at all levels. Learn how security professionals can stay relevant in an AI-driven future and why continuous learning is non-negotiable in this field.00:00 Intro00:27 Ross Young's journey in cybersecurity01:18 Cybersecurity job market insights02:12 Ross Young's educational path07:38 Experience at the CIA10:38 Transition to the private sector13:15 Current role at Team818:30 Daily life of a CISO in residence22:12 Impact of AI on cybersecurity25:23 Identifying phishing emails25:49 New risks with AI models27:08 Exploiting AI for malicious purposes30:55 Defending against AI exploits32:24 AI in security automation33:30 Common mistakes in AI implementation36:59 Future of cybersecurity with AI43:18 Advice for security professionals46:17 Career advice – View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcastAbout InfosecInfosec's mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ's security awareness training. Learn more at infosecinstitute.com.
Guest: Alex Polyakov, CEO at Adversa AI Topics: Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client? Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now? What trips most clients, classic security mistakes in AI systems or AI-specific mistakes? Are there truly new mistakes in AI systems or are they old mistakes in new clothing? I know it is not your job to fix it, but much of this is unfixable, right? Is it a good idea to use AI to secure AI? Resources: EP84 How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far AI Red Teaming Reasoning LLM US vs China: Jailbreak Deepseek, Qwen, O1, O3, Claude, Kimi Adversa AI blog Oops! 5 serious gen AI security mistakes to avoid Generative AI Fast Followership: Avoid These First Adopter Security Missteps
Sahil Agarwal, co-founder and CEO of Enkrypt.ai, discusses the critical importance of security and compliance in the realm of artificial intelligence (AI) models. His company focuses on helping enterprises adopt generative AI while managing the associated risks. Agarwal explains that the mission of Enkrypt.ai has evolved from developing encryption algorithms to creating comprehensive solutions that provide ongoing management and monitoring of AI applications. This shift aims to ensure that businesses can safely integrate AI technologies without exposing themselves to brand, legal, or security risks.Agarwal highlights the dual approach of Enkrypt.ai, which includes an initial risk assessment followed by continuous monitoring and management. The risk assessment involves simulating attacks on AI systems to identify vulnerabilities, while the ongoing management ensures that any identified risks are mitigated effectively. This iterative process creates a feedback loop that enhances the security posture of generative applications, allowing businesses to operate with greater confidence.The conversation also touches on the economic challenges surrounding generative AI, where many companies invest heavily in projects that struggle to reach production due to unresolved security and compliance issues. Agarwal notes that while there is a democratization of AI technology, the real value lies in how enterprises apply these models. He emphasizes the need for businesses to adopt a proactive approach to security, particularly as they scale their use of AI agents and chatbots.Finally, Agarwal addresses the pressing issue of data leakage, particularly when using third-party AI models. He advises organizations to keep sensitive data on the client side and to choose trusted solutions to mitigate risks. By implementing robust security measures and maintaining a vigilant posture, businesses can harness the power of AI while safeguarding their proprietary information. All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on:LinkedIn: https://www.linkedin.com/company/28908079/YouTube: https://youtube.com/mspradio/Facebook: https://www.facebook.com/mspradionews/Instagram: https://www.instagram.com/mspradio/TikTok: https://www.tiktok.com/@businessoftechBluesky: https://bsky.app/profile/businessof.tech
Episode 50 | AI Agents in ActionThe Big Themes:The Rise of 'Agent Ratios': As companies roll out more AI agents, the "agent-to-human ratio" could become a useful AI maturity indicator. Currently, we're seeing early adoption — with Oracle reporting that only 5–10% of its customers have put agents into production. These early use cases focus on low-risk, easily-automated tasks. It's a cautious start, but the trajectory is upward. Bonnie points out that once the groundwork is laid, the pace of adoption will likely accelerate, yielding increased productivity.Four Smart Questions for Evaluating Enterprise AI Initiatives: To help customers decide whether to adopt AI capabilities, Bonnie offers four key questions: (1) Is it available to me? Not all customers have access to AI features; infrastructure matters. (2) Do I need or want it? Weigh the risk-reward tradeoff, especially in terms of time and internal resources. (3) Is my data protected? Ensure your vendor offers strong governance and compliance support. (4) What is the time to value?Knowing When to Leap and When to Wait on AI Adoption: Should companies wait or dive into AI now? Her advice: it depends. If your organization is in a fast-moving, innovation-driven sector, early adoption is essential to stay competitive. Waiting could mean falling behind. But for highly regulated industries or companies unused to rapid tech change, a cautious approach makes sense.
This episode is a recording of a live interview held on stage at Blu Ventures' Cyber Venture Forum in February. A huge shoutout and thank you to the Blu Ventures team for putting together an awesome event. Bricklayer is building an AI-based agent to assist with security operations workflows. Before Bricklayer, Adam founded ThreatConnect which he led for over a decade. In the conversation we discuss his learnings from his experience at ThreatConnect, acquiring vs. building a new capability, and how he thinks about competition in the AI SOC space.Website: bricklayer.aiSponsor: VulnCheck
The security automation landscape is undergoing a revolutionary transformation as AI reasoning capabilities replace traditional rule-based playbooks. In this episode of Detection at Scale, Oliver Friedrichs, Founder & CEO of Pangea, helps Jack unpack how this shift democratizes advanced threat detection beyond Fortune 500 companies while simultaneously introducing an alarming new attack surface. Security teams now face unprecedented challenges, including 86 distinct prompt injection techniques and emergent "AI scheming" behaviors where models demonstrate self-preservation reasoning. Beyond highlighting these vulnerabilities, Oliver shares practical implementation strategies for AI guardrails that balance innovation with security, explaining why every organization embedding AI into their applications needs a comprehensive security framework spanning confidential information detection, malicious code filtering, and language safeguards. Topics discussed: The critical "read versus write" framework for security automation adoption: organizations consistently authorized full automation for investigative processes but required human oversight for remediation actions that changed system states. Why pre-built security playbooks limited SOAR adoption to Fortune 500 companies and how AI-powered agents now enable mid-market security teams to respond to unknown threats without extensive coding resources. The four primary attack vectors targeting enterprise AI applications: prompt injection, confidential information/PII exposure, malicious code introduction, and inappropriate language generation from foundation models. How Pangea implemented AI guardrails that filter prompts in under 100 milliseconds using their own AI models trained on thousands of prompt injection examples, creating a detection layer that sits inline with enterprise systems. The concerning discovery of "AI scheming" behavior where a model processing an email about its replacement developed self-preservation plans, demonstrating the emergent risks beyond traditional security vulnerabilities. Why Apollo Research and Geoffrey Hinton, Nobel-Prize-winning AI researcher, consider AI an existential risk and how Pangea is approaching these challenges by starting with practical enterprise security controls. Check out Pangea.com
Send us a textStruggling to secure AI in 2025? Join Joe and Invary CEO Jason Rogers as they unpack NSA-licensed tech, zero trust frameworks, and the future of cybersecurity. From satellite security to battling advanced threats, discover how Invary's cutting-edge solutions are reshaping the industry. Plus, hear Jason's startup journey and Joe's wild ride balancing a newborn with a PhD. Subscribe now for the latest cyber trends—don't miss this!Chapters00:00 Navigating Parenthood and Professional Life02:53 The Startup Mentality: Decision-Making and Adaptability06:13 Blending Technical Skills with Sales08:58 Background and Journey into Cybersecurity12:10 Establishing a Security Culture in Organizations14:51 Collaborating with Government Entities17:47 Understanding NSA Licensed Technology23:06 Understanding Application and Server Security25:01 Exploring Zero Trust Frameworks28:57 Bridging Government and Private Sector Security31:27 The Role of Security Professionals33:55 Innovations in Cybersecurity Technology38:05 Invariance in Security SystemsSupport the showFollow the Podcast on Social Media! Tesla Referral Code: https://ts.la/joseph675128 YouTube: https://www.youtube.com/@securityunfilteredpodcast Instagram: https://www.instagram.com/secunfpodcast/Twitter: https://twitter.com/SecUnfPodcast
Cybersecurity is evolving, and so is our podcast!
welcome to wall-e's tech briefing for thursday, march 13th! dive into today's top tech stories: anthropic's ceo calls for heightened security: dario amodei urges the u.s. government to increase protection against potential $100 million ai secret thefts, especially highlighting risks from china, advocating for collaboration with the ai sector. intel's new leadership: lip-bu tan takes over as ceo, aiming to refocus on engineering and customer accountability. his appointment boosts market confidence, evidenced by an 11% rise in after-hours trading. google deepmind's gemini robotics: launch of a suite of advanced ai models enhancing robotic interactions and versatility, along with the accessible gemini robotics-er model for further innovation in robotic control. wonder's strategic acquisition: expands by acquiring tastemade for $90 million, integrating diverse media content to align with its vision of becoming a mealtime solutions 'super app'. nvidia's gtc 2025 announced: highlights include ceo jensen huang's keynote on new gpu series and ai tech updates, focusing on the future of automotive, robotics, and ai innovations. stay tuned for tomorrow's tech updates!
Today's guest is Tomer Poran, Chief Evangelist and VP of Strategy at ActiveFence. ActiveFence is a technology company specializing in trust and safety solutions, helping platforms detect and prevent harmful content, malicious activity, and emerging threats online. Tomer joins today's podcast to explore the critical role of red teaming in AI safety and security. He breaks down the challenges enterprises face in deploying AI responsibly, the evolving nature of adversarial risks, and why organizations must adopt a proactive approach to testing AI systems. This episode is sponsored by ActiveFence. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
Dr. Jeff Esposito, Engineering Lead at Lenovo R&D, shares how his team is shaping the future of AI with innovations like the Hive Transformer and EdgeGuard. He emphasizes the importance of ethical innovation and building technologies that are intended to serve society's greater good. He also stresses the value of collective contributions and diverse perspectives in shaping a future where technology effectively addresses real-world challenges. Key Takeaways: AI's role in building smarter cities through Lenovo's collaborations with NVIDIA and other partners. How AI security is evolving with EdgeGuard and other cutting-edge protections. The role of hybrid AI in combining machine learning and symbolic logic for real-world applications. Corporate responsibility in AI development and the balance between open-source innovation and commercialization. Why diverse perspectives are essential in shaping AI that benefits everyone. Guest Bio: Dr. Jeff Esposito has over 40 patent submissions, with a long background in research and development at Dell, Microsoft, and Lenovo. He lectures on advanced technological development at various US government research labs, and believes that technology is at its best when serving the greater good and social justice. ---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte
The Deep Wealth Podcast - Extracting Your Business And Personal Deep Wealth
Send us a textUnlock Proven Strategies for a Lucrative Business Exit—Subscribe to The Deep Wealth Podcast Today
AI Security in High-Risk Sectors In a recent conversation, Alec and I dove into the critical role of AI security, especially in high-risk sectors like healthcare and banking. Alec stressed that AI must be secure and aligned with business strategies while ensuring governance, risk management, regulatory compliance, and cybersecurity remain top priorities. I couldn't agree more—AI in the wrong hands or without proper safeguards is a ticking time bomb. Sensitive data needs protection, and businesses must stay ahead of evolving regulations. We also touched on the growing need for private AI solutions, given the rising threats of cyberattacks like prompt injections. Cybersecurity and AI in Organizations Our discussion expanded into cybersecurity and AI adoption within organizations. Unvetted AI solutions pose significant risks, making internal development and continuous monitoring essential. Alec's company, Artificial Intelligence Risk, Inc., deploys private AI within clients' firewalls, reinforcing security through governance and compliance measures. One key takeaway? Awareness is everything. Many organizations jump into AI without securing their systems first. I was particularly interested in the “aha moments” Alec's clients experience when they see AI-driven security solutions in action. AI Governance and Confidentiality Concerns Alec shared a governance issue where a company implemented Microsoft Copilot—only to discover it unintentionally exposed confidential employee data. This highlighted a major concern: AI needs strict guardrails. Alec advocated for a “belt and suspenders” approach—limiting system access, assigning AI agents to specific groups, and avoiding over-reliance on super users who could inadvertently misuse AI. The lesson? AI governance isn't optional; it's a necessity. AI Applications in Call Centers AI's potential spans across industries, and call centers are a prime example. Alec described a client who leveraged AI to analyze 150,000 call transcripts, leading to a 30% reduction in call length and an additional 30% drop in overall call volume—all thanks to AI-driven website improvements. Beyond customer service, AI is making waves in investment research, analyzing earnings calls and regulatory filings. I even shared a fun hypothetical—using AI to predict the Toronto Blue Jays' performance—proving that AI's applications go beyond business into fields like sports analytics. AI Adoption, Security, and Privacy Wrapping up, Alec and I discussed the double-edged sword of AI adoption. While AI presents massive opportunities, it also comes with security, ethical, and privacy risks. Alec emphasized the need for strong leadership in AI implementation, ensuring data quality remains a top priority. I pointed out that the fear of missing out (FOMO) on AI can lead companies to make reckless decisions—often at the cost of security. Alec's company specializes in AI security solutions that safeguard against data breaches and attacks on Large Language Models, reinforcing the importance of a strategic, security-first approach to AI adoption. Alec Crawford is Founder & CEO of Artificial Intelligence Risk, Inc., a company that accelerates enterprise Gen AI adoption - safely. He has been working with AI since the 1980's when he built neutral networks from scratch for his Harvard senior thesis. He is a thought leader for Gen AI with a blog at aicrisk.com and podcast called AI Risk Reward. He has more than 30 years of experience on Wall Street with his last role being Partner and Chief Risk Officer for Investments at Lord Abbett. linkedin.com/in/aleccrawford Our Story Dedicated to shaping the future. At AI Risk, Inc., we are dedicated to shaping the future of AI governance, risk management, and compliance. With AI poised to become a cornerstone of business operations, we recognize the need for software solutions that ensure its safety, reliability, and regulatory adherence. Learn more Our Journey Founded in response to the burgeoning adoption of AI without proper safeguards, AI Risk, Inc. seeks to pioneer a new era of responsible AI usage. Our platform, AIR GRCC, empowers companies to manage AI effectively, mitigating risks and ensuring regulatory compliance across all AI models. Why Choose AI Risk, Inc.? Comprehensive Solutions: We offer an all-encompassing platform for AI governance, risk management, regulatory compliance, and cybersecurity. Expertise: With extensive experience across industries and global regulations, we provide tailored solutions to meet diverse business needs. Futureproofing: As AI regulations evolve, our platform remains updated and adaptable, ensuring businesses stay ahead of compliance requirements. Cybersecurity Focus: Recognizing the unique challenges of AI cybersecurity, we provide cutting-edge solutions to protect against threats and ensure data integrity. Get Started with AI Risk, Inc. Whether you're a large corporation or a budding startup, AI Risk, Inc. is your partner in navigating the complexities of AI implementation securely and responsibly. Join us in shaping a future where AI drives innovation without compromising integrity or security.
What does it take to secure AI-based applications in the cloud? In this episode, host Ashish Rajan sits down with Bar-el Tayouri, Head of Mend AI at Mend.io, to dive deep into the evolving world of AI security. From uncovering the hidden dangers of shadow AI to understanding the layers of an AI Bill of Materials (AIBOM), Bar-el breaks down the complexities of securing AI-driven systems. Learn about the risks of malicious models, the importance of red teaming, and how to balance innovation with security in a dynamic AI landscape.What is an AIBOM and why it mattersThe stages of AI adoption: experimentation to optimizationShadow AI: A factor of 10 more than you thinkPractical strategies for pre- and post-deployment securityThe future of AI security with agent swarms and beyondGuest Socials: Bar-El's LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter - Cloud Security BootCampIf you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity PodcastQuestions asked:(00:00) Introduction(02:24) A bit about Bar-el(03:32) What is AIBOM?(12:58) What is an embedding model?(16:12) What should Leaders have in their AI Security Strategy?(19:00) Whats different about the AI Security Landscape?(23:50) Challenges with integrating security into AI based Applications(25:33) Has AI solved the disconnect between Security and Developers(28:39) Risk framework for AI Security(32:26) Dealing with threats for current AI Applications in production(36:51) Future of AI Security(41:24) The Fun Section
Most people are barely scratching the surface of what generative AI can do. While some fear it will replace their jobs, others dismiss it as a passing trend—but both extremes miss the point. In this episode, Ashok Sivanand breaks down the real opportunity AI presents: not as a replacement for human judgment, but as a powerful tool that can act as both a dutiful intern and an expert consultant. Learn how to integrate AI into your daily work, from automating tedious tasks to sharpening your strategic thinking, all while staying in control. Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge. Inside the episode... Why so few people are using generative AI daily—and why that needs to change The two key roles AI can play: the intern and the consultant How AI can help professionals streamline research, analysis, and decision-making Practical prompts and frameworks for getting the most out of AI tools The dangers of "AI autopilot" and why staying in the driver's seat is critical Security and privacy concerns: What every AI user should know The best AI tools for different use cases—beyond just ChatGPT How companies can encourage AI adoption without creating unnecessary friction Mentioned in this episode AI Tools: ChatGPT, Claude, Perplexity, Gemini, Copilot, Grok Amazon's six-page memo template for effective decision-making: https://medium.com/@info_14390/the-ultimate-guide-to-amazons-6-pager-memo-method-c4b683441593 Ready Signal for external market factor analysis: https://www.readysignal.com/ AI prompting frameworks from Geoff Woods of AI Leadership: https://www.youtube.com/watch?v=HToY8gDTk6E Andrej Karpathy's Deep Dive into LLMs: https://www.youtube.com/watch?v=7xTGNNLPyMI Books by Carmine Gallo: The Presentation Secrets of Steve Jobs & Talk Like TED: https://www.amazon.com/Presentation-Secrets-Steve-Jobs-Insanely/dp/1491514310 Subscribe to the Convergence podcast wherever you get podcasts—including video episodes on YouTube at youtube.com/@convergencefmpodcast Learn something? Give the podcast a 5-star review and like the episode on YouTube. It's how the show grows. Follow the Pod Linkedin: https://www.linkedin.com/company/convergence-podcast/ X: https://twitter.com/podconvergence Instagram: @podconvergence
In this episode of the Risk Management Show podcast, we explore AI Security Risks and what every risk manager must know. Dr. Peter Garraghan, CEO and co-founder of Mind Guard and a professor of computer science at Lancaster University, shares his expertise on managing the evolving threat landscape in AI. With over €11M in research funding and 60+ published papers, he reveals why traditional cybersecurity tools often fail to address AI-specific vulnerabilities and how organizations can safely adopt AI while mitigating risks. We discuss AI's role in Risk Management, Cyber Security, and Sustainability, and provide actionable insights for Chief Risk Officers and compliance professionals. Dr. Garraghan outlines practical steps for minimizing risks, aligning AI with regulatory frameworks like GDPR, and leveraging tools like ISO 42001 and the EU AI Act. He also breaks down misconceptions about AI and its potential impact on businesses and society. If you want to be our guest or suggest a guest, send your email to info@globalriskconsult.com with the subject line "Podcast Guest Inquiry." Don't miss this essential conversation for anyone navigating AI and risk management!
⬥GUEST⬥Jake Braun, Acting Principal Deputy National Cyber Director, The White House | On LinkedIn: https://www.linkedin.com/in/jake-braun-77372539/⬥HOST⬥Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martin⬥EPISODE NOTES⬥Cybersecurity is often framed as a battle between attackers and defenders, but what happens when hackers take on a different role—one of informing policy, protecting critical infrastructure, and even saving lives? That's the focus of the latest Redefining Cybersecurity podcast episode, where host Sean Martin speaks with Jake Braun, former Acting Principal Deputy National Cyber Director at the White House and current Executive Director of the Cyber Policy Initiative at the University of Chicago.Braun discusses The Hackers' Almanack, a project developed in partnership with DEF CON and the Franklin Project to document key cybersecurity findings that policymakers, industry leaders, and technologists should be aware of. This initiative captures some of the most pressing security challenges emerging from DEF CON's research community and translates them into actionable insights that could drive meaningful policy change.DEF CON, The Hackers' Almanack, and the Franklin ProjectDEF CON, one of the world's largest hacker conferences, brings together tens of thousands of security researchers each year. While the event is known for its groundbreaking technical discoveries, Braun explains that too often, these findings fail to make their way into the hands of policymakers who need them most. That's why The Hackers' Almanack was created—to serve as a bridge between the security research community and decision-makers who shape regulations and national security strategies.This effort is an extension of the Franklin Project, named after Benjamin Franklin, who embodied the intersection of science and civics. The initiative includes not only The Hackers' Almanack but also a volunteer-driven cybersecurity support network for under-resourced water utilities, a critical infrastructure sector under increasing attack.Ransomware: Hackers Filling the Gaps Where Governments Have StruggledOne of the most striking sections of The Hackers' Almanack examines the state of ransomware. Despite significant government efforts to disrupt ransomware groups, attacks remain as damaging as ever. Braun highlights the work of security researcher Vangelis Stykas, who successfully infiltrated ransomware gangs—not to attack them, but to gather intelligence and warn potential victims before they were hit.While governments have long opposed private-sector hacking in retaliation against cybercriminals, Braun raises an important question: Should independent security researchers be allowed to operate in this space if they can help prevent attacks? This isn't just about hacktivism—it's about whether traditional methods of law enforcement and national security are enough to combat the ransomware crisis.AI Security: No Standards, No Rules, Just ChaosArtificial intelligence is dominating conversations in cybersecurity, but according to Braun, the industry still hasn't figured out how to secure AI effectively. DEF CON's AI Village, which has been studying AI security for years, made a bold statement: AI red teaming, as it exists today, lacks clear definitions and standards. Companies are selling AI security assessments with no universally accepted benchmarks, leaving buyers to wonder what they're really getting.Braun argues that industry leaders, academia, and government must quickly come together to define what AI security actually means. Are we testing AI applications? The algorithms? The data sets? Without clarity, AI red teaming risks becoming little more than a marketing term, rather than a meaningful security practice.Biohacking: The Blurry Line Between Innovation and BioterrorismPerhaps the most controversial section of The Hackers' Almanack explores biohacking and its potential risks. Researchers at the Four Thieves Vinegar Collective demonstrated how AI and 3D printing could allow individuals to manufacture vaccines and medical devices at home—at a fraction of the cost of commercial options. While this raises exciting possibilities for healthcare accessibility, it also raises serious regulatory and ethical concerns.Current laws classify unauthorized vaccine production as bioterrorism, but Braun questions whether that definition should evolve. If underserved communities have no access to life-saving treatments, should they be allowed to manufacture their own? And if so, how can regulators ensure safety without stifling innovation?A Call to ActionThe Hackers' Almanack isn't just a technical report—it's a call for governments, industry leaders, and the security community to rethink how we approach cybersecurity, technology policy, and even healthcare. Braun and his team at the Franklin Project are actively recruiting volunteers, particularly those with cybersecurity expertise, to help protect vulnerable infrastructure like water utilities.For policymakers, the message is clear: Pay attention to what the hacker community is discovering. These findings aren't theoretical—they impact national security, public safety, and technological advancement in ways that require immediate action.Want to learn more? Listen to the full episode and explore The Hackers' Almanack to see how cybersecurity research is shaping the future.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥The DEF CON 32 Hackers' Almanack: https://thehackersalmanack.com/defcon32-hackers-almanackDEF CON Franklin Project: https://defconfranklin.com/ | On LinkedIn: https://www.linkedin.com/company/def-con-franklin/DEF CON: https://defcon.org/Cyber Policy Initiative: https://harris.uchicago.edu/research-impact/initiatives-partnerships/cyber-policy-initiative⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity:
⬥GUEST⬥Jake Braun, Acting Principal Deputy National Cyber Director, The White House | On LinkedIn: https://www.linkedin.com/in/jake-braun-77372539/⬥HOST⬥Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martin⬥EPISODE NOTES⬥Cybersecurity is often framed as a battle between attackers and defenders, but what happens when hackers take on a different role—one of informing policy, protecting critical infrastructure, and even saving lives? That's the focus of the latest Redefining Cybersecurity podcast episode, where host Sean Martin speaks with Jake Braun, former Acting Principal Deputy National Cyber Director at the White House and current Executive Director of the Cyber Policy Initiative at the University of Chicago.Braun discusses The Hackers' Almanack, a project developed in partnership with DEF CON and the Franklin Project to document key cybersecurity findings that policymakers, industry leaders, and technologists should be aware of. This initiative captures some of the most pressing security challenges emerging from DEF CON's research community and translates them into actionable insights that could drive meaningful policy change.DEF CON, The Hackers' Almanack, and the Franklin ProjectDEF CON, one of the world's largest hacker conferences, brings together tens of thousands of security researchers each year. While the event is known for its groundbreaking technical discoveries, Braun explains that too often, these findings fail to make their way into the hands of policymakers who need them most. That's why The Hackers' Almanack was created—to serve as a bridge between the security research community and decision-makers who shape regulations and national security strategies.This effort is an extension of the Franklin Project, named after Benjamin Franklin, who embodied the intersection of science and civics. The initiative includes not only The Hackers' Almanack but also a volunteer-driven cybersecurity support network for under-resourced water utilities, a critical infrastructure sector under increasing attack.Ransomware: Hackers Filling the Gaps Where Governments Have StruggledOne of the most striking sections of The Hackers' Almanack examines the state of ransomware. Despite significant government efforts to disrupt ransomware groups, attacks remain as damaging as ever. Braun highlights the work of security researcher Vangelis Stykas, who successfully infiltrated ransomware gangs—not to attack them, but to gather intelligence and warn potential victims before they were hit.While governments have long opposed private-sector hacking in retaliation against cybercriminals, Braun raises an important question: Should independent security researchers be allowed to operate in this space if they can help prevent attacks? This isn't just about hacktivism—it's about whether traditional methods of law enforcement and national security are enough to combat the ransomware crisis.AI Security: No Standards, No Rules, Just ChaosArtificial intelligence is dominating conversations in cybersecurity, but according to Braun, the industry still hasn't figured out how to secure AI effectively. DEF CON's AI Village, which has been studying AI security for years, made a bold statement: AI red teaming, as it exists today, lacks clear definitions and standards. Companies are selling AI security assessments with no universally accepted benchmarks, leaving buyers to wonder what they're really getting.Braun argues that industry leaders, academia, and government must quickly come together to define what AI security actually means. Are we testing AI applications? The algorithms? The data sets? Without clarity, AI red teaming risks becoming little more than a marketing term, rather than a meaningful security practice.Biohacking: The Blurry Line Between Innovation and BioterrorismPerhaps the most controversial section of The Hackers' Almanack explores biohacking and its potential risks. Researchers at the Four Thieves Vinegar Collective demonstrated how AI and 3D printing could allow individuals to manufacture vaccines and medical devices at home—at a fraction of the cost of commercial options. While this raises exciting possibilities for healthcare accessibility, it also raises serious regulatory and ethical concerns.Current laws classify unauthorized vaccine production as bioterrorism, but Braun questions whether that definition should evolve. If underserved communities have no access to life-saving treatments, should they be allowed to manufacture their own? And if so, how can regulators ensure safety without stifling innovation?A Call to ActionThe Hackers' Almanack isn't just a technical report—it's a call for governments, industry leaders, and the security community to rethink how we approach cybersecurity, technology policy, and even healthcare. Braun and his team at the Franklin Project are actively recruiting volunteers, particularly those with cybersecurity expertise, to help protect vulnerable infrastructure like water utilities.For policymakers, the message is clear: Pay attention to what the hacker community is discovering. These findings aren't theoretical—they impact national security, public safety, and technological advancement in ways that require immediate action.Want to learn more? Listen to the full episode and explore The Hackers' Almanack to see how cybersecurity research is shaping the future.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥The DEF CON 32 Hackers' Almanack: https://thehackersalmanack.com/defcon32-hackers-almanackDEF CON Franklin Project: https://defconfranklin.com/ | On LinkedIn: https://www.linkedin.com/company/def-con-franklin/DEF CON: https://defcon.org/Cyber Policy Initiative: https://harris.uchicago.edu/research-impact/initiatives-partnerships/cyber-policy-initiative⬥ADDITIONAL INFORMATION⬥✨ More Redefining CyberSecurity:
Can small, agile teams outpace massive, well-funded engineering orgs in AI and open-source innovation?Co-hosts Alex Kehaya & Bidhan Roy, Founder of Bagel Network joins Greg Osuri, Founder of Akash Network, for a deep dive into the open-source AI stack, decentralization, and the engineering principles behind lean, high-impact teams.Greg shares how Akash is revolutionizing cloud computing with decentralized infrastructure, the power of Zero Knowledge Proofs (ZKPs) for AI model validation, and why small, focused developer teams consistently outperform bloated, overfunded projects.Key Dev Insights:✅ Scaling open-source AI with decentralized computing✅ ZKPs & AI security—why cryptographic proofs are the future of model validation✅ Building with constraints—why limited funding fuels better engineering decisions✅ Community-driven dev—how Akash leverages contributors for rapid iterationJoin us for an engineering-first discussion on the future of decentralized AI and why lean, open-source teams are leading the way.Website: https://akash.network/Show LinksThe Index X ChannelYouTube
Alexandra Reeve Givens, CEO of the Center for Democracy & Technology; Courtney Lang, Vice President of Policy for Trust, Data, and Technology at ITI and a Non-Resident Senior Fellow at the Atlantic Council GeoTech Center; and Nema Milaninia, a partner on the Special Matters & Government Investigations team at King & Spalding, join Kevin Frazier, Contributing Editor at Lawfare and Adjunct Professor at Delaware Law, to discuss the Paris AI Action Summit and whether it marks a formal pivot away from AI safety to AI security and, if so, what an embrace of AI security means for domestic and international AI governance.We value your feedback! Help us improve by sharing your thoughts at lawfaremedia.org/survey. Your input ensures that we deliver what matters most to you. Thank you for your support—and, as always, for listening!To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
In the enterprise security news, Change Healthcare's HIPAA fine is vanishingly small How worried should we be about the threat of AI models? What about the threat of DeepSeek? And the threat of employees entering sensitive data into GenAI prompts? The myth of trillion-dollar cybercrime losses are alive and well! Kagi Privacy Pass gives you the best of both worlds: high quality web searches AND privacy/anonymity Thanks to the UK for letting everyone know about end-to-end encryption for iCloud! What is the most UNHINGED thing you've ever seen a security team push on employees? All that and more, on this episode of Enterprise Security Weekly. Show Notes: https://securityweekly.com/esw-395
In the enterprise security news, Change Healthcare's HIPAA fine is vanishingly small How worried should we be about the threat of AI models? What about the threat of DeepSeek? And the threat of employees entering sensitive data into GenAI prompts? The myth of trillion-dollar cybercrime losses are alive and well! Kagi Privacy Pass gives you the best of both worlds: high quality web searches AND privacy/anonymity Thanks to the UK for letting everyone know about end-to-end encryption for iCloud! What is the most UNHINGED thing you've ever seen a security team push on employees? All that and more, on this episode of Enterprise Security Weekly. Show Notes: https://securityweekly.com/esw-395
In this week's episode of the K12 Tech Talk podcast, the team dives into some pressing issues. We start with a discussion on how tariffs might affect project pricing for the coming year, exploring potential impacts on schools' budgets and strategies they might employ to mitigate these effects. The conversation then shifts to Google's new Chrome security feature powered by AI. We debate the implications of this feature in terms of privacy and security, particularly in school environments, and whether or not to implement this at a district level. The centerpiece of the episode is a concerning discussion about the potential threat to the E-Rate program. With pending litigation that could have significant impacts on funding and tech infrastructure in schools, we provide insights into how schools are preparing for potential outcomes. -------------------- NTP Lightspeed ClassLink SaferWatch Fortinet -------------------- Email us at k12techtalk@gmail.com OR info@k12techtalkpodcast.com Call us at 314-329-0363 Join the K12TechPro Community Buy some swag X @k12techtalkpod Visit our LinkedIn Music by Colt Ball Disclaimer: The views and work done by Josh, Chris, and Mark are solely their own and do not reflect the opinions or positions of sponsors or any respective employers or organizations associated with the guys. K12 Tech Talk itself does not endorse or validate the ideas, views, or statements expressed by Josh, Chris, and Mark's individual views and opinions are not representative of K12 Tech Talk. Furthermore, any references or mention of products, services, organizations, or individuals on K12 Tech Talk should not be considered as endorsements related to any employer or organization associated with the guys.
ทิศทางใหม่ของ AI ที่ AWS กำลังเดิมพันนั้นคืออะไร? ท่ามกลางการแข่งขันที่ดุเดือดของบริษัทยักษ์ใหญ่ด้านเทคโนโลยีระดับโลก แต่ละองค์กรต่างเร่งสร้างจุดแข็งของตนเองในยุค AI เพื่อคว้าโอกาสและรักษาความได้เปรียบในการแข่งขัน หนึ่งในนั้นคือ AWS ที่กำลังพลิกโฉมวงการ Cloud และ AI เข้าด้วยกัน The Secret Sauce ได้รับเกียรติจาก Amazon Web Services (AWS) ให้เข้าร่วมงาน AWS re:Invent 2024 งานสัมมนาระดับโลกด้าน Cloud Computing ที่จัดขึ้นที่ลาสเวกัส สหรัฐอเมริกา โดยได้สัมภาษณ์ผู้บริหารระดับสูงของ AWS เพื่อเจาะลึกกลยุทธ์สำคัญในการขับเคลื่อนเทคโนโลยี AI และความปลอดภัยบนแพลตฟอร์ม Cloud ที่ทรงอิทธิพลที่สุดในโลก โดยอีเวนต์นี้ไม่เพียงเป็นการเปิดตัวเทคโนโลยีใหม่เท่านั้น แต่ยังสะท้อนให้เห็นถึงทิศทางของ AWS ที่กำลังมุ่งหน้าไปสู่ Cyber Security หรือการวางรากฐานของยุค AI ที่มีความปลอดภัย ความมั่นคง และเป็นระบบมากขึ้น AWS กำลังสร้างอะไร? โอกาสของไทยในสนาม Cloud และ AI คืออะไร? ติดตามทุกมุมมองจากบทสัมภาษณ์เอ็กซ์คลูซีฟกับผู้บริหาร AWS ได้ใน The Secret Sauce เอพิโสดนี้
Palo Alto Networks's CEO Nikesh Arora dispels DeepSeek hype by detailing all of the guardrails enterprises need to have in place to give AI agents “arms and legs.” No matter the model, deploying applications for precision-use cases means superimposing better controls. Arora emphasizes that the real challenge isn't just blocking threats but matching the accelerated pace of AI-powered attacks, requiring a fundamental shift from prevention-focused to real-time detection and response systems. CISOs are risk managers, but legacy companies competing with more risk-tolerant startups need to move quickly and embrace change. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital Mentioned in this episode: Cortex XSIAM: Security operations and incident remediation platform from Palo Alto Networks
⬥GUESTS⬥Sandy Dunn, Consultant Artificial Intelligence & Cybersecurity, Adjunct Professor Institute for Pervasive Security Boise State University | On Linkedin: https://www.linkedin.com/in/sandydunnciso/Rock Lambros, CEO and founder of RockCyber | On LinkedIn | https://www.linkedin.com/in/rocklambros/Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martinView This Show's Sponsors⬥EPISODE NOTES⬥The rise of large language models (LLMs) has reshaped industries, bringing both opportunities and risks. The latest OWASP Top 10 for LLMs aims to help organizations understand and mitigate these risks. In a recent episode of Redefining Cybersecurity, host Sean Martin sat down with Sandy Dunn and Rock Lambros to discuss the latest updates to this essential security framework.The OWASP Top 10 for LLMs: What It Is and Why It MattersOWASP has long been a trusted source for security best practices, and its LLM-specific Top 10 is designed to guide organizations in identifying and addressing key vulnerabilities in AI-driven applications. This initiative has rapidly gained traction, becoming a reference point for AI security governance, testing, and implementation. Organizations developing or integrating AI solutions are now evaluating their security posture against this list, ensuring safer deployment of LLM technologies.Key Updates for 2025The 2025 iteration of the OWASP Top 10 for LLMs introduces refinements and new focus areas based on industry feedback. Some categories have been consolidated for clarity, while new risks have been added to reflect emerging threats.• System Prompt Leakage (New) – Attackers may manipulate LLMs to extract system prompts, potentially revealing sensitive operational instructions and security mechanisms.• Vector and Embedding Risks (New) – Security concerns around vector databases and embeddings, which can lead to unauthorized data exposure or manipulation.Other notable changes include reordering certain risks based on real-world impact. Prompt Injection remains the top concern, while Sensitive Information Disclosure and Supply Chain Vulnerabilities have been elevated in priority.The Challenge of AI SecurityUnlike traditional software vulnerabilities, LLMs introduce non-deterministic behavior, making security testing more complex. Jailbreaking attacks—where adversaries bypass system safeguards through manipulative prompts—remain a persistent issue. Prompt injection attacks, where unauthorized instructions are inserted to manipulate output, are also difficult to fully eliminate.As Dunn explains, “There's no absolute fix. It's an architecture issue. Until we fundamentally redesign how we build LLMs, there will always be risk.”Beyond Compliance: A Holistic Approach to AI SecurityBoth Dunn and Lambros emphasize that organizations need to integrate AI security into their overall IT and cybersecurity strategy, rather than treating it as a separate issue. AI governance, supply chain integrity, and operational resilience must all be considered.Lambros highlights the importance of risk management over rigid compliance: “Organizations have to balance innovation with security. You don't have to lock everything down, but you need to understand where your vulnerabilities are and how they impact your business.”Real-World Impact and AdoptionThe OWASP Top 10 for LLMs has already been widely adopted, with companies incorporating it into their security frameworks. It has been translated into multiple languages and is serving as a global benchmark for AI security best practices.Additionally, initiatives like HackerPrompt 2.0 are helping security professionals stress-test AI models in real-world scenarios. OWASP is also facilitating industry collaboration through working groups on AI governance, threat intelligence, and agentic AI security.How to Get InvolvedFor those interested in contributing, OWASP provides open-access resources and welcomes participants to its AI security initiatives. Anyone can join the discussion, whether as an observer or an active contributor.As AI becomes more ingrained in business and society, frameworks like the OWASP Top 10 for LLMs are essential for guiding responsible innovation. To learn more, listen to the full episode and explore OWASP's latest AI security resources.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥OWASP GenAI: https://genai.owasp.org/Link to the 2025 version of the Top 10 for LLM Applications: https://genai.owasp.org/llm-top-10/Getting Involved: https://genai.owasp.org/contribute/OWASP LLM & Gen AI Security Summit at RSAC 2025: https://genai.owasp.org/event/rsa-conference-2025/AI Threat Mind Map: https://github.com/subzer0girl2/AI-Threat-Mind-MapGuide for Preparing and Responding to Deepfake Events: https://genai.owasp.org/resource/guide-for-preparing-and-responding-to-deepfake-events/AI Security Solution Cheat Sheet Q1-2025:https://genai.owasp.org/resource/ai-security-solution-cheat-sheet-q1-2025/HackAPrompt 2.0: https://www.hackaprompt.com/⬥ADDITIONAL INFORMATION⬥✨ To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcastRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist on YouTube:
⬥GUESTS⬥Sandy Dunn, Consultant Artificial Intelligence & Cybersecurity, Adjunct Professor Institute for Pervasive Security Boise State University | On Linkedin: https://www.linkedin.com/in/sandydunnciso/Rock Lambros, CEO and founder of RockCyber | On LinkedIn | https://www.linkedin.com/in/rocklambros/Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martinView This Show's Sponsors⬥EPISODE NOTES⬥The rise of large language models (LLMs) has reshaped industries, bringing both opportunities and risks. The latest OWASP Top 10 for LLMs aims to help organizations understand and mitigate these risks. In a recent episode of Redefining Cybersecurity, host Sean Martin sat down with Sandy Dunn and Rock Lambros to discuss the latest updates to this essential security framework.The OWASP Top 10 for LLMs: What It Is and Why It MattersOWASP has long been a trusted source for security best practices, and its LLM-specific Top 10 is designed to guide organizations in identifying and addressing key vulnerabilities in AI-driven applications. This initiative has rapidly gained traction, becoming a reference point for AI security governance, testing, and implementation. Organizations developing or integrating AI solutions are now evaluating their security posture against this list, ensuring safer deployment of LLM technologies.Key Updates for 2025The 2025 iteration of the OWASP Top 10 for LLMs introduces refinements and new focus areas based on industry feedback. Some categories have been consolidated for clarity, while new risks have been added to reflect emerging threats.• System Prompt Leakage (New) – Attackers may manipulate LLMs to extract system prompts, potentially revealing sensitive operational instructions and security mechanisms.• Vector and Embedding Risks (New) – Security concerns around vector databases and embeddings, which can lead to unauthorized data exposure or manipulation.Other notable changes include reordering certain risks based on real-world impact. Prompt Injection remains the top concern, while Sensitive Information Disclosure and Supply Chain Vulnerabilities have been elevated in priority.The Challenge of AI SecurityUnlike traditional software vulnerabilities, LLMs introduce non-deterministic behavior, making security testing more complex. Jailbreaking attacks—where adversaries bypass system safeguards through manipulative prompts—remain a persistent issue. Prompt injection attacks, where unauthorized instructions are inserted to manipulate output, are also difficult to fully eliminate.As Dunn explains, “There's no absolute fix. It's an architecture issue. Until we fundamentally redesign how we build LLMs, there will always be risk.”Beyond Compliance: A Holistic Approach to AI SecurityBoth Dunn and Lambros emphasize that organizations need to integrate AI security into their overall IT and cybersecurity strategy, rather than treating it as a separate issue. AI governance, supply chain integrity, and operational resilience must all be considered.Lambros highlights the importance of risk management over rigid compliance: “Organizations have to balance innovation with security. You don't have to lock everything down, but you need to understand where your vulnerabilities are and how they impact your business.”Real-World Impact and AdoptionThe OWASP Top 10 for LLMs has already been widely adopted, with companies incorporating it into their security frameworks. It has been translated into multiple languages and is serving as a global benchmark for AI security best practices.Additionally, initiatives like HackerPrompt 2.0 are helping security professionals stress-test AI models in real-world scenarios. OWASP is also facilitating industry collaboration through working groups on AI governance, threat intelligence, and agentic AI security.How to Get InvolvedFor those interested in contributing, OWASP provides open-access resources and welcomes participants to its AI security initiatives. Anyone can join the discussion, whether as an observer or an active contributor.As AI becomes more ingrained in business and society, frameworks like the OWASP Top 10 for LLMs are essential for guiding responsible innovation. To learn more, listen to the full episode and explore OWASP's latest AI security resources.⬥SPONSORS⬥LevelBlue: https://itspm.ag/attcybersecurity-3jdk3ThreatLocker: https://itspm.ag/threatlocker-r974⬥RESOURCES⬥OWASP GenAI: https://genai.owasp.org/Link to the 2025 version of the Top 10 for LLM Applications: https://genai.owasp.org/llm-top-10/Getting Involved: https://genai.owasp.org/contribute/OWASP LLM & Gen AI Security Summit at RSAC 2025: https://genai.owasp.org/event/rsa-conference-2025/AI Threat Mind Map: https://github.com/subzer0girl2/AI-Threat-Mind-MapGuide for Preparing and Responding to Deepfake Events: https://genai.owasp.org/resource/guide-for-preparing-and-responding-to-deepfake-events/AI Security Solution Cheat Sheet Q1-2025:https://genai.owasp.org/resource/ai-security-solution-cheat-sheet-q1-2025/HackAPrompt 2.0: https://www.hackaprompt.com/⬥ADDITIONAL INFORMATION⬥✨ To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcastRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist on YouTube:
In episode 122 of Cybersecurity Where You Are, Sean Atkinson is joined by Rian Davis, Associate Hybrid Threat Intelligence Analyst at the Center for Internet Security® (CIS®); and Timothy Davis, Lead Cyber Threat Intelligence (CTI) Analyst at CIS. Together, they discuss security and utility considerations surrounding the DeepSeek AI model.Here are some highlights from our episode:01:31. What enterprises and individuals can do before they start deploying foreign-developed, open-source large language models (LLMs)08:48. How DeepSeek fits into evolving adversarial tactics and techniques involving AI25:15. The impact on threat assessments and where we see controls built around AI31:45. Parting thoughts on approaching newer technologies like DeepSeekResourcesDeepSeek hit by cyberattack as users flock to Chinese AI startupA 9th telecoms firm has been hit by a massive Chinese espionage campaign, the White House saysTikTok: Influence Ops, Data Practices Threaten U.S. SecurityWiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat HistoryEpisode 89: How Threat Actors Are Using GenAI as an EnablerODNI Releases 2024 Annual Threat Assessment of the U.S. Intelligence CommunityThe Strava Heat Map and the End of SecretsMan who exploded Cybertruck in Las Vegas used ChatGPT in planning, police sayEpisode 120: How Contextual Awareness Drives AI GovernanceIf you have some feedback or an idea for an upcoming episode of Cybersecurity Where You Are, let us know by emailing podcast@cisecurity.org.
In this episode, CIO's Jon Gordon provides an update on Chinese tariffs on the US coming into effect, Fed speak & US AI security policy under President Trump.
In this episode, Dana Johnson, a dental practice management expert, discusses the integration of technology and AI in dental practices with Dan Easty from Sunset Technologies. They explore the importance of security when implementing new technologies, the necessity of vetting AI companies, and the significance of keeping software updated. The conversation emphasizes the need for dental practices to be proactive in their approach to technology and patient data protection, while also looking forward to the future advancements in AI within the industry. Takeaways ➡Dana Johnson is a dental practice management expert and founder of Novonee. ➡Technology plays a crucial role in optimizing dental practices. ➡AI has been used in dental imaging and insurance for years. ➡It's essential to vet AI companies for security and compliance. ➡Practices should stay updated with their software to ensure security. ➡Upgrading hardware and software is vital for compliance and efficiency. ➡Curiosity about technology can lead to better questions and solutions. ➡Transparency from AI companies is crucial for trust and security. ➡AI has the potential to revolutionize the dental industry. ➡Practices should seek professional guidance when implementing new technologies. Chapters 00:00 Introduction to Dental Practice Management 03:12 The Role of Technology in Dental Practices 06:02 Understanding AI in Dental Administration 08:52 Vetting AI Companies for Security 12:00 The Importance of Software Updates 15:05 Navigating Upgrades and Compliance 18:02 The Future of AI in Dentistry 20:51 Final Thoughts and Resources Please rate, review and share this episode with your colleagues. Book a call with Dayna: https://calendly.com/dayna-johnson/discovery-call
Is AI about to take your job—or supercharge your career?This week, the AI landscape just shifted again with game-changing advancements from OpenAI, DeepSeek, and Google's Gemini. The result? More powerful models, better automation, and serious implications for the workforce. From millions of jobs at risk to humanoid robots entering factories, the pace of change is staggering.So what should business leaders do to stay ahead? The answer lies in AI education, strategic implementation, and understanding the risks and opportunities ahead. In this episode, we break down the biggest AI developments of the week and how they impact your industry.In this AI news session, you'll discover:The shocking AI job impact—Could 300M jobs really be at risk?OpenAI's new releases and why their most powerful model is now freeDeepSeek's security failures—Why this open-source AI is raising red flags worldwideGoogle's AI dominance—How Gemini 2.0 just took over the top AI rankingsThe rise of humanoid robots and what it means for blue-collar workWhy business leaders must invest in AI training now (and how to start)Don't get left behind in the AI revolution. Take action today by joining the AI Business Transformation Course starting February 17th. Sign up now https://multiplai.ai/ai-course/ About Leveraging AI The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/ YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/ Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/ Free AI Consultation: https://multiplai.ai/book-a-call/ Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Cybersecurity Today: DeepSeek AI Disruptions, Nvidia Breach, and TalkTalk Hack Revisited In this weekend edition of Cybersecurity Today, our panel reviews the most significant cybersecurity stories of the past month. This episode features Laura Payne from White Tuque, David Shipley from Beauceron Security, and Dana Proctor from IBM. Key topics include the sudden emergence of DeepSeek AI, Nvidia's vulnerabilities and their effect on stock prices, and TalkTalk's latest data breach. Additionally, the discussion covers the soaring API security vulnerabilities reported by Wallarm and the UK's potential legislative action on ransomware payments. Stay tuned for expert insights and analysis on these pressing issues in the world of cybersecurity. 00:00 Introduction and Panel Welcome 00:41 DeepSeek AI Disruption 02:09 Security Concerns and Reactions 04:06 NVIDIA's Vulnerabilities and AI Security 07:15 Economic and Geopolitical Implications 12:13 AI in Business and Security Practices 20:57 Open Source AI and Cybersecurity Risks 25:37 Responsibility in Data Management 26:25 AI's Unstoppable Progress 26:53 API Security Concerns 28:41 Non-Human Identities and API Challenges 30:36 The State of Cybersecurity Awareness 35:05 Legislative Hopes and Cybersecurity 37:25 TalkTalk Breach Revisited 44:10 Ransomware Legislation Proposals 45:34 Shoutout to Cyber Police 47:04 Closing Remarks and Audience Engagement
Here's the AI usage policy Jason developed with Rightworks https://ai.rightworks.com/policyAnd their comparison of the security of leading AI assistants https://ai.rightworks.com/resources/product-comparison
In this episode, host Ashish Rajan spoke to Mike Privette, founder of Return on Security, to explore the landscape of cybersecurity as we look toward 2025. Mike shared his unique insights on the economics of cybersecurity, breaking down industry trends, and discussing how AI is revolutionizing areas like governance, risk, compliance (GRC), and data loss prevention (DLP). They dive into the convergence of cloud security and application security, the rise of startups, and the ever-present "cat-and-mouse game" of adapting to investor and buyer needs. Guest Socials: Mike's Linkedin Podcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels: - Cloud Security Podcast- Youtube - Cloud Security Newsletter - Cloud Security BootCamp If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity Podcast Questions asked: (00:00) Introduction (00:27) A bit about Mike (00:49) The story behind Return On Security (01:40) How big is the cybersecurity landscape? (04:36) Cybersecurity Trends from 2024 (07:03) AI Security in 2024 (08:10) Cybersecurity Trends in 2025 (13:16) Trends to look at when starting a company (16:18) Trends for Startups (17:37) Do new vendors enter the cybersecurity market? (18:53) Whats a healthy cybersecurity industry? (20:12) The world of startup acquisitions (22:29) The Fun Section
About the CISO Circuit SeriesSean Martin and Michael Piacente join forces roughly once per month (or so, depending on schedules) to discuss everything from looking for a new job, entering the field, finding the right work/life balance, examining the risks and rewards in the role, building and supporting your team, the value of the community, relevant newsworthy items, and so much more. Join us to help us understand the role of the CISO so that we can collectively find a path to Redefining CyberSecurity for business and society. If you have a topic idea or a comment on an episode, feel free to contact Sean Martin.____________________________Guests: Heather Hinton, CISO-in-Residence, Professional Association of CISOsOn LinkedIn | https://www.linkedin.com/in/heather-hinton-9731911/____________________________Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martinMichael Piacente, Managing Partner and Cofounder of Hitch PartnersOn ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/michael-piacente____________________________This Episode's SponsorsImperva | https://itspm.ag/imperva277117988LevelBlue | https://itspm.ag/levelblue266f6cThreatLocker | https://itspm.ag/threatlocker-r974___________________________Episode NotesIn this episode of the CISO Circuit Series, part of the Redefining Cybersecurity Podcast on ITSPmagazine, hosts Sean Martin and Michael Piacente welcomed Heather Hinton, seasoned cybersecurity leader, to discuss the evolving responsibilities and recognition of Chief Information Security Officers (CISOs). Their conversation explored the transformative work of the Professional Association of CISOs (PAC), an organization dedicated to establishing standards, accreditation, and support for cybersecurity leaders globally.This episode addressed three critical questions shaping the modern CISO role:How can CISOs build trust within their organizations?What is PAC doing to elevate cybersecurity as a recognized profession?How can CISOs prepare for increasing scrutiny and legal risks?Building Trust: A CISO's Key ResponsibilityHeather Hinton, whose career includes leadership roles like VP and CISO for IBM Cloud and PagerDuty, underscores that trust is foundational for a CISO's success. Beyond technical expertise, a CISO must demonstrate leadership, strategic thinking, and effective communication with boards, executives, and teams. Hinton highlights that cybersecurity should not be perceived as merely a technical function but as a critical enabler of business objectives.The PAC accreditation process reinforces this perspective by formalizing the skills needed to build trust. From fostering collaboration to aligning security strategies with organizational goals, PAC equips CISOs with tools to establish credibility and demonstrate value from day one.Elevating Cybersecurity as a Recognized ProfessionMichael Piacente, Managing Partner at Hitch Partners and co-host of the CISO Circuit Series, emphasizes PAC's role in professionalizing cybersecurity. By introducing a Code of Professional Conduct, structured accreditation programs, and robust career development resources, PAC is raising the bar for the profession. Hinton and Piacente explain that PAC's ultimate vision is to make membership and accreditation standard for CISO roles, akin to certifications we've come to expect and rely upon for doctors or lawyers.This vision reflects a growing recognition of cybersecurity as a discipline critical not only to organizations but to society as a whole. PAC's advocacy extends to shaping global policies, setting professional standards, and fostering an environment where CISOs are equipped to handle emerging challenges like hybrid warfare and AI-driven threats.Preparing for Legal Risks and Industry ChallengesThe conversation also delves into the increasing legal and regulatory scrutiny CISOs face. Piacente and Hinton stress the importance of having clear job descriptions, liability protections, and professional resources—areas where PAC is driving significant progress. By providing legal and mental health support, along with peer-driven mentorship, PAC empowers CISOs to navigate these challenges with confidence.Hinton notes that PAC is also a critical voice in addressing broader systemic risks, advocating for policies that protect CISOs while ensuring they are well-positioned to protect their organizations and society.Looking AheadWith goals to expand its membership to 1,000 and scale its accreditation programs by 2025, PAC is setting the foundation for a more unified and professionalized cybersecurity community. Hinton envisions PAC becoming a global authority, advising governments and organizations on cybersecurity standards and policies while fostering collaboration among professionals.For those aspiring to advance cybersecurity as a recognized profession, PAC offers a platform to shape the future of the field. Learn more about PAC and how to join at TheCISO.org.____________________________Watch this and other videos on ITSPmagazine's YouTube ChannelRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist:
In this episode of the Cloud Security Podcast, host Ashish Rajan speaks to James Berthoty, founder of Latio.Tech and an engineer-driven analyst, for a discussion on cloud security tools. In this episode James breaks down CNAPP and what it really means for engineers, if kubernetes secuity is the new baseline for cloud security and runtime security vs vulnerability management. Guest Socials: James's Linkedin Podcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels: - Cloud Security Podcast- Youtube - Cloud Security Newsletter - Cloud Security BootCamp If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity Podcast Questions asked: (00:00) Introduction (02:26) A bit about James (03:20) What in Cloud Security in 2025? (04:51) What is CNAPP? (07:01) Differentiating a vulnerability from misconfiguration (11:51) Vulnerability Management in Cloud (15:38) Is Kubernetes becoming the default? (21:50) Is there a good way to do platformization? (24:16) Should CNAPP include Kubernetes? (28:07) What is AI Security in 2025? (35:06) Tool Acronyms for 2025 (37:27) Fun Questions
In today's episode, we're thrilled to have Niv Braun, co-founder and CEO of Noma Security, join us as we tackle some pressing issues in AI security.With the rapid adoption of generative AI technologies, the landscape of data security is evolving at breakneck speed. We'll explore the increasing need to secure systems that handle sensitive AI data and pipelines, the rise of AI security careers, and the looming threats of adversarial attacks, model "hallucinations," and more. Niv will share his insights on how companies like Noma Security are working tirelessly to mitigate these risks without hindering innovation.We'll also dive into real-world incidents, such as compromised open-source models and the infamous PyTorch breach, to illustrate the critical need for improved security measures. From the importance of continuous monitoring to the development of safer formats and the adoption of a zero trust approach, this episode is packed with valuable advice for organizations navigating the complex world of AI security.So, whether you're a data scientist, AI engineer, or simply an enthusiast eager to learn more about the intersection of AI and security, this episode promises to offer a wealth of information and practical tips to help you stay ahead in this rapidly changing field. Tune in and join the conversation as we uncover the state of AI security and what it means for the future of technology.Quotable Moments00:00 Security spotlight shifts to data and AI.03:36 Protect against misconfigurations, adversarial attacks, new risks.09:17 Compromised model with undetectable data leaks.12:07 Manual parsing needed for valid, malicious code detection.15:44 Concerns over Agiface models may affect jobs.20:00 Combines self-developed and third-party AI models.20:55 Ensure models don't use sensitive or unauthorized data.25:55 Zero Trust: mindset, philosophy, implementation, security framework.30:51 LLM attacks will have significantly higher impact.34:23 Need better security awareness, exposed secrets risk.35:50 Be organized with visibility and governance.39:51 Red teaming for AI security and safety.44:33 Gen AI primarily used by consumers, not businesses.47:57 Providing model guardrails and runtime protection services.50:53 Ensure flexible, configurable architecture for varied needs.52:35 AI, security, innovation discussed by Niamh Braun.
“The workflow doesn't go away, but the interface and the capability set of what constitutes a SaaS application is going to be very different than what we see today,” says Chris Young, Microsoft's executive vice president of business development, strategy and ventures. In this episode of Tech Disruptors, Young joins Bloomberg Intelligence senior technology analyst Anurag Rana in a discussion covering a wide range of topics, from generative AI to cybersecurity to autonomous vehicles. The two examine just what Microsoft is looking for when it makes its investments, as well as what Microsoft itself is doing in many of these topics. Find this and other Bloomberg Intelligence podcasts at BI PODCASTS .
Kyle Bhiro is the 25-year-old co-founder of PensarAI, an open-source AI security platform addressing the challenges of weaponized AI. In this conversation, we explore Kyle's mission to combat AI vulnerabilities, democratize security solutions for developers, and foster open-source innovation. We also dive into his entrepreneurial journey, overcoming imposter syndrome, securing venture funding, and navigating the future of AI. EPISODE LINKS: Website: https://www.pensarai.com/ Twitter: https://x.com/kylebhiro LinkedIn: https://www.linkedin.com/in/kylebhiro/ TIMESTAMPS: 00:00:00 Intro and background 00:01:21 Pensar 00:02:55 Trends in AI and Startup Landscape 00:06:51 Building for Customers and Open Source 00:10:34 Pensar Use Cases 00:16:10 The Vision for Pensar 00:20:58 Getting Advisors: The Bold Approach 00:21:41 Leveraging Existing Relationships 00:27:24 Fundraising Tips and Experiences 00:28:48 The Role of Accelerators 00:34:32 The Future of AI 00:38:27 Closing CONNECT: Website: https://hoo.be/elijahmurray YouTube: https://www.youtube.com/@elijahmurray Twitter: https://twitter.com/elijahmurray Instagram: https://www.instagram.com/elijahmurray LinkedIn: https://www.linkedin.com/in/elijahmurray/ Apple Podcasts: https://podcasts.apple.com/us/podcast/the-long-game-w-elijah-murray/ Spotify: https://podcasters.spotify.com/pod/show/elijahmurray RSS: https://anchor.fm/s/3e31c0c/podcast/rss
GitHub has done the research, brought the receipts, and knows just what to do to get more developers into the flow state. Is it legit or hype? We'll dig in. Plus, making the case that Rails is better low code than low code, and we help someone go from Pizza to Rust.